• Google Mayday: trying to recover my traffic

    Posted on June 7th, 2010 by admin

    In my last post on this site I showed that my traffic was going down, and I thought I was doing something wrong, or something like that.

    Have you ever heard about the Google Mayday update? When I wrote my last post I had never heard about it; now I think it is a nightmare for some webmasters.

    This update changes how Google ranks results for long-tail keyword searches like “cheap digital cameras in us”, not for short searches like “digital cameras”.

    But my main site, whois.gwebtools.com, receives most of its visitors from short search queries like “digital cameras”, and it is still losing traffic every day.

    Now I am trying to understand how this new update really works and how I can fix my site to recover the traffic I had before.

    If you are having the same experience, share what you know so far; maybe it can be useful.

    Good luck to all of us :)


  • Understanding robots.txt in 5 minutes

    Posted on March 1st, 2009 by admin

    A lot of people have doubts about what the robots.txt file does and how to configure it correctly.
    I will explain in a few words what its real function is and how to configure it, just the basics.

    The robots.txt file always needs to be in the root / of the domain, for example www.gwebtools.com/robots.txt; that is the standard location where search engines look for it.

    Good search engines like Google, Yahoo, Live, Ask and many others will respect the rules you configure in your robots.txt; bad crawlers, like exploit scanners, will simply ignore them.

    robots.txt is a simple text file that helps search engines index only the relevant content of your website.

    Configuring robots.txt is very easy; the options are:

    User-agent: Here you put a crawler name like Googlebot, or * for all crawlers; the rules that follow will be applied only to those crawlers.
    Disallow: With this option you specify which folders or pages you do not want the crawler to access and index.
    Allow: With this option you specify which folders or pages the crawler can access and index (by default everything is allowed, so Allow is mostly useful to open exceptions inside a Disallow rule).
    Sitemap: You can specify the URL of your sitemap (see example 03 below).

    — robots.txt example 01 begin —
    User-agent: *
    Disallow: /
    Allow: /list-of-pages.php
    Allow: /contact.php
    — end of robots.txt example 01 —

    Explanation: In this example the rules apply to all crawlers, and only two pages can be indexed: list-of-pages.php and contact.php.

    — robots.txt example 02 begin —
    User-agent: Googlebot
    Allow: /
    Disallow: /downloads
    Allow: /downloads/signup.php
    — end of robots.txt example 02 —

    Explanation: In this example the rules apply only to Googlebot: all URLs can be indexed, except the folder /downloads, but the page /downloads/signup.php can still be indexed.
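
    The Sitemap option did not appear in the examples above, so here is one more. Assume a hypothetical /admin folder you want to keep out of the index, and a sitemap at /sitemap.xml (adjust the URL to wherever yours really is):

    — robots.txt example 03 begin —
    User-agent: *
    Disallow: /admin
    Sitemap: http://www.gwebtools.com/sitemap.xml
    — end of robots.txt example 03 —

    Explanation: In this example the rules apply to all crawlers, the folder /admin cannot be indexed, and the Sitemap line tells the crawlers where to find your sitemap.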

    Easy, right? If you have doubts, send comments.
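
    One last tip: if you want to test your rules before uploading the file, Python's standard library ships a robots.txt parser. Here is a minimal sketch, assuming Python 3; the domain and paths are just placeholders:

    — python check example begin —
    # a minimal sketch, assuming Python 3 (urllib.robotparser is standard library)
    import urllib.robotparser

    # the rules you plan to upload; a simple block that keeps all
    # crawlers out of the /downloads folder
    rules = """\
    User-agent: *
    Disallow: /downloads
    """

    rp = urllib.robotparser.RobotFileParser()
    rp.parse(rules.splitlines())

    # ask whether a given crawler may fetch a given URL
    print(rp.can_fetch("Googlebot", "http://www.gwebtools.com/contact.php"))         # True
    print(rp.can_fetch("Googlebot", "http://www.gwebtools.com/downloads/file.zip"))  # False
    — end of python check example —

    To check the file already live on your site, call rp.set_url("http://www.gwebtools.com/robots.txt") and rp.read() instead of rp.parse(). One warning: the standard library applies the rules in file order, while Google picks the most specific rule, so for files that mix Allow and Disallow (like examples 01 and 02 above) the results can differ.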