Tuesday 26 December 2017

How to create a Robots.txt file for SEO?

Robots.txt is a text file webmasters create to instruct web robots (most often search engine crawlers) how to crawl pages on their website. The robots.txt file is part of the robots exclusion protocol (REP), and it tells web robots which pages to crawl and which not to crawl. The REP also includes directives like meta robots, as well as page-, subdirectory-, or site-wide instructions for how search engines should treat links (such as "follow" or "nofollow").

Let's say a search engine is about to visit a site. Before it visits the target page, it will check the robots.txt for instructions.

The basic format of a robots.txt file looks like this:


User-agent: [user-agent name]
Disallow: [URL string not to be crawled]

Together, these two lines are considered a complete robots.txt file.


User-agent: *
Disallow: /

The above code is the actual skeleton of a robots.txt file. The asterisk after "User-agent" means that the robots.txt file applies to all web robots that visit the site. The slash after "Disallow" tells the robot not to visit any pages on the site.
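
To see what that skeleton actually does, here is a minimal Python sketch using the standard library's urllib.robotparser module; the example.com URLs are just placeholders, and the printed values are what the parser reports for this two-line file.

# A small sketch: feed the two-line "block everything" file to Python's
# built-in robots.txt parser and check a couple of placeholder URLs.
import urllib.robotparser

skeleton = """
User-agent: *
Disallow: /
""".splitlines()

parser = urllib.robotparser.RobotFileParser()
parser.parse(skeleton)

# Every path is disallowed for every user-agent, including the homepage.
print(parser.can_fetch("*", "https://www.example.com/"))                  # False
print(parser.can_fetch("Googlebot", "https://www.example.com/any-page"))  # False
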
You might be wondering why anyone would want to stop web robots from visiting their site. This is where the secret to this SEO hack comes in. You probably have a lot of pages on your site, right? If a search engine crawls your site, it crawls every one of those pages, and it will take the search engine bot a while to get through them all, which can have negative effects on your ranking. That's because Googlebot (Google's search engine bot) has a crawl budget.

This is how Google explains it:

1. Crawl rate limit

This limits the maximum fetching rate for a given site. The crawl rate can go up and down based on a couple of factors:

a) Crawl health: if the site responds quickly for a while, the limit goes up, meaning more connections can be used to crawl. If the site slows down or responds with server errors, the limit goes down and Googlebot crawls less.

b) Limit set in Search Console: website owners can reduce Googlebot's crawling of their site.

2. Crawl Demand

Even if the crawl rate limit isn't reached, if there's no demand from indexing, there will be low activity from Googlebot. The two factors that play a significant role in determining crawl demand are:

a) Popularity: URLs that are more popular on the Internet tend to be crawled more often to keep them fresher in Google's index.

b) Staleness: Google's systems attempt to prevent URLs from becoming stale in the index.

Crawl budget: taking crawl rate and crawl demand together, the crawl budget is the number of URLs Googlebot can and wants to crawl.

Finding your robots.txt file:

If you just want a quick look at your robots.txt file, or at any other site's, all you have to do is type the site's base URL into your browser's address bar (e.g. abc.com, example.com, etc.) and then add /robots.txt onto the end.

One of the following things will happen (a quick way to check this from code is sketched after the list):

1. You'll find a robots.txt file.
2. You'll find an empty file.
3. You'll get a 404 error.
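
Here is a rough Python sketch of that check, using only the standard library; example.com is a placeholder for whichever site you want to inspect, and the three branches map onto the three outcomes above.

# A small sketch: fetch a site's robots.txt and report which outcome applies.
import urllib.error
import urllib.request

url = "https://example.com/robots.txt"   # placeholder domain

try:
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8", errors="replace").strip()
    if body:
        print("1. Found a robots.txt file:")
        print(body)
    else:
        print("2. The robots.txt file exists but is empty.")
except urllib.error.HTTPError as error:
    if error.code == 404:
        print("3. No robots.txt file here (404 error).")
    else:
        print("Request failed with HTTP status", error.code)
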



Let's see a few examples of robots.txt for the www.abcxyz.com site.

Robots.txt file URL: www.abcxyz.com/robots.txt


User-agent: *
Disallow: /

The above syntax tells all web crawlers not to crawl any pages on www.abcxyz.com, including the homepage.



User-agent: *
Disallow:

The above syntax tells all web crawlers that they may crawl all pages on www.abcxyz.com, including the homepage.


User-agent: Googlebot
Disallow: /abcxyz-subfolder/

The above syntax tells only Google's crawler (Googlebot) not to crawl any pages whose URL contains the string www.abcxyz.com/abcxyz-subfolder/.


User-agent: Bingbot
Disallow: /abcxyz-subfolder/blocked-page.html

The above syntax tells only Bing's crawler (Bingbot) to avoid crawling the specific page at www.abcxyz.com/abcxyz-subfolder/blocked-page.html.

Technical Phrases:

1. User-agent: The specific web crawler to which you're giving crawl instructions (search engine).
2. Disallow: The command used to tell a user-agent not to crawl a particular URL. Only one "Disallow:" line is allowed for each URL.
3. Allow: Only applicable to Googlebot. The command to tell Googlebot it can access a page or subfolder even though its parent page or subfolder may be disallowed.
4. Crawl-delay: How many seconds a crawler should wait before loading and crawling page content.
5. Sitemap: Used to call out the location of any XML sitemap(s) associated with this URL. Only supported by Google, Ask, Bing, and Yahoo. (The sketch after this list shows how these directives can be read from code.)
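
As referenced above, here is a minimal Python sketch that parses a hypothetical robots.txt containing these directives and reads them back with the standard library's urllib.robotparser; the domain, paths, and user-agent names are made up for illustration, and site_maps() needs Python 3.8 or newer. Note that this simple parser applies the first matching rule, so the Allow line is listed before the Disallow it carves an exception out of.

# A small sketch: parse a hypothetical robots.txt and read the directives back.
import urllib.robotparser

rules = """
User-agent: Googlebot
Allow: /abcxyz-subfolder/public-page.html
Disallow: /abcxyz-subfolder/

User-agent: *
Crawl-delay: 10

Sitemap: https://www.abcxyz.com/en-gb/sitemap.xml
""".splitlines()

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules)

# Googlebot is blocked from the subfolder, except the explicitly allowed page.
print(parser.can_fetch("Googlebot", "/abcxyz-subfolder/other-page.html"))   # False
print(parser.can_fetch("Googlebot", "/abcxyz-subfolder/public-page.html"))  # True

# Other crawlers are unrestricted, but asked to wait 10 seconds between fetches.
print(parser.can_fetch("Bingbot", "/abcxyz-subfolder/other-page.html"))     # True
print(parser.crawl_delay("Bingbot"))                                        # 10

# The sitemap location declared in the file (Python 3.8+).
print(parser.site_maps())   # ['https://www.abcxyz.com/en-gb/sitemap.xml']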

Some points to be noted:

  • A robots.txt file must be placed in a website's top-level directory; web-crawling robots only look for the file in one specific place: the main directory (root domain or homepage). If a user agent visits www.abcxyz.com/robots.txt and does not find a robots file there, it will assume the site does not have one and proceed with crawling everything on the site.
  • The file must be named "robots.txt"; the name is case sensitive (not Robots.txt, robots.TXT, or any other variation).
  • Each subdomain on a root domain uses its own robots.txt file. This means that blog.abcxyz.com and abcxyz.com should each have their own robots.txt file.
  • The /robots.txt file is publicly available: just add /robots.txt to the end of any root domain to see that website's directives. This means that anyone can see which pages you do or don't want crawled.
  • It's generally a best practice to indicate the location of any sitemaps associated with this domain at the bottom of the robots.txt file. Example:


User-agent: *
Allow: /*.html
Disallow: /*/data/*
Sitemap: https://www.abcxyz.com/en-gb/sitemap.xml

Some common cases that justify why we need robots.txt:

  • Preventing duplicate content from appearing in SERPs
  • Keeping entire sections of a website private
  • Keeping internal search engine results pages from showing up on a public SERP
  • Preventing search engines from indexing certain files on your website
  • Specifying the location of sitemap(s)
  • Specifying a crawl delay in order to prevent your servers from being overloaded when crawlers load multiple pieces of content at once
If there are no areas of your site where you want to control user-agent access, you may not need a robots.txt file at all.

How does robots.txt work?

Search engines have two main jobs:

1. Crawling the web to discover content
2. Indexing that content so that it can be served up to searchers who are looking for information

To crawl sites, search engines follow links to get from one site to another, crawling across many links and websites. This crawling is also known as "spidering".

After arriving at a website but before spidering it, the search crawler will look for a robots.txt file. If it finds one, the crawler will read that file first and then continue crawling the site according to its directives. If there is no robots.txt file, it will proceed to crawl the entire website.
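
To make that sequence concrete, here is a hedged Python sketch of a polite crawler: it reads the live robots.txt first, then checks each URL against it and honours any crawl delay before moving on. The domain, the page list, and the "MyCrawlerBot" user-agent are all placeholders.

# A small sketch of "check robots.txt before spidering the site".
import time
import urllib.robotparser

SITE = "https://www.example.com"           # placeholder domain
PAGES = ["/", "/blog/", "/private/data/"]  # placeholder URLs discovered via links

parser = urllib.robotparser.RobotFileParser()
parser.set_url(SITE + "/robots.txt")
parser.read()                              # fetch and parse the live file

# Fall back to a 1-second pause if the file sets no Crawl-delay.
delay = parser.crawl_delay("MyCrawlerBot") or 1

for path in PAGES:
    url = SITE + path
    if parser.can_fetch("MyCrawlerBot", url):
        print("Crawling:", url)            # a real crawler would fetch and index it here
    else:
        print("Skipping (disallowed):", url)
    time.sleep(delay)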






That's all from my end...

If you have any queries, feel free to write them in the comments down below.
Stay tuned for more digital advertising!!

Thank You...