Googlebot is Google's web crawling bot (sometimes also called a “spider”). Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index. We use a huge set of computers to fetch (or “crawl”) billions of pages on the web. Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site. (From Google: Googlebot)
The way Google caches, fetches, and captures your page content differs from a browser's. While Googlebot can crawl scripting, that doesn't mean it will always succeed. And just because you test a redirect in your browser and it works doesn't mean that Googlebot is properly redirecting that traffic. It took some dialogue between our team and the hosting company before we figured out what they were doing… and key to finding out was using the Fetch as Google tool in Webmaster Tools.
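To make that failure mode concrete, here is a minimal, hypothetical sketch (not the hosting company's actual code) of User-Agent-dependent redirect logic: browsers get a proper HTTP 301, while anything identifying as Googlebot gets a 200 page with a JavaScript redirect the crawler may never execute. The `/new-page` path is a placeholder.

```python
def choose_redirect(user_agent: str):
    """Decide how to redirect a request to the old URL.

    Hypothetical sketch of User-Agent sniffing: the kind of logic that
    makes a redirect "work" in a browser while leaving Googlebot behind.
    Returns (HTTP status, redirect mechanism).
    """
    if "Googlebot" in user_agent:
        # Fragile: the crawler receives a 200 and a script it may not run,
        # so from Google's point of view no redirect ever happens.
        return 200, "javascript -> /new-page"
    # Browsers receive a real HTTP redirect and follow it transparently.
    return 301, "Location: /new-page"
```

A server-side 301 served identically to every client, regardless of User-Agent, avoids this divergence entirely.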
The Fetch as Google tool lets you enter a path within your site, see whether Google was able to crawl it, and view the crawled content exactly as Google sees it. For our first client, we were able to show that Google was not reading the script as they had hoped. For our second client, we were able to use a different method to redirect Googlebot.
If you see Crawl Errors within Webmaster Tools (in the Health section), use Fetch as Google to test your redirects and view the content that Google is retrieving.