Learn how Google discovers, crawls, indexes, and serves web pages.
When you sit down at your computer and do a Google search, you’re almost instantly presented with a list of results from all over the web. How does Google find web pages matching your query, and determine the order of search results?
In the simplest terms, you could think of searching the web as looking in a very large book with an impressive index that tells you exactly where everything is located. When you perform a Google search, Google’s programs check the index to determine the most relevant search results to be returned (“served”) to you.
The three key processes in delivering search results to you are:
- Crawling – Does Google know about your site? Can Googlebot find it?
- Indexing – Can Google index your site’s content?
- Serving – Does your site have good, useful content that is relevant to the user’s search?
Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.
Google uses a huge set of computers to fetch (or “crawl”) billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider). Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.
Google’s crawl process begins with a list of web page URLs, generated from previous crawl processes and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites, it detects links on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.
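To make that discovery loop concrete, here is a minimal sketch in Python. It illustrates the general idea only, not Googlebot itself: the seed URL is a placeholder, and the standard library stands in for Google’s fetching and link-extraction machinery.

```python
# A minimal sketch of a crawl frontier: start from seed URLs,
# fetch each page, extract its links, and queue any new ones.
# This illustrates the general idea only; it is not Googlebot.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href value of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_urls, max_pages=10):
    frontier = list(seed_urls)  # pages waiting to be fetched
    seen = set(seed_urls)       # never queue the same URL twice
    fetched = 0
    while frontier and fetched < max_pages:
        url = frontier.pop(0)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue  # a dead link: a real crawler would note this
        fetched += 1
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)  # resolve relative links
            if absolute.startswith(("http://", "https://")) and absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
    return seen


# Placeholder seed; a real crawl starts from previously seen URLs
# plus Sitemap entries submitted by webmasters.
print(crawl(["https://example.com/"]))
```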
Google doesn’t accept payment to crawl a site more frequently, and it keeps the search side of its business separate from its revenue-generating AdWords service.
Googlebot processes each of the pages it crawls in order to compile a massive index of all the words it sees and their location on each page. In addition, Google processes information included in key content tags and attributes, such as Title tags and ALT attributes. Googlebot can process many, but not all, content types. For example, Google cannot process the content of some rich media files or dynamic pages.
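That “index of all the words and their location on each page” can be pictured as an inverted index. Below is a toy sketch in Python, assuming the pages have already been fetched and reduced to plain text; the URLs and page text are made up, and Google’s real data structures are vastly larger and more sophisticated.

```python
# A toy inverted index: for each word, record the pages (and word
# positions) where it occurs. This mirrors the idea of an index of
# words and their locations, not Google's actual implementation.
from collections import defaultdict

# Hypothetical, already-fetched page text keyed by URL.
pages = {
    "https://example.com/a": "kookaburra marketing consulting in sydney",
    "https://example.com/b": "search marketing tips for small business",
}

index = defaultdict(list)  # word -> [(url, position), ...]
for url, text in pages.items():
    for position, word in enumerate(text.split()):
        index[word].append((url, position))

# Serving a one-word query is then a simple index lookup.
print(index["marketing"])
# [('https://example.com/a', 1), ('https://example.com/b', 1)]
```

Storing positions as well as page URLs is what lets a search engine match multi-word phrases, not just individual words.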
When a user enters a query, Google’s computers search the index for matching pages and return the results Google believes are the most relevant to the user. Relevancy is determined by over 200 factors, one of which is the PageRank of a given page. PageRank is a measure of a page’s importance based on the incoming links from other pages: in simple terms, each link to a page on your site from another site adds to your site’s PageRank. Not all links are equal, though; Google works hard to improve the user experience by identifying spam links and other practices that negatively impact search results. The best types of links are those given because of the quality of your content.
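To make the PageRank idea concrete, here is a sketch of the classic power-iteration calculation on a tiny made-up link graph. The damping factor of 0.85 and the iteration count are textbook defaults, not details of Google’s live system, which also blends in the hundreds of other factors mentioned above.

```python
# PageRank by power iteration on a tiny hypothetical link graph.
# Each page's score is spread evenly across its outgoing links;
# the damping factor (0.85) models a surfer who sometimes jumps
# to a random page instead of following a link.
links = {  # page -> pages it links to (made-up graph)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

damping = 0.85
rank = {page: 1.0 / len(links) for page in links}

for _ in range(50):  # iterate until the scores settle
    new_rank = {page: (1.0 - damping) / len(links) for page in links}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

print(rank)  # page "c", with two incoming links, scores highest
```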
In order for your site to rank well in search results pages, it’s important to make sure that Google can crawl and index your site correctly. Google’s Webmaster Guidelines outline some best practices that can help you avoid common pitfalls and improve your site’s ranking.
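One common crawlability pitfall is a robots.txt file that blocks more than you intended. As a quick self-check, Python’s standard library can report whether a given user agent may fetch a URL under your robots.txt rules; the domain and paths below are placeholders.

```python
# Quick crawlability self-check: does your robots.txt allow
# Googlebot to fetch a given page? (The URLs are placeholders.)
from urllib.robotparser import RobotFileParser

robots = RobotFileParser("https://example.com/robots.txt")
robots.read()  # fetches and parses the live robots.txt

for path in ["https://example.com/", "https://example.com/private/"]:
    allowed = robots.can_fetch("Googlebot", path)
    print(f"{path} -> {'crawlable' if allowed else 'blocked'}")
```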
Kookaburra Marketing Consulting
Kookaburra Marketing Consulting will optimise your website to achieve the best possible Google search results for your products and services. All our work is done in accordance with Google’s guidelines.
Contact us to discuss where you’re at with your website and your website marketing.