Sebo Marketing

How Does Google Rank Websites

Google has been the top search engine for decades now. Its main purpose is to provide the best results, as fast as possible, to as many people as possible. If you have an online business, then you have probably thought about how Google ranks websites and how you can improve your rankings.

There are three main aspects to the method Google uses to rank websites. First is the index. Google stores an index, similar to the index in the back of a book, that contains information about all the publicly available websites online. Second is search intent. Google needs to understand what the person performing the search is actually looking for in order to provide the best result. And third is the algorithm. The algorithm uses ranking factors to look through the index and find results that best match the intent of the searcher.

Pretty simple, right? In concept it might seem that way, but in practice it’s not simple at all; otherwise Google wouldn’t be a billion-dollar company. It is not easy to develop software that can understand the intent of a phrase beyond its literal definition. It’s easy to know whether a web page contains a specific phrase, but it’s hard for a software system to determine whether that page will provide the most relevant information for a specific search. In this article I will briefly explain each of the three aspects Google uses to rank websites.

 

Google’s Web Crawlers and Index

 


The first step Google takes to build its index is to send out its web crawlers (also called spiders or search engine bots). These crawlers start with known websites or websites that have been submitted to Google. They visit these pages and download their information onto Google’s servers. They also look for links to other webpages; if they find a link, they follow it and add that webpage’s information to Google’s index as well. Web crawlers run continuously.
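To make the crawl-and-index idea concrete, here is a toy sketch of the process. It uses a hypothetical in-memory “web” (all URLs and page text are made up for illustration) instead of real HTTP requests, but the core loop is the same: visit a page, store its content, follow its links.

```python
from collections import deque

# A toy "web": page URL -> (page text, links found on that page).
# Every URL and snippet here is invented for illustration.
WEB = {
    "example.com/home":  ("Welcome to our site", ["example.com/blog", "example.com/about"]),
    "example.com/blog":  ("SEO tips and tricks", ["example.com/home"]),
    "example.com/about": ("About our company", []),
}

def crawl(seed_urls):
    """Breadth-first crawl starting from known seed pages,
    storing each page's text in an index keyed by URL."""
    index = {}
    queue = deque(seed_urls)
    while queue:
        url = queue.popleft()
        if url in index or url not in WEB:
            continue                 # already crawled, or unreachable
        text, links = WEB[url]
        index[url] = text            # "download" the page into the index
        queue.extend(links)          # follow every discovered link
    return index

index = crawl(["example.com/home"])
print(sorted(index))                 # all three pages get discovered
```

Starting from a single seed page, the crawler discovers the other two pages purely by following links, which is why links are so central to how Google finds content.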

The internet is a huge place, so web crawlers need to prioritize which pages to visit. Factors that influence what the crawler visits first include: how many pages link to a page (the more links pointing to a page, the more important it is and the more often it needs to be crawled); how often the page is updated (pages that frequently change their content need to be recrawled so Google stays up to date); and the crawl rules a site sets in its robots.txt file.

All the information that the web crawlers gather is stored on Google’s servers for use by Google’s algorithm in determining what pages to display as results when a search happens. This is the first step in providing the best results to anyone searching on Google.

 

Determining Search Intent

Even people can misunderstand what someone is actually asking in a normal conversation, so Google has to work hard to understand the intent behind a user’s search. For every search, Google seeks to provide the best information the person is looking for, so they have to understand what that person actually wants, not just the meaning of the words they used. For example, someone in America searching for “football scores” is most likely looking for scores from recent NFL games, but plenty of websites provide scores of historical games, scores of football games in other countries, and explanations of how scoring in football works.

To you or me, it might be simple to know the actual intent behind a search, but it is much more complex for a program to understand. Over the years, Google has spent a lot of time and money creating methods to understand what their users are actually looking for. They have taken huge leaps in improving their software’s understanding of synonyms, context, timing (for example, when someone searches for time-sensitive information like a recent NBA game’s score), and much more.

It is through this understanding of intent that Google is able to look through their index of pages and find the pages that will best serve the user.

 

Google’s Algorithm

Once Google’s web crawlers have found a page and added it to Google’s index, Google organizes those pages and makes note of different signals from each page. Then, when someone conducts a search, Google uses the signals from their indexed pages to find the pages they believe will be the best results for that search.

In Google’s own words, their ranking system is designed to:

“…sort through hundreds of billions of webpages in our Search index to find the most relevant, useful results in a fraction of a second, and present them in a way that helps you find what you’re looking for.”

There are key factors that Google looks at when determining what results will be best for every search: the meaning of your query, the relevance of webpages, the quality of content, the usability of webpages, and context and settings.

 

Meaning of your query

As stated before, the first step in providing the best results is understanding the query. Google builds language models that work out which strings of words to look up in the index in order to provide the right results. Parts of their algorithms exist just for this. For example, their freshness algorithm determines whether someone is looking for content that is new. If someone searches for “flights out of slc”, they want information that is as up to date as possible, so the best results will be the freshest ones.

 

Relevance of webpages

The next step is for Google to find the relevant pages. The most fundamental factor that determines relevance is whether the searched-for keywords appear on a webpage. If someone searches for “learn how to write a short story”, a page that doesn’t contain terms like “write”, “short story”, or “how-to” isn’t very likely to be helpful. However, a page titled “Begin Writing Short Stories: Get the Best Tips for New Writers” is much more likely to have the information the searcher is looking for.
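The keyword-matching idea above can be sketched as a toy scoring function. This is in no way Google’s actual relevance model, which weighs far more signals; it is just a minimal illustration of why pages containing the query’s terms score higher than pages that don’t (the query and page titles are the ones from the example above):

```python
def relevance_score(query, page_text):
    """Toy relevance score: the fraction of query terms that appear
    somewhere in the page text. Crude substring matching, purely
    for illustration of the basic idea."""
    terms = query.lower().split()
    text = page_text.lower()
    hits = sum(1 for t in terms if t in text)
    return hits / len(terms)

query = "learn how to write a short story"
page_a = "Begin Writing Short Stories: Get the Best Tips for New Writers"
page_b = "Ten easy weeknight dinner recipes"

# The page about writing short stories outscores the unrelated one.
print(relevance_score(query, page_a) > relevance_score(query, page_b))   # True
```

Real ranking systems go far beyond this, handling synonyms, word order, and anonymized interaction data, but keyword presence remains the most fundamental relevance signal.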

Google also uses anonymized interaction data, converted into signals their systems can use to estimate relevance. Basically, if someone searched for something similar in the past, then visited a specific page and interacted positively with it, Google will see that page as more relevant.

 

Quality of content

A page with the right words doesn’t necessarily have high-quality content. If someone were to search for “information on new male fashion trends”, a page titled “Men’s 2020 Fashion Trends” that only showed ads for men’s clothing and didn’t provide any actual information on fashion trends wouldn’t be a good result.

Google wants to find pages that show expertise, authoritativeness, and trustworthiness on the topic that was searched. Links to a page are one of the main signals Google uses to determine whether a page shows these qualities. If a lot of other established websites link to a certain page, Google views that page as high-quality.
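The link-counting idea is the intuition behind Google’s original PageRank algorithm. Here is a heavily simplified sketch (the link graph and URLs are made up, and real rankings combine link signals with many other factors): each page spreads its score evenly across the pages it links to, so pages with many inbound links accumulate higher scores.

```python
# A toy link graph: page -> pages it links to (all invented URLs).
links = {
    "a.com":   ["hub.com"],
    "b.com":   ["hub.com", "a.com"],
    "c.com":   ["hub.com"],
    "hub.com": [],
}

def simplified_pagerank(links, iterations=20, damping=0.85):
    """Stripped-down PageRank: repeatedly redistribute each page's
    score across its outbound links until scores settle."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:                       # dangling page: spread everywhere
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

rank = simplified_pagerank(links)
# hub.com, with three inbound links, ends up with the highest score.
print(max(rank, key=rank.get))                 # hub.com
```

The page everyone links to ends up on top, which is exactly the “established websites vouching for a page” effect described above.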

Google also uses a “search quality evaluation process” to take feedback and improve their ability to judge the quality of a webpage. The algorithm combines that feedback with additional information, such as spammy practices the site might be involved in, to determine whether a webpage will be a quality result.

 

Usability of webpages

Google really wants a webpage to be as user-friendly as possible for whoever might visit it. This may seem similar to the quality of the page, but it’s quite different: usability refers to how easy it is for all types of users to access and navigate the page. Sometimes a webpage that works great in Chrome won’t work well in Firefox. A website that works well on desktop might have a terrible layout when viewed on a mobile device. These are important factors when Google is ranking websites.

 

Context and settings

Context and settings come back to the intent of the user. If a user lives in Utah and they search for “NBA game score”, they are most likely looking for the score of the Jazz’s most recent basketball game. Google uses their algorithms to understand this and show the most recent scores.

Another aspect of context is personalization. Google can use your recent search history to determine what types of results you are looking for. If you search for “Utah” and recently searched for “Utah vs BYU”, Google will use that as an indication that you may be searching for information on the University of Utah football team.

 

What Google’s Updates Tell Us About Rankings

 

Now that you better understand the methods Google uses to determine the search ranks, what lessons can we take away from Google’s updates? These updates will show us what Google values and where our efforts should go when trying to improve our website’s rankings.

 

One of Google’s first big updates was Panda. First implemented in 2011, this update was primarily about reducing the number of low-quality and thin-content results. It also sought to reward unique and compelling content. Google used human quality raters to help develop the algorithm’s ability to judge content, asking them questions like, “Would you be comfortable giving this site your credit card? Would you be comfortable giving medicine prescribed by this site to your kids?”

 

Penguin, an update implemented in 2012, was an effort by Google to penalize websites that were trying to “game the system”. These websites violated Google’s Webmaster Guidelines through keyword stuffing, unnatural linking, and similar tactics.

 

Hummingbird, implemented in 2013, improved Google’s ability to understand complex queries and return the most relevant results. They focused on topics and understanding synonyms within the query, so pages without exact match phrases still have a chance to show in the results.

 

A quality update implemented in 2015 was an effort to demote thin content and clickbait. It also targeted pages with heavy advertising, content-farm articles, and mass-produced “how-to” pages.

 

The intrusive interstitials update, implemented in 2017, penalized intrusive pop-ups that hurt the user experience, especially on mobile.

 

BERT, rolled out in 2019, was one of the biggest updates of the last five years. It is another update aimed at helping Google understand search queries, this time with a focus on nuance and context.

 

What can we learn from all these updates? There are two clear takeaways. First, Google cares a lot about search intent. As I’ve already stated, Google needs to understand what someone is looking for in order to give the best result. There is little you can do directly to influence Google’s understanding of search intent.

 

The second takeaway is quality. Google really wants to provide the highest quality content that actually provides answers to the search intent of the queries. This is where your efforts can really make a difference.

 

The first thing you need to do is make sure that the content you are providing is something people are actually searching for. If you write a blog about how to cut your grass with a pair of scissors, you’re never going to get many visits, no matter how high-quality your content is. No one is searching for “best methods to cut grass with scissors”.

 

I do want to add a caveat, however. Just because there aren’t a lot of searches for your content doesn’t mean you shouldn’t provide it. Content that is more unique and has fewer searches is often easier to rank for: you’ll likely have less competition and a much better chance of getting to the top of the results page.

 

The second thing you need to do is provide good content. Google’s updates show that they are working hard to surface quality content that answers the searcher’s intent. This means you don’t have to repeat the exact keyword all through your content. You need to provide content that gives the answers and services people want. If you can create that content, you have taken the best step toward ranking well in Google search.

 

Summary

 

Google crawls the internet. They store the data they find. When someone performs a search, Google uses their algorithm to determine the intent of the user and the pages that will best serve that user.

 

For your website, your job is to provide high-quality content that people are searching for. That means a mobile-friendly website that is usable on all browsers and a website that you’d like to visit. If you do this, you’ll be well on your way to ranking well within Google’s results.

 

Contact us if you’d like your website to rank better on Google search.
