Glossary of Technical SEO Terms
By Matthew Edgar · Last Updated: October 20, 2023
The technical side of SEO is full of jargon and acronyms. I have assembled this glossary to help make sense of all these words and phrases. Each term in this glossary includes a simple definition to give you a baseline understanding of what it means, as well as links to additional information. Learning these terms will help improve your technical SEO work.
If there are other terms that you have heard or would like to see added to this list, please contact me.
Accelerated Mobile Pages (AMP)
AMP is a framework developed by Google and others to create faster mobile pages. In most cases, AMP creates a separate website alongside the main, non-AMP website.
Algorithm (Algo, Algo Updates)
An algorithm is a set of instructions programmed into a computer to solve a particular problem. Search engines use a variety of algorithms to fetch, transform, evaluate, and rank data from websites. Google regularly releases algo updates that revise these algorithms. An update can change specific factors targeting particular types of websites, can more broadly affect all websites, or, often, can do some combination of both. When an algorithm update is released, you need to monitor for impact and respond accordingly.
Bing Webmaster Tools
Bing Webmaster Tools (BWT) is a free tool from Bing that provides details about how a website performs in Bing’s organic search results. Bing Webmaster Tools also includes tools to evaluate a website, including a Site Scan, which can be a helpful way to spot problems on the website. Learn more or set up Bing Webmaster Tools for your website.
Bot (Robot, Spider, Crawler)
The term “robots” is a convenient way to refer to a complex collection of different programs (or algorithms) that search engines use to understand and evaluate websites.
Canonical URL
A canonical URL is the official, or preferred, version of a URL. The canonical URL is defined in a link tag contained in the head of the HTML document. This can help resolve duplicate content issues. Similarly, there is also the concept of a canonical domain, which is the preferred and official version of the website’s domain (for example, with or without www, or served over https versus http).
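As an illustration (the URL here is hypothetical), the canonical URL is declared in the head of the HTML document like this:
<link rel="canonical" href="https://www.example.com/products/blue-widget" />
Any alternate versions of the page, such as URLs with tracking parameters, would point to this same canonical URL.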
Click (SERP Click)
In Google Search Console, the click metric counts how many times people clicked through to a page on the website from the search engine results page (or SERP). This includes clicks on various search features but does not include clicks on ads. Clicks are deduplicated, so multiple clicks on the same search result will only count as a single click.
Core Web Vitals (CWV)
Core Web Vitals is part of Google’s Page Experience algorithm and is designed to measure three aspects of a website’s user experience: loading speed (Largest Contentful Paint, or LCP), interactivity (First Input Delay, or FID), and visual stability (Cumulative Layout Shift, or CLS). Starting in March 2024, Google will include Interaction to Next Paint (INP) as a replacement for FID.
Crawl
Crawling is one of two primary operations for search engine robots. A robot is designed to crawl every URL it finds unless methods are used to prevent crawling. During a crawl, the robot fetches the content from a known URL and saves that content for further processing by the search engine. The primary SEO crawling goal is to ensure a robot can successfully find everything it should while finding nothing it shouldn’t.
Crawl Budget
Crawl budget can mean two different things. First, it can represent the ratio between two different numbers: the number of files that a search engine robot has crawled on a website and the number of files that a search engine robot could crawl on a website. This ratio indicates if the bot is finding every page on the website during a crawl.
Second, crawl budget can also refer to the load on the website’s server. How much capacity does the server have to withstand crawls from bots? Of that capacity, how much is the bot using? For most websites, server capacity is not a concern.
Cumulative Layout Shift (CLS)
Cumulative Layout Shift (CLS) measures visual stability. If elements move or shift around the page unexpectedly, that can significantly disrupt the visitor’s experience interacting with the website. If elements shift in response to a visitor’s interaction with a page, that will not present a CLS problem, provided a user understands the shifting is in response to their interaction.
Disallow and Allow
The disallow command is specified within the robots.txt file. With this command, the website states that robots are not allowed to crawl a particular file or directory. In contrast, the allow command states that robots are allowed to crawl a particular file or directory. Note that the disallow “command” is really more of a suggestion that bots do not have to follow: Googlebot will typically respect a disallow statement, but not always.
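As a simple sketch (the directory and file names are made up for illustration), a robots.txt file might combine these commands like so:
User-agent: *
Disallow: /admin/
Allow: /admin/public-report.html
Here, all robots are told not to crawl anything in the /admin/ directory except the one allowed file.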
First Input Delay (FID)
First Input Delay (FID) measures how quickly the website is usable. To be considered good, a website’s FID must be under 100 milliseconds. This does not mean that the website needs to fully load within 100 milliseconds; rather, when a visitor first attempts to interact with the website, the website needs to respond to that interaction within 100 milliseconds. This is currently a Core Web Vitals metric but will be replaced by Interaction to Next Paint (INP) in 2024.
Google Search Console
Google Search Console (GSC) is a free tool from Google that provides information about how Google crawls, indexes, and ranks websites in organic search results. Learn more in my video series about Google Search Console or set up Google Search Console for your website.
Helpful Content
As part of its algorithms, Google evaluates the helpfulness of content on a website. With this evaluation, Google programmatically determines whether a page will be helpful for visitors. Google looks at a variety of factors as part of helpfulness, including the originality of the content, the quality of production, the user experience of the page, and more. Google’s support articles provide questions you can use to evaluate your website’s content and determine whether it is likely to be considered helpful.
hreflang
A website may translate some pages into different languages to reach visitors speaking other languages or located in different countries. This results in multiple versions of the same page on the website. For example, a product page may be available in American English, British English, French, and Spanish. The hreflang attribute tells Google how these pages are related. It can be provided on the page itself in a <link> tag or in the XML sitemap. Learn more about hreflang in Google’s guidelines.
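For illustration, assuming a hypothetical product page with American English, British English, French, and Spanish versions, the hreflang link tags in the page’s head might look like this:
<link rel="alternate" hreflang="en-us" href="https://www.example.com/en-us/product" />
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/en-gb/product" />
<link rel="alternate" hreflang="fr" href="https://www.example.com/fr/product" />
<link rel="alternate" hreflang="es" href="https://www.example.com/es/product" />
Each version of the page lists every version, including itself.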
HTTP Header
HTTP Headers are text fields sent by the browser to the server and by the server to the browser. The HTTP Headers contain instructions about the requested file, including when the requested file was last updated, how that file is encoded, how to cache the file, and more.
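For example (the values shown are illustrative), a server’s response headers for an HTML page might include:
Content-Type: text/html; charset=UTF-8
Last-Modified: Tue, 03 Oct 2023 14:22:05 GMT
Cache-Control: max-age=3600
These correspond to the file’s encoding, when it was last updated, and how it should be cached.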
HTTPS (SSL)
HTTPS stands for Hypertext Transfer Protocol Secure and secures the connection to the website. With HTTPS, data is encrypted when sent to and retrieved from the server. Creating a secure connection with a website requires an SSL certificate. SSL stands for Secure Sockets Layer, the security protocol that establishes the secure link between the browser and the server (its successor, TLS, is what is used today, though the term SSL certificate has stuck). Since 2014, HTTPS has been a ranking factor for Google, which means serving the website securely will likely help it rank higher in search results.
Impression (SERP Impression)
In Google Search Console, the impression metric counts the number of times people saw the website listed in search results, not including any ads. Impressions can be viewed on the Google Search Console performance report.
Index
Indexing is the second major operation of search engine robots. While crawling, robots add all the files and information found to a database, and after the crawling is complete, robots (using algorithms) decide how to organize, or index, all the files found.
Interaction to Next Paint (INP)
Interaction to Next Paint (INP) is a new Core Web Vitals metric Google will begin using in 2024. It replaces First Input Delay (FID). INP measures how long it takes the webpage to respond to a visitor’s interaction. INP evaluates all interactions on a webpage but only the longest interaction time will be used as the page’s final INP value. To be considered good, INP must be less than 200 milliseconds.
Largest Contentful Paint (LCP)
Largest Contentful Paint (LCP) measures when the largest element on a page is rendered (displayed) in the browser. To be considered good, the largest element needs to load within the first 2.5 seconds. The largest element on a page typically contains the main content of the page, such as a key image or the main block of text. As a result, the longer a visitor must wait for that element to load, the worse the overall experience.
Main Content (MC)
This is a type of content defined in Google’s Search Quality Rater Guidelines (SQRG or QRG). Main content represents any content that helps the page achieve its primary purpose. Main content, or MC, is the content Googlebot will primarily use to determine where to rank a page in search results and is the most critical content for Googlebot to see across all devices.
Manual Action
Google has human reviewers who monitor websites for anything that attempts (or appears to attempt) to manipulate search results. If a human reviewer determines that something about a website is manipulative, Google applies a manual action to the website and notifies the website owners about this action in Google Search Console. A manual action can cause pages to fall out of search results or rank at much lower levels than before. Once the company running the website takes corrective action, the manual action can be reevaluated by Google and lifted if the issue is corrected.
Meta Tag
A meta tag is located in the head of an HTML document and contains information describing the page. This can include a summary of the page’s content (meta description), the author of the page, the character encoding, the viewport size, how robots should interact with the page (meta robots), and more.
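As a sketch (the description text is invented), the head of a page might contain meta tags like these:
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="A short summary of this page's content.">
<meta name="robots" content="index, follow">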
Mobile First
As the name suggests, mobile-first indexing means Googlebot crawls and evaluates the mobile website first and uses what it finds on the mobile website to decide where to rank the website in search results. Googlebot will still crawl the desktop website as well, but usually not as often as it crawls the mobile website, and what it finds on the desktop website has less influence on where the website ranks.
Nofollow (Meta Robots Nofollow)
A meta robots nofollow instructs robots not to crawl any of the links on that page. If nothing is specified in the meta robots tag, then robots will assume crawling links is an allowed operation.
Note: Be sure to see rel nofollow below!
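For example, a meta robots nofollow tag placed in the page’s head looks like this:
<meta name="robots" content="nofollow">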
Noindex (Meta Robots Noindex)
A meta robots noindex tells robots they can crawl the page but are not allowed to index the page. If a noindex is not specified, robots will assume they can index any page found. To prevent indexing (and, therefore, ranking), the content attribute of the meta robots tag can be set to noindex.
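For example, to block indexing, the meta robots tag in the page’s head would read:
<meta name="robots" content="noindex">
The noindex and nofollow values can also be combined, as in content="noindex, nofollow".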
Not Found (404, 410)
When a robot or human visitor attempts to access a file that cannot be found on the website, the result is referred to as a not-found error. Sometimes this is called a 404 error, derived from the status code the server commonly returns when a requested file cannot be found. It can also be referred to as a 410, another status code returned for not-found files, which indicates the file was deliberately removed.
Noscript
A <noscript> tag contains content visitors or robots should see if they cannot use JavaScript to load the page. If main content (the content critical to understanding a website) can only be loaded via JavaScript, using a <noscript> tag can help make a page more usable and accessible.
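As a simple sketch (the message text is a placeholder), a noscript tag might provide a fallback like this:
<noscript>
  <p>JavaScript is disabled in your browser. The product list below is a static version of this page's content.</p>
</noscript>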
Quality Rater Guidelines, Search Quality Rater Guidelines (QRG or SQRG)
Google works with thousands of human raters to evaluate websites. While the ratings provided by raters do not influence rankings directly, the ratings do influence the algorithms used to rank websites. Google provides a series of guidelines that these raters use to evaluate sample websites, called the Search Quality Rater Guidelines, or sometimes more simply the Quality Rater Guidelines.
Rank (Ranking Position)
After crawling and indexing a website, search engines review the information extracted from each website contained in the index to determine where the website ought to appear, or rank, in search results (if it should appear at all). If and when a website ranks, it ranks at a particular position on the SERP.
Redirect (301, 302)
A redirect sends visitors, human and robot alike, from one URL to another. The URL redirected from is called the redirect source or origin, and the URL redirected to is referred to as the redirect destination or target. The numbers refer to the status code returned with the redirect: a 301 indicates a permanent redirect, while a 302 indicates a temporary redirect.
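For illustration (the URL is hypothetical), a server responding with a 301 redirect sends the status code along with a Location header pointing to the destination:
HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/new-page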
Redirect Chain
A redirect chain is where a URL redirects multiple times before arriving at the destination. Each redirect in the chain is called a hop. Robots waste resources crawling through redirect chains and may simply stop following the chain after a certain number of hops, meaning the robots may not locate the final page in the redirect chain.
Redirect Loop
A redirect chain that circles back onto itself is called a redirect loop. No destination can be reached by following the redirects, meaning visitors will be unable to access any pages within the redirect loop. Robots will waste crawl budget and human visitors will see an error message in their browser.
Rel Nofollow (Rel Sponsored, Rel UGC, Link Qualifiers)
Every link can be qualified within the <a> tag’s rel attribute. The main purpose of these qualifiers is to explain why a given link is included on a page. Not every link needs to be explained; however, links with monetary relationships and links generated by users should be. The rel=”sponsored” qualifier is for any paid link, and rel=”ugc” indicates the link is part of user-generated content. The nofollow qualifier can still be used either alongside or instead of the sponsored and ugc qualifiers, though it is not as descriptive.
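For example (the URLs and anchor text are hypothetical), qualified links might look like this:
<a href="https://partner.example.com/" rel="sponsored">Our advertising partner</a>
<a href="https://example.org/user-post" rel="ugc">A link shared in the comments</a>
<a href="https://example.net/" rel="nofollow">A link we do not want to vouch for</a>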
Render (Client-Side Rendering, CSR, Server-Side Rendering, SSR)
The process of loading content into the browser is known as rendering. Rendering can happen on the server or in the browser (the client). With server-side rendering (SSR), the server pulls together all the HTML code and sends it to the browser. The browser receives that HTML code as delivered and displays it as a web page, with no further work required by the browser. With client-side rendering (CSR), in contrast, the browser receives JavaScript code along with HTML code from the server. The browser displays the HTML code but also executes the JavaScript code, which manipulates the page’s content and design. Learn more about how bots process JavaScript code, including the difference between client-side and server-side code.
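As a minimal sketch of client-side rendering (the element id and text are placeholders), the HTML arrives mostly empty and JavaScript fills in the content after the page loads:
<div id="app">Loading…</div>
<script>
  // With client-side rendering, JavaScript replaces the placeholder after the HTML loads
  document.getElementById("app").textContent = "Product details rendered in the browser";
</script>
With server-side rendering, that same product detail text would already be present in the HTML the server delivers.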
Robots.txt
The robots.txt file is a plain text file located in the website’s root directory. The robots.txt file can contain disallow and allow commands specifying instructions for how bots should crawl the website.
Schema (Structured Data)
Schema markup offers a way to structure information contained on websites. Most information provided on a website is in an unstructured format, meaning there is no way for a machine to easily know what the text on a page represents. Schema markup adds structure to the content, allowing robots to understand what the content contains. Google will use some types of schema markup to enhance search result listings.
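As a minimal sketch (the organization name and URL are placeholders), schema markup is often added as JSON-LD inside a script tag:
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "https://www.example.com/"
}
</script>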
Search Engine Marketing (SEM)
Search Engine Marketing (SEM) typically refers to the work involved in getting traffic to a website from paid ads on search results. However, SEM can also be a broader term encompassing the work of getting traffic to a website from organic search listings as well.
Search Engine Optimization (SEO)
Search Engine Optimization (SEO) refers to the work involved in getting traffic to a website from organic search result listings.
Search Engine Results Page (SERP)
The page seen after conducting a search lists a variety of results for the search. This is referred to as the search engine results page or SERP. The SERP contains a variety of search result listings. Those listings include web pages, images, and various features added by the search engine.
Status Code (HTTP Response Status Code)
Along with returning content for a requested page, the web server also returns a numerical code that indicates the page’s status, called an HTTP Response Status Code. The status code says if the page is operating correctly, is in error, requires authentication, and more.
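For example, the status code appears in the first line of the server’s response; common codes include 200 (OK), 301 and 302 (redirects), 404 (not found), and 500 (server error):
HTTP/1.1 200 OK
HTTP/1.1 404 Not Found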
Supplemental Content (SC)
This is a type of content defined in Google’s Search Quality Rater Guidelines (SQRG or QRG). Supplemental content represents all the other content on a page that isn’t main content. This might be additional content, calls to action, or content that helps people navigate the website, but supplemental content, or SC, is not critical to understanding the page’s main purpose. Note that ads are not considered part of supplemental content and are a separate content type defined in the SQRG.
Time to First Byte (TTFB)
Time to First Byte (TTFB) measures how long it takes from when a URL is requested to when the first byte of data is returned from the server. This is highly correlated with rankings in search results.
Title Tag
The title tag contains the main name of the page. It isn’t displayed to visitors on the page itself, but you can see it at the top of the browser tab, and Google also uses the title tag in search results.
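For example (the title text is invented), the title tag sits in the head of the HTML document:
<title>Blue Widgets – Example Company</title>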
Total Blocking Time (TBT)
Total Blocking Time (TBT) measures how long a web page is “blocked” and unable to respond to user interactions like clicks or taps because the browser is busy executing JavaScript, rendering content, or handling other operations. Lower TBT values indicate a more user-friendly page. TBT is often higher when Interaction to Next Paint (INP) and First Input Delay (FID) are higher.
User Agent
When a browser or robot requests a file from the server, it tells the server who is making the request by providing a User Agent. The User Agent is part of the HTTP Headers. When Google crawls a website, it uses the User Agent “Googlebot” (see the full list of user agents used by Google). For human visitors, the User Agent will include information about the browser and the operating system. The server can be programmed to respond differently based on the User Agent.
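For example (the browser version shown is illustrative), a User Agent from a desktop browser and from Google’s crawler might look like:
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36
User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)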
X-Robots
Commands to guide robots can also be specified in the HTTP header using the X-Robots Tag. This operates like the meta robots tag, allowing for control over indexing with the noindex statement and control over crawling links contained on that page via nofollow. This is typically used on images or PDFs where the HTML meta tag cannot be used.
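For example, to keep a PDF out of the index, the server can send this header along with the file:
X-Robots-Tag: noindex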