What Are HTTP Headers
By Matthew Edgar · Last Updated: December 16, 2022
When a URL is typed into a browser (or when a search engine robot crawls a URL), a request is made to the website’s server. As part of this request, the browser or robot sends the server additional information about the request in text fields. The server then sends back a file, such as a web page or image, in response to that request. Along with sending the file’s contents back, the server also sends additional information about the requested file in text fields. These text fields sent and received by the browser, robot, and server are referred to as HTTP Headers.
There are two main types of HTTP Headers: request and response.
- Request headers are sent by the browser or robot to the server (as part of the request for a file). These include information about the user agent (in the User-Agent header) or authentication credentials (in the Authorization header).
- Response headers are sent by the server to the browser or robot (as part of the response to the request). Response headers provide more information about the file returned, including the file’s status code or the file’s last modified date (in the Last-Modified header). Some response headers can be classified as representation headers if the purpose of the header is to provide metadata about the file, like the file’s encoding. An example exchange is shown below.
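To make this concrete, here is a simplified, illustrative exchange (the URL, user agent, and header values are examples only, not a complete or definitive list). The browser sends its request headers, and the server replies with a status line followed by its response headers:
GET /about-elementive/ HTTP/1.1
Host: www.elementive.com
User-Agent: Mozilla/5.0 (compatible; ExampleBrowser/1.0)
Accept-Language: en-US

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Content-Encoding: gzip
Cache-Control: max-age=3600
Last-Modified: Tue, 01 Nov 2022 12:00:00 GMT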
How to View HTTP Headers
Viewing HTTP Headers In Chrome

HTTP Headers can be viewed in Chrome’s Developer Tools. After loading a page in Chrome, right-click and select “Inspect”. On the Developer Tools panel, go to the Network tab and make sure it is recording network activity. With the Network tab open, refresh the page in Chrome. The Network tab will now populate with all the resources required to load the page.
Click on the first item listed, which will be the URL of the page you are viewing. This will open a new panel with information about the page, including a Headers tab containing the page’s HTTP Headers. Chrome Developer Tools will show both the Request and Response headers, which can be helpful for debugging any issues that might exist within the page.
Along with viewing the HTTP Headers for the main page, you can use Chrome Developer Tools to view the HTTP Headers for the other resources required to load this page. For example, it can be useful to view HTTP Headers for images, JavaScript, or CSS files to better understand how those files are loaded.
Viewing HTTP Headers in Google Search Console

You can view HTTP Headers in Google Search Console using the URL Inspection tool for any page with a 200 status (that is, any normal page that does not redirect or return an error). In your Google Search Console account, enter any URL on your website into the search field at the top of the page. This will load the data Google has about that URL, if the URL has already been crawled by Google. If no information is found for the URL, you can click “Test Live URL” to have Google fetch the page.
Once the URL Inspection page loads, click “View Crawled Page” (or, if you have run a live test, click “View Tested Page”). On the panel that expands, click the “More Info” tab, then click “HTTP Response”. This will show the HTTP Headers Google received on its latest crawl of the page, which can be interesting to check against the live HTTP Headers seen on the site.
Along with showing the HTTP Headers found on the latest crawl, Google will also add additional information to the headers providing a bit more context about how Google has crawled the page. This includes X-Google-Crawl-Date, which shows the last date Google crawled the page.
SEO Specific HTTP Headers
There are also SEO-specific HTTP Headers that can be used to define robot directives and the page’s canonical URL. These headers can be used in place of the equivalent HTML tags and are often used on non-HTML content, such as PDF files. Note that these headers are only used by Googlebot and Bingbot; they will not be used by other robots or browsers.
X-Robots-Tag
The X-Robots-Tag header operates like the meta robots tag, defining how links contained within the file should be crawled (follow or nofollow) and whether the file should be indexed (noindex or index). Other directives can also be specified in the X-Robots-Tag header, including whether Google is permitted to create a snippet for the file (via the nosnippet or max-snippet directives) and whether images contained in this file can be indexed (via the noimageindex directive).
The X-Robots-Tag is typically defined in the .htaccess file on Apache servers. For example, this code would apply a noindex via the X-Robots-Tag header to every PDF file on the website:
<Files "*.pdf">
  Header set X-Robots-Tag "noindex"
</Files>
As another example, this code in the .htaccess file would set a noindex for the single file a-specific-file.pdf contained in the /uploads/ directory.
<If "%{REQUEST_URI} =~ m#^/uploads/a-specific-file.pdf#">
Header set X-Robots-Tag "noindex"
</If>
Link rel = “canonical”
The rel = “canonical” HTTP Header works exactly like the canonical specified in the HTML link element, at least for Googlebot. It is not clear if Bingbot supports this header. Whether specified in the HTML head or in the HTTP Header, the canonical helps to resolve duplicate content by defining a canonical, or preferred, URL.
The rel = “canonical” HTTP Header is defined with the Link HTTP Header, which requires a URL and additional parameters (in this case, the parameter will note that this link contains the canonical URL). This is typically defined in the .htaccess file on Apache servers. For example, this code in the .htaccess file defines Elementive’s about page as the canonical URL of this document.
<If "%{THE_REQUEST} =~ m#about-elementive.docx#">
Header set Link '<https://www.elementive.com/about-elementive/>; rel="canonical"'
</If>
Here is an example of the headers for this .docx file in Google Search Console, including the Link header that contains the canonical.

Common HTTP Headers
There are many HTTP Headers (view the full list at MDN), but there are a few that are of particular importance for SEO.
Location
The Location HTTP Header is required for any redirect; it is how the server indicates where the visitor should be sent instead. For example, in the screenshot below, the Location HTTP Header tells robots and browsers that the requested URL (https://www.matthewedgar.net/technical-seo/) has been moved to a new URL (https://www.matthewedgar.net/tech-seo/).
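As a rough sketch, a redirect like this could be configured on an Apache server in the .htaccess file (assuming mod_alias is enabled; the paths simply mirror the example above). The server then sets the 301 status code and the Location header automatically:
# Respond with a 301 status and a Location header pointing at the new URL
Redirect 301 "/technical-seo/" "https://www.matthewedgar.net/tech-seo/"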

Retry-After
When a temporary server error occurs, the server should respond with a 503 response code. Because the 503 response is a temporary error, the server should communicate when a browser or robot should retry accessing the page. This can be done in the Retry-After response header. The same is true for a 429 response code, which indicates too many requests have been made and the browser or robot making those requests must wait some interval of time before requesting again. The Retry-After header should be set to a date or a number of seconds to wait.
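As one hypothetical sketch of how this might be configured during planned maintenance on an Apache server (assuming mod_rewrite and mod_headers are enabled, and that /maintenance.html is a placeholder page you have created), every request is answered with a 503 and a Retry-After of one hour:
RewriteEngine On
# Return a 503 for every request except the maintenance page itself
RewriteCond %{REQUEST_URI} !^/maintenance\.html$
RewriteRule ^ - [R=503,L]
ErrorDocument 503 /maintenance.html
# Tell browsers and robots to retry in 3600 seconds (one hour)
Header always set Retry-After "3600"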
Content-Encoding
One of the best ways to improve a website’s speed is to compress the website’s files. However, a browser or robot will need to decompress (or decode) those compressed files in order to appropriately read the file. As a result, the server needs to tell browsers or robots requesting files how those files have been compressed (or encoded). This is done with the Content-Encoding HTTP Header. Two of the more common encoding methods are Gzip (represented as gzip or x-gzip) and Brotli (represented as br).
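As a hedged sketch of how compression is often enabled on Apache servers, mod_deflate can gzip the listed file types, and the server then sends Content-Encoding: gzip on those responses (assumes mod_deflate is enabled; the MIME types listed are only examples):
<IfModule mod_deflate.c>
  # Compress text-based responses; Apache adds Content-Encoding: gzip
  AddOutputFilterByType DEFLATE text/html text/css application/javascript text/plain
</IfModule>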
Related to Content-Encoding is the Transfer-Encoding header, which you’ll sometimes see in Google Search Console. The Transfer-Encoding header also indicates what encoding the server has used, but Transfer-Encoding is hop-by-hop, meaning it indicates the encoding used at each step (or hop) of the connection. Content-Encoding is end-to-end, meaning it indicates the encoding used across the entire connection.
Cache-Control (and Expires)
The Cache-Control header defines how, or if, the resource should be cached. Caching a file helps the website load faster on repeat visits because the browser does not have to re-request the file from the server.
There are multiple ways this header can be set. If a file should not be cached at all, the Cache-Control header can be set to no-store. The Cache-Control header can also be set to no-cache, which does allow caching, but only if the browser checks with the server for updates before using the cached file.
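For instance, a small hypothetical rule like the one below (assuming Apache with mod_headers; the file name pattern is purely illustrative) would prevent any caching of files whose names end in -private.html:
<FilesMatch "-private\.html$">
  # Never store these responses in any cache
  Header set Cache-Control "no-store"
</FilesMatch>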
If the file is cached, it is important to tell the browser how long the resource should be cached. This is done by defining max-age in the Cache-Control header. For example, the font file shown in the image below is cached with a max-age of 31,536,000 seconds (one year). This tells browsers that the font file can be kept in the cache, and not re-requested from the server, for up to one year. Importantly, Google has indicated that Googlebot does not use this header.

The Cache-Control header was defined in HTTP/1.1. Previously, cache timing was defined in the Expires header. The Expires header specified the date after which the browser should consider the file to have expired; this allowed browsers to cache the file up to that date. In the screenshot above, the Expires header is set to one year from the date the resource was first loaded (matching the max-age specification). If a Cache-Control header with a max-age directive is set, the Expires header will be ignored, but it can still be helpful to specify the Expires header since older browsers may not fully support the Cache-Control header.
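As an illustrative sketch (assuming Apache with mod_expires enabled; the file type and the one-year lifetime are examples, not recommendations), mod_expires can set the Expires header and a matching Cache-Control max-age at the same time:
<IfModule mod_expires.c>
  ExpiresActive On
  # Sets Expires and Cache-Control: max-age=31536000 (one year) for WOFF2 font files
  ExpiresByType font/woff2 "access plus 1 year"
</IfModule>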
Content-Type
The Content-Type HTTP Header tells browsers and robots what type of file is being delivered. It is from this information that robots and browsers know if a particular file is an image, an HTML document, a JavaScript file, a CSS file, or a video. The screenshot below shows the HTTP Headers for the logo images on MatthewEdgar.net, which uses a WebP image file type, and Elementive.com, which uses a PNG image file type. CSS files will show “text/css” and JavaScript will show “application/javascript”.
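If a server is sending the wrong Content-Type for a file, it can often be corrected on Apache servers with mod_mime’s AddType directive. As a small sketch (assuming these extensions need to be mapped on your server; adjust to your own file types):
# Map file extensions to the Content-Type the server should send
AddType image/webp .webp
AddType font/woff2 .woff2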

Content-Language
When requesting a page, browsers or robots can optionally send an Accept-Language request header. Servers can then respond with the Content-Language header, which indicates the selected language. While most browsers support this header, Googlebot does not crawl with Accept-Language headers. For SEO purposes, different page languages should be specified with hreflang tags. However, it is best practice to have the Content-Language header, if used, set to the appropriate language for each page to avoid any confusion about what content is contained on the page.
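As a minimal sketch (assuming Apache with mod_headers; “en” is just an example language code), the header can be set directly in the .htaccess file:
# Declare that responses from this directory are in English
Header set Content-Language "en"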
Final Thoughts
It is important to review what HTTP Headers are returned on your website and to understand what those HTTP Headers are communicating about your website’s pages to browsers and bots. If you need help reviewing the HTTP Headers on your website or need help with other technical SEO issues, please contact me.
