SEO begins with a robot crawling a website. During crawling, search engine robots open every page they have found a link to and fetch that page's content. You need to make sure robots can find all of the pages and files on the website, and that once found, robots can see the content on them. The resources below will help you find and fix crawling-related issues that are holding back your SEO performance.

Website Error Pages

When you request a page that a website cannot find, the server usually returns a status code indicating that the requested file was not found. That status code can vary, but it is most commonly a 404.

Noindex vs. Nofollow vs. Disallow

Are you confused about the difference between the noindex, nofollow and disallow commands? All three are powerful tools for improving a website’s organic search performance, but each is appropriate only in specific situations.
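As a quick orientation, here is a minimal sketch of where each directive lives. The paths and URLs are hypothetical placeholders:

```text
<!-- noindex: a meta tag in the page's <head>; the page can be crawled,
     but search engines are asked not to show it in results -->
<meta name="robots" content="noindex">

<!-- nofollow: an attribute on a link; robots are asked not to follow
     or pass credit through this specific link -->
<a href="https://example.com/login" rel="nofollow">Log in</a>

# disallow: a rule in robots.txt; robots are asked not to crawl
# matching URLs at all
User-agent: *
Disallow: /admin/
```

Note the key difference: disallow stops crawling, while noindex requires the page to be crawlable so the directive can be seen.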

How to Check HTTP Response Status Codes

Every page on every website returns an HTTP response status code. How do you check the status code for your website’s pages? What do the status codes mean?
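One quick way to check is with a short script. The sketch below uses only Python's standard library, with a throwaway local server standing in for a real website so the example is self-contained; the paths requested are hypothetical:

```python
# Hedged sketch: checking HTTP response status codes with Python's
# standard library. A throwaway local server stands in for a real site.
import http.server
import threading
import urllib.error
import urllib.request

server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

codes = []
for path in ("/", "/no-such-page"):  # hypothetical paths
    try:
        with urllib.request.urlopen(base + path) as resp:
            codes.append(resp.status)
    except urllib.error.HTTPError as err:
        # urllib raises for 4xx/5xx responses; the code is on the exception.
        codes.append(err.code)

print(codes)  # [200, 404]
server.shutdown()
```

Pointing the same pattern at your own URLs lets you spot-check redirects (3xx), missing pages (404) and server errors (5xx).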

Mobile-First Indexing

Google typically crawls and evaluates the mobile version of a website first and uses what is found on the mobile website to decide where to rank the website in search results. How do you make sure your website works for mobile-first indexing?
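One common baseline check is the viewport meta tag. Without it, phones render a page at desktop width, which can hurt how the mobile version of the site is evaluated. This is a minimal illustrative snippet, not a complete head section:

```html
<!-- Hypothetical page head: the viewport meta tag tells mobile
     browsers to render at the device's width instead of desktop width -->
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Example page</title>
</head>
```

Just as important for mobile-first indexing: serve the same content, links and structured data on mobile as on desktop, since Google primarily evaluates the mobile version.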

Selecting a Website’s Canonical Domain

Should you use WWW or not in your URL? What about using an SSL certificate? In this blog post, learn what a canonical, or preferred, domain is and how this can affect your SEO and user experience.
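Once you have picked a canonical domain, every other variant should 301-redirect to it. As a hedged sketch, assuming nginx and a hypothetical canonical domain of https://www.example.com:

```nginx
# Hypothetical nginx config: send the non-www and non-HTTPS variants
# to the canonical https://www.example.com with a permanent redirect.
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}
server {
    listen 443 ssl;
    server_name example.com;
    return 301 https://www.example.com$request_uri;
}
```

The equivalent can be done in Apache, a CDN, or your host's control panel; what matters is that only one version of each URL resolves with a 200.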

SEO Discoverability

Before you can do anything else with SEO, you need Google to find your pages. This process of finding pages is called “discoverability”. Learn about discoverability and how to make your pages easier for Google to find.

Managing URL Parameters & Query Strings: SEO Best Practices

Parameters can create duplicate content, slow down website crawling, and disrupt rankings. What are the best ways to avoid these problems? In this article, we’ll review how to find and manage parameters on a website.
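One common tactic is to normalize URLs by stripping parameters that don't change the page's content. The sketch below uses Python's standard library; the list of tracking parameters is an assumption you would adapt to your own site:

```python
# Hedged sketch: canonicalizing a URL by dropping tracking parameters
# and sorting the rest, so equivalent URLs collapse to one form.
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Hypothetical list of parameters that never change page content.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "sessionid"}

def canonicalize(url: str) -> str:
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k not in TRACKING_PARAMS]
    kept.sort()  # stable order, so ?a=1&b=2 and ?b=2&a=1 match
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), ""))

result = canonicalize("https://example.com/shoes?utm_source=news&color=red")
print(result)  # https://example.com/shoes?color=red
```

The same normalized form is what you would point a rel="canonical" tag at, so parameterized duplicates consolidate instead of competing.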

XML Sitemap

An XML Sitemap lets you share data about the pages on your website with search engines, including information about images and videos.
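Sitemaps follow a simple XML format, so they are easy to generate. This is a minimal sketch using Python's standard library, with hypothetical URLs; real sitemaps often add lastmod dates and image or video extensions:

```python
# Hedged sketch: generating a minimal XML Sitemap with the stdlib.
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
pages = ["https://example.com/", "https://example.com/about"]  # hypothetical

root = ET.Element("urlset", xmlns=NS)
for loc in pages:
    url_el = ET.SubElement(root, "url")
    ET.SubElement(url_el, "loc").text = loc

sitemap = ET.tostring(root, encoding="unicode")
print(sitemap)
```

The finished file is typically served at /sitemap.xml and referenced from robots.txt so crawlers can find it.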

Subdirectory or Subdomain?

Should you use a subdirectory (subfolder) or subdomain for SEO and for the best user experience?

What Are HTTP Headers?

Learn what HTTP Headers are, how to view HTTP Headers for your website, and which HTTP Headers can affect your website’s SEO performance.

Using Log File Analysis To Improve Your SEO Performance

To improve your SEO performance, you need to understand how Google’s bots crawl and interpret your website. Learn what log files are and how analyzing your website’s log files can improve your SEO performance.
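As a taste of what log analysis looks like, the sketch below counts Googlebot requests by path and status from a few hypothetical access-log lines in the common combined log format:

```python
# Hedged sketch: tallying Googlebot hits from (hypothetical) access logs.
import re
from collections import Counter

LOG_LINES = [
    '66.249.66.1 - - [10/Oct/2023:13:55:36 +0000] "GET /shoes HTTP/1.1" '
    '200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; '
    '+http://www.google.com/bot.html)"',
    '192.0.2.7 - - [10/Oct/2023:13:55:40 +0000] "GET /shoes HTTP/1.1" '
    '200 512 "-" "Mozilla/5.0"',
    '66.249.66.1 - - [10/Oct/2023:13:56:01 +0000] "GET /old-page HTTP/1.1" '
    '404 0 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; '
    '+http://www.google.com/bot.html)"',
]

# Pull the request path and status code out of each log line.
pattern = re.compile(r'"(?P<method>\w+) (?P<path>\S+) [^"]*" (?P<status>\d{3})')

bot_hits = Counter()
for line in LOG_LINES:
    if "Googlebot" in line:  # naive filter; real analysis verifies the IP
        m = pattern.search(line)
        if m:
            bot_hits[(m.group("path"), m.group("status"))] += 1

for (path, status), count in bot_hits.items():
    print(path, status, count)
```

Even this crude tally surfaces useful signals, such as Googlebot repeatedly hitting 404s or spending crawl budget on unimportant URLs. (Note that real user-agent strings can be spoofed, so production analysis should verify bot IPs.)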

Avoid SEO Problems with Dev & Staging Sites

Dev and staging sites are essential when building a new website, but they can create problems for SEO if search engines find and index them. Find out how to avoid those problems!
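The most reliable protection is keeping robots out entirely, for example with HTTP authentication. A hedged sketch, assuming nginx and a hypothetical staging domain:

```nginx
# Hypothetical nginx config: password-protect the staging site so
# search engines can never crawl or index it. This is safer than a
# robots.txt Disallow, which blocks crawling but can still let the
# URLs appear in search results.
server {
    listen 443 ssl;
    server_name staging.example.com;
    auth_basic "Staging environment";
    auth_basic_user_file /etc/nginx/.htpasswd;
    root /var/www/staging;
}
```

And remember the reverse problem: when the site launches, the protection (and any stray noindex tags copied from staging) must come off the production domain.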