
Unhelpful and Low-Quality Content

By Matthew Edgar · Last Updated: March 07, 2024

Numerous problems can be found within the content of a website’s pages, and any of these problems can result in the page not ranking in search results. Collectively, these content problems are referred to as low-quality content, though low-quality content takes many different forms. Some of the most common low-quality content problems include:

  • Unhelpful content: Google evaluates the helpfulness of a website’s pages. A helpful page includes original analysis or insight that is comprehensive and detailed. An unhelpful page, by contrast, contains confusing and unclear text and is hard for visitors to use.
  • Duplicate content: Duplication means you have two or more pages on your website that return similar content. This does not mean the content exactly matches. Duplication can also mean that two different pages serve the same purpose.
  • Outdated content: Some topics demand fresh information. Google’s algorithms take content freshness into consideration when deciding where to rank pages. News-related content is the classic example; Google will not rank older articles when somebody is searching for today’s news. Freshness matters in many other industries as well, and you’ll often see newer content ranking higher in search results. To be clear: the problem is not that the article is older, the problem is that the article is too old to be relevant.
  • Malicious or spam content: Typically, these are not pages intentionally added to a website but are injected into the website due to a hack. In other cases, malicious or spammy content appears because of user-generated content, such as spammy content on a blog or a forum. Depending on the severity, malicious and spammy content can prevent pages from ranking entirely or can simply suppress rankings.
  • Scaled content: Scaled content refers to a large number of pages that contain unoriginal and unhelpful content. These pages are added to the website to manipulate search result rankings. Typically, these pages have been generated by AI tools at scale or contain content scraped from other websites.
  • Irrelevant content: Irrelevant means that there is a mismatch between the stated topic of the page and the content of the page. If a page is supposedly about Topic A but the content doesn’t address Topic A at all, then that page will struggle to rank in search results.

To better understand low-quality content problems, I’ll review how Google evaluates content and how I have helped clients identify these types of problems on their websites. Also, I’ll discuss ways of addressing content quality problems that may exist on your website.

How Google Evaluates Content

We can understand how Google evaluates content by reviewing the Search Quality Rater Guidelines. These guidelines do not directly describe Google’s ranking factors. Instead, they describe the criteria Google asks human raters to use when evaluating the quality of search results. So, even though these are not exact ranking factors, it is widely understood that these guidelines emphasize what Google prioritizes. Websites that meet the criteria discussed within these Guidelines tend to rank better.

While I highly recommend you read the Guidelines in full, here are a few of the key points from the Guidelines and other Google documentation. In my experience working with clients, these are the most helpful points to keep in mind to understand how Google evaluates a page’s quality.

Main Content vs. Supplementary Content

To begin, Google breaks the page into three types of content: main, supplementary, and ads.

  • Main content (MC) is any content that helps the page achieve its purpose. People are coming to the website to see this content.
  • Supplementary content (SC) supports the main content but does not directly relate to the page’s purpose. It helps make the website usable, but this content isn’t why people are visiting the website.
  • Ads include advertisements and any other content or links added to the page for monetization purposes. The guidelines specifically cite advertisements and affiliate programs as examples.

All types of content are important and contribute to the user experience. However, only the main content of the page will be used to determine where to rank the page in search results. As a result, the main content needs to be reviewed carefully to ensure it delivers a high-quality experience.

Main vs Supplementary Content on a Website
Example of main content (MC) and supplementary content (SC) on Elementive’s website

Page Purpose

When evaluating websites, Google focuses on the main content’s purpose. The purpose is the reason the page exists. That purpose might be providing information, entertaining visitors, helping users complete a specific task, and more. Whatever the purpose is, Google wants to rank pages with “beneficial purposes” highly in search results. Beneficial simply means the page helps visitors do whatever they want to do as easily as possible.

A low-quality page’s purpose is unclear and vaguely defined. Visitors arriving on a low-quality page will be confused about what the page is for or what they should do on that page. It makes sense that Google would want to help prevent people from visiting those types of low-quality pages.

Page purpose is often discussed in the context of duplicate content. Two pages that serve the same purpose are duplicates. Even if the duplicate pages contain high-quality content, it will be harder for those pages to rank in search results. Google may choose to rank one version instead of the other or may choose to not rank either page. Learn more about finding and fixing duplicate content.


E-E-A-T: Experience, Expertise, Authoritativeness, and Trust

Google also evaluates the page against four factors: Experience, Expertise, Authoritativeness, and Trust. While these are discussed in Google’s Guidelines linked above, I also recommend Backlinko’s helpful guide to E-E-A-T factors to help understand each of these factors.

  • Experience: This refers to the first-hand or life experience of the person or organization creating the page. For example, a product review written by someone who has used the product will demonstrate more experience than a generic product review written by someone who has never used the product. Photos or videos can be included to help demonstrate the creator’s experience.
  • Expertise: Expertise demonstrates depth of knowledge and credibility within the topic being discussed. Often this also involves evaluating the background of the person or organization who created the page. Does that person or organization have the required knowledge and credentials to speak about this topic? For example, a legal website written by a lawyer will demonstrate more expertise than a legal website written by someone who never went to law school.
  • Authoritativeness: This is often discussed as the extent to which the creator of the content or the website itself is a go-to source for a given topic. In the retail space, there are go-to organizations for different types of products. For example, Newegg is a go-to source for electronics and REI is the go-to source for outdoor equipment. This can also apply to individuals instead of organizations. Authoritativeness happens through brand building and reputation management.
  • Trust: Google specifically says this is the most important factor of the four—even if a go-to expert with plenty of experience writes something, that doesn’t automatically mean the content can be trusted. Along with presenting accurate information, trust also implies the website must be free of technical errors that would deter people from using the website. As well, trust is influenced by the other three factors: a page’s content cannot be trustworthy if it is not written by an expert, does not demonstrate experience, and is not authoritative.

A high-quality page will meet more of these factors than a low-quality page. High-quality websites will hire content creators who have the required experience and expertise, along with ensuring those writers only include correct facts and figures. High-quality websites will also build their brand and reputation so that they are considered a go-to source in their topic area. By doing all of this, high-quality websites will demonstrate they can be trusted and will often be rewarded with higher rankings in search results.

In contrast, low-quality websites won’t disclose who wrote the content. Low-quality websites have an unknown reputation with questionable credentials. Experience is not demonstrated through text, photos, or videos. Why would Google want to rank this content? Who would want to see this type of content in search results?

EEAT Diagram from Google Quality Rater Guidelines
EEAT Diagram from Google’s Search Quality Rater Guidelines, Section 3.4

YMYL: Your Money or Your Life

Closely related to E-E-A-T is the concept of YMYL, which stands for Your Money or Your Life. YMYL topics “significantly impact the health, financial stability, or safety of people, or the welfare or well-being of society”, according to Google’s guidelines. Google uses this to help screen out content that could cause mental or physical harm, impair financial security, or harm society.

While this is a broad definition, I’ve typically seen Google apply it more narrowly. For example, only the specific pages of a website that discuss YMYL topics may see more volatility following an algorithm update compared to other parts of the website.

With YMYL topics, Google will apply E-E-A-T factors more strictly and hold those sites to higher standards. For example, medical sites are often held to the highest standards, given the potential for greater harm to be caused by inaccurate information. I have typically seen websites in the personal finance space also held to higher standards for similar reasons.

Ultimately, you can think of YMYL as a spectrum. The more potential for harm to be caused by the presence of misinformation or malicious information on a website, the more likely the website will be evaluated against Google’s YMYL standards. When evaluating content quality, you should determine how much of your website’s content is on this YMYL spectrum.


Entity Extraction

When evaluating a page, Google also extracts the entities discussed in that page’s main content. An entity is a unique topic discussed in the content, such as a person, location, business, or product. To rank the page correctly, Google needs to understand not only what the entities are but also which entity is primary. Understanding the primary entity (or entities) will help Google understand the page’s main topic. You can learn more about entities in my article about entity recognition.

It will be easy for Google to extract entities from high-quality content. High-quality content will demonstrate which entities represent the primary topics of the page. For example, a high-quality page might review a specific product and make it clear to Google which product is being reviewed. Google could easily extract the entities related to that product from the page. The easier this extraction, the easier it is for Google to determine where to rank this page in search results.

In contrast, it will be harder for Google to extract entities from low-quality content. Low-quality content is muddled and unfocused, so it might not be obvious what entities this page is discussing. Without knowing the entities, Google would be unable to determine the primary topic of the page and, therefore, be unable to know where to rank the page.

Identifying Content Problems

Now that we have an idea of how Google evaluates content, let’s review different ways we can identify content-related problems on a website. There are several ways we can do this. I will review two of the most effective solutions I’ve used to help clients figure out where low-quality and unhelpful content exists on their websites.

Content Analysis

The first step I take in an audit is to get an overall look at what content exists on the website. I cover this process in detail in chapter 3 of my book Tech SEO Guide. As a quick summary, this involves:

  1. Assemble a list of all pages that exist on the website. It is important to pull pages from as many sources as possible. I typically pull pages from a site crawl, the XML sitemap, Google Search Console’s indexed and not indexed pages, backlink targets, and web analytics tools. You want to know about as many pages as possible, even if you aren’t sure if Google knows about those pages.
  2. Add data about each page. Next, you want to bring in as much data as you can about every page you find on this website. That includes things like the page’s title, header, canonicals, hreflang tags, indexability, and status code. It can also include the date the page was last updated, which can help identify outdated content.
  3. Add analytics data. Bring in as much data as you can about each page’s performance. This includes clicks, impressions, and ranking position data from Google Search Console and conversion or engagement data from analytics. Add in any other data that will help you compare pages.
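
The assembly step above can be sketched in a few lines of Python. This is a minimal illustration, not a full audit tool: the source names and URLs are hypothetical stand-ins for exports from a crawler, the XML sitemap, and Google Search Console, and the normalization here is deliberately light.

```python
# Sketch of step 1: merging page lists from multiple sources into one
# de-duplicated inventory, tracking which source(s) each URL came from.
# Source names and URLs below are hypothetical examples.

def build_page_inventory(*sources):
    """Merge (name, urls) pairs into {url: set of source names}."""
    inventory = {}
    for name, urls in sources:
        for url in urls:
            # Light normalization so trivial variants collapse to one entry
            key = url.rstrip("/").lower()
            inventory.setdefault(key, set()).add(name)
    return inventory

crawl = ("crawl", ["https://example.com/page-a", "https://example.com/page-b/"])
sitemap = ("sitemap", ["https://example.com/page-b", "https://example.com/page-c"])
gsc = ("gsc", ["https://example.com/Page-C"])

pages = build_page_inventory(crawl, sitemap, gsc)
for url, found_in in sorted(pages.items()):
    print(url, sorted(found_in))
```

A URL that appears in only one source (for example, in analytics but not in the crawl) is often itself a clue: it may be an orphaned page or one Google has chosen not to index.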

After assembling the list of pages and gathering data, review the list to see which pages perform worse than others. This includes poor performance on traditional SEO metrics—for example, Google will rank low-quality pages lower in search results. It also includes looking for poor performance in conversion or engagement metrics—for example, visitors will not engage as much or spend as long on a low-quality page. You can also look for pages with few internal or external links—if a page isn’t being linked to, it could be because it contains low-quality content.

Poor-performing pages are usually a good place to start when trying to identify problematic content. However, this list will also highlight which pages are the website’s top performers. Those top-performing pages should be reviewed carefully to make sure they are as helpful as possible.

Unhelpful Content Assessment

After you have a list of pages to review more deeply from the content analysis, the next step is reviewing each page’s content individually to see if the page contains helpful, high-quality content. Google provides questions you can use to self-assess the helpfulness of your website’s content. These questions range from the complex, such as evaluating the page’s originality, to the basic, such as checking the page’s spelling and grammar. There are also questions to help you assess the expertise of the content creator.

The more questions you can answer “yes” to, the better the chances the page will be considered helpful. However, it can be difficult to assess your own content. We are all biased toward the content we are closest to and sometimes unable to see its flaws.

To help avoid those biases, I highly recommend you incorporate questions about content helpfulness into usability testing. That way, you can get feedback from real users about whether they find the content helpful. I have another article about usability testing for technical SEO; questions about content quality could easily be added to that process.

Another way to avoid biases is to ask generative AI tools like ChatGPT, Gemini, Claude, or Copilot to evaluate your website’s content. In the example screenshot below, I asked ChatGPT to answer Google’s questions about a page on my website. This does not guarantee Google’s algorithms will agree the content is helpful. However, it is a useful way to identify what potential problems may exist.

ChatGPT Prompt for Content Helpfulness
Asking ChatGPT to answer helpful content quality questions

Fixing Content Problems

There are three basic options for fixing problems that exist on the website:

  • Remove the page
  • Rework the page
  • Noindex the page

Option #1: Remove the Page

One option is to delete the low-quality page from your website altogether. This is the best option for the lowest-quality pages on the website that have no helpful content to offer. If the page served no purpose, added no value for visitors, and contained unhelpful content, why keep it on the website?

By removing the low-quality content, you are also signaling to Google that you are actively working to improve your website’s quality. When I’ve helped clients remove unhelpful content, I’ve often seen sizable gains in traffic—it is as if a weight has been lifted from the website.

Recovery will likely not be instantaneous though. Google has discussed how recovery from unhelpful content problems can take several months because Google wants to confirm that any content improvements are a long-term fix. This is also what I’ve seen working with clients. In many cases, it takes an algorithm update for Google to process all the content updates and adjust rankings accordingly.

When you remove these pages, you want the URL that previously contained the unhelpful content to return an error message instead. Typically, this error message should return a 410 response status code. The 410 status indicates you have purposefully removed the page from the website. A 404 status code is acceptable too and works much the same way.
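
As a sketch of this setup, a removed URL could be configured to return a 410 in nginx. The path here is a hypothetical example, and the same result can be achieved in Apache or at the application level:

```nginx
# Return 410 Gone for a page deliberately removed from the site.
# /old-low-quality-page is a placeholder path.
location = /old-low-quality-page {
    return 410;
}
```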

You do not want to redirect the unhelpful, low-quality content somewhere else. Redirecting makes it seem like you moved the unhelpful content to another location on your website. Instead, you want to clearly signal to Google that the unhelpful content has been taken off the website altogether.

When you delete the low-quality pages, don’t forget to remove all the internal links referencing those low-quality pages. This includes removing any references to those pages from the XML sitemap. Part of how you are communicating to Google (and visitors) that these pages are removed is by taking away as many references to these unhelpful pages as you can.
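
Finding those leftover internal links can be scripted. The sketch below, using only Python’s standard library, scans a page’s HTML for links that point at removed URLs; the HTML snippet and removed-URL list are hypothetical examples, and a real audit would run this across every crawled page.

```python
# Sketch: scanning a page's HTML for internal links that point at
# removed URLs, so those references can be cleaned up.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects every href found in <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

removed = {"/old-page", "/thin-content"}  # URLs deleted from the site

html = '<p><a href="/old-page">Old</a> and <a href="/keep-me">Keep</a></p>'
parser = LinkCollector()
parser.feed(html)

# Links that still reference removed pages and should be cleaned up
stale_links = [link for link in parser.links if link in removed]
print(stale_links)
```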

Option #2: Rework the Page

A harder solution, but one that can work quite well, is reworking the low-quality page to make it more helpful and more purposeful. This helps visitors and search engine robots find more value in the page.

This is typically the better option for pages that are close to being helpful but aren’t quite good enough in their current state. For example, outdated content is not helpful, but a refresh of the page can fix this by adding updated information. Reworking the page is also a good option for duplicate content where you can consolidate duplicate pages into a single page and redirect the duplicate content URLs to the URL for the new, consolidated page.
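
For the duplicate-consolidation case, the redirect from a retired duplicate URL to the consolidated page might look like this in nginx (both paths are hypothetical examples):

```nginx
# Permanently redirect a retired duplicate URL to the consolidated page,
# so visitors and search engines land on the single remaining version.
location = /duplicate-page {
    return 301 /consolidated-page;
}
```

A 301 (permanent) redirect, rather than a 302, tells Google the consolidation is intended to be long-term.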

Reworking the page doesn’t necessarily mean writing more text, though it can. It could mean that you need to add more videos or images to the page instead since videos or images can convey information in different ways. Another option is to add new features to the page—like a calculator or quiz. To figure out how to rework the page, you want to step through the page to see what additional questions people may have about the page’s content. Then, find the best type of content to address those questions. You can also review competitor websites to get an idea of what other companies are saying about this subject.

One special scenario is forums (and other types of user-generated content). If this is the source of your unhelpful content problems, you can still find ways to expand on this content to make it more valuable and purposeful. One route is to encourage users to help you expand on existing content, possibly by gamifying your forum to encourage better answers (why do you think so many forums offer points and tier levels in exchange for better questions or answers?). Instead of reworking the content, you can also rework the forum rules to remove unanswered or poorly answered questions.

Option #3: Noindex Unhelpful Content

Another solution is to add a meta robots noindex tag to low-quality or unhelpful pages. You can learn how to use the noindex tag in my other article; in short, it instructs robots to ignore the page and not include it when evaluating site quality.
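
For reference, the tag itself is a single line placed in the page’s head section:

```html
<!-- Instructs search engine robots not to index this page -->
<meta name="robots" content="noindex">
```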

This is the ideal solution for pages that are helpful to visitors already on your website but offer little value to people arriving from search results. For example, on-site search pages may be helpful for people who are already on your website, but those same pages may not offer much value for new visitors arriving from search results. In my experience, though, this is a rare category of page to find on a website—if the page is valuable, it usually should rank in search results.

More commonly, this option is used as a short-term measure. Reworking or removing the page can take time because it often requires extensive work from writers, designers, developers, and more. While that work is underway, you can add a noindex to the pages. Adding the noindex is often a very simple project in comparison. This noindex will signal to Google that specific pages should be ignored as part of their site evaluation. Then, once the rework or removal is complete, you can remove the noindex and Google can process the bigger change to the page.

It is important to not block crawling of unhelpful or low-quality content. When you block crawling via a disallow in the robots.txt file, Google may still know about the page and can use external signals about that page to decide what it means for overall site quality. Those external signals could suggest the page’s content is even lower quality than it is, and Google cannot crawl the page to check whether those signals are correct. If you instead let Google crawl a page with low-quality content, you can control how Google processes the page via the noindex tag. For this reason, Google recommends only blocking images or videos via a robots.txt disallow and using the noindex tag for all other types of content.
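
As a sketch of that recommendation, a robots.txt might disallow crawling of image files only, leaving HTML pages crawlable so their noindex tags can be seen (the /images/ path is a hypothetical example):

```
# robots.txt — block crawling of image files only. Low-quality HTML
# pages should use a noindex tag instead of a disallow, so Google can
# still crawl them and see that instruction.
User-agent: *
Disallow: /images/
```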

Avoid Low-Quality Content Moving Forward

Fixing low-quality and unhelpful content is not a one-time project. You need to continually monitor this going forward to ensure content problems don’t resurface. One way to do this is to test the site content thoroughly after site updates to make sure no low-quality content was introduced. For example, a code change could turn on automatically generated pages that contain duplicated, unhelpful content.

Another way to avoid unhelpful content moving forward is to train everybody involved in content generation. I’ve often seen newly hired copywriters add low-quality content to the website simply because they didn’t know any better.

Part of that training includes evaluating new pages before they are added to your website. I’ve helped clients establish processes to confirm that the new page is highly purposeful and sufficiently addresses the topic it covers. You can use the helpful content questions from Google discussed earlier as one guide for this.

You can also enforce rules on the website. I’ve found this is most helpful for user-generated content. For example, you want to make sure rules are added to your blog comments to remove spammy or poorly written responses. You also want to make sure these rules are continually enforced.

Final Thoughts

Unhelpful and low-quality content harms your website’s user experience. The lower the content quality, the less it helps anybody visiting your website and the more it will hurt your SEO efforts. Addressing low-quality content requires continual evaluation to ensure the content serves a purpose for visitors and offers helpful information. If you need help reviewing your content to identify and fix unhelpful content issues, please contact me.
