JavaScript & SEO
By Matthew Edgar · Last Updated: October 07, 2021
One of the more rapidly evolving aspects of SEO is how bots handle JavaScript. While Google has been able to execute some JavaScript for years, Googlebot’s abilities have reached new heights over the last couple of years. This is welcome news given how many websites rely heavily on JavaScript not only to enhance their content but to load it in the first place.
While you can now use JavaScript more freely, without nearly as much risk to your SEO work, this doesn’t mean you should use JavaScript for everything. There are still some things to keep in mind to make sure your JavaScript setup will work correctly for the Google bots that visit your website. In this article, let’s review the basics of handling JavaScript for optimal SEO performance.
How Content is Loaded
Before we review how Google approaches JavaScript, let’s take a step back and discuss the different ways you can load content into a web browser. The process of loading content into the browser is known as rendering.
One of the most fundamental (and common) methods of rendering is Server-Side Rendering. As the name suggests, the rendering happens on the web server. This means when a visitor requests a page from your website, your web server will pull together the necessary HTML code and send that code to the browser. The browser doesn’t have to do anything more to the HTML code. Instead, the browser simply has to convert the HTML into a visible web page. Note that you will often hear server-side rendered code referred to as the “raw” code.
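As a quick sketch of the idea (the function name and page content below are hypothetical, not from the article’s examples), server-side rendering just means the complete HTML string is assembled before it ever leaves the server:

```javascript
// Hypothetical sketch of server-side rendering: the server assembles the
// complete HTML before sending it, so the browser receives a finished page
// and only has to display it.
function renderPage(title, bodyHtml) {
  return [
    '<!DOCTYPE html>',
    '<html>',
    '<head><title>' + title + '</title></head>',
    '<body>' + bodyHtml + '</body>',
    '</html>'
  ].join('\n');
}

// A Node.js server (sketch) would send this string as the full response:
// const http = require('http');
// http.createServer(function (req, res) {
//   res.end(renderPage('About Us',
//     '<h1>About Us</h1><p>All of this content is in the raw HTML.</p>'));
// }).listen(3000);
```

Everything a bot or browser needs is in that one response; nothing further has to execute on the client.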
Next, we have Client-Side Rendering. With client-side rendering, the server still sends back some amount of HTML code when a visitor requests a page. However, the HTML code that the server sends back is not the entirety of that web page. Instead, the HTML code sent back by the server includes references to JavaScript code as well. The browser must execute this JavaScript code to fully load the page. Executing the JavaScript code happens on the visitor’s browser—and the visitor’s browser is known as the “client”, hence the name “client-side” rendering. Note that for simplicity, client-side rendered code is often referred to more simply as “rendered” code.
The JavaScript that is executed by the browser (or client) can be triggered via different types of events that happen within the browser. Those events include a web page fully loading, a click or tap on a link, scrolling, mouse movements, and more. Browser documentation, such as MDN’s event reference, provides a complete list of events that can trigger JavaScript code.
Typically, what we’re referring to when we discuss client-side rendering is a load event. This event fires once the page (the HTML plus its dependent resources, such as stylesheets and images) has finished loading in the browser. Once the load occurs, you can execute JavaScript code to alter something about the website’s appearance.
In some cases, the JavaScript code can add additional content to the page. In the following example, the JavaScript code adds extra text to the page once the page is loaded. Here is what the output (left side) and code (right side) looks like when it is returned from the server:
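Since the original screenshots aren’t reproduced here, a minimal sketch of the same idea (the message text and class name are hypothetical):

```javascript
// Hypothetical sketch: a helper builds the extra markup, and a load handler
// (commented out because it is browser-only) appends it to the page.
function buildExtraText(message) {
  return '<p class="extra-text">' + message + '</p>';
}

// In the browser, none of this markup exists in the server-returned code;
// it only appears after the load event fires:
// window.addEventListener('load', function () {
//   document.body.insertAdjacentHTML('beforeend',
//     buildExtraText('This text was added by JavaScript after the page loaded.'));
// });
```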

Then once JavaScript loads and is executed, extra text is added:

In other cases, the HTML sent back from the server only contains an empty shell. The JavaScript executed on the load event fills in that empty shell with the entirety of the website’s content. When you view the server-side source, you will see blank content:

Then, once JavaScript loads and executes, the page’s content will appear:
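A sketch of what such an empty shell and its client-side fill might look like (the element ID, function, and content are hypothetical):

```javascript
// Hypothetical sketch of an empty-shell page. The server returns only:
//   <body><div id="app"></div></body>
// and this client-side code fills in the shell after the JavaScript runs.
function renderApp(data) {
  return '<h1>' + data.title + '</h1><p>' + data.body + '</p>';
}

// In the browser:
// window.addEventListener('load', function () {
//   document.getElementById('app').innerHTML = renderApp({
//     title: 'Welcome',
//     body: 'All of this content came from JavaScript, not the raw HTML.'
//   });
// });
```

Anything (or anyone) reading only the server-side source sees an empty `<div>`; the content exists only after the JavaScript executes.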

While using the load event is the most common approach, you can also load content into the browser with other types of events. Let’s say you have content presented in tabs. Those tabs might be empty containers initially, but once a visitor clicks on a tab, the JavaScript code loads content into it. Something similar can happen on scroll: think of websites that use infinite scrolling, where content lower on the page isn’t loaded until you begin to scroll further down. You could also load content once a visitor moves their mouse or hovers over an object (though, for obvious reasons, this doesn’t work on phones or tablets). Pinterest, for example, loads images on scroll: colored placeholder boxes are filled in with the actual images as you scroll down the page.
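As a sketch of the tab case, the content might only be fetched when the tab is clicked; the endpoint, element IDs, and tab name below are hypothetical:

```javascript
// Hypothetical sketch: a tab's content is only requested when the visitor
// clicks it, so the text never appears in the server-side (raw) code.
function contentUrlForTab(tabId) {
  return '/tabs/' + encodeURIComponent(tabId) + '.html';
}

// In the browser:
// document.querySelector('#pricing-tab').addEventListener('click', function () {
//   fetch(contentUrlForTab('pricing'))
//     .then(function (res) { return res.text(); })
//     .then(function (html) {
//       document.querySelector('#pricing-panel').innerHTML = html;
//     });
// });
```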

What Can Google Support
The best way to ensure Google’s bots can see your content is to use server-side rendering. Because all of the HTML (and the content contained in that HTML) is assembled on your web server, there is no JavaScript involved and the browser doesn’t have to do any additional work to display the content. This also makes server-side rendering the safest bet for ensuring all of your human visitors can see your website’s content, and you avoid any issues with a visitor’s browser failing to execute JavaScript correctly.
All right, let’s bring JavaScript into the mix. How does Google handle the JavaScript that loads your content? The first step is to make sure that Google can access your JavaScript files: do not block any JavaScript files required to load your website’s content via robots.txt (or any other method).
The safest bet when using JavaScript is to have the content appear on load. Google’s bots use a headless browser that is capable of executing JavaScript, which means they see your website’s content much as visitors using modern browsers do when first loading a page. As a result, Google’s bots will see content loaded into the browser via a load event. When testing this out and observing content loaded this way via JavaScript, I’ve seen very few issues with Google missing content (and when Google does miss content, it is usually because the JavaScript files were blocked from bot access).
Google does not support other types of events, however. Bots won’t scroll or move a mouse, so any content that loads into your web page in response to those events will not be visible to Google’s bots. Google also doesn’t fire click events: bots will follow regular links, but they won’t trigger any JavaScript attached to a click. If you rely on these kinds of events to load content, Google will struggle to see it.
Let’s say, for example, you have text contained within tabs. The text contained in the tabs is not loaded as part of the server-side, or raw, code. Instead, that text is only loaded when a visitor clicks on the tab. In this example, clicking the tab keeps you on the same page and loads new content into that same page. Visitors will be able to see this text because, in modern browsers, clicking the tab triggers the JavaScript code connected to the click event. But bots can’t click on those tabs. As a result, they won’t trigger the JavaScript code that loads that text, which means Google’s bots won’t see it and it won’t do anything to help your website rank.
What If Google Can’t Execute Your JavaScript?
What do you do if Google can’t see the content you are loading via JavaScript? One option is to convert your code so that it loads your content via server-side rendering instead of client-side rendering. In that tab example, instead of the text loading in response to clicking on the tab, you could load the text within the server-side code and hide it from view until somebody clicks on the tab. However, changing to server-side rendering may not be a practical alternative, so let’s discuss other alternatives you can use if Google can’t execute your website’s JavaScript.
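A sketch of that server-side alternative: the tab text ships in the raw HTML, where bots can read it, and a click merely reveals it rather than loading it (the IDs and text here are hypothetical):

```html
<!-- Hypothetical sketch: the tab content is present in the server-side
     code, hidden by default, and a click only toggles its visibility. -->
<button id="pricing-tab">Pricing</button>
<div id="pricing-panel" hidden>
  <p>Full pricing details, present in the raw HTML for bots to see.</p>
</div>
<script>
  document.getElementById('pricing-tab').addEventListener('click', function () {
    document.getElementById('pricing-panel').hidden = false;
  });
</script>
```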
Noscript
A <noscript> tag contains content that should be shown to visitors who have JavaScript disabled. As a simple example, a page could use a <noscript> tag so that any visitor who loads the page without JavaScript enabled sees “No JavaScript.” written to the screen in place of the script’s output.
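Since the original code screenshot isn’t reproduced here, a minimal reconstruction of that kind of page (the titles and messages are hypothetical):

```html
<!DOCTYPE html>
<html>
<head>
  <title>Noscript example</title>
</head>
<body>
  <script>
    document.write('JavaScript ran.');
  </script>
  <noscript>No JavaScript.</noscript>
</body>
</html>
```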

While that is a simple example, and one where even Googlebot could execute the JavaScript, the <noscript> tag becomes more important when there is JavaScript that Googlebot cannot execute. Take, for example, content that is loaded into the browser when a visitor scrolls down a page. As we discussed above, bots won’t see this content because bots don’t scroll. However, you can place the content shown after scrolling within a <noscript> tag so that bots can still see it.
One common instance of this is with lazy-loaded images. With lazy loading, images aren’t loaded when the page first loads. Instead, each image is only loaded when it is about to appear on the screen, which requires JavaScript to detect when an image should be loaded into the browser. But if bots don’t scroll, or don’t execute the JavaScript, they may not see these images. The best practice, then, is to also include the regular image in a <noscript> tag. Learn more about making lazy load SEO friendly.
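A sketch of what that markup might look like (the file names, class, and attribute convention are hypothetical; many lazy-load libraries use a pattern like this):

```html
<!-- Hypothetical sketch: the lazy-loaded image keeps its real URL in a
     data attribute for the JavaScript loader to swap in on scroll, while
     the <noscript> copy gives bots and no-JS visitors a plain <img>. -->
<img class="lazy" data-src="/images/product-photo.jpg" alt="Product photo">
<noscript>
  <img src="/images/product-photo.jpg" alt="Product photo">
</noscript>
```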

Dynamic Rendering
With dynamic rendering, you render the content differently for bots than you do for human visitors. Human visitors will continue to see content loaded via client-side rendering. However, bots won’t see the client-side rendering and instead will see “pre-rendered” content. Basically, you are rendering your website’s content in advance so that bots don’t have to do the rendering work on their end. By doing this, you have more control over how everything is rendered and over what bots are able to see, ensuring that Google’s bots won’t load part of your website incorrectly.
To generate pre-rendered content, you pass your HTML and JavaScript code through a dynamic renderer and this renderer returns the rendered output in the form of static HTML. Anytime you detect a specific bot, such as Googlebot, visiting your website, you load this static HTML. The diagram from Google’s Developer Guidelines (see below) gives an idea of how this works with the server detecting whether it is a browser (human) or crawler (bot) making the request and changing what content is delivered.

Source: Google
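A sketch of the detection step in that diagram; the user-agent tokens below are real crawler names, but the routing logic is a simplified, hypothetical illustration:

```javascript
// Hypothetical sketch of dynamic rendering's routing decision: requests
// from known crawlers get the pre-rendered static HTML, while everyone
// else gets the normal client-side rendered page.
var BOT_PATTERN = /googlebot|bingbot|duckduckbot|baiduspider|yandex/i;

function isBot(userAgent) {
  return BOT_PATTERN.test(userAgent || '');
}

// On the server (sketch):
// if (isBot(req.headers['user-agent'])) {
//   res.end(preRenderedHtml);   // static HTML from the dynamic renderer
// } else {
//   res.end(clientSideShell);   // normal HTML + JavaScript bundle
// }
```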
When using dynamic rendering, it is important that the content seen by bots is equivalent to the content seen by humans. There will likely be a few differences, but you want to make sure the pages seen by bots and humans serve the same purpose and intention. If you use dynamic rendering as a way of sneaking in highly optimized content that only bots can see, Google will likely catch on and your site could receive a manual action.
Final Thoughts
You can and should use JavaScript to enhance your website. Doing so can make your website more engaging and interesting for visitors. However, when using JavaScript, you want to make sure that Google’s bots are able to see all of the content your website has to offer. This requires understanding what content on your website is loaded via server-side and client-side rendering and, for client-side rendered content, understanding whether it is loaded in a way that bots can support. If you have questions or need help, please contact me.