Did you know that while the Ahrefs blog is powered by WordPress, much of the rest of the site is powered by JavaScript, using a framework like React?

Many sites utilize JavaScript to make the experience more enjoyable for the user and to add interactivity.

Some sites use it to create menus, pull product details or prices from different sources, load content from multiple places, or, in some cases, to build all of the content on the site. JavaScript is everywhere on the web nowadays.

JavaScript SEO

JavaScript SEO is a branch of technical SEO focused on making sure that webpages that rely heavily on JavaScript can be properly crawled, rendered, and indexed by search engines. The aim is to get these websites found and ranking well in search.

Is JavaScript bad for SEO? Is JavaScript evil? Not at all. It's simply different from what many SEO specialists are used to, and there's a bit of a learning curve.

People do tend to rely on it far too much when a simpler solution would work, but sometimes you have to make do with what you have. Just be aware that JavaScript isn't flawless and isn't always the best tool for a particular task.

Unlike HTML and CSS, it can't be parsed progressively, and it can weigh heavily on page load and performance. In many cases, you're trading performance for functionality.

How Google processes pages with JavaScript

In the early days of search engines, a downloaded HTML response was enough to see the content of most pages. Because of the rise of JavaScript, search engines now need to render many pages the way a browser would, so they can see the content the way a user does.

At Google, rendering is handled by the Web Rendering Service (WRS). Google has created a simple diagram to explain how the process works.

1. Crawler

The crawler sends GET requests to the server, and the server responds with headers and the contents of the file, which are then stored.
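
If you want to see roughly what the crawler gets at this stage, you can fetch a page yourself and inspect the raw response. Here is a minimal Node.js sketch (Node 18+ for the built-in fetch; the URL and the shortened user-agent string are placeholders):

// Minimal sketch: fetch a page the way a crawler would, before any rendering.
// Run as an ES module (e.g. node fetch-page.mjs); the UA string is illustrative.
const res = await fetch('https://example.com/page', {
  headers: {
    'User-Agent':
      'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
  },
});
console.log(res.status, res.headers.get('content-type')); // headers come back first
const html = await res.text(); // the stored file data: raw, unrendered HTML
console.log(html.slice(0, 300));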

The request will most likely come from a mobile user-agent, since Google places so much emphasis on mobile-first indexing nowadays. You can use the URL Inspection Tool in Search Console to check how Google is crawling your website.

When you run it for a URL, check the Coverage information to see whether the page is still on desktop indexing or has moved to mobile-first indexing.

Most requests come from Mountain View, California in the United States, but Google also does some crawling from locations outside the US for locale-adaptive pages, that is, pages that change their content depending on where the visitor is.

Some websites detect user-agents to serve different content to search engines than to visitors. That means Google may see a JavaScript website quite differently from how a visitor sees it.
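
One legitimate variant of this is dynamic rendering: serving bots a pre-rendered HTML snapshot while regular visitors get the client-side app, with the content itself kept equivalent. A rough sketch, assuming an Express server (the responses below are placeholders):

// Rough dynamic-rendering sketch with Express (npm install express).
const express = require('express');
const app = express();

const BOT_UA = /Googlebot|bingbot/i; // simplistic bot check, for illustration

app.get('*', (req, res) => {
  const ua = req.headers['user-agent'] || '';
  if (BOT_UA.test(ua)) {
    // crawlers get a pre-rendered snapshot (placeholder markup)
    res.send('<html><body><h1>Full pre-rendered content</h1></body></html>');
  } else {
    // regular visitors get the client-side app shell
    res.send('<div id="root"></div><script src="/main.js"></script>');
  }
});

app.listen(3000);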

That's why it's essential to use Google's tools, such as the URL Inspection Tool in Google Search Console, the Mobile-Friendly Test, and the Rich Results Test, to properly diagnose JavaScript SEO issues.

These tools help you determine whether Google can access the page and its content, and they show you how Google actually sees the page.

Keep in mind that although Google labels the output of the crawl as "HTML" in the diagram, it is really fetching and storing all of the resources needed to build the page: HTML documents, JavaScript files, CSS files, XMLHttpRequests (XHR), API endpoints, and more.

2. Processing

A lot of systems are hidden behind the word "Processing" in the diagram. I'm going to cover several of them that are relevant to JavaScript.

Resources and Links

Google doesn't navigate from page to page the way a person would. Part of processing is checking the page for links to other pages and for the files needed to build the page. Links are extracted and added to Google's crawl queue, which is how Google prioritizes and schedules everything it crawls.

Google pulls the resources (CSS, JS, etc.) required to build the page from tags such as <link> and <script>. Links to other pages, however, need to be in a specific format for Google to treat them as links.

Internal and external links need to be <a> tags with an href attribute. There are plenty of ways to make navigation work for users with JavaScript that aren't search-friendly, as the examples below show.

Good:

<a href="/page">simple is good</a>
<a href="/page" onclick="goTo('page')">still okay</a>

Bad:

<a onclick="goTo('page')">nope, no href</a>
<a href="javascript:goTo('page')">nope, missing link</a>
<a href="javascript:void(0)">nope, missing link</a>
<span onclick="goTo('page')">not the right HTML element</span>
<option value="page">nope, wrong HTML element</option>
<a href="#">no link</a>
Buttons, ng-click: there are many more ways this can be done incorrectly.

Note that internal links added with JavaScript will not be picked up until after the page is rendered. That should be relatively quick and, in most cases, nothing to worry about.
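
For instance, a link injected at runtime like the hypothetical snippet below is invisible in the initial HTML response, but because it ends up as a proper <a> tag with an href, Google can still pick it up after rendering:

// Hypothetical runtime-injected link: absent from the raw HTML,
// discovered after rendering because it's a real <a href> element.
// Assumes the page has a <nav> element.
document.querySelector('nav')
  .insertAdjacentHTML('beforeend', '<a href="/new-page">New page</a>');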

Caching

Google aggressively caches every file it downloads: HTML pages, JavaScript files, CSS files, and so on. It will ignore your cache timings and fetch a fresh copy whenever it wants to.

Duplicate elimination

Duplicate content in the downloaded HTML may be eliminated or deprioritized before the pages are sent on to rendering. With app shell models, very little code and content appears in the initial HTML response, as in the sketch below.
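
A minimal example of what an app-shell response might look like; the file name is a placeholder, and everything the user actually sees is filled in later by JavaScript:

<!-- Hypothetical app-shell HTML: almost no unique content in the response -->
<!DOCTYPE html>
<html>
  <head>
    <title>My App</title>
  </head>
  <body>
    <div id="root"></div>
    <script src="/static/js/main.js"></script>
  </body>
</html>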

In fact, every page on the site may show the same code, and it may even be the same code that appears on multiple websites. This can sometimes cause pages to be treated as duplicates and kept from going to rendering right away.

Even worse, the wrong page, or even the wrong website, may show up in search results. This sorts itself out over time, but it can be a problem, especially with newer websites.

Most Restrictive Directives

Google will apply the most restrictive directive it finds between the raw HTML of a page and its rendered version. If JavaScript changes a directive so that it conflicts with the one in the HTML, Google obeys whichever is more restrictive.

A noindex in the HTML overrides an index set later by JavaScript, and when Google sees noindex in the HTML, it skips rendering the page altogether.
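
In other words, a setup like this hypothetical one will not work, because the JavaScript never gets the chance to run:

<!-- The raw HTML says noindex, so Google skips rendering this page... -->
<meta name="robots" content="noindex">

<script>
  // ...which means this attempt to flip it to "index" is never seen
  document
    .querySelector('meta[name="robots"]')
    .setAttribute('content', 'index, follow');
</script>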

3. Render queue

Every page now goes to the render queue. One of the biggest worries many SEOs have about JavaScript and two-stage indexing (HTML first, rendered page later) is that pages might sit unrendered for days or weeks.

When Google looked into this, they found that the median time for a page to reach the renderer was five seconds, and the slowest 10% took several minutes. In most cases, the gap between fetching the HTML and rendering the page shouldn't be a concern.

4. Renderer

The renderer is where Google loads a page to see what a visitor would see. This is where JavaScript is executed and where any changes it makes to the Document Object Model (DOM) are processed.

Google's Web Rendering Service is stateless (no cookies or local storage carry over between page loads), declines permission requests such as camera or location access, and flattens shadow DOM and light DOM content. Rendering pages at web scale is a complex, resource-hungry process.

Google takes plenty of shortcuts to get rendering done quickly. Ahrefs also renders pages at scale, processing more than 150 million webpages a day and checking them for things like JavaScript redirects.

5. Cached Resources

A big reason Google can render quickly is that it leans heavily on cached resources. Pages, files, API responses: everything it fetches gets cached.

Rather than re-downloading every resource on every page load, Google caches resources before sending them to the renderer and reuses those cached copies to speed things up.

This isn't entirely dependable, though. The rendering process can end up in impossible states where the indexed version of a page is built from a mix of current files and stale, previously cached ones.

Whenever you update your files, give them new names so Google doesn't combine the new page with remnants of the old files.
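
Content-hashed file names do this for you automatically. As a minimal sketch, a bundler like webpack can be configured to fingerprint its output (paths here are placeholders):

// webpack.config.js: hash-based file names, so any change to the code
// produces a new file name (e.g. main.3b2a1c9d.js) and busts stale caches
const path = require('path');

module.exports = {
  entry: './src/index.js',
  output: {
    filename: '[name].[contenthash].js',
    path: path.resolve(__dirname, 'dist'),
  },
};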

6. No Fixed Timeout

A lot of people assume the renderer only waits five seconds for a page to load. That's not true. As mentioned earlier, Google loads resources from its cache.

The renderer has no fixed timeout. It keeps working until there is no more network activity, and only then does it stop.

7. Crawl Queue

Google has to balance crawling your website against crawling every other website on the web, so it uses something called a crawl budget. Every website gets its own crawl budget.

Google prioritizes what it renders and when, and websites with lots of resource-heavy pages or constantly changing content tend to be slower to index.

Testing and Troubleshooting JavaScript

One issue with JavaScript sites, single-page applications in particular, is that they may only update parts of the DOM as you move around. Navigating from one page to another often leaves elements such as the title tag or canonical tag unchanged.

That's not a problem for Google, though. Its renderer is stateless, so nothing carries over between pages: Google loads each page fresh rather than clicking through the site.

It does worry many developers and SEOs, who click from page to page and see the canonical tags stay the same.

Make sure your site uses the History API to update the URL as its state changes, and use Google's testing tools to check how Google sees each page.
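
Here's a minimal History API sketch that updates the URL and the tags testers expect to change; renderPage is a hypothetical stand-in for your framework's rendering logic:

// Minimal History API sketch: keep the URL and metadata in sync with state
function goTo(url, title) {
  history.pushState({}, '', url); // change the address bar without a reload
  document.title = title;

  // keep the canonical tag from going stale between views
  const canonical = document.querySelector('link[rel="canonical"]');
  if (canonical) canonical.href = location.origin + url;

  renderPage(url);
}

function renderPage(url) {
  // hypothetical: fetch and render the content for this URL
}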

View-source vs. Inspect

Have you noticed that when you right-click on a webpage, you get options like "View page source" and "Inspect"? View-source shows the same thing a GET request would return.

That's the raw, unprocessed HTML of your page. Inspect, on the other hand, shows the DOM after all changes have been applied, which is closer to the version of the page Googlebot sees after rendering.

It's the more current, up-to-date version of the page. When working with JavaScript, use inspect rather than view-source.

Google Cache

Don't rely on the Google cache to tell you what Googlebot sees; it doesn't produce consistent results.

You might see the initial HTML in it, or you might see the rendered HTML. The cache was never built as a debugging tool; it was built to show content when a website breaks or a page is unavailable.

Google Testing Tools

Google provides several effective tools for troubleshooting your website's JavaScript, including the well-known URL Inspection Tool in Google Search Console.

These tools don't show you exactly what Google's crawler sees, but they work well for testing and debugging. Keep in mind that they fetch resources live rather than using cached versions of files the way the renderer does.

The tools also present a rendered screenshot of your page, which is something Google itself doesn't actually produce; its rendering engine processes the page's structure and content without painting the pixels.

What you can do with these tools is check whether content has been loaded into the DOM. They're also handy for finding blocked resources and error messages, which helps with debugging.
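
A quick sanity check you can also run yourself in the browser console (the phrase is a placeholder):

// Did the rendered DOM end up containing the text? (run in the DevTools console)
console.log(document.body.innerText.includes('your target phrase'));

// Compare against the raw HTML response, i.e. what view-source shows
const raw = await (await fetch(location.href)).text();
console.log(raw.includes('your target phrase'));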

Making JavaScript Websites SEO-Friendly

If you know common SEO approaches and methods, you shouldn't have much trouble with JavaScript SEO; the differences are fairly minor. Let's look at some of the key areas.

On-Page SEO

The same rules that apply to non-JavaScript pages are still applicable to pages that have JavaScript.

That covers things like titles, content, meta descriptions, meta robots tags, alt attributes, and so on. Two issues that come up often on JavaScript sites: titles and descriptions get reused across pages, and alt attributes on images are rarely set.

URLs

When the content changes, the URL should change too. As mentioned earlier, serving the same URL (or the same file names) for different content causes confusion and duplication, and it can keep your pages from being processed properly.

JavaScript frameworks come with a router that maps to clean URLs. Don't use hashes (#) for routing, a problem most common in early versions of Angular: the fragment after the hash is never sent to the server, so those URLs all look like the same page, as in the example below.
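
For example (hypothetical URLs):

Bad: domain.com/#/products (the part after the # never reaches the server)
Good: domain.com/products (a clean URL the server and the router both understand)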

Duplicate content

JavaScript can lead to several different URLs serving the same content, which creates duplicate-content issues. These duplicates can be caused by capitalization, IDs, parameters with IDs, and so on. All of these may exist:

domain.com/Abc
domain.com/abc
domain.com/123
domain.com/?id=123

The solution is simple: pick the version you want indexed and set canonical tags.
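
If domain.com/abc is the preferred version from the example above, each variant would point to it with the same canonical tag:

<link rel="canonical" href="https://domain.com/abc">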

SEO “plugin” type options

In JavaScript frameworks, these are usually called modules. You'll find versions for most of the popular frameworks, such as React, Vue, and Angular, by searching for the framework name plus the module name, e.g., "React Helmet."

Meta tags, Helmet, and Head are all popular modules with similar functionality: they let you set many of the common tags needed for SEO from within your JavaScript code.
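
A minimal React Helmet sketch; the component name and the tag values are placeholders:

import React from 'react';
import { Helmet } from 'react-helmet';

function ProductPage() {
  return (
    <div>
      <Helmet>
        <title>Example Product | My Store</title>
        <meta name="description" content="A short, unique description." />
        <link rel="canonical" href="https://domain.com/products/example" />
      </Helmet>
      <h1>Example Product</h1>
    </div>
  );
}

export default ProductPage;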

Conclusion

JavaScript is a great way to make your website more interactive and welcoming to visitors, and building SEO-friendly pages with it is fairly straightforward. It just takes a bit of extra knowledge to optimize JavaScript pages for search engines properly. If you're well-versed in standard SEO, you won't find the new procedures and plans difficult to pick up.

About the Author: Brian Richards

See Brian's Amazon Author Central profile at https://amazon.com/author/brianrichards
