If you’re building a website or web app with JavaScript, you should take a few basic steps to make sure your content is discoverable via search. Let’s look at a few SEO techniques that help users find your content.
All of your pages should have a descriptive, helpful title that says in a few words what the page is about, along with a meta description that states specifically what the page contains. Avoid generic titles.
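For example, the head of a page might look something like this (the page name and description below are just placeholders, not taken from any real site):

```html
<!-- Example <head> for a hypothetical recipe page; the title and
     description are placeholders -- write ones specific to each page. -->
<head>
  <title>Classic Margherita Pizza Recipe - Example Cooking Site</title>
  <meta name="description"
        content="A step-by-step margherita pizza recipe with a homemade crust, ready in about 45 minutes.">
</head>
```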
You can check your pages for those tags by opening the browser’s developer tools (right-click and choose Inspect) and searching the markup for “//title” and “//meta”. If you do not see all of your content in the markup, you are probably using JavaScript to render your page in the browser. This is called client-side rendering, and it is not a problem in itself.
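A client-side rendered page often shows little more than an empty container and a script reference in its initial markup, along these lines (the file names are hypothetical):

```html
<!-- Typical initial markup of a client-side rendered app: the visible
     content is filled in later by JavaScript. File names are made up. -->
<body>
  <div id="app"></div>
  <script src="/static/app.bundle.js"></script>
</body>
```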
Rendering is the process of populating templates with data from APIs or databases. This can happen either on the server side or on the client side. When it happens on the server, crawlers as well as your users get all the content as HTML markup immediately. In single-page apps, the server often sends the templates and JavaScript to the client, and the JavaScript then fetches the data from the back end, populating the templates as the data arrives.
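Here is a minimal sketch of that client-side flow, assuming a hypothetical /api/products endpoint that returns a JSON array of items with a name field:

```html
<!-- Minimal client-side rendering sketch: the server sends this template,
     and JavaScript fills it with data from a (hypothetical) JSON API. -->
<ul id="product-list"></ul>
<script>
  fetch('/api/products')                 // the endpoint name is an assumption
    .then(response => response.json())
    .then(products => {
      const list = document.getElementById('product-list');
      products.forEach(product => {
        const item = document.createElement('li');
        item.textContent = product.name; // populate the template as data arrives
        list.appendChild(item);
      });
    });
</script>
```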
For single-page apps, another important detail is to let Googlebot crawl the pages of your website by linking between your pages properly. Make sure to include useful link anchor text and use the HTML anchor tag with the destination URL of the link in the href attribute, as in the example below. Do not rely on other HTML elements, such as div or span, or on JavaScript event handlers to act as links. Not only will crawlers have trouble finding and following these pseudo-links, they also cause issues with assistive technology.
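For instance, prefer a real anchor element over a clickable div or span (the URL and the goTo function below are placeholders):

```html
<!-- Crawlable link: a real anchor with a destination URL and useful anchor text. -->
<a href="/products/espresso-machines">Compare espresso machines</a>

<!-- Avoid: pseudo-links like these have no href for crawlers to follow
     and are invisible to assistive technology. goTo() is a placeholder. -->
<div onclick="goTo('/products/espresso-machines')">Compare espresso machines</div>
<span class="link">Compare espresso machines</span>
```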
Links are an essential feature of the web and help search engines and users find and understand the relationships between pages. If you are using JavaScript to enhance the transition between individual pages, use the History API with normal URLs instead of hash-based routing. Using hashes, also called fragment identifiers, to distinguish between different pages is a hack that crawlers tend to ignore.
Using the JavaScript History API with normal URLs, on the other hand, provides a clean solution for the same purpose. Keep in mind that Googlebot visits your pages individually, so it won’t rely on a service worker or on History API navigation to reach them; every URL has to work when loaded directly. Test what a user would see by opening your URLs in a new incognito window: the page should load with an HTTP 200 status code, and all the expected content should be visible.
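A rough sketch of History API routing with normal URLs, no hash fragments, might look like this; the renderRoute function is a placeholder for your app’s own view logic:

```html
<!-- Sketch of History API routing with normal URLs (no #fragments).
     renderRoute() stands in for whatever view logic your app uses. -->
<script>
  function renderRoute(path) {
    // Hypothetical: swap the visible view based on the path.
    document.getElementById('app').textContent = 'Now showing ' + path;
  }

  // Intercept clicks on internal links and update the URL without a full reload.
  document.addEventListener('click', event => {
    const link = event.target.closest('a');
    if (link && link.origin === location.origin) {
      event.preventDefault();
      history.pushState({}, '', link.pathname); // e.g. /recipes/pizza, not #/recipes/pizza
      renderRoute(link.pathname);
    }
  });

  // Handle the browser's back and forward buttons.
  window.addEventListener('popstate', () => renderRoute(location.pathname));
</script>
```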
Using semantic HTML markup properly helps users better understand your content and navigate it more quickly. Assistive technologies like screen readers, as well as crawlers, also rely on the semantics of your content. Use headings, sections, and paragraphs to outline the structure of your content, and use HTML image and video tags with alt text and captions to add visuals. Following these steps will help Googlebot understand your content better and make it more discoverable in Google Search.
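As an illustration only (the article content, file paths, and captions file below are invented):

```html
<!-- Illustrative semantic structure: headings, sections, paragraphs,
     and media elements with alt text and captions. All content is hypothetical. -->
<article>
  <h1>How to Brew Pour-Over Coffee</h1>
  <section>
    <h2>Equipment</h2>
    <p>You need a dripper, paper filters, and freshly ground beans.</p>
    <img src="/images/dripper.jpg" alt="Ceramic pour-over dripper sitting on a glass carafe">
  </section>
  <section>
    <h2>Technique</h2>
    <p>Pour slowly in circles, keeping the water just off the boil.</p>
    <video src="/videos/pour-over.mp4" controls>
      <track kind="captions" src="/captions/pour-over.en.vtt" srclang="en" label="English">
    </video>
  </section>
</article>
```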