However, this technology has drawbacks: it adds a new layer of complexity when it comes to ensuring proper crawling, indexing, and ranking on search engines.
In the first case, it is enough to pinpoint the areas where Googlebot still shows limitations and implement the appropriate measures. In the second case, the required interventions may be more substantial, because the way content gets indexed has to be reconsidered.
- Lazy loading: Lazy loading is a technique that defers the loading of content, especially images, in order to improve a website's speed. However, images loaded this way cannot always be indexed by search engines; to make sure Google can index lazy-loaded content, it is possible to:
- Add the <noscript> tag to each image
- Use structured data (schema.org/image)
- Use the Intersection Observer API (a minimal sketch follows this list)
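
As an illustration of the last option, here is a minimal sketch using the Intersection Observer API; the `lazy` class, the `data-src` attribute, and the `<noscript>` fallback mentioned in the comments are assumptions made for this example, not a prescribed markup.

```ts
// A minimal sketch of the Intersection Observer approach, assuming images are
// marked up as <img class="lazy" data-src="..."> with a lightweight placeholder
// in src and a <noscript> copy of the tag for clients without JavaScript.
const lazyImages = document.querySelectorAll<HTMLImageElement>('img.lazy');

if ('IntersectionObserver' in window) {
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target as HTMLImageElement;
      // Swap in the real source only when the image approaches the viewport.
      img.src = img.dataset.src ?? img.src;
      img.classList.remove('lazy');
      obs.unobserve(img);
    }
  });
  lazyImages.forEach((img) => observer.observe(img));
} else {
  // Fallback: load everything eagerly so no content stays hidden from crawlers
  // or older browsers that lack IntersectionObserver.
  lazyImages.forEach((img) => {
    img.src = img.dataset.src ?? img.src;
  });
}
```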
- Status code
- Meta robots
- Title tag
- Text-based content (main content)
- Structured data (a spot-check sketch for these elements follows the list)
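
As a hedged sketch, the snippet below spot-checks some of these elements in the HTML returned for a page; the URL is an assumption, and a real audit should inspect the rendered DOM (for example via headless Chrome or Google's own testing tools) rather than only the raw server response.

```ts
// Hedged sketch: spot-check status code, meta robots, title tag, and JSON-LD
// structured data in the HTML a crawler receives. URL is hypothetical.
const url = 'https://www.example.com/'; // assumption: page to audit

async function auditPage(target: string): Promise<void> {
  const response = await fetch(target);
  const html = await response.text();

  // Status code: anything other than 200 deserves a closer look.
  console.log('Status code:', response.status);

  // Meta robots: watch out for noindex/nofollow directives.
  const metaRobots = html.match(/<meta[^>]+name=["']robots["'][^>]*>/i);
  console.log('Meta robots:', metaRobots?.[0] ?? 'not found');

  // Title tag: should exist and be populated.
  const title = html.match(/<title>([^<]*)<\/title>/i);
  console.log('Title tag:', title?.[1] ?? 'not found');

  // Structured data: count the JSON-LD blocks, if any.
  const jsonLd = html.match(/<script[^>]+application\/ld\+json/gi);
  console.log('JSON-LD blocks:', jsonLd?.length ?? 0);
}

auditPage(url).catch(console.error);
```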
The latest developments: Googlebot and Bingbot are now evergreen
Recently, both Google and Bing announced an update to the browser used for rendering. In Google's case, it moved from Chrome 41 (2015) to Chrome 74 (2019) in May 2019. Representatives of both search engines stated that, from now on, the rendering engine will be updated regularly so as to stay aligned with the latest version of Chrome.
Not really. Before taking the engines' rendering capabilities for granted, it is important to carry out the appropriate checks and verify case by case whether any limitation exists. In particular, it is important to remember a fundamental concept tied to the bot's performance: the crawl budget or, more precisely, the render budget, meaning the set of resources Googlebot allocates to crawl a specific website.
The various types of rendering
To avoid this issue, it is necessary to make sure that the cost of rendering does not fall entirely on the client, in this case Googlebot. To achieve this, different solutions can be implemented, such as:
- Dynamic rendering: this approach consists in serving bots – identified through the User-Agent header – pre-rendered content, while users (browsers) receive the "normal" content to be rendered client-side.
This marks a real shift in Google's policy, because it effectively amounts to cloaking, a practice that until recently seemed certain to incur a penalty.
Among the recommended open-source solutions to implement dynamic rendering are Rendertron and Puppeteer, while the most widely used third-party tools are Prerender.io, SEO4Ajax, SnapSearch, and Brombone. A minimal middleware sketch based on this approach follows.
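
As a hedged illustration of the approach, the Express middleware below forwards requests from known bot user agents to a Rendertron instance and leaves everyone else to the client-side app; the Rendertron URL, the bot list, and the static directory are assumptions for this sketch, not a reference configuration.

```ts
// A minimal sketch of dynamic rendering with Express and Rendertron, assuming
// a Rendertron instance is reachable at RENDERTRON_URL; the user-agent list,
// URLs, and static directory are illustrative assumptions.
import express from 'express';

const RENDERTRON_URL = 'https://my-rendertron.example.com/render'; // assumption
const BOT_AGENTS = /googlebot|bingbot|yandex|baiduspider|duckduckbot/i;

const app = express();

app.use(async (req, res, next) => {
  const userAgent = req.headers['user-agent'] ?? '';
  if (!BOT_AGENTS.test(userAgent)) {
    return next(); // regular users receive the normal client-side app
  }
  try {
    // Bots receive HTML pre-rendered by Rendertron (headless Chrome).
    const pageUrl = `${req.protocol}://${req.get('host')}${req.originalUrl}`;
    const rendered = await fetch(`${RENDERTRON_URL}/${encodeURIComponent(pageUrl)}`);
    res.status(rendered.status).send(await rendered.text());
  } catch (err) {
    next(err);
  }
});

// Everyone else gets the client-side rendered single-page app.
app.use(express.static('dist'));

app.listen(3000);
```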
Even though dynamic rendering is certainly the easiest alternative, Google has recently stated that it should be considered only a workaround, that is, a temporary solution to be used until one's web app is updated to support server-side or hybrid rendering.
While there are solutions to the possible issues linked to the two waves of indexing, it is also true that some of these interventions are costly in terms of money and/or development effort. Are they always worth it?