Last month, I wrote about what technical SEO is and why it’s important. If you missed it, you can find it here, but do come back to read this next instalment!
This month, we are looking at the three most common issues that crop up in our technical SEO audits. We’ll look at what these issues mean, why it’s important to address them, and give a brief overview of how to fix them.
Inefficient caching of static assets
What does this impact?
When a page requires static assets, the browser has to fetch them over the network. By specifying a cache header, you can ensure that repeat visitors do not need to redownload these resource files, because the browser has already cached them (unless the user clears their cache).
Including a cache header to specify how long a browser can keep static assets will only affect repeat visitors, but remember that repeat visitors are often the most engaged. And of course, where your product or service has a longer consideration phase (for example, a big purchase such as a greenhouse or a heat pump), the user may require several visits to your site in order to complete the purchase or enquire about a service. Therefore, ensuring that the page loads quickly on any subsequent visits is likely to be beneficial.
How do you fix inefficient cache policy issues?
It isn’t difficult to add a cache policy header to your pages (it’s literally a line of code added to the resource’s response header), but you may wish to ask your developers to look at individual resources to determine the correct cache header for each one. This is because some static assets may need to be fetched fresh each time the page loads. Your developer is the best person to identify which assets can be cached and which should not be.
To aid developers in this endeavour, we have put together a decision tree:
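As a rough illustration of the kind of logic such a decision tree encodes, here is a small sketch in Python. The asset categories and max-age values below are assumptions for demonstration, not recommendations for every site – your developer should choose values that suit how often each asset actually changes.

```python
# Illustrative sketch only: the asset categories and max-age values below
# are assumptions for demonstration, not recommendations for every site.

def cache_header(path: str, fingerprinted: bool = False) -> str:
    """Pick a Cache-Control value for a static asset.

    `fingerprinted` means the filename embeds a content hash
    (e.g. app.3f2a1b.js), so the file can safely be cached long-term.
    """
    if fingerprinted:
        # A hashed filename changes whenever its content changes,
        # so a one-year immutable cache is safe.
        return "public, max-age=31536000, immutable"
    if path.endswith((".css", ".js")):
        # Un-hashed code assets: cache briefly and revalidate.
        return "public, max-age=3600, must-revalidate"
    if path.endswith((".png", ".jpg", ".jpeg", ".svg", ".webp", ".woff2")):
        # Images and fonts tend to change rarely: cache for a week.
        return "public, max-age=604800"
    # Anything unclassified: make the browser check with the server each time.
    return "no-cache"
```

The key distinction is between assets whose filenames change when their content changes (safe to cache for a long time) and everything else (cache cautiously, or not at all).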
Render-blocking resources
What does this impact?
Render-blocking resources can have a significant impact on site speed and, in particular, on Core Web Vital metrics (which are now included in the ranking algorithm), and as such can have a direct impact on rankings. Unlike the cache header issue explained above, this issue will impact all visitors to the site – both new and returning.
How do you fix issues with render-blocking resources?
The best method for fixing this issue depends on what is causing the problem – if it’s a CSS file that is blocking the initial render, your developer may be able to include the CSS code directly on the page, rather than calling it from a separate stylesheet. Anything that is non-critical for the first render of the page can be deferred and, as a last resort, anything that isn’t actually used can be removed. However, this step should be taken carefully, with consideration of which other pages a resource may be called from and whether the code is needed elsewhere – the last thing anyone wants is to fix one issue while creating new ones.
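As a rough sketch of how such resources can be spotted, the snippet below uses Python’s standard html.parser module to flag external scripts in the head without defer or async, along with external stylesheets, as potentially render-blocking. This is a simplified heuristic for illustration, not a substitute for a proper audit tool.

```python
from html.parser import HTMLParser

class RenderBlockingFinder(HTMLParser):
    """Heuristic sketch: flag <script src=...> tags inside <head> that lack
    defer/async, and external stylesheets, as potentially render-blocking."""

    def __init__(self):
        super().__init__()
        self.in_head = False
        self.blocking = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "head":
            self.in_head = True
        elif tag == "script" and self.in_head and "src" in attrs:
            # A plain external script in <head> halts parsing while it
            # downloads and executes, unless defer/async is present.
            if "defer" not in attrs and "async" not in attrs:
                self.blocking.append(attrs["src"])
        elif tag == "link" and self.in_head and attrs.get("rel") == "stylesheet":
            # External stylesheets block rendering until downloaded.
            self.blocking.append(attrs.get("href", "?"))

    def handle_endtag(self, tag):
        if tag == "head":
            self.in_head = False

def find_render_blocking(html: str) -> list:
    finder = RenderBlockingFinder()
    finder.feed(html)
    return finder.blocking
```

Anything the sketch flags is a candidate for inlining (small critical CSS), deferring, or removal – subject to the caveats above about shared resources.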
For these reasons, we generally prefer the developer who worked on the site initially to identify how to proceed with this, as they are in the best position to understand which resources are shared by more than one page and what can safely be removed without causing further problems for you.
Broken internal links
Internal links are how users find other pages on a website. Search engines use them too when crawling websites (it’s how they crawl – they follow the links). If a link is broken, the user (or search engine) will usually be presented with a 404 ‘Page Not Found’ error (you get extra UX points if you have a custom 404 page with links back to your most important pages). One or two broken links are unlikely to make much difference to rankings either way, but the more you have, the more likely they are to hold your site back from good rankings.
What does this impact?
This impacts usability – if users are clicking links and landing on your 404 page, instead of whatever it was they were after, they will be left with a negative feeling about your website. If the search engine robots encounter these issues, then it can lead to reduced crawling and de-indexing of content.
How do you fix broken links?
Broken links are usually caused by one of two things (and sometimes both combined):
– A typo in the URL you intend to link to
– The linked-to URL has been deleted or moved to a new URL
Once you know why the link is broken, you can fix it. If it’s a typo, you can simply correct it. If the URL that was linked to has since been deleted, you will need to figure out a couple of things – was the page deleted because there is now a better one covering the subject? If so, update the link to point at the better version. If the page was deleted and no new version exists, then you should remove the hyperlink altogether and not link from that place on the site at all.
If a page has moved, then it is worthwhile adding a 301 redirect (if the change is permanent) or a 302 redirect (if the change is temporary), so that anyone who has bookmarked the page can still reach a version of it at the new URL. This also helps the search engines to understand that the page has moved. Ideally, however, you don’t want internal links passing through a redirect, as this can slow the site down. So, whilst a redirect is a good ‘sticking plaster’ as a first course of action, it is best to go back and update the link as well, so that it goes directly to the new page instead of passing through a redirect. Sometimes, I will create redirects first, so that users and search engines can still reach the content, and go back later to update the links. This allows the search engines to find the redirects and understand that the URL has moved (the old URL will drop out of the index and the new URL will replace it), which can be beneficial.
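As a sketch of that ‘redirect first, then update links’ approach, the snippet below resolves a link through a hypothetical redirect map, so that each internal link can be updated to point straight at its final destination. The map and URLs are illustrative assumptions, not real site data.

```python
# Hypothetical sketch: `redirects` maps an old URL to its new location,
# as a 301/302 rule would. Resolving a link through the map tells you the
# final URL that internal links should be updated to point at.

def resolve_link(url: str, redirects: dict, max_hops: int = 10) -> str:
    """Follow the redirect map until the URL no longer redirects."""
    hops = 0
    while url in redirects and hops < max_hops:  # max_hops guards against loops
        url = redirects[url]
        hops += 1
    return url

# Illustrative redirect map, including a chained redirect.
redirects = {
    "/old-page": "/new-page",
    "/new-page": "/final-page",
}
```

Note that chained redirects (old page redirecting to a page that itself redirects) are resolved all the way through, which is exactly what you want when updating links: one hop to the final URL, not a chain.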
I have focused mainly on broken links that send users to a 404 page – there are other causes of broken links which include:
- 403 ‘Forbidden’ – this means the user does not have permission to access the page – it could be content behind a paywall or a user login area, for example. Alternatively, there may be a server configuration issue – if we’ve been able to crawl most of your site but some pages come back as Forbidden, it could be the server blocking crawl requests from our IP address. This has never happened to me before, but it’s a possibility!
- 5XX Server Errors – these originate from the server and require the developer or a server admin to resolve. We will always help to identify the problem, but fixing it usually depends on many variables (the server and the CMS you use), so it is usually best addressed by your developer.
- Timeout errors – these occur when the crawling tool takes too long to gather data from a page. This usually indicates a poorly configured or underpowered server, or a page that loads slowly due to large or complex database queries. So, whilst this isn’t a ‘broken link’ as such, it does warrant further investigation on our end to identify why the page loads slowly enough to cause the crawling tool to time out.
With my retainer clients, I tend to crawl their websites regularly to identify any broken links to be fixed, as these are relatively common and can usually be fixed very quickly.
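As a rough illustration, the way crawl results for these error types might be triaged can be sketched like this. The suggested actions are my own illustrative wording, not a standard taxonomy.

```python
# Illustrative triage of crawl-result status codes. The suggested actions
# are assumptions for demonstration, not a fixed rulebook.

def triage(status: int) -> str:
    """Map an HTTP status code from a crawl to a suggested next step."""
    if status == 404:
        return "broken link: fix the typo, update the target, or remove the link"
    if status == 403:
        return "check permissions, or whether the server is blocking the crawler"
    if 500 <= status <= 599:
        return "server error: refer to the developer or server admin"
    if 300 <= status <= 399:
        return "redirect: update the link to point at the final URL"
    return "no action needed"
```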
These are the three most common issues we’ve encountered whilst auditing websites for technical SEO. When performing a technical audit, we will typically check for over 300 different potential issues, and will report on any we find on your site. We will always provide as much information to the developers as we possibly can to help them fix the issues raised.
Is it time you had a technical SEO audit conducted on your site? Get in touch with us now!