Exquisitetouche - Technical SEO Audit
Updated: Jul 26, 2022
This report documents the technical SEO audit of the website "exquisitetouche", covering the issues found, the recommendations made, and why each one matters.
The audited website (https://exquisitetouche.com) is a personal e-commerce site I built from scratch on WordPress. I haven’t sold anything through the site yet (only via social media handles), but I’m using it to practice and learn Tech SEO.
Primary goal: practical, hands-on application of Tech SEO.
Secondary goal: grow traffic, improve conversion rates, and make sales.
Tools used for analysis: Screaming Frog, Sitebulb, SEMrush, Google Search Console.
Observation: All of the website's domain variants resolve to the preferred secure version, "https://exquisitetouche.com/".
Importance: It is vital that websites are served over HTTPS so that their content is encrypted in transit and users' data stays private. HTTPS is also a lightweight ranking signal for Google.
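As an illustration, assuming the site runs on Apache (which most WordPress installs do), the redirect behaviour described above can be enforced with mod_rewrite rules like these. This is a hypothetical sketch, not the site's actual configuration:

```apache
# Hypothetical .htaccess rules forcing every variant
# (http://, www.) to https://exquisitetouche.com/ with one 301 redirect
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} ^www\. [NC]
RewriteRule ^ https://exquisitetouche.com%{REQUEST_URI} [L,R=301]
```

A single 301 hop like this is preferable to redirect chains, which waste crawl budget and dilute the signal.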
Observation: The XML sitemap is referenced in the robots.txt and lists all indexable URLs. None of the important pages are blocked in robots.txt, and the file contains no errors or warnings. Google encountered no host or availability issues while crawling. All links to pages are implemented with an href attribute, can be followed, and none are broken.
Importance: A website's content needs to be crawlable in order to be indexed and rank on search engine results pages. To ensure bots can find and crawl this content, include it in your XML sitemap and reference that file in your robots.txt. Also make sure the links to these pages can be crawled and followed, and that robots.txt does not disallow crawling of important pages and resources.
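For illustration, a minimal robots.txt matching the setup described above might look like the following. The sitemap filename is an assumption; WordPress core and SEO plugins use different defaults:

```text
# Hypothetical robots.txt: nothing important is disallowed,
# and the sitemap is referenced so crawlers can find it
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://exquisitetouche.com/sitemap.xml
```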
Check out my post on crawling and crawlability audits.
Observation: All the important pages are indexable, except the "My Account" page, which has a meta robots "noindex, follow" directive. This means that search engines will not index this page but will still follow the links on it.
Importance: Every important page on a website should be indexable so that users can find it in search results. For this website, it is fine to noindex the "My Account" page because it contains confidential customer account information and does not need to appear in search results.
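In HTML, the directive described above is a single meta tag in the page's head:

```html
<!-- On the "My Account" page: keep it out of the index,
     but let crawlers follow the links on it -->
<meta name="robots" content="noindex, follow">
```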
Observation: While analyzing the Screaming Frog report, I observed that the URLs have self-referencing canonical tags, which is good.
Importance: Canonical tags help Google understand which page, among a set of duplicates, is the authoritative version to display, thereby preventing duplicate-content issues. E.g., the picture below shows a self-referencing canonical tag for the home page.
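As plain HTML, a self-referencing canonical tag for the home page looks like this:

```html
<!-- In the <head> of https://exquisitetouche.com/ -->
<link rel="canonical" href="https://exquisitetouche.com/">
```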
Observation: All important pages are internally linked through the site navigation in the header, footer, in-content links, and breadcrumbs, with relevant anchor texts.
The homepage links to the category pages, and the category pages link to the product pages, e.g. Homepage > Category > Product.
Importance: Internally linking your pages ensures that search engine bots can find them while crawling your website. It is important that those links are implemented with an href attribute, can be followed, and use relevant anchor text that clearly describes the destination page.
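To illustrate the difference (with a hypothetical category URL, not one from the audited site): a crawlable link is a plain anchor element with an href attribute and descriptive anchor text, while a JavaScript-only click handler may not be followed by crawlers:

```html
<!-- Crawlable: a plain <a> with an href and descriptive anchor text -->
<a href="/category/dresses/">Shop our dresses</a>

<!-- Not reliably crawlable: no href, navigation handled only by JavaScript -->
<span onclick="location.href='/category/dresses/'">Shop our dresses</span>
```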
Crawl depth for the most important pages, such as the product and category pages, is between 1 and 2. This means they are just one or two clicks from the homepage.
This is a crawl map showing the locations of the pages as the crawler (Sitebulb) encountered them, alongside their click depth. The big green circle represents the homepage with a crawl depth of 0; the other colors represent URLs at different crawl depths (1 to 3).
Importance: Important pages should be no more than 3 clicks from the homepage: the closer they are, the better their chances of being found, crawled, and indexed by search engines. For a good user experience, too, make sure users can reach any page on your website within 3 or 4 clicks of the homepage.
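Crawl depth is just the shortest click path from the homepage, so it can be computed with a breadth-first search over the internal-link graph. Here is a minimal Python sketch using a made-up site structure, not the audited site's real URLs:

```python
from collections import deque

def click_depths(links, start="/"):
    """Breadth-first search over an internal-link graph.

    links: dict mapping each URL to the list of URLs it links to.
    Returns a dict of URL -> minimum number of clicks from `start`.
    """
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:          # first visit = shortest path
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

# Hypothetical site structure: homepage -> categories -> products
site = {
    "/": ["/category/dresses", "/category/accessories"],
    "/category/dresses": ["/product/ankara-gown"],
    "/category/accessories": ["/product/bead-necklace"],
}
print(click_depths(site))  # every page here is within 2 clicks of the homepage
```

This mirrors what Sitebulb's crawl map visualises: depth 0 for the homepage, depth 1 for categories, depth 2 for products.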
Technical SEO is all about optimizing a website so that search engine crawlers can access, crawl, index, and rank its content.