7 Technical Issues that Kill SEO and How to Spot Them

Without strong technical optimization, the rest of your search engine optimization (SEO) is worthless. Unless search bots can crawl and index your site, they can’t rank your content in search results. 

Understanding technical issues in SEO goes far beyond running a site audit in Semrush or doing a crawl in Screaming Frog’s SEO Spider. While those tools are useful in auditing a site’s technical SEO, you need to dig deeper to uncover issues that truly impact a site’s ability to be crawled and indexed. As you’re using these tools, and the many others that can help diagnose technical SEO issues, audit your site for these seven technical issues that can kill your SEO efforts.

Blocked Crawl Path
1. Robots.txt File

The robots.txt file is the ultimate gateway. It’s the first place ethical bots go when they crawl your site. A simple text file, the robots.txt file contains allow and disallow commands to tell search engines what to crawl and what to ignore. A mistake in the robots.txt file can accidentally disallow an entire site, telling bots not to crawl anything. To find a full-site disallow, review the robots.txt file — found at https://www.yoursite.com/robots.txt — and look for the following:

User-agent: *

Disallow: /

To find other errors in the robots.txt file, use a checker like Merkle's robots.txt validator. When you enter a URL, the validator displays the robots.txt file and tells you whether the URL you entered is allowed or disallowed.
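Not every Disallow line is a problem, of course. A scoped disallow that blocks a single directory is usually intentional; a hypothetical example (the path is invented for illustration) looks like this:

User-agent: *

Disallow: /internal-search/

The danger sign is the lone slash after Disallow, which tells bots to skip the entire site.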

Blocked Indexation
2. Meta Robots Noindex Tag

Found in the head of a page’s rendered HTML, this meta tag commands search engines not to index the page it’s found on. It looks like this, in practice:

<meta name="robots" content="noindex" />

The only ways to find pages blocked by the meta robots noindex tag are to run a crawl of the site using a crawler like Screaming Frog’s SEO Spider or to go to Google Search Console’s Pages report and click on “Excluded by ‘noindex’ tag.” Note that Google Search Console will only show you a sample of 1,000 affected pages, so if you have a large site, you may not be able to analyze the issue completely that way.

3. Canonical Tag

Considered by Google to be a suggestion rather than a directive, canonical tags identify which page you want indexed among a set of duplicate pages. By setting a canonical tag on each page, you can ask Google not to index duplicate versions of a URL, such as those created by tracking parameters. Canonical tags are found in the head of a page’s rendered HTML code and look like this:

<link rel="canonical" href="https://www.yoursite.com/sample-url" />

However, canonical tags sometimes go wrong and ask Google not to index pages we actually do want in the index. The best way to identify mis-canonicalized URLs is to run a crawl of the site using a crawler like Screaming Frog’s SEO Spider and compare each page’s canonical URL to the URL crawled. You can also review Google Search Console’s “Alternate page with proper canonical tag” and “Duplicate without user-selected canonical” reports.
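As a hypothetical illustration, imagine a category page at https://www.yoursite.com/category/shirts whose template mistakenly declares the homepage as its canonical:

<link rel="canonical" href="https://www.yoursite.com/" />

Because the canonical URL doesn’t match the crawled URL, Google is being asked to treat the category page as a duplicate of the homepage, and the category page may never be indexed or ranked.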

Missing Links
4. Malformed or Missing Anchor Tags

There are many ways to code a link that your browser can follow, but only one way that Google says it will follow for sure: an anchor tag with an href attribute containing a qualified URL. For example, a crawlable link looks like this:

<a href="https://www.yoursite.com/sample-url">This is anchor text</a>

Some examples of link formats that Google won’t or can’t crawl are shown below.
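The URLs and function names in these illustrations are hypothetical; each one breaks the anchor-tag-and-href rule in a different way:

<span href="https://www.yoursite.com/sample-url">No anchor tag, so the href is ignored</span>

<a onclick="goToPage('https://www.yoursite.com/sample-url')">No href attribute, just a JavaScript click handler</a>

<a routerlink="/sample-url">A framework routing attribute instead of an href</a>

<a href="javascript:goToPage('sample-url')">An href without a qualified URL</a>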

You’ll need to view the source code of the page to identify whether links are coded with anchor tags, href attributes, and qualified URLs.

5. Pages Orphaned by Bad Links

Pages that you want to rank need crawlable links pointing to them. Just including a URL in the XML sitemap might get it indexed, but without a crawlable link on the site pointing to that page, it’s far less likely to rank. And as shown above, that link needs to be an anchor tag with an href attribute and a qualified URL. When bad links are the only way to reach a page of content, the page looks orphaned to Google because it can’t follow the links to it.

For example, pagination on an ecommerce category page or a blog is frequently uncrawlable due to bad links, meta robots noindex tags, canonical tags, robots.txt disallow commands, or, in rare cases, all of the above. The combination of missing crawlable links and signals not to crawl or index content means that product or blog pages linked only from paginated pages can look as if they’re orphaned. To diagnose this issue, analyze the links in the pagination first, then look on page 2 of the paginated set for the other signals mentioned above.
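As a hypothetical sketch, compare a crawlable page 2 link with a script-only version that Googlebot can’t follow (loadPage is an imaginary function):

<a href="https://www.yoursite.com/category/shirts?page=2">2</a>

<button onclick="loadPage(2)">2</button>

If the second pattern is the only path to deeper pages, everything linked solely from those pages can look orphaned to Google.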

Mobile SEO Issues

On July 5, 2024, Google will complete the mobile-first indexing plan it began in 2016 and will crawl sites only with its Googlebot Smartphone user agent. If your site doesn’t load on mobile devices, Google won’t crawl or index it. Even if it does load, the mobile versions of many sites are less fully featured than their desktop counterparts. That can be an issue for technical SEO.

6. Mobile Navigation Issues

Navigation is a wonderful way to pass link authority because every page on your site links to every page included in the header and footer navigation. That means each page in the header and footer gets a little vote of relevance and link authority by virtue of living in these sitewide navigational elements. But mobile navigation can differ from desktop navigation, often featuring fewer links for “usability” reasons.

Take, for example, a fashion ecommerce site like L.L. Bean that has rollover navigation to the 4th level. They’re doing it right: You can navigate from any page on both desktop and mobile down four levels into the site’s navigation, such as Clothing > Women’s > Shirts & Tops > Flannel Shirts. If you take a minute to look at their nav on your mobile device and compare it to their desktop navigation, you’ll see that it contains the same links to the same content. 

Analyze your own site’s navigation similarly. If your mobile navigation doesn’t link as deep or deeper into your site than your desktop navigation does, Googlebot will not be able to pass internal link authority as deeply into your site as it did when it crawled via the desktop site. Your lower-level pages will have less visibility and less chance of ranking.
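As a hypothetical sketch, a hamburger menu that is collapsed visually but still rendered as plain anchor tags in the mobile HTML keeps those deep paths crawlable (the class name and URLs here are invented for illustration):

<nav class="mobile-menu">
  <a href="https://www.yoursite.com/clothing/womens">Women's</a>
  <a href="https://www.yoursite.com/clothing/womens/flannel-shirts">Flannel Shirts</a>
  <!-- remaining category links continue here -->
</nav>

What matters to Googlebot is that the links exist in the rendered mobile HTML as anchor tags with href attributes, not whether the menu is visually expanded.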

7. Mobile Content Issues

In the same way that mobile versions of a site can have fewer navigational elements, designers also tend to leave content elements off the mobile version of the site to streamline the experience for those users. But because Googlebot now crawls only as a smartphone, Google won’t see and index content that appears only on desktop. Analyze the pages of your site on a mobile device to be sure that the same content elements are available on mobile as on desktop.
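For a hypothetical example, a product page’s desktop template might include a description block that the mobile template drops from the HTML entirely:

Desktop HTML:

<div class="product-description">Full product details, sizing, and materials.</div>

Mobile HTML:

<!-- the product-description block is missing from the mobile template -->

Content that is merely collapsed behind a tab or accordion on mobile is still in the HTML and can generally be indexed; content removed from the mobile HTML altogether is invisible to Googlebot Smartphone.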

There are many more technical issues that can crop up in SEO, but these are the ones I encounter most often and that do the most damage to organic search performance. Audit your site today for these seven SEO issues. Then you can be sure that all of the content you want crawled and indexed can be, and that you’re eligible to rank and drive traffic to your site.
