
What is Technical SEO? The Essential Basics for SMBs

This post was originally published on the StitchDX blog.

___

My last few blog posts covered topics near and dear to my content marketer’s heart: on-page SEO and SEO meta attributes. Now I’d like to introduce you to the basics of technical SEO. Get ready for an introduction to the “under the hood” stuff that significantly affects the performance of your website — and by extension, its search rankings.

Technical SEO: A whole other language

With the right tools at hand (WordPress plus the Yoast plugin is my choice and recommendation), a writer in partnership with an SEO strategist — or a writer well-grounded in SEO principles — can generate content that’s well optimized on-page for search.

On the other hand, technical SEO is by and large the domain of your website developer or dev team. They should be fluent in fine-tuning your site’s code for optimal technical performance. Unless you’re an HTML ninja, they should be the only people touching most of this work. Talk to them (or us) if you have any questions or concerns about your website’s technical SEO.

Technical SEO basics: The essential website attributes you should be optimizing

As you’ll read, each of these basics does its part to help your website put its best foot forward for search engines.

Technical SEO basic #1: Searchability.

Call me Captain Obvious, but Google, Bing, et al. need to be able to search your website if you expect them to rank it. Take these actions to help search engines do their work to your benefit.

Be thoughtful about your website’s structure.

The easier it is for you (or other human beings) to understand the structure of your website, the easier it will also be for search engines. Aspects of an optimized site structure include:

  • A clear hierarchy of pages, with well-defined content categories and relevant supporting sub-pages. (On our website you’ll find main category pages — Digital Marketing and Digital Workplace, each supported by subpages covering more specific topics.) If you find you’re creating an unwieldy amount of content for a particular category, audit that content to see if any of it is worthy of a new, relevant category.
  • Internal and external links, and URLs named to reflect and follow that hierarchy, to streamline the robots’ journey through your site (see the example URL structure after this list).
  • No dead links. They’re as frustrating as hitting a dead end when you’re driving. On unfamiliar roads. And you’re running late.
  • Balanced content across topics. This pertains to your basic website structure, but especially to your blog. (On the StitchDX blog, we strive to maintain a 50/50 balance of posts covering Digital Marketing and Digital Workplace topics.)
  • Current content, with no outdated pages or duplicate content. Conduct regular content audits to see where updates, deletions, and redirects are called for.
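
To make the hierarchy point concrete, here’s a hypothetical URL structure (the category and page names are purely illustrative, loosely modeled on our own site) in which every URL reflects its place in the site:

    example.com/
    example.com/digital-marketing/
    example.com/digital-marketing/technical-seo-basics/
    example.com/digital-workplace/
    example.com/digital-workplace/intranet-adoption/

A crawler (and a human) can tell from the URL alone where each page sits in the hierarchy.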

Welcome the robots (but steer them where you want them to go).

Search engine robots (or “spiders” or “crawlers”) enter your website intending to scan and rank every page. However, your developers CAN code your pages to steer the robots where you want them and get the search results you want:

With the robots.txt file, you can tell search engines which pages you do and don’t want them to crawl (see the example after this list). Why would you want to steer robots away from any of your pages?

  • If your site is quite large (e-commerce sites, for instance), search engines will allocate a “crawl budget”: a limited amount of time they’ll spend with you. You and your developers can strategize the use of robots.txt to steer robots to your hottest, most competitive products and away from, say, your “Contact Us” page.
  • You and your dev team may also be trying to fine-tune (or overhaul) some key pages on your site. The robots.txt file can keep crawlers off those pages till they’re ready for prime time.
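
To illustrate (the paths here are hypothetical, and your actual rules should come from your dev team), a robots.txt file that applies that kind of steering might look something like this:

    # Rules below apply to all crawlers
    User-agent: *
    # Spend the crawl budget on products, not utility pages
    Disallow: /contact-us/
    # Keep crawlers off pages that are still being overhauled
    Disallow: /under-construction/
    # Point crawlers at a sitemap listing your priority pages
    Sitemap: https://www.example.com/sitemap.xml

The file lives at the root of your domain (example.com/robots.txt), which is one more reason it belongs in the hands of people who know exactly what they’re doing.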

Then there’s the robots meta tag, which more precisely circumscribes the actions that crawlers can take at the page level. For instance (see the snippet after this list):

  • You can allow robots to crawl a page, but not list it on SERPs (Search Engine Results Pages). This is the “noindex” directive, and it’s a snap to set up in the Yoast WordPress plugin.
  • You can also instruct the robots (“nofollow”) to ignore all the links on particular pages, or just specific links. Again, Yoast makes this easy at the page level.
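
For reference only (Yoast generates these tags for you, so treat this as a sketch rather than something to hand-edit), the underlying HTML looks like this:

    <!-- Crawl this page, but keep it out of search results -->
    <meta name="robots" content="noindex">

    <!-- Crawl and index this page, but ignore all of its links -->
    <meta name="robots" content="nofollow">

    <!-- Or apply nofollow to one specific link -->
    <a href="https://example.com/partner-page/" rel="nofollow">Partner page</a>

The two directives can also be combined in a single tag (content="noindex, nofollow").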

But… WARNING! DANGER, WILL ROBINSON!!!

This isn’t amateur hour — robots.txt and the robots meta tag are strictly the domain of back-end web development experts. Attempting to DIY this work can have catastrophic results for your website.

If you see opportunity in these coding options, partner with your development team (Don’t have one? Let’s talk.) to weigh the potential benefits and create a holistic strategy.
