Getting your website to appear in a privileged position in search engines like Google, Bing or Yahoo! is essential to getting traffic. There are many keys to improving this positioning, but before promoting content on social networks, analyzing search trends for keywords or launching into link building, it is important to know that there is work we must do first on our own site. This set of tasks is what we call Technical SEO.
Among the improvement tasks grouped under On Page SEO, we can distinguish two types: those focused on content generation and those focused on more technical aspects that seek to organize and highlight this content in the eyes of search engine crawlers. That's why we call it Technical SEO. We're talking about things like site architecture, code improvement, metadata, URL generation, or performance optimization. At OKB Interactive Studio, we take care of all these issues when we design a website for our clients. And we're going to talk about all of them in this article about Technical SEO. The other part of On Page SEO (how to generate content to improve positioning) will be addressed in a future article. Shall we start?
How to design good website architecture
Without a doubt, one of the parts of Technical SEO that most helps improve the positioning of a site is good web architecture. If, when designing a house, it’s essential to distribute space well into rooms according to their use, the same happens with a website: logically separating content into clearly differentiated blocks according to their utility will not only make it easier for visitors to find what they need, but also search engines will correctly interpret the purpose of the site and each of its pages. Just like if we’re looking for the oven, we’ll go to the kitchen, and if we enter a bedroom, we expect to find a bed.
Defining these blocks and what type of content will go in each one of them depends on each site and the strategy to be followed. One way to approach it is from the user’s perspective. It’s logical that a site deals with something specific and is directed at a specific audience (your buyer persona). But the same issue can be approached from different angles depending on the state of each client and their objectives. For example, if I dedicate myself to selling books, there will be customers who are looking for a specific title, but there will also be others who simply want a book and don’t know which one yet. What’s more, there may even be customers who are still in a much earlier phase and don’t know they want a book, even though it’s just what they need (because they’re looking for a Valentine’s Day gift, for example)…
Once the different blocks of the site are defined, they must be ordered by levels. There are three levels: home, sections, and secondary pages.
The home is the most important page. Its content is a summary of the site, and it is usual to try to position it with the most general keywords: those that best define the purpose of the website and attract the most searches.
After the home come the sections, which represent each of the blocks into which the site has been divided. Each section specializes in a keyword that captures more specific searches, chosen so that the sections do not cannibalize each other.
And finally, we come to the secondary pages, which depend on the sections and serve to rank for very specific searches.
It is advisable not to place pages at a greater depth than this (at least not the pages we want to rank well), since search engines usually only index the first three levels of a website, because that is where they expect to find the most interesting content.
To finish this section about the site’s architecture, we need to look at how pages will link to each other. This is what we call interlinking, and it’s very important for SEO positioning.
Typically, we link from the homepage (first level) to all the sections (second level). Then, we link each section to all its dependent secondary pages (third level). These secondary pages should in turn link to one another and to their parent section to create a sense of a block. However, it's not advisable for secondary pages to link to other sections or to secondary pages from different sections.
Lastly, we should aim to give pages we want to highlight more relevance by providing more internal links from other pages, especially from the home. So, from the home, you can also directly link to a secondary page.
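As a sketch, the internal links of a third-level page could look like this (the site structure and URLs are invented for illustration):

```html
<!-- On /books/fantasy/best-epic-sagas (a third-level page of a bookshop) -->
<a href="/books/fantasy/">Fantasy books</a>                  <!-- up to the parent section -->
<a href="/books/fantasy/standalone-novels">Standalone novels</a>  <!-- sibling page, same block -->
<a href="/books/fantasy/new-releases">New releases</a>            <!-- sibling page, same block -->
```

Note that no link points to pages outside the /books/fantasy/ block, which reinforces the block structure in the eyes of crawlers.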
How Metadata Can Improve Positioning
Aside from the visible content on a web page, the unseen content, such as metadata, is also important for improving its positioning in search engines. Correctly using metadata helps search engine crawlers better understand our content, and it also allows us to control how we will appear in results pages, which significantly influences the volume of clicks we will receive.
The Title and the Description
In terms of SEO positioning, the title tag is possibly the most important of all the meta tags we are going to discuss. The title is the most important text of any piece of writing: it has the power to draw someone in or push them away from the content, and search engines are aware of this.
This title of a web page should clearly summarize the purpose of the page and include the keywords we aim to rank for, preferably closer to the beginning. Moreover, you need to be able to synthesize well because space is very limited: the recommended length for a web page title is between 50 and 60 characters.
On the other hand, the description tag contains a 150-character summary of the web page, serving as a support to the title. That’s why it’s ideal for the title and description to be conceived as one entity.
The title and description of a web page are crucial because, along with the URL, they are the elements that search engines display on their results pages. A title and a description that closely align with the search will result in the page recording more clicks, which is ultimately the aim of achieving a high ranking—to garner more clicks.
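Putting it together, a title and description for a hypothetical bookshop page might look like this (the shop name and wording are invented for illustration):

```html
<head>
  <!-- Recommended length: 50-60 characters, keyword near the beginning -->
  <title>Buy Fantasy Books Online | Example Bookshop</title>
  <!-- Around 150 characters, conceived as one entity with the title -->
  <meta name="description" content="Discover our selection of fantasy books, from epic sagas to standalone novels. Free shipping on orders over 20 EUR at Example Bookshop.">
</head>
```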
Open Graph Tags
The implementation and optimization of Open Graph Protocol meta tags do not directly impact SEO positioning. However, they are typically included in any Technical SEO plan because they help increase engagement with your pages, something Google takes into account when deciding how to rank them.
Thanks to OG meta tags, social networks construct cards when someone shares the URL of a page on them. If you need more information on how to implement OG meta tags on your site, we’ve already discussed this on our blog.
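A minimal set of OG meta tags for a page could look like this (all values are placeholders):

```html
<meta property="og:type" content="article">
<meta property="og:title" content="What Is Technical SEO?">
<meta property="og:description" content="A practical guide to the technical side of On Page SEO.">
<meta property="og:url" content="https://example.com/blog/technical-seo">
<meta property="og:image" content="https://example.com/images/technical-seo-cover.jpg">
```

With these five tags, social networks have everything they need to build the card: the content type, a headline, a summary, the canonical address to share, and a preview image.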
How to Create ‘SEO Friendly’ Code
Although this section can also be approached from the content SEO perspective, it is, in our opinion, more related to development. Building a website with solid, standardized, and semantic code helps improve positioning because Google pays attention not only to a page’s content but also to how it’s built.
Heading tags are the titles that precede each of the different content blocks on a webpage and are extremely important because Google’s crawlers thoroughly crawl them in search of information to catalog your content. That’s why copywriters usually use these titles to include the keywords they want their content to be associated with.
In HTML, there are six types of headings, ranging from H1 to H6. However, in the eyes of the great SEO god, we can say that only H1 to H4 are relevant.
H1 Tag: As you might expect, it's the most important heading of all, and therefore search engines give its content the most value. The H1 is equivalent to the title and should succinctly summarize the essence of the page's content. Just as the title is the first thing we usually encounter when reading a book, it's important for the H1 tag to maintain this privileged position on the webpage: it should appear as soon as possible within the body, in a shallow node, if we want Google to interpret it correctly. Another characteristic of this heading tag is that, unlike the rest of the heading tags we'll discuss, it can only be used once on each page.
H2 Tag: Although the H1 tag is the most important of all, we could say that, in terms of SEO positioning, the battle is often decided with the H2 tags. It's true that their content doesn't carry as much weight as the H1, but search engines use them as references to divide pages into sections, and this is vital for interpreting the content. It's advisable for the H2 tags to contain synonyms of the page's main keywords, or keywords related to the main theme, and to directly reference the content that follows. So avoid using them to identify blocks that are not specific to the main theme of the page.
H3 Tags: The H3 tags serve to represent the various subsections into which a main section (H2) is divided. Their content carries less weight when it comes to identifying the general content of a page but, in exchange, gains weight because it is understood to be more specific content that addresses the main topic from a concrete angle.
H4 Tags: Finally, H4 tags are used to divide subsections (H3) into blocks. We usually use them as titles for a list of bullets.
H5 and H6 Tags: In terms of SEO, the use of H5 and H6 tags doesn't impact a page because their value is similar to that of a normal paragraph. But that doesn't mean they can't be used…
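The hierarchy described above might be laid out like this (the topic and headings are invented for illustration; indentation is only for readability):

```html
<body>
  <h1>The Complete Guide to Sourdough Bread</h1>  <!-- one per page, as early as possible -->
  <h2>Choosing Your Flour</h2>                    <!-- a main section of the page -->
    <h3>Whole Wheat vs. White Flour</h3>          <!-- a subsection of that section -->
  <h2>Feeding Your Starter</h2>
    <h3>Feeding Schedules</h3>
      <h4>Tools You Will Need</h4>                <!-- e.g. the title of a bullet list -->
</body>
```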
Semantic HTML is a way of coding that introduces meaning into the webpage rather than just presentation. In HTML, there are generic tags (like <div> or <span>) that do not provide any information about their content. Other examples of non-semantic tags are <b> and <i>, which only define how a text should be displayed (bold or italic) without providing any additional meaning.
On the other hand, there are tags that give search engines clues about the function or intention of a block of code within the page; search engines immediately recognize them and analyze their content from another perspective. This is the case for tags like <header>, <nav>, <article>, <section> and <footer>.
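A sketch of a semantically structured page (the content is invented for illustration):

```html
<body>
  <header>
    <nav><!-- main site navigation --></nav>
  </header>
  <main>
    <article>
      <h1>Article title</h1>
      <section><!-- a thematic block of the article --></section>
    </article>
    <aside><!-- related content --></aside>
  </main>
  <footer><!-- contact details, legal links --></footer>
</body>
```

Each tag tells the crawler what role its block plays, something a page built entirely out of <div> elements cannot do.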
Breadcrumbs are a navigation element that lists the steps from the start page of a website to the page the user is currently on. They are like a trail that is left as one navigates through a website to easily return to where one was before (remember the story of Hansel and Gretel by the Grimm Brothers?). Breadcrumbs can help search engines better understand a website, and their use tends to work very well in terms of SEO.
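Breadcrumbs are usually marked up as a simple list of links. A minimal sketch (the site and URLs are invented; the last item is the current page, so it carries no link):

```html
<nav aria-label="breadcrumb">
  <ol>
    <li><a href="https://example.com/">Home</a></li>
    <li><a href="https://example.com/books/">Books</a></li>
    <li>Fantasy</li>
  </ol>
</nav>
```

On top of this markup, structured data (for example, schema.org's BreadcrumbList vocabulary) can be added so that search engines display the trail on their results pages.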
If you improve performance, you will improve positioning
This is one of the most important points of any Technical SEO program: reducing loading time. Search engines take speed into account when ranking content on their results pages. Moreover, if the pages of a site take too long to load, Google's crawlers may give up on them and refuse to index them.
Improving performance is laborious work that requires multiple actions. In this blog, we have already discussed in the past how to improve the loading time of a website, so we encourage you to take a look at some of the recommendations we gave you at that time.
It is very important to pay attention to how the URLs of the pages are created. The URLs should be:
Friendly: It is always advisable to generate short URLs that are easy to read, write and even memorize; that do not contain strange characters or parameters; that do not show the extension (.html, .php…); and that use hyphens between words instead of spaces. It is important that they always include the keyword the page wants to rank for, as close to the domain as possible. URLs should also have little depth.
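For example, compare an unfriendly URL with a friendly one (both hypothetical):

```text
Bad:  https://example.com/index.php?cat=12&id=3842&sess=a8f3
Good: https://example.com/books/fantasy-novels
```

The second one is short, readable, free of parameters and extensions, and places the keyword right after the domain.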
Canonical: The URL of our studio's website is https://okbinteractive.studio/. However, it can also be accessed from https://www.okbinteractive.studio/. From the point of view of a search engine, the three 'w' function as a subdomain, so these are two different URLs that contain exactly the same content. And duplicate content is one of the things Google penalizes most. To avoid this, it is very important to define a canonical URL in the <head> of each of our site's pages:
<link rel="canonical" href="https://okbinteractive.studio/">
Secure: Secure Sockets Layer (SSL) certificates are used to protect user data. Although their original purpose was to prevent the leakage of confidential data such as credit card numbers, nowadays all search engines consider them a key factor when assessing the quality of pages. Having this certificate is therefore very important for SEO positioning, even for websites that do not process credit card transactions or store confidential data. But there are still more reasons to have an SSL certificate. The URLs of websites served over this protocol are preceded by "https" instead of the classic "http". For several years now, the main browsers (Chrome, Firefox, Safari, and Edge) have shown a warning message when trying to access an "http" page. And who is going to want to visit a website whose address bar says "This site is not secure"?
Make it easy for crawlers
Every Technical SEO plan usually includes the configuration of two files: a sitemap and a robots.txt.
A sitemap is an XML file that lists all the pages of a website that should be indexed by search engines. Using a sitemap usually speeds up the indexing process, so it is very useful in terms of SEO. It is advisable to always place it at the root of the site. Sitemaps are structured in blocks in which up to four values are specified: the URL of the page (the only required one), the frequency with which it changes, its priority within the website, and the date of its last modification.
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://okbinteractive.studio/</loc>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
    <lastmod>2022-01-28</lastmod>
  </url>
  ...
</urlset>
On the other hand, adding a robots.txt file, also at the root of the website, serves to tell crawlers which parts of the site they may crawl and which they should stay out of. This is very useful for SEO: crawlers will not waste their crawl budget on pages we are not interested in positioning and will focus on the ones that really matter to our business.
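A minimal robots.txt might look like this (the blocked paths are examples); it can also point crawlers to the sitemap:

```text
User-agent: *
Disallow: /admin/
Disallow: /search

Sitemap: https://example.com/sitemap.xml
```

The first rule applies to all crawlers, the Disallow lines mark the paths to keep out of, and the Sitemap line tells crawlers where to find the list of pages we do want indexed.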
Both files (sitemap.xml and robots.txt) are very easy to create, and from Google Search Console you can submit your sitemap and check how Google processes both of them.
Use hreflang tags correctly
If your site is only in one language, this point will not be very interesting to you. But if, on the contrary, your website is multilingual or multi-country, you probably have different versions of the same page in which the content variations are minimal. This is considered duplicate content by search engines, something that penalizes the positioning of your pages. Fortunately, to avoid this 'misunderstanding' by crawlers, there are hreflang tags.
For example, if your site is in three different languages (let’s say English, Spanish, and French), you need to add the following hreflang tags in each of the variants:
<link rel="alternate" href="https://yourweb.com/" hreflang="x-default"/>
<link rel="alternate" href="https://yourweb.com/es/" hreflang="es"/>
<link rel="alternate" href="https://yourweb.com/fr/" hreflang="fr"/>
As you can see, these tags consist of three attributes. The first two are rel (which we always set to alternate to indicate to the search engine crawler that our website has linguistic and/or regional variants) and href (where the URL of each of the page variants is specified).
Finally, we have the hreflang attribute, which gives the tag its name and is used to indicate the language of the page and the country it is aimed at. It consists of two parts: the first, which is mandatory, is a two-letter code that specifies the language in ISO 639-1 format, while the second, which is optional, is another two-letter code (uppercase or lowercase) that represents the country in ISO 3166-1 format.
The country code is useful in certain cases. Let’s imagine, for example, a German company that also sells in Switzerland and Austria. For each country, it has its own online store, as it applies different prices, a different VAT rate, different sales and delivery conditions, and manages its own stock… The three websites are in German and share most of the content, but each one is focused on a specific region. This way, the content will not only not be categorized as duplicate, but search engines will also display one or another version of the site in their result pages based on the location from which the query was made.
<link rel="alternate" href="http://germanonlinestore.de/" hreflang="x-default"/>
<link rel="alternate" href="http://germanonlinestore.at/" hreflang="de-AT"/>
<link rel="alternate" href="http://germanonlinestore.ch/" hreflang="de-CH"/>
The x-default value is used to specify the original version, or the one served by default when no other variant matches the user's language or region. It is always recommended to use the x-default value in one of the hreflang tags.
It is also important that all variations of a page contain a link to the other variations so that search engines can verify that it really is alternative content and not repeated content.
Keep an eye on HTTP status codes
Status codes are three-digit values returned by the server where a site is hosted after receiving a request to access a particular page. If everything is working correctly, the server returns a 200 status code, and search engines love this. However, sometimes the content of the page cannot be returned: the response is then a code in the 4xx range. And Google and its colleagues get mad…
Of the 4xx errors, the most typical are 404 and 410. Both mean that the server could not find the requested page: in the first case the error is considered temporary, and in the second it is assumed to be definitive. Technically speaking, the fact that a site generates many 404 and 410 errors does not penalize its SEO positioning. But in practice it does, so it is very important to keep an eye on them. To begin with, a large number of 4xx errors usually leads search engines to 'punish' the site by decreasing the frequency with which they crawl its pages. In addition, pages that recurrently return 404 are eventually removed from search engine indexes, while pages that return 410 are removed immediately. Finally, 4xx errors mean that the links pointing to those pages no longer pass on the authority of their source pages.
In between the 200 and 400 codes are the 300 codes, which are used to indicate redirects. Code 301 means that it is a permanent redirect (i.e. the page has been moved to a new URL) while codes 302, 303 and 307 are temporary redirects. Permanent redirects will be indexed and will receive the authority of the page from which they are linked. On the other hand, if the redirect is temporary, Google will neither index the page nor pass any authority to it.
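As a sketch, assuming an Apache server with mod_alias enabled, a permanent redirect and a definitive 410 response could be configured in an .htaccess file like this (all paths and domains are hypothetical):

```apache
# 301: the page has moved permanently to a new URL;
# search engines will index the target and pass authority to it
Redirect 301 /old-page https://example.com/new-page

# 410: tell crawlers this page is gone for good,
# so it is dropped from the index immediately
Redirect gone /discontinued-product
```

Other servers (nginx, for example) express the same idea with their own directives; what matters for SEO is the status code the server ends up returning.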
As we have seen, although the focus is often placed on content when we think about SEO positioning, the truth is that Technical SEO plays a fundamental role. That is why it is so important that a site is SEO friendly and pays attention to all the aspects we have listed in this article. And if doing all these tasks is complicated for you, OKB Interactive Studio can help you with it. Do not hesitate to contact us.