It’s fair to say that good old-fashioned link building should be a massive part of any website’s SEO strategy. Well placed, relevant links can do wonders for a site’s search engine rankings. But all your link building efforts could be massively hindered if the basic principles of onsite SEO are ignored.

Think of the onsite SEO as the foundations of a house, with inbound links as the house itself. Without a strong base to work from, the structure will never be as strong as it could be, no matter how much you build on the foundations.

But there’s always time to change your ways. And it’s not as hard as some people would have you believe. This guide is here to help you out with all the easy to implement basics that every website should be getting right.

Redirects and Status Codes

Whilst Google does not explicitly say that redirects have a negative impact on rankings, redirects can reduce the value of any links pointing to the original url. For example, if someone types in your url – www.biebers-shame.info – and it automatically redirects to a different url – www.biebers-shame.info/homepage – then any links pointing to the first url could be slightly devalued in the eyes of the major search engines.

Try to remove unnecessary redirects wherever possible. It’s also important to get to know your status codes. Here’s a brief rundown of the status codes most frequently encountered when surfing the web:

200 OK

Everything is fine, the page has loaded normally and there’s nothing to worry about.

301 Redirect

This means that the page has been permanently moved to a new location. Most of the value of any links pointing to the original url is retained. This is the ideal code for a page that has been permanently relocated.

302 Redirect

This means that the page has been temporarily moved to a new location. Most of the value of any links pointing to the original url is retained, although less than with a 301 redirect (in most situations). This is the ideal code for a page that has been temporarily moved, for instance during a site rebuild or update.

401 Unauthorized

This code means that identification (e.g. a username and password) is required before the page can be accessed.

403 Forbidden

This url cannot be accessed by users or crawled by spiders. And if you don’t know what a spider is, go look it up…

404 Not Found

This means that no content was found at this url. This code can also be used in place of 401 and 403 errors, when the site owner does not wish to reveal exactly why a page cannot be accessed.
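If you’d rather not memorise these, most programming languages ship a lookup table of the standard codes and their official reason phrases. As a quick illustrative sketch using Python’s standard library (nothing here is site-specific):

```python
from http import HTTPStatus

# Print the official reason phrase for each of the codes covered above.
# Note that 302's registered phrase is "Found" (originally "Moved
# Temporarily"), and 401's is spelled "Unauthorized" in the spec.
for code in (200, 301, 302, 401, 403, 404):
    print(code, HTTPStatus(code).phrase)
```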

Canonicalisation

When a website can be accessed via multiple variations of the same url, the value of the links pointing to the different versions is spread between them, which could impact rankings. The issue is particularly common with dynamic websites. For instance, while http://www.johnnyrockets.com and http://johnnyrockets.com are technically the same site, a search engine could see them as competing sites and even make the mistake of removing one from its rankings due to perceived duplicate content issues.

The value of links pointing at the two urls will be split between them, not rewarding the “mother” url in SERP (search engine results page) rankings as much as it should be. The same applies to any vanity urls that may have been purchased.

By far the easiest way to avoid any canonicalisation issues is to set up a .htaccess file in the root folder of your domain that uses a 301 redirect to send users, and link value, to the preferred url for your website. This is a very simple thing to do, and there are many tutorials online to guide you through the process.
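As an illustrative sketch only (assuming an Apache server with mod_rewrite enabled, and using a placeholder domain you would swap for your own), such a .htaccess rule might look like this:

```apache
RewriteEngine On
# 301-redirect any request for the bare domain to the www version,
# so users and link value are consolidated on one url (NC = case-insensitive)
RewriteCond %{HTTP_HOST} ^your-url\.org$ [NC]
RewriteRule ^(.*)$ http://www.your-url.org/$1 [R=301,L]
```

The same approach works in reverse if you prefer the non-www version as your “mother” url.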

Another way of canonicalising your urls is by including a <link rel="canonical"> tag in the HTML header of any webpage. The code itself looks like this:

<link rel="canonical" href="http://www.your-url.org">

This informs search engines which is the “mother” url, and therefore which url should appear in the SERPS and receive the full value of any inbound links. This method is easy to implement on sites with a small number of pages, but perhaps the .htaccess method is preferable for sites with a lot of content.

Code Validity

While this is an area of ongoing debate amongst SEO professionals, most would agree that there is a certain degree of risk that comes with using “invalid” code. Google has never confirmed that code validity is a direct ranking factor, but clean, valid code makes it easier for search engines to crawl and interpret your pages.

There are a number of ways to check the validity of your site’s code. The World Wide Web Consortium offers an excellent free service at validator.w3.org: simply type in your url and it will look through your site’s code and highlight any errors. A similar service for checking CSS is available on the same website.

Page Title and Headings

Page titles are amongst the most important factors in obtaining good rankings. For those not in the know, the title is the text that appears in the browser tab (or title bar) and is also displayed as the bold text at the top of each search engine result. Titles should be keyword rich without being spammy. In the majority of situations, a good formula for page title success looks something like this:

Page Specific Keyword | Section Keyword | Top Level Keyword | Brand
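For instance, a hypothetical product page on an invented shoe retailer’s site (all keywords and the brand name here are made up purely for illustration) might use:

```html
<title>Red Canvas Trainers | Men's Trainers | Footwear | Acme Shoes</title>
```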

Additionally, search engines place an emphasis on words that appear inside heading tags (<h1> to <h6>). So ideally, these should be keyword rich and based on search terms that are frequently used and relevant to the products/services displayed on the page.

Meta Descriptions, Content and Images

Meta descriptions are meant to be what appears under the title in search engine results, although there have been a few bizarre cases in the past where external sources have been used instead. On the whole, the description should be enticing, keyword rich and around 150–160 characters long. It should also be different for each page, as duplicate descriptions can otherwise be flagged as an HTML error.
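Sticking with the Bieber-sketch shop used elsewhere in this guide, a meta description placed in the page’s <head> might look like this (the wording is invented for illustration, and comes in comfortably under 160 characters):

```html
<meta name="description" content="Hand-drawn charcoal sketches of Justin Bieber, framed and shipped worldwide. Browse the full gallery and order online today.">
```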

Content and Images

The content on each page should also be unique, as the popular search engines filter out duplicate content, which could keep some pages of your site out of SERPs until the issue is resolved. The actual text on each page should be keyword rich and informative. But always remember that you’re writing for people, not for search engines. Ensure that the content makes perfect sense and portrays your organisation in a good light.

Images are fairly simple to get right. In the code of the site, each image’s alt attribute should contain keywords and describe the image itself. The same applies to the filename, where the individual words should be hyphenated to make it as easy as possible for search engine spiders to read.

So, if you run a site that sells charcoal Justin Bieber drawings, and it contains an image of a charcoal sketch of Justin Bieber, ideally the image’s code would read:

<img src="charcoal-sketch-of-justin-bieber.png" alt="charcoal sketch of justin bieber">

So, there it is. Not too hard I hope? Whilst by no means is this guide the be all and end all of onsite optimisation, these are the basics you simply have to get right if you want your site to rank as well as it possibly can.

Tom Sizer-James


Contributor


Tom Sizer-James is an Account Executive at Browser Media.