
SEO Tutorial

Link Out To Related Sites

Concerning on-page SEO best practices, I usually link out to quality, relevant pages on other websites where possible and where a human would find it valuable.

I don’t link out to other sites from the homepage. I want the PageRank of the home page to be shared only with my internal pages. I don’t link out to other sites from my category pages either, for the same reason.

I link to other relevant sites (with a deep link where possible) from individual pages, and I do it often. I don’t worry about link equity or PageRank leak because I control it on a page-by-page level.

This works for me: it allows me to share the link equity I have with other sites while ensuring it is not at the expense of pages on my domain. It may even help get me into a ‘neighbourhood’ of relevant sites, especially when some of those start linking back to my site.

Linking out to other sites, especially using a blog, also helps tell others that might be interested in your content that your page is ‘here’. Try it.

I don’t abuse anchor text, but I am considerate, and usually try to link out to a site using keywords these bloggers / site owners would appreciate.

The recently leaked Quality Raters Guidelines document clearly tells web reviewers to identify how USEFUL or helpful your SUPPLEMENTARY NAVIGATION options are – whether you link to other internal pages or pages on other sites.

Point Internal Links To Relevant Pages

I link to relevant internal pages in my site when necessary.

I silo any relevance or trust mainly via links in text content and secondary menu systems and between pages that are relevant in context to one another.

I don’t worry about perfect silo’ing techniques anymore, and I don’t worry about whether or not I should link to one category from another, as I think the ‘boost’ many proclaim is minimal on the size of sites I usually manage.

Does W3C Valid HTML / CSS Help Rank?

Above – a Google video confirming this advice I first shared in 2008.

Does Google rank a page higher because of valid code? The short answer is no, although a small-scale test I ran once produced different results.

Google doesn’t care if your page is valid HTML and valid CSS. This is clear – check any top ten results in Google and you will probably see that most contain invalid HTML or CSS. I love creating accessible websites but they are a bit of a pain to manage when you have multiple authors or developers on a site.

If your site is so badly designed, with so much invalid code, that even Google and browsers cannot read it, then you have a problem.

Where possible, if commissioning a new website, demand at least minimum web accessibility compliance (there are three levels of priority to meet) and aim for valid HTML and CSS. This is actually the law in some countries, although you would not know it, so be prepared to put a bit of work in to keep your site compliant.

Valid HTML and CSS are a pillar of best practice website optimisation, not strictly a part of professional search engine optimisation. It is one form of optimisation Google will not penalise you for.

Addition – I usually still aim to follow W3C recommendations that help deliver a better user experience:

“Hypertext links. Use text that makes sense when read out of context.” – W3C Top Ten Accessibility Tips

Which Is Better For Google? PHP, HTML or ASP?

Google doesn’t care. As long as it renders as a browser compatible document, it appears Google can read it these days.

I prefer PHP these days, even with flat documents, as it is easier to add server-side code to that document if I want to add some sort of function to the site.

Subdirectories or Files For URL Structure

Sometimes I use subfolders and sometimes I use files. I have not been able to decide if there is any real benefit (in terms of a ranking boost) to using either. A lot of CMSs these days use subfolders in their file paths, so I am pretty confident Google can deal with either.

Absolute Or Relative URLs

My advice would be to keep it consistent whatever you decide to use.

I prefer absolute URLs. That’s just a preference. Google will crawl either if the local setup is correctly developed.

  • What is an absolute URL? Example – http://www.surjeetthakur/search-engine-optimisation/
  • What is a relative URL? Example – /search-engine-optimisation.htm

Relative just means relative to the document the link is on.

Move that page to another site and it won’t work.

With an absolute URL, it would work.
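The difference is easy to demonstrate with Python’s standard urllib.parse.urljoin, which resolves a link the way a browser would. The page URLs below are hypothetical examples, not real sites:

```python
from urllib.parse import urljoin

# Hypothetical page the link appears on.
page_url = "http://www.example.com/blog/seo-tips.html"

# A relative URL resolves against the page it appears on...
print(urljoin(page_url, "/search-engine-optimisation.htm"))
# http://www.example.com/search-engine-optimisation.htm

# ...so the same markup on a different site points somewhere else.
other_page = "http://www.another-site.com/articles/post.html"
print(urljoin(other_page, "/search-engine-optimisation.htm"))
# http://www.another-site.com/search-engine-optimisation.htm

# An absolute URL resolves to itself wherever it appears.
print(urljoin(other_page, "http://www.example.com/seo/"))
# http://www.example.com/seo/
```

This is why moving a page full of relative links to another domain silently changes where every link points, while absolute links keep working.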

Duplicate Content Penalty

Webmasters are often confused about getting penalised for duplicate content, which is a natural part of the web landscape, especially at a time when Google claims there is NO duplicate content penalty.

The reality in 2016 is that if Google classifies your duplicate content as THIN content, then you DO have a very serious problem that violates Google’s website performance recommendations, and this ‘violation’ will need to be cleaned up.

“Duplicate content generally refers to substantive blocks of content within or across domains that either completely match other content or are appreciably similar. Mostly, this is not deceptive in origin…”

It’s very important to understand that if, in 2016, as a webmaster you republish posts, press releases, news stories or product descriptions found on other sites, then your pages are very definitely going to struggle to gain traction in Google’s SERPs (search engine results pages).

Google doesn’t like using the word ‘penalty’, but if your entire site is made entirely of republished content, Google does not want to rank it.

If you have a multiple site strategy selling the same products – you are probably going to cannibalise your traffic in the long run, rather than dominate a niche, as you used to be able to do.

This is all down to how the search engine deals with duplicate content found on other sites – and the experience Google aims to deliver for its users – and its competitors.

Mess up with duplicate content on a website and it might look like a Google penalty, as the end result is the same: important pages that once ranked might not rank again – and new content might not get crawled as fast as a result.

Your website might even get a ‘manual action’ for thin content. Worst-case scenario, your website is hit by the GOOGLE PANDA algorithm.

A good rule of thumb is: do NOT expect to rank high in Google with content found on other, more trusted sites, and don’t expect to rank at all if all you are using is automatically generated pages with no ‘value add’.

Tip: Do NOT REPEAT text, even your own, across too many pages on your website.
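A rough way to spot repeated text across your own pages is to compare word ‘shingles’ (overlapping word sequences) between two pieces of content. This is a minimal sketch, not how Google actually measures duplication – the shingles and similarity helpers and the sample sentences are my own illustrations:

```python
def shingles(text, n=3):
    """Break text into a set of overlapping n-word 'shingles'."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity of two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "our widget is the best widget on the market today"
republished = "our widget is the best widget on the market today"
rewritten = "this gadget might be the finest product available anywhere"

print(similarity(original, republished))  # 1.0 - an exact duplicate
print(similarity(original, rewritten))    # 0.0 - genuinely different text
```

Pages scoring high against each other are candidates for rewriting, consolidating, or canonicalising before Google classifies them as thin.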

H1-H6: Page Headings

I can’t find any definitive proof online that says you need to use heading tags (H1, H2, H3, H4, H5, H6) or that they improve rankings in Google, and I have seen pages do well in Google without them – but I do use them, especially the H1 tag on the page.
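If you want to see how a page actually uses heading tags, Python’s standard html.parser can pull them out. A minimal sketch – the HeadingAudit class and the sample markup are hypothetical:

```python
from html.parser import HTMLParser

HEADINGS = {"h1", "h2", "h3", "h4", "h5", "h6"}

class HeadingAudit(HTMLParser):
    """Collect every heading tag so the page outline can be reviewed."""
    def __init__(self):
        super().__init__()
        self.found = []

    def handle_starttag(self, tag, attrs):
        if tag in HEADINGS:
            self.found.append(tag)

page = "<html><body><h1>SEO Tutorial</h1><h2>Alt Tags</h2><h2>Clean URLs</h2></body></html>"
audit = HeadingAudit()
audit.feed(page)
print(audit.found)              # ['h1', 'h2', 'h2']
print(audit.found.count("h1"))  # 1 - a single H1 on the page
```

Running this across a site quickly shows pages with no H1, or with several, without opening each one by hand.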

Search Engine Friendly URLs

Clean URLs (or search engine friendly URLs) are just that – clean, easy to read, simple.

You do not need clean URLs in your site architecture for Google to spider a site successfully (confirmed by Google in 2008), although I do use clean URLs as a default these days, and have done so for years.
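A common way to produce clean URLs is to ‘slugify’ the page title: lowercase it and replace everything that isn’t a letter or digit with hyphens. A minimal sketch – the clean_url_slug helper is my own illustration, not a standard function:

```python
import re

def clean_url_slug(title):
    """Turn a page title into a clean, readable URL slug."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # non-alphanumerics become hyphens
    return slug.strip("-")                   # no leading/trailing hyphens

print(clean_url_slug("Search Engine Friendly URLs!"))
# search-engine-friendly-urls
```

Most CMSs do this for you automatically when generating post URLs.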

Alt Tags

NOTE: Alt tags are counted by Google (and Bing), but I would be careful about over-optimising them. I’ve seen a lot of websites penalised for over-optimising invisible elements on a page. Don’t do it.

ALT tags are very important and, I think, a very rewarding area to get right. I always put the main keyword in an ALT attribute once when addressing a page.
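Missing alt attributes are easy to find with Python’s standard html.parser. A minimal sketch – the AltAudit class and the sample image markup are hypothetical:

```python
from html.parser import HTMLParser

class AltAudit(HTMLParser):
    """Flag <img> tags with a missing or empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):  # absent or empty string
                self.missing.append(attrs.get("src", "?"))

page = (
    '<img src="widget.jpg" alt="Blue widget on a desk">'
    '<img src="decor.png">'
)
audit = AltAudit()
audit.feed(page)
print(audit.missing)  # ['decor.png']
```

The same pass makes keyword-stuffed alt text obvious too, which is the over-optimisation worth avoiding.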