3 Ways To Get Rid Of Duplicate Pages

Duplicate pages are nearly impossible to avoid when building or updating a website. This is especially true for eCommerce websites, where product pages are often accessible from multiple sections of the site. For example, customers might shop for drill bits by industry or through their parent category, cutting tools.

While sometimes hard to avoid, duplicate pages can seriously hurt how your website is viewed by search engines. The biggest problem is that, with multiple versions of a page, search engines won’t know which version to index in search results.

Therefore, getting search engines to see only one version of each page on your site is key. Here are 3 ways to accomplish this, in order of preference.

1. 301 Redirects

The most preferred option for eliminating duplicate pages is to set up 301 redirects. A 301 permanently redirects one page to another, which essentially means the redirected page no longer exists. It is also said to pass along link juice from the page you are redirecting from to the page you are redirecting to.
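On an Apache server, a 301 redirect can be as simple as one line in an .htaccess file. A minimal sketch, using hypothetical paths based on the drill bits example above:

```apache
# .htaccess — permanently (301) redirect the duplicate URL
# to the preferred version. Paths here are hypothetical;
# substitute your site's actual URL structure.
Redirect 301 /cutting-tools/drill-bits /drill-bits
```

Other servers (Nginx, IIS) and most CMS or eCommerce platforms offer equivalent redirect settings; the key is that the redirect is the permanent (301) kind, not a temporary (302) one.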

I recommend using 301 redirects to handle all duplicate pages; however, on some websites they are hard or resource-intensive to set up.

2. Canonical Tags

The second most desirable option is to set up canonical tags. A canonical tag is a snippet of code that you place on the duplicate page, telling search engines to reference another page instead of the one carrying the tag. Like a 301, a canonical tag is said to pass along link juice from one page to the other.

Canonical tags look something like this: <link rel="canonical" href="http://example.com/page" />

However, there are some negatives to using canonical tags:

  1. It isn’t a permanent redirect.
  2. If you want to track the performance of a particular page, it will be challenging to gather consolidated metrics.
  3. You need to determine which of the two URL structures you want customers to see in search results. For example, from the drill bits scenario above, should it be the industry page or the cutting tools parent page?

3. Robots No Index

The final, and least preferred, way is to use a robots noindex tag. Here you place a tag in the code of the page you don’t want search engines to index. While this theoretically tells search engines not to index the page, it is treated more as a recommendation than a guarantee, meaning search engines may not always honor it correctly.
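The noindex tag goes in the page's <head> section. A minimal example:

```html
<!-- Place inside the <head> of the duplicate page
     you do NOT want search engines to index -->
<meta name="robots" content="noindex">
```

Note that for search engines to see this tag, the page must remain crawlable; blocking the page in robots.txt would prevent crawlers from ever reading the noindex directive.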

In this instance, no link juice is transferred over.

Get Rid of Duplicate Pages the Right Way

A lot of websites, especially eCommerce sites, run into this problem of duplicate content, which can hurt your organic rankings. I recommend implementing permanent 301 redirects; if that is not possible, implement canonical tags. Use robots noindex tags as a last resort.

Along with duplicate content, read about 2 other common behind-the-scenes SEO mistakes websites make >

Was Your Website Hit With A Google Algorithm Update?

We can help! Schedule a free website analysis to discover problems with your site and how to solve them.

About the author

At SVM E-Marketing Solutions, we strive to create content that provides value to our clients and the industrial/B2B community.
