If you work in SEO, duplicate content can become one of your nightmares. A savvy webmaster will never knowingly post the same content twice, but sometimes, despite our best efforts, duplicates get published without our knowledge. How can you prevent them, or catch them once they appear? Here are some simple SEO checks.


Scraper Sites: There is no Internet police to help you reclaim stolen content, but you can write your site's code so that scrapers gain nothing from copying it. Use absolute URLs instead of relative ones: for example, link to https://www.example.com/page rather than just /page.

When you use relative URLs, the browser (and Googlebot) assumes each link points to a page on whatever site it is currently on. If a scraper copies your pages wholesale, those links then resolve against the scraper's domain instead of yours, which can produce terrible outcomes. If your developer cannot re-code the entire site, add self-referencing canonical tags: even when a scraper lifts your content, the canonical tag still points at your page, so Google knows your site is the original source.
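As a quick way to audit this, here is a minimal Python sketch, assuming the requests and beautifulsoup4 packages and a hypothetical list of your own page URLs, that checks whether each page carries a self-referencing canonical tag:

    import requests
    from bs4 import BeautifulSoup

    # Hypothetical pages to audit; replace with your own URLs.
    PAGES = [
        "https://www.example.com/",
        "https://www.example.com/products/",
    ]

    for url in PAGES:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        canonical = soup.find("link", rel="canonical")
        if canonical is None:
            print(f"{url}: no canonical tag found")
        elif canonical.get("href") != url:
            print(f"{url}: canonical points elsewhere ({canonical.get('href')})")
        else:
            print(f"{url}: self-referencing canonical is in place")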

Mirrored Subdirectories: If your business operates in multiple geographical locations, you should have a primary landing page from which users select their location and are directed to the appropriate subdirectory. For example:

URL 1: www.beautifulbaskets.com/au

URL 2: www.beautifulbaskets.com/fr

Logical as this is, always evaluate before setting up the two separate subdirectories, because they may be near-identical in terms of products and content. To manage the issue, set up location targeting in Google Search Console. The SEO Company Mumbai can guide you through setting up the entire process.
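Before committing to the separate subdirectories, it is worth measuring how much the locale pages actually overlap. The sketch below is one rough way to do that, assuming the requests and beautifulsoup4 packages and using the example URLs above (the similarity ratio is only a crude indicator):

    import requests
    from bs4 import BeautifulSoup
    from difflib import SequenceMatcher

    # Example locale pages from above (hypothetical domain).
    URL_AU = "https://www.beautifulbaskets.com/au"
    URL_FR = "https://www.beautifulbaskets.com/fr"

    def visible_text(url):
        """Fetch a page and return its visible text."""
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        return soup.get_text(" ", strip=True)

    ratio = SequenceMatcher(None, visible_text(URL_AU), visible_text(URL_FR)).ratio()
    print(f"Text similarity between /au and /fr: {ratio:.0%}")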

HTTP and HTTPS URLs: One of the fastest checks is to see whether both versions of your site are live. If they are, redirect the HTTP version to the HTTPS version with a 301 redirect. Many sites have applied HTTPS only to selected pages, such as the checkout and login pages, for added security; when a crawler visits, it effectively finds two different versions of the same site. Similarly, check the www and non-www versions of your site.
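One quick way to run this check is to request every protocol and host variant and look at the responses. A minimal sketch, assuming the requests package and a hypothetical example.com domain:

    import requests

    # Hypothetical domain; swap in your own.
    VARIANTS = [
        "http://example.com/",
        "http://www.example.com/",
        "https://example.com/",
        "https://www.example.com/",
    ]

    for url in VARIANTS:
        # allow_redirects=False so the redirect itself is visible.
        resp = requests.get(url, allow_redirects=False, timeout=10)
        print(f"{url} -> {resp.status_code} {resp.headers.get('Location', '-')}")

    # Ideally three variants answer with a 301 pointing at the one
    # canonical version (e.g. https://www.example.com/), which returns 200.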

Syndicated Content: Syndication is a great way to put your content in front of a new audience, but set proper terms and conditions for anyone who wants to republish it. Ask them to use a canonical tag pointing back to your page, so that search engines understand the original source. The syndicated copies can also be noindexed, so that duplicate content issues never arise.
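To keep syndication partners honest, you can periodically check the republished copies. Below is a minimal sketch, assuming the requests and beautifulsoup4 packages and hypothetical URL pairs, that verifies each copy either canonicalises back to your original or is noindexed:

    import requests
    from bs4 import BeautifulSoup

    # Hypothetical pairs of (syndicated copy, your original article).
    PAIRS = [
        ("https://partner.example.net/your-article",
         "https://www.example.com/your-article"),
    ]

    for copy_url, original_url in PAIRS:
        soup = BeautifulSoup(requests.get(copy_url, timeout=10).text, "html.parser")
        canonical = soup.find("link", rel="canonical")
        robots = soup.find("meta", attrs={"name": "robots"})
        points_home = canonical is not None and canonical.get("href") == original_url
        noindexed = robots is not None and "noindex" in robots.get("content", "").lower()
        if points_home or noindexed:
            print(f"{copy_url}: canonical or noindex is in place")
        else:
            print(f"{copy_url}: duplicate risk - no canonical back to the original and no noindex")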

Lost Subdomains: Perhaps you have retired a subdomain in favor of a subdirectory, or built a completely new site. The old content may still be live and look very similar to your new site, so always 301-redirect discontinued subdomains. This is especially important if the old site has a large number of backlinks.
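A simple way to confirm the redirect is still in place is to request the old subdomain and inspect the response. A minimal sketch, assuming the requests package and hypothetical old and new URLs:

    import requests

    # Hypothetical retired subdomain and its replacement on the main site.
    OLD = "https://shop.example.com/"
    NEW = "https://www.example.com/shop/"

    resp = requests.get(OLD, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    if resp.status_code == 301 and location.startswith(NEW):
        print("Old subdomain permanently redirects to the new location.")
    else:
        print(f"Check needed: {OLD} returned {resp.status_code} with Location {location or '(none)'}")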

Handling duplicate content properly is essential for avoiding issues that can stop your new pages from being crawled or indexed. Noindex/nofollow tags, canonical tags, and 301 redirects are excellent tools for the job, and the checks above can be added to your monthly SEO routine to keep duplicate content issues to a minimum.


Tom Parillo

I am interested in all things technology, especially automation, robotics and tech that helps change how society will live in the future.