HOW DOES PLAGIARISM AFFECT ORGANIC POSITIONING?

It is often said that duplicate content is one of the most common problems a website can have, and also one of the factors that most affects a URL's positioning.

This last statement is not entirely true: Google does not directly penalize a web page for containing duplicate articles, although that does not mean duplication cannot harm it in other ways.

When Google detects duplicate or plagiarized content, all it does is order the search results according to its own criteria about what will be most interesting to the user. Links that repeat the same content may be pushed out of the first results, so that the user is not served identical content over and over in every search they perform. You may, however, outsource your backlinks in order to improve your website's SEO.

BUT… WHAT DOES GOOGLE CONSIDER DUPLICATE CONTENT?

It can be said that a website has duplicate files when any of its content (an image, a text, a video, etc.) can be found at another URL with a total coincidence (unless it is a widely shared informative file, such as a YouTube video or content from a similar platform).

The one in charge of detecting all this and organizing it according to Google's criteria is the Panda algorithm, which we talked about previously on this blog.
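Google's actual detection methods are not public, but the idea of flagging a "total coincidence" can be pictured with a deliberately naive sketch: normalize two pieces of text and compare their hashes, so trivially reformatted copies still match. Every name and sample string below is illustrative — this is not how Panda really works.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Hash the text with whitespace and case normalized,
    so a copy with different spacing still produces the same value."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def is_exact_duplicate(a: str, b: str) -> bool:
    """A 'total coincidence': both texts yield the same fingerprint."""
    return fingerprint(a) == fingerprint(b)

original = "How does plagiarism affect organic positioning?"
copy     = "How  does plagiarism\naffect organic positioning?"
rewrite  = "Plagiarism and its effect on organic positioning"

print(is_exact_duplicate(original, copy))     # True
print(is_exact_duplicate(original, rewrite))  # False
```

A real system would also need to catch near-duplicates (shingling, similarity hashing), which this exact-match sketch does not attempt.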

WHAT ARE THE CAUSES OF THIS PLAGIARISM?

The answer may seem obvious: plagiarism occurs when someone copies an article seen at another URL and publishes it on their own blog or web page. Yes, that is one form of plagiarism, but it is not the most frequent. Most of the duplicate content Google detects comes from internal problems on our own web pages.

WHAT ARE THESE INTERNAL CAUSES AND HOW TO SOLVE THEM?

  • Creation of a version for mobile devices: Mobile versions usually generate their own URL, different from that of the main website viewed on a desktop computer. If we do not make Google understand that it is the same website — for example by making the site responsive, so that the mobile version is merely an adaptation of it and not a new site — the search engine's algorithm may conclude that the content is plagiarized from a completely different website.
  • Generation of URLs in content: Some websites assign a random, automatically generated URL each time a piece of content is displayed or previewed. Since the URL is different every time, Google's algorithm can again interpret the content as plagiarized.
  • Not having established the main domain: If a main domain has not been set, the website can be reached both with “www” and without it, which can lead Google to treat it as two different domains. This is known as a “non-canonical URL”.

The solution is simple: setting a primary or preferred domain in Search Console is enough.
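The www/non-www ambiguity (and the HTTP vs HTTPS split discussed in the next point) can be pictured as a normalization step: many variant URLs collapsing into a single canonical form. A minimal sketch, assuming the bare domain and HTTPS are the preferred versions — the domain `example.com` is, of course, a placeholder:

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url: str) -> str:
    """Collapse URL variants (http/https, www/no-www) into one
    canonical form: https:// on the bare domain."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    host = netloc.lower()
    if host.startswith("www."):
        host = host[4:]  # prefer the bare domain
    return urlunsplit(("https", host, path or "/", query, fragment))

variants = [
    "http://www.example.com/blog",
    "https://www.example.com/blog",
    "http://example.com/blog",
]
# All three variants map to the same single canonical URL.
print({canonicalize(u) for u in variants})  # {'https://example.com/blog'}
```

In practice this collapsing is signaled to Google with 301 redirects or `rel="canonical"` tags rather than computed client-side; the function above only illustrates the mapping.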

  • Secure connection: If your website has an SSL certificate but it has not been configured correctly, the site may be accessible via both HTTP:// and HTTPS://, and the two versions may be treated as completely independent URLs. Implementing the security certificate correctly, with HTTP redirecting to HTTPS, avoids this problem.
  • Numbering of pages or search results: Any website that numbers its product pages or search results is susceptible to being catalogued as duplicate content. To avoid this, it is always recommended to use personalized titles and descriptions for each page or product. Besides avoiding the problem just described, this also helps improve organic positioning.
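The personalized-titles recommendation for numbered pages can be sketched as a small template: give every page of a paginated listing its own title and meta description instead of repeating the same one. The section name, site name, and wording below are illustrative assumptions, not a prescribed format:

```python
def page_metadata(section: str, page: int) -> dict:
    """Build a unique <title> and meta description for a numbered page,
    so paginated listings are not seen as copies of one another."""
    return {
        "title": f"{section} - Page {page} | Example Store",
        "description": (
            f"Browse page {page} of our {section.lower()} catalogue, "
            f"updated regularly with new items."
        ),
    }

for page in (1, 2, 3):
    print(page_metadata("Running Shoes", page)["title"])
# Running Shoes - Page 1 | Example Store
# Running Shoes - Page 2 | Example Store
# Running Shoes - Page 3 | Example Store
```

Because each page now carries a distinct title and description, no two paginated URLs present identical metadata to the crawler.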