Affiliations 

  • Faculty of Law, University of Malaya, Kuala Lumpur, Malaysia
Account Res, 2023 May;30(4):219-245.
PMID: 34569370 DOI: 10.1080/08989621.2021.1986018

Abstract

Popular text-matching software generates a percentage of similarity - called a "similarity score" or "Similarity Index" - that quantifies the matching text between a particular manuscript and content in the software's archives, on the Internet and in electronic databases. Many evaluators rely on these simple figures as a proxy for plagiarism and thus avoid the burdensome task of inspecting the longer, detailed Similarity Reports. Yet similarity scores, though alluringly straightforward, are never enough to judge the presence (or absence) of plagiarism. Ideally, evaluators should always examine the Similarity Reports. Given the persistent use of simplistic similarity score thresholds at some academic journals and educational institutions, however, and the time that can be saved by relying on the scores, a method is arguably needed that encourages examining the Similarity Reports while still allowing evaluators to rely on the scores in some instances. This article proposes a four-band method to accomplish this. Used together, the bands oblige evaluators to acknowledge the risk of relying on the similarity scores yet still allow them to decide whether they wish to accept that risk. The bands - for most rigor, high rigor, moderate rigor and less rigor - should be tailored to an evaluator's particular needs.
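To illustrate how such a banding scheme might be operationalized, the sketch below maps each of the four named bands to a score cut-off at or below which an evaluator chooses to rely on the similarity score alone. The band names come from the abstract; the cut-off values, the decision rule and the function name are hypothetical assumptions introduced purely for illustration, not thresholds or an implementation given in the article, which stresses that the bands should be tailored to an evaluator's particular needs.

# Hypothetical sketch of applying the four rigor bands. All cut-off values
# and the decision rule are assumptions for illustration only; the article
# does not prescribe specific thresholds.

# Score (%) at or below which an evaluator in a given band may choose to
# rely on the similarity score alone, accepting the residual risk.
RELY_ON_SCORE_AT_OR_BELOW = {
    "most rigor": 0.0,      # never rely on the score alone
    "high rigor": 5.0,
    "moderate rigor": 15.0,
    "less rigor": 30.0,
}

def recommended_action(band: str, similarity_score: float) -> str:
    """Suggest how to handle a manuscript, given a rigor band and its score."""
    cutoff = RELY_ON_SCORE_AT_OR_BELOW[band]
    if similarity_score <= cutoff:
        return (f"{band}: score {similarity_score:.1f}% is within the "
                f"{cutoff:.0f}% cut-off; the evaluator may rely on the score, "
                "acknowledging and accepting the risk.")
    return (f"{band}: score {similarity_score:.1f}% exceeds the "
            f"{cutoff:.0f}% cut-off; examine the full Similarity Report.")

if __name__ == "__main__":
    for band in RELY_ON_SCORE_AT_OR_BELOW:
        print(recommended_action(band, 12.0))

Run as a script, this prints one recommendation per band for a 12% score: the two more rigorous bands direct the evaluator to the full Similarity Report, while the two less rigorous bands permit reliance on the score at the evaluator's accepted risk.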
