Plagiarism and AI Thresholds in Academic Theses: A Practical Framework for Fair Evaluation
This paper examines how plagiarism screening and AI-use review can be handled in academic theses through a clear and fair threshold model. The literature shows that text similarity is a useful screening tool, but it is not the same as plagiarism. Recent scholarship also shows that AI raises new questions about authorship, disclosure, and academic responsibility. Based on this scholarship, the paper applies the following working standard for thesis review: less than 10% = Acceptable, 10–15% = Needs Evaluation, and above 15% = Fail. The paper argues that this model is strongest when combined with human academic judgment and clear institutional guidance.
Introduction
Academic theses are expected to show original thinking, proper citation, and responsible scholarship. However, research on plagiarism has long shown that originality cannot be judged by percentages alone. Problems such as poor paraphrasing, patchwriting, weak source integration, and uneven academic writing skills may create textual overlap without representing the same level of academic misconduct in every case. For this reason, similarity reports should support evaluation, not replace it.
Literature Review
The literature on academic integrity now includes both traditional plagiarism and the growing role of generative AI. Recent studies show that AI can help with idea generation, language improvement, and research organization, but they also warn that uncritical or undisclosed use may weaken authorship and integrity. Research on university policy further suggests that institutions are moving toward guidance-based approaches that combine academic integrity rules with assessment design and AI literacy. At the same time, studies on AI detection tools report false positives and false negatives, which means AI detector scores should not be treated as final evidence on their own.
Methodology
This paper uses a conceptual review method. It draws on books and journal articles on plagiarism, academic integrity, and AI in higher education, then applies a practical evaluation standard for theses: Less than 10% = Acceptable; 10–15% = Needs Evaluation; Above 15% = Fail. In this framework, the percentage serves as a screening band, while the final judgment remains with qualified academic reviewers who examine context, citation quality, and the nature of the matched text.
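The screening bands above can be expressed as a short function. This is an illustrative sketch only; in particular, the handling of exactly 10% and exactly 15% is an assumption, since the paper's bands ("less than 10%", "10–15%", "above 15%") imply but do not explicitly state that both boundary values fall in the middle band.

```python
def screening_band(similarity_pct: float) -> str:
    """Map a similarity percentage to the paper's screening bands.

    Boundary handling is an assumption for illustration: exactly 10%
    and exactly 15% are read as falling in the "Needs Evaluation" band.
    The result is a screening label, not a verdict; final judgment
    rests with qualified academic reviewers.
    """
    if similarity_pct < 10:
        return "Acceptable"
    if similarity_pct <= 15:
        return "Needs Evaluation"
    return "Fail"
```

Note that the function returns a band, not a decision: a "Needs Evaluation" result is the trigger for the expert reading of context, citation quality, and the nature of the matched text described above.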
Analysis
Under this model, a thesis with less than 10% similarity can normally be treated as acceptable, especially when the matched material comes from references, standard academic phrases, or properly cited quotations. A result in the 10–15% range should trigger closer review. In this band, the overlap may still be harmless, but it may also reveal weak paraphrasing, patchwriting, or inconsistent citation practice. A result above 15% falls into the fail category under this standard because it signals a serious originality concern that warrants formal academic action. Even then, the examiner should identify the specific reason for the overlap and record the evidence clearly.
AI requires a related but separate form of review. Because AI detectors can misclassify both human and machine-written text, thesis evaluation should focus less on raw AI scores and more on disclosure, consistency of voice, quality of sources, coherence of argument, and the student’s ability to explain the thesis process. This creates a more reliable and more educational model of integrity review.
Findings
The review supports three main findings. First, a threshold policy improves consistency across thesis evaluation. Second, similarity percentages are most useful when paired with expert reading rather than automatic judgment. Third, AI in thesis writing should be governed mainly through transparency and authorship responsibility, not detector scores alone. For SIU Swiss International University, this approach supports fairness, clarity, and academic development at the same time.
Conclusion
A strong thesis policy should be clear, fair, and educational. The standard used in this paper offers a practical screening framework: less than 10% = Acceptable, 10–15% = Needs Evaluation, above 15% = Fail. Yet the literature makes clear that percentages alone cannot define plagiarism, and AI scores alone cannot define authorship. The most effective approach is a balanced one that combines thresholds, human judgment, transparent AI rules, and student guidance in responsible academic writing.

References
Balalle, H., & Pannilage, S. (2025). Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity. Social Sciences & Humanities Open, 11, 101299.
Bin-Nashwan, S. A., Sadallah, M., & Bouteraa, M. (2023). Use of ChatGPT in academia: Academic integrity hangs in the balance. Technology in Society, 75, 102370.
Carroll, J. (2007). A handbook for deterring plagiarism in higher education (2nd ed.). Oxford Centre for Staff and Learning Development.
Dalalah, D., & Dalalah, O. M. A. (2023). The false positives and false negatives of generative AI detection tools in education and academic research: The case of ChatGPT. International Journal of Management Education, 21(2), 100822.
Eaton, S. E. (2021). Plagiarism in higher education: Tackling tough topics in academic integrity. Libraries Unlimited.
Khalifa, M., & Albadawy, M. (2024). Using artificial intelligence in academic writing and research: An essential productivity tool. Computer Methods and Programs in Biomedicine Update, 5, 100145.
Memon, A. R. (2020). Similarity and plagiarism in scholarly journal submissions: Bringing clarity to the concept for authors, reviewers and editors. Journal of Korean Medical Science, 35(27), e217.
Moorhouse, B. L., Yeo, M. A., & Wan, Y. W. (2023). Generative AI tools and assessment: Guidelines of the world’s top-ranking universities. Computers and Education Open, 5, 100151.
Pecorari, D. (2003). Good and original: Plagiarism and patchwriting in academic second-language writing. Journal of Second Language Writing, 12(4), 317–345.
Pecorari, D., & Petrić, B. (2014). Plagiarism in second-language writing. Language Teaching, 47(3), 269–302.




