Boosting Algorithm and Meta-Heuristic Based on Genetic Algorithms for Textual Plagiarism Detection


Hadj Ahmed Bouarara, Reda Mohamed Hamou, Amine Rahmani, Abdelmalek Amine
DOI: 10.4018/978-1-5225-8057-7.ch020

Abstract

Day after day, plagiarism cases increase and have become a crucial problem in the modern world, driven by the quantity of textual information available on the web and the development of communication means such as email. This paper unveils two plagiarism detection systems. The first is a boosting system based on machine learning algorithms (the C4.5 decision tree and K-nearest neighbours) composed of three steps: text pre-processing, first detection, and second detection. The second uses a genetic algorithm based on an initial population generated from the dataset, a fixed fitness function, and the reproduction rules (selection, crossover, and mutation). For their experiments, the authors used the PAN 09 benchmark and a set of validation measures (precision, recall, F-measure, FNR, FPR, and entropy), varying the configuration of each system. They compared their results with the performance of other approaches found in the literature. Finally, a visualisation service was developed that provides a graphical view of the results using two methods (a 3D cube and a cobweb), with the possibility of obtaining both detailed and global views through zooming and rotation. The authors' aims are to improve the quality of plagiarism detection systems and to preserve copyright.
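The reproduction rules named in the abstract (selection, crossover, mutation) follow the standard genetic-algorithm loop. The sketch below is a minimal illustration of that loop only: the binary genome, the elitist selection scheme, and the one-max fitness used in the example are assumptions for demonstration, not the chapter's actual encoding or fitness function.

```python
# Minimal genetic-algorithm skeleton: selection, crossover, mutation.
# Binary genomes and the example fitness are illustrative assumptions.
import random

def evolve(fitness, genome_len=16, pop_size=20, generations=50,
           mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    # Initial population of random bit strings.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            # Crossover: splice the two parents at a random cut point.
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            # Mutation: flip each bit with a small probability.
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Example run: maximise the number of 1-bits ("one-max" toy fitness).
best = evolve(fitness=sum)
```

Because the fitter half of each generation survives unchanged, the best fitness found is non-decreasing across generations.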

1. Introduction And Problem Statement

Nowadays, with the increasing number of documents available on the web and the development of communication means, finding the owner of a piece of information has become a crucial subject. In recent years, the number of plagiarism cases in the work of scholars and researchers has clearly increased. The roots of this problem are numerous and intertwined: many websites offer ready-made articles and documents, and these sites are ideal resources for plagiarists. For these reasons, developing an automatic plagiarism detection tool has become a necessity.

The most prominent recent case of plagiarism involved the German minister of education and research, Annette Schavan, who resigned after the University of Düsseldorf revoked her doctorate because it contained too many passages "borrowed from others". In a country where the title of Doctor is highly valued, plagiarism is not taken lightly.

To give a global view of our work: plagiarism is defined as the wrongful act of stealing thoughts, ideas, or words from someone's original work, whether in the same language or in a different one (Basile, 2009). Depending on the plagiarist's behaviour, several types of plagiarism can be distinguished:

  • Verbatim Plagiarism: The plagiarist copies words or sentences from a book, magazine, or web page exactly as they are, without putting them in quotation marks and/or without citing the source.

  • Paraphrasing: The words or the syntax of the copied sentences are changed.

  • The cases of plagiarism most difficult to detect are plagiarism with translation and plagiarism of ideas.

In former years, the classical method of detecting plagiarism was to examine each document manually, which is a slow process. Recently, two families of automatic plagiarism detection have emerged:

  • External Plagiarism Detection: The suspicious document is compared with a collection of reference documents, based on external information (Stein, 2007).

  • Internal Plagiarism Detection: Based on stylometry, each document has a specific style that is compared against a base of styles. A case of plagiarism is detected depending on how the document is written and whether the style changes between paragraphs (Meyer, 2007).
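External detection as described above, comparing a suspicious document against reference documents, can be sketched with a simple word n-gram overlap. The function names, the Jaccard similarity, and the threshold value below are illustrative assumptions, not the chapter's actual method.

```python
# Sketch of external plagiarism detection via word n-gram overlap.
# Names and threshold are illustrative, not the chapter's design.

def ngrams(text, n=3):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two n-gram sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def detect_external(suspicious, references, n=3, threshold=0.2):
    """Flag reference documents whose word n-gram overlap with the
    suspicious document meets the threshold."""
    susp = ngrams(suspicious, n)
    return [(ref_id, jaccard(susp, ngrams(text, n)))
            for ref_id, text in references.items()
            if jaccard(susp, ngrams(text, n)) >= threshold]
```

A lower threshold or a smaller n makes the detector more sensitive but also more prone to false positives, which is exactly the parameter-choice problem discussed below.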

Classical plagiarism detection systems face many limits:

  • Detection Errors: Detection errors (classifying plagiarised text as non-plagiarised, or non-plagiarised text as plagiarised) can cause serious problems for researchers and students. For example, a researcher submits a genuinely non-plagiarised paper to a journal; the plagiarism detection system used by the journal flags it as plagiarised, and the researcher is automatically blacklisted. This is a serious problem in scientific life.

  • Choice of Parameters: Most plagiarism detection systems depend on parameters such as the similarity measure, the text representation method, and the text pre-processing techniques. A poor choice of these parameters may degrade detection performance.

  • Response time

  • Ambiguity of Natural Language: A major problem caused by the variety of vocabulary used to express the content of electronic texts.

  • Visualisation of Plagiarism Detection Results: Most current plagiarism detection systems present the texts (plagiarised or non-plagiarised) as a flat list, which makes viewing all of the existing texts difficult. To better satisfy users' needs, a graphical visualisation of the results has become necessary: an interactive graphical interface that lets the user explore everything the system returns.

  • Multiplicity of the Texts: The content of the texts can come from different fields (marketing, biology, multimedia, sports, computer science, etc.).
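The detection errors listed above are exactly what the validation measures named in the abstract quantify: FPR counts clean texts wrongly flagged, FNR counts plagiarised texts missed. A minimal sketch of computing them from gold and predicted labels, assuming a simple binary labelling (the label names are illustrative):

```python
# Hedged sketch: precision, recall, F-measure, FPR, and FNR computed
# from gold and predicted labels. Label names are illustrative.

def detection_metrics(gold, predicted, positive="plagiarised"):
    tp = sum(g == positive and p == positive for g, p in zip(gold, predicted))
    fp = sum(g != positive and p == positive for g, p in zip(gold, predicted))
    fn = sum(g == positive and p != positive for g, p in zip(gold, predicted))
    tn = sum(g != positive and p != positive for g, p in zip(gold, predicted))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    fpr = fp / (fp + tn) if fp + tn else 0.0  # clean text wrongly flagged
    fnr = fn / (fn + tp) if fn + tp else 0.0  # plagiarised text missed
    return {"precision": precision, "recall": recall,
            "f_measure": f_measure, "fpr": fpr, "fnr": fnr}
```

In the blacklisting scenario described above, it is the FPR that matters most: even a small false-positive rate can wrongly condemn honest authors.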
