1. Introduction
In the learning process, evaluation is a central task for determining how well students have understood the material. It is an integral part of teaching, because it reveals whether the objectives and educational criteria of the lessons have been achieved (Shamizanjani, Naeini, & Nouri, 2014). Many researchers argue that multiple-choice questions can only assess the lower levels of Bloom's taxonomy; when the upper levels must be measured, open-ended questions are the most appropriate. Among the subcategories of open-ended questions, we are interested in short open answers and their remote evaluation. A short open answer is a constructed-response item: students must compose their answer in natural language, without any cues or assistance. This involves a different form of cognitive processing and memory retrieval than selected-response items. Short open-answer questions are a subclass of free-text responses. They are open questions that require learners to generate an answer, and they are frequently used in assessments to test basic understanding of a subject (lower cognitive levels) before asking broader questions about it. Short open-answer questions do not have a simple, common structure (Griffin et al., 2014). They can be used to give feedback during an ongoing course and to measure its effectiveness at the end, because their structure is close to that of examination questions; students are therefore more familiar with the format and feel less anxious. Computer-based evaluation, known as computer-assisted assessment, has been studied since the 1960s. It is a branch of e-learning innovation that has attracted growing attention in recent years, mainly for the rapid assessment of answers.
In this research, we are interested in evaluating short open-answer questions expressed as simple Arabic sentences. These are questions for which the learner must provide a short answer that is corrected against a standardized grid (a model answer). They make it possible to assess problem-analysis skills. Their reliability is relatively good, but their correction is time-consuming and difficult to standardize and automate. Existing systems differ in their evaluation techniques, algorithms, and scoring measures. We note that recent developments have introduced assessment engines based on natural language processing, but no system uses ontologies as a knowledge-representation technique, especially for the Arabic language. In the learning process, the use of an ontology makes it possible to identify what the student has learned and the problems encountered, and also to discover concepts that have not yet been mastered and must consequently be treated further (Litherland et al., 2013; Kardan et al., 2016). In our research, we use ontology-based knowledge representation in the evaluation of short-answer questions; it is an effective educational technique.
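To make the idea of correcting a short answer against a standardized model concrete, the following is a minimal, hypothetical sketch (not the method proposed in this paper): the student's answer and the model answer are each mapped to the set of ontology concepts they mention, and the score is the fraction of the model's concepts that the student covers. The function names, the naive token matching, and the example concept labels are all illustrative assumptions; a real system for Arabic would require morphological analysis and synonym expansion against the ontology.

```python
def extract_concepts(answer, ontology_concepts):
    """Naively detect which known ontology concept labels appear in an answer.

    `ontology_concepts` stands in for concept labels drawn from a domain
    ontology; simple token matching is an assumption for illustration only.
    """
    tokens = set(answer.lower().split())
    return {c for c in ontology_concepts if c.lower() in tokens}


def score_answer(student_answer, model_answer, ontology_concepts):
    """Score a short answer as the fraction of model-answer concepts covered."""
    model = extract_concepts(model_answer, ontology_concepts)
    student = extract_concepts(student_answer, ontology_concepts)
    if not model:
        return 0.0
    return len(model & student) / len(model)


# Illustrative example with hypothetical concept labels:
concepts = {"photosynthesis", "chlorophyll", "light"}
score = score_answer(
    "plants use light and chlorophyll",
    "photosynthesis needs light and chlorophyll",
    concepts,
)  # the student covers 2 of the 3 model concepts
```

Such a concept-coverage score is far cruder than an ontology-based approach using concept maps, but it illustrates the underlying principle: evaluation operates on shared domain concepts rather than on surface string similarity.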
The rest of this paper is organized as follows. Section 2 presents previous work on learner evaluation and the research problems we address. Section 3 describes our ontology-based approach to evaluating short open-answer questions using concept maps, together with the functions and definitions it relies on. Section 4 presents the results of applying our approach in a real setting. Finally, Section 5 presents our conclusions and future work.