Finding Relevant Documents in a Search Engine Using N-Grams Model and Reinforcement Learning

Amine El Hadi, Youness Madani, Rachid El Ayachi, Mohamed Erritali
Copyright: © 2022 |Pages: 17
DOI: 10.4018/JITR.299930

Abstract

The field of information retrieval (IR) is an important area of computer science: it helps us find the information we are interested in within a large volume of data. A search engine is the best-known application of information retrieval for obtaining the most relevant results. In this paper, we propose a new approach for recommending relevant documents to a search engine's users by calculating the similarity between a user query and a list of documents. The proposed method uses a new reinforcement learning algorithm based on the n-gram model (i.e., a sub-sequence of n elements constructed from a given sequence) and a similarity measure. Results show that our method outperforms several methods from the literature, with a high accuracy value.

1. Introduction

Over the last decade, search engines have become an essential tool in our daily lives. When we look for information, we often go to our preferred search engine and examine the returned pages; users of web search tools expect precise and nearly immediate responses to their questions and requests simply by entering a short query of a couple of words into a text box and clicking a search button. A search engine returns a list of web pages containing the terms of the query. This list is constructed by eliminating duplicate and redundant pages, computing a score for each page to rank the list, and returning the top results; however, these results do not always contain the desired information. This is a major issue in information retrieval. Researchers in this field have focused on a few key issues, one of which is relevance. A document is considered relevant to a given query if its contents (completely or partially) satisfy the information need represented by the query. Although this sounds simple, many factors influence a person's decision as to whether a specific document is relevant, and these factors should be taken into consideration when designing algorithms for comparing text and ranking documents in a search engine. Among the fields that help us obtain relevant documents is query reformulation. Users of search engines who are unsatisfied with the results of an initial query usually engage in a process of query formulation and reformulation in order to meet their information needs: an initial query is issued and then updated in light of the results obtained, until the user receives a set of relevant documents and is satisfied. This process is known as query reformulation. There are at least two reasons why query reformulation occurs.

Firstly, the user may have a quite specific information need in mind but be unsure how to express it in the query language. Secondly, the user's information need may change as a consequence of examining the search results. For example, a user searches for the query "quilting", receives a list of quilting stores, and then decides to refine the query to find stores in their location. Text similarity measures play an increasingly important role in text-related research and applications, in tasks such as information retrieval, text classification, document clustering, topic detection, topic tracking, question generation, question answering, essay scoring, short-answer scoring, machine translation, text summarization, and others. Finding the similarity between words is a fundamental part of text similarity, which is then used as a primary stage for sentence, paragraph, and document similarity. Words can be similar in two ways: lexically and semantically. Words are similar lexically if they share the same character sequences. Words are similar semantically if they refer to the same thing, mean the same, are used in the same way, or one is a type of the other.
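The lexical side of this idea can be sketched with a character n-gram overlap measure. The following is a minimal illustration using the Dice coefficient over character n-grams, not the paper's exact similarity formula; the function names are illustrative.

```python
def ngrams(text, n):
    """Return the set of character n-grams of a string."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def dice_similarity(a, b, n=2):
    """Dice coefficient over character n-grams: 2|A∩B| / (|A| + |B|)."""
    ga, gb = ngrams(a.lower(), n), ngrams(b.lower(), n)
    if not ga or not gb:
        return 0.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))
```

With bigrams (n=2), "night" and "nacht" share only the bigram "ht", giving a low lexical similarity even though the words are semantically related, which is why a semantic measure is also needed.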

In this paper, we propose a new approach based on reinforcement learning to improve the results of a search engine. Reinforcement learning (RL) (Sutton & Barto, 2013) is a learning technique in which an agent learns from interactions with its environment by trial and error. Reinforcement learning problems involve learning what to do, i.e., how to map situations to actions, so as to maximize a numerical reward signal. They are closed-loop problems in an essential way, because the learning system's actions influence its later inputs. Moreover, the agent is not told which actions to take, as in many forms of machine learning, but instead must discover which actions yield the most reward by trying them out. In the most interesting and challenging cases, actions may affect not only the immediate reward but also the next situation and, through that, all subsequent rewards. These three characteristics, being closed-loop in an essential way, not having direct instructions as to what actions to take, and having consequences of actions (including reward signals) that play out over extended time periods, are the most important distinguishing features of reinforcement learning problems.

In our system, the action corresponds to changing the value of n in the n-gram model used for calculating the similarity between a query and the documents in our corpus, and the reward is the accuracy of the result obtained; both semantic similarity and the n-gram model are used.
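The loop just described can be sketched as a simple bandit-style learner in which each action is a candidate value of n and the reward is an accuracy score. This is a minimal illustration under our own assumptions, not the paper's exact algorithm: `evaluate` is a hypothetical callback that returns the accuracy achieved with a given n, and epsilon-greedy action selection with optimistic initialization is one common choice.

```python
import random

def choose_n(evaluate, n_values=(1, 2, 3, 4), episodes=50, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: the action is the choice of n, the reward
    is the accuracy returned by the (hypothetical) evaluate(n) callback."""
    rng = random.Random(seed)
    value = {n: 1.0 for n in n_values}   # optimistic init: try each n early
    count = {n: 0 for n in n_values}
    for _ in range(episodes):
        if rng.random() < epsilon:                        # explore
            n = rng.choice(n_values)
        else:                                             # exploit current best
            n = max(n_values, key=lambda k: value[k])
        reward = evaluate(n)
        count[n] += 1
        value[n] += (reward - value[n]) / count[n]        # incremental mean
    return max(n_values, key=lambda k: value[k])
```

The agent converges to the n whose similarity computation yields the highest observed accuracy, which is the role reinforcement learning plays in our approach.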
