Ranking is used in many fields, including information retrieval, social science, and natural language systems. Commercial web search engines use ranking to locate the documents most relevant to a query, and e-commerce sites employ ranking to suggest products that users may wish to purchase. In all of these scenarios, the ranking problem reduces to arranging objects in order of importance to an end user based on the user's input, which is typically text-based (Langville & Meyer, 2012).

Contrary to the common belief that ranking is merely sorting, the ranking problem in information retrieval poses numerous challenges. The most significant is that most objects have no obvious or trivial attribute to sort by: web pages, for instance, carry no single attribute that orders them with respect to a user's query. It is also difficult to directly quantify or model why and how a user judges a document to be relevant, since relevance is partly subjective. Further challenges include handling varied and unexpected user input while maintaining consistent performance, and ensuring that the system scales as the collection to be searched and ranked grows.
1.1 Existing Ranking Techniques
Simple ranking techniques are based solely on isolated features such as Boolean retrieval (Lashkari et al., 2009), the Vector Space model (Lee et al., 1997), Latent Semantic Indexing (Kettani & Newby, 2010), BM25 (Sari & Adriani, 2014), the Language Model for Information Retrieval (Jin et al., 2002), PageRank (Page et al., 1999), and HITS (Ding et al., 2002). Boolean retrieval (Lashkari et al., 2009) reduces both the query and the documents to sets of words and then applies Boolean logical operators to determine document rank. The Vector Space model (Lee et al., 1997) converts queries and documents into vectors, typically of term weights reflecting both local and global context, and sorts documents by the cosine similarity between the query vector and each document vector. Latent Semantic Indexing (Kettani & Newby, 2010) constructs a term-document matrix, reduces its dimensionality, and then computes a similarity metric such as cosine similarity between the vectors of the query and each document. BM25 (Sari & Adriani, 2014) scores documents with a bag-of-words approach that compares the terms in a query with the terms in a document. The Language Model for Information Retrieval (Jin et al., 2002) inverts the typical approach of probabilistic information retrieval and ranks documents by the probability that a language model of the document would generate the query. PageRank (Page et al., 1999) and HITS (Ding et al., 2002) are link-based ranking features that estimate a page's value from its incoming and outgoing hyperlinks. In isolation, however, none of these features is sophisticated enough to capture the intricacies of most modern applications, such as present-day web search.
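To make the Vector Space model concrete, the following is a minimal sketch of TF-IDF cosine ranking; whitespace tokenization and the logarithmic IDF weighting are simplifying assumptions for illustration, not details taken from the cited work:

```python
import math
from collections import Counter

def tfidf_cosine_rank(query, docs):
    """Rank documents by cosine similarity of TF-IDF vectors (vector space model).

    Returns document indices sorted best-first.
    """
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    # df[t]: number of documents containing term t (global context)
    df = Counter()
    for d in tokenized:
        df.update(set(d))

    def vec(tokens):
        # Weight = term frequency (local context) * inverse document frequency
        tf = Counter(tokens)
        return {t: tf[t] * math.log(n / df[t]) for t in tf if t in df}

    def cosine(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    q = vec(query.lower().split())
    sims = [cosine(q, vec(d)) for d in tokenized]
    return sorted(range(n), key=lambda i: -sims[i])
```

A production system would instead use a proper tokenizer, stemming or lemmatization, and an inverted index so that only documents containing at least one query term are scored.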
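The bag-of-words scoring used by BM25 can likewise be sketched in a few lines. The standard Okapi BM25 formula is used below with common default parameters (k1 = 1.5, b = 0.75); the tokenization is again a simplifying assumption:

```python
import math
from collections import Counter

def bm25_rank(query, docs, k1=1.5, b=0.75):
    """Rank documents against a query with Okapi BM25 (bag-of-words).

    Returns document indices sorted best-first.
    """
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / n  # average document length
    # df[t]: number of documents containing term t
    df = Counter()
    for d in tokenized:
        df.update(set(d))

    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for t in query.lower().split():
            if t not in tf:
                continue
            # Smoothed IDF, kept non-negative by the +1 inside the log
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
            # Saturating term-frequency component with length normalization
            score += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(score)
    return sorted(range(n), key=lambda i: -scores[i])
```

Unlike raw TF-IDF, the k1 term saturates the contribution of repeated terms, and b controls how strongly long documents are penalized.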