Information Retrieval in the Hidden Web

Shakeel Ahmed, Shubham Sharma, Saneh Lata Yadav
DOI: 10.4018/978-1-7998-8061-5.ch003

Abstract

Information retrieval is the task of finding material of an unstructured nature within large collections stored on computers. The surface web consists of indexed content accessible through traditional browsers, whereas deep or hidden web content cannot be found with traditional search engines and often requires a password or network permissions. Within the deep web, the dark web is also growing as new tools make it easier to navigate hidden content, which is accessible only with special software such as Tor. According to a study by Nature, Google indexes no more than 16% of the surface web and misses all of the deep web, so any given search turns up just 0.03% of the information that exists online. A key part of the hidden web therefore remains inaccessible to users. This chapter poses some questions about this research; detailed definitions and analogies are explained, related work is discussed, and the advantages and limitations of the existing work proposed by researchers are put forward. The chapter identifies the need for a system that will process both surface and hidden web data and return integrated results to the users.

1. Introduction

Enormous amounts of text, audio, video, and other documents are available on the Web, on a wide variety of subjects. Users should be able to locate relevant information that satisfies their precise information needs. Information can be sought in two ways: by using a search engine or by browsing directories organized by categories (like Yahoo Directories). There is still a large portion of Internet data that is not accessible this way (for example, intranets and private databases). Information retrieval (IR) is the task of representing, storing, organizing, and accessing information items. IR differs from data retrieval, which is concerned with finding exact records in databases with a given structure. In IR systems, the information is not structured; it is contained in free-form text (web pages or other documents) or in multimedia content. The first IR systems, implemented in the 1970s, were designed to work with small collections of text (for example, legal documents). Some of these techniques are now used in web search engines.

The WWW is a software protocol that runs on the Internet and allows end-users to access files stored on computers interconnected by the Internet. The WWW is the largest source of digital information. It holds a diverse range of information about studies, fashion, politics, tourism, social networking, mail systems, vehicles, business, sports, cooking, countries, history, illegal activities, drugs, and so on. The WWW has become an essential part of everyone's life, and people from all walks of life use it to extract the information they need. According to the latest statistics as of January 2021, shown in Fig. 1, there were 4.66 billion active internet users worldwide, or 59.5 percent of the global population. Of this total, 92.6 percent (4.32 billion) accessed the internet via mobile devices (Johnson, 2021). A minimal code sketch of the core IR data structure is given after Figure 1.

Figure 1. Global digital population in billions (Johnson, 2021)
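To make the IR definition above concrete, the core data structure of most IR systems is an inverted index, which maps each term to the documents that contain it, in contrast to data retrieval over a fixed database schema. The following Python sketch is illustrative only (the tokenizer is simplified and the document collection is hypothetical, not taken from this chapter):

```python
import re
from collections import defaultdict

def tokenize(text):
    # Simplified tokenizer: lowercase, keep alphanumeric runs only.
    return re.findall(r"[a-z0-9]+", text.lower())

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in tokenize(text):
            index[term].add(doc_id)
    return index

def search(index, query):
    """Boolean AND retrieval: ids of documents containing every query term."""
    term_sets = [index.get(term, set()) for term in tokenize(query)]
    return set.intersection(*term_sets) if term_sets else set()

# Hypothetical collection of unstructured free-form text.
docs = {
    1: "The hidden web is not indexed by traditional search engines.",
    2: "Search engines crawl and index the surface web.",
    3: "Tor gives access to dark web content.",
}

index = build_inverted_index(docs)
print(search(index, "hidden web"))      # {1}
print(search(index, "search engines"))  # {1, 2}
```

Production systems layer ranking (e.g., term weighting) on top of this structure, but the index itself is what lets a search engine answer free-text queries over unstructured content.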

According to Internet Live Stats (Raghavan & Garcia-Molina, 2001), a real-time international statistics project, every single second roughly 7,986 tweets are posted, more than 66,418 Google queries are issued, and more than 2 million emails are sent, which indicates the pace at which the WWW is growing. As the WWW grows, the problem of managing and searching its information becomes increasingly important. To help end-users find the desired information in this bulk of data, various search engines have been developed, e.g., Google, Bing, Yahoo, Ask.com, and AOL.com. Information about a topic is scattered across different types of web pages, such as unstructured surface-web pages, structured hidden-web pages, various file formats, etc. These search engines extract data from all these parts of the WWW and present it to the end-users.

It has therefore become very difficult for end-users to filter the desired data from the different sections of the WWW, owing to factors such as the size of the result sets, the redundancy of the data, and data hidden behind query interfaces. The end-user is required to go through every page and process it before the desired data can be extracted, which is very arduous. The most promising solution is an information integrator, where the user enters a query and receives all the processed data available in the different parts of the WWW, saving the user a great deal of time.
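A minimal version of such an integrator might fan a query out to one adapter per web segment and merge the de-duplicated results. The Python sketch below uses stand-in adapters with hypothetical names and canned results (real surface-web and hidden-web connectors, e.g., form-filling for query interfaces, are beyond this introduction):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in source adapters: in a real integrator these would wrap a
# surface-web search API and form-based hidden-web query interfaces.
def surface_web_source(query):
    return [{"url": "https://example.org/a",
             "title": f"Surface result for {query}"}]

def hidden_web_source(query):
    return [{"url": "https://db.example.org/q",
             "title": f"Hidden-web result for {query}"},
            {"url": "https://example.org/a",          # duplicate of a surface hit
             "title": f"Surface result for {query}"}]

def integrate(query, sources):
    """Query every source in parallel and merge de-duplicated results."""
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(lambda src: src(query), sources)
    seen, merged = set(), []
    for results in result_lists:
        for r in results:
            if r["url"] not in seen:   # redundancy removal by URL
                seen.add(r["url"])
                merged.append(r)
    return merged

print(integrate("hidden web retrieval",
                [surface_web_source, hidden_web_source]))
```

The design point is that the integrator, not the user, absorbs the cost of querying each segment, processing the pages, and removing redundant hits before presenting a single result list.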
