Web Bot Detection System Based on Divisive Clustering and K-Nearest Neighbor Using Biostatistics Features Set

Rizwan Ur Rahman, Deepak Singh Tomar
Copyright © 2021 | Pages: 27
DOI: 10.4018/IJDCF.20211101.oa6

Abstract

Web bots are destructive programs that automatically fill web forms and steal data from web sites. According to numerous web bot traffic reports, web bot traffic comprises more than fifty percent of total web traffic. An effective guard against data theft and automated form abuse is to identify and confirm the presence of a human user on a web site. In this paper, an efficient k-Nearest Neighbor algorithm using hierarchical clustering for web bot detection is proposed. The proposed technique exploits a novel taxonomy of web bot features known as Biostatistics Features. Numerous web bot attack scenarios, such as automatic account registration, automatic form filling, bulk message posting, and web scraping, are created to imitate zero-day web bot attacks. The proposed technique is evaluated in a number of experiments using standard evaluation parameters. The experimental result analysis demonstrates that the proposed technique is extremely efficient in differentiating human users from web bots.
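To make the proposed pipeline concrete, the following is a minimal Python sketch of the idea, not the authors' implementation: sessions described by hypothetical biostatistics features (average mouse speed, mean keystroke interval, form completion time) are grouped by hierarchical clustering and then classified with k-Nearest Neighbor. Note that SciPy provides agglomerative rather than divisive hierarchical clustering, so agglomerative Ward linkage stands in here for the paper's divisive step.

# Minimal sketch of kNN plus hierarchical clustering for bot detection.
# All feature names and values below are hypothetical, for illustration only.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical biostatistics features per session:
# [avg mouse speed (px/s), mean keystroke interval (ms), form fill time (s)]
X = np.array([
    [310.0, 185.0, 42.0],   # human-like session
    [295.0, 210.0, 55.0],   # human-like session
    [20.0,   5.0,  1.2],    # bot-like session
    [15.0,   4.0,  0.9],    # bot-like session
])
y = np.array([0, 0, 1, 1])  # 0 = human, 1 = web bot

# Hierarchical clustering (agglomerative here, divisive in the paper) can
# pre-group sessions so kNN only searches within a smaller candidate set.
clusters = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
print("cluster assignments:", clusters)

# kNN classifier over the labeled sessions.
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X, y)

unseen = np.array([[18.0, 6.0, 1.0]])  # suspiciously fast new session
print("prediction:", clf.predict(unseen))  # -> [1], classified as a bot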

Introduction

At the present time, human society is enormously dependent on the Internet, its most important means of communication. For this reason, the accessibility of the Internet is very significant for the growth of a civilized society. For example, the expansion and success of the Internet have changed the way conventional services such as marketing, banking, and electoral systems work. All of these conventional services are now rapidly being replaced by efficient web-based applications. On the other hand, the intrinsic vulnerabilities of web applications open the possibility of a range of attacks on web-based applications.

For instance, the web bot is a class of web security attack that poses an enormous threat to the security of every web application and web service (Heartfield et al., 2013). Principally, a web bot is a script or program developed to execute completely automated and repetitive tasks on web applications. The name bot is derived from the word robot, so it is also known as a web robot. The intention behind the development of a web bot can be either good or bad, and its functionality and actions classify it as a good web bot or a bad web bot (Gilani et al., 2016). The most extensively used good web bot is the web crawler, whose purpose is to index web sites and web applications for search engines (Thelwall, 2001). On the other hand, bad web bots are developed to carry out a range of destructive tasks (Rahman & Tomar, 2018), as the sketch below illustrates.
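The following is a minimal sketch of what such an automated script looks like, here a bad bot performing bulk account registration of the kind the paper's attack scenarios imitate; the endpoint and form field names are hypothetical.

# Minimal sketch of an automated form-filling web bot (illustrative only;
# the target URL and form field names are hypothetical).
import requests

REGISTER_URL = "http://example.com/register"  # hypothetical endpoint

def register_account(username: str, email: str, password: str) -> int:
    """Submit the registration form without any human interaction."""
    form_data = {"username": username, "email": email, "password": password}
    response = requests.post(REGISTER_URL, data=form_data, timeout=10)
    return response.status_code

# A bot can repeat this in a loop for mass registration.
for i in range(3):
    register_account(f"user{i}", f"user{i}@example.com", "hunter2")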

Bad web bots are the cause of the majority of security attacks on web sites. These attacks vary from small cyber crimes such as click fraud, backlink creation, and mass registration to big crimes such as the theft of credit card information and credential stuffing (Thelwall et al., 2009). According to a global web bot traffic report, web bot traffic comprises about fifty percent of total web traffic (Zelfman, 2017). The distribution of web traffic is shown in Figure 1, which makes clear that thirty percent of the traffic comes from good web bots and the remaining twenty percent from bad web bots (Wang et al., 2014).

When the report is critically examined, it illustrates that web sites of every size, small, medium, and large, were exposed to web bot attacks in the year 2014; bad bots attacked all of these categories of web sites. As depicted in the subsequent figure (Figure 2), the proportion of web traffic coming from bad web bots stays steadily at thirty percent, irrespective of site size.

In reality, bad bots such as ScrapeBox (Shin et al., 2011) and XRumer (Hayati et al., 2012) are created for generating backlinks, web scraping, content scraping, form spamming, and automated registration for web services such as mailing. Web bot defense mechanisms can be categorized into two main approaches, namely the preventive approach and the detective approach. The preventive approach requires direct human participation, such as a Turing test in the form of a CAPTCHA (Rahman & Tomar, 2012). However, an advanced web bot such as XRumer is capable of bypassing this preventive approach, generally used by web applications, by solving the CAPTCHA with optical character recognition (OCR). In fact, it was reported in 2008 that the web bot XRumer effectively evaded the Google and Hotmail CAPTCHAs to create an enormous number of accounts with these web services. Similarly, the Decaptcha application defeated Wikipedia's CAPTCHA about twenty-five percent of the time (Bursztein & Bethard, 2009).
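The OCR-based bypass can be sketched generically with the open-source Tesseract engine; this is an illustration of the technique, not XRumer's actual pipeline, and the image path is hypothetical.

# Minimal sketch of OCR-based CAPTCHA solving (generic illustration,
# not XRumer's actual method). Requires Pillow and pytesseract.
from PIL import Image, ImageFilter
import pytesseract

def solve_text_captcha(image_path: str) -> str:
    """Attempt to read a distorted-text CAPTCHA with Tesseract OCR."""
    img = Image.open(image_path).convert("L")          # grayscale
    img = img.point(lambda p: 255 if p > 128 else 0)   # binarize
    img = img.filter(ImageFilter.MedianFilter(3))      # reduce noise
    return pytesseract.image_to_string(img).strip()

# Hypothetical usage: guess = solve_text_captcha("captcha.png")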

Figure 1. Web Traffic by Type
