A Novel Scalable Signature Based Subspace Clustering Approach for Big Data

T. Gayathri, D. Lalitha Bhaskari
DOI: 10.4018/IJITWE.2019040103

Abstract

“Big data,” as the name suggests, refers to collections of large and complicated data sets that are usually hard to process with on-hand data management tools or other conventional processing applications. This article presents a scalable signature based subspace clustering approach that avoids the identification of redundant clusters. Experiments using various distance measures validate the performance of the proposed algorithm. For the same purpose of validation, synthetic data sets of varying dimensionality and size are processed with Weka, and the F1 quality measure and runtime are computed for each. The performance of the proposed algorithm is compared with existing subspace clustering algorithms such as CLIQUE, INSCY, and SUBCLU.
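
To make the evaluation protocol concrete, the sketch below shows one common way such experiments are scored. It is a minimal illustration, not the article's implementation: the distance functions, the cluster representation (sets of object ids), and the F1 convention (matching each ground-truth cluster to its best-overlapping found cluster and averaging) are all assumptions, since the preview does not specify the exact choices.

    import numpy as np

    def euclidean(a, b):
        """L2 distance between two points (one candidate distance measure)."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(np.sqrt(np.sum((a - b) ** 2)))

    def manhattan(a, b):
        """L1 distance between two points (another candidate measure)."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(np.sum(np.abs(a - b)))

    def f1_quality(found_clusters, true_clusters):
        """F1 quality of found clusters against ground-truth clusters.

        Assumed convention: each ground-truth cluster is matched to the
        found cluster that maximizes F1 with it, and the per-cluster
        scores are averaged. Conventions vary across subspace-clustering
        papers, so treat this as one reasonable variant.
        """
        scores = []
        for t in true_clusters:
            best = 0.0
            for f in found_clusters:
                overlap = len(t & f)
                if overlap == 0:
                    continue
                precision = overlap / len(f)
                recall = overlap / len(t)
                best = max(best, 2 * precision * recall / (precision + recall))
            scores.append(best)
        return sum(scores) / len(scores) if scores else 0.0

    # Tiny usage example: two ground-truth clusters, two found clusters.
    truth = [{0, 1, 2, 3}, {4, 5, 6}]
    found = [{0, 1, 2}, {4, 5, 6, 7}]
    print(f1_quality(found, truth))  # ~0.857

On the synthetic data sets, a score like this would be recorded alongside the wall-clock runtime for the proposed algorithm and for each baseline (CLIQUE, INSCY, SUBCLU).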

Introduction

The terms database and data mining have different meanings. The former refers to an organized collection of data that is easy to manage, access, and update, while the latter refers to the process of discovering interesting knowledge. The knowledge discovered through data mining consists of information about associated patterns, anomalies, and significant structures extracted from large amounts of data stored in databases, data warehouses, and other kinds of information repositories. In other words, data mining, also known as knowledge discovery in databases (KDD), can be defined as the process of automatically searching large volumes of data for patterns such as association rules (Gupta, 2014). This search applies computational techniques from statistics, information retrieval, machine learning, and pattern recognition. The foremost objective of data mining is the extraction of the required patterns from the database within a short time span. Based on the type of patterns to be mined, data mining techniques can be classified into five categories: summarization, classification, clustering, association, and trend analysis (Gupta, 2014).

Big data, in this context, refers to datasets that grow rapidly to a size beyond the capability of conventional tools to manage, store, and analyze them. Big data is thus a mixture of both structured and unstructured data. The tremendous growth in big data can be attributed to factors such as the availability of data, the increase in storage capacity, and the exponential increase in the processing power of computing platforms. Big data can therefore also refer to the use of large data sets in collecting or reporting business data, or data related to particular recipients, to support decision making. Since big data means different things in different contexts, multiple approaches and definitions are used to define it. The size of the data set is obviously a prominent factor in defining the term, but lately other significant characteristics have surfaced. Laney (2001) suggested the three Vs, namely volume, variety, and velocity, as the dimensions of the challenges to be encountered in data management; these three Vs are now used to frame big data (Chen et al., 2010; Kwon et al., 2014). Many leading institutions and corporations have played an important role in defining the attributes of big data. For example, Gartner Inc. defined big data as “high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making” (Gartner IT Glossary, 2017). Similarly, the TechAmerica Foundation described big data as large volumes of high-velocity, complex, and variable data that require high-end technologies and techniques to capture, store, distribute, manage, and analyze (Federal Big Data Commission, 2012). These definitions confirm that volume, variety, and velocity are key factors of big data. Big data mining is the process by which relevant information is extracted from big data.
