Big Data Analysis: Basic Review on Techniques

Arpit Kumar Sharma, Arvind Dhaka, Amita Nandal, Kumar Swastik, Sunita Kumari
DOI: 10.4018/978-1-7998-7103-3.ch010

Abstract

The meaning of the term “big data” can be inferred from its name: the collection of large structured or unstructured data sets. Beyond their sheer volume, these data sets are so complex that they cannot be analyzed with conventional data-handling software and hardware tools. Processed judiciously, big data can be a major advantage for the industries that use it, and studies are therefore being conducted to develop methods for handling it. Knowledge extraction is the central purpose of big data; without it, there is no reason to accumulate such volumes of data. Cloud computing is a powerful tool that provides a platform for storing and computing on massive amounts of data.

1. Introduction

“Data explosion” refers to the sudden and rapid growth of data; it is sometimes also referred to as a “data tsunami” (IBM, 2014). History offers many milestones of this phenomenon: each time a new technique for storing information was invented, a period of data explosion followed (Big Data, 2015; Storage Engine, 1951). Data explosion can be dated back to the invention of paper and the printing press, and it continues with every advancement in digital media (IBM, 2015). The latest example is the growth in user-generated content (UGC) (Backupify, 2015), which includes all the images, posts, videos, etc. posted online by the users of social media platforms.

Although big data is not easy to handle, its proper analysis can be profitable in the fields concerned with it, e.g., business, transport, and medical services (Bhardwaj, 2015), because meaningful information can be extracted from it (through data mining) to support future decisions. Using big data, organizations can learn customers’ reactions to the goods and services they provide and evaluate how the company’s brand is faring in the market (Grolinger, 2014; Park, 2015). In this way, big data creates an environment where every type of data is available; the need is to analyze it properly, extract useful information, and apply it for the welfare of society. Big data is not useful as a whole: only a small part of it is useful at a time for a specific purpose. The current need is to recognize the right data and extract it in the most feasible way possible without compromising its privacy and safety, even at large volumes.

This chapter starts from the definition of big data and how it is recognized, analyzed, and transformed; then moves on to the different frameworks and platforms used for the analysis; and finally discusses how machine learning algorithms are integrated into big data analytics.

Key Terms in this Chapter

Data Redundancy: The condition in which the same piece of data is stored in two or more different places in a database.

MapReduce: MapReduce is a programming model and an associated implementation for processing and generating big data sets with a parallel, distributed algorithm on a cluster.
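To make the model concrete, below is a minimal single-process sketch of the map/shuffle/reduce pattern in Python, using word counting as the canonical example. The function names (map_phase, shuffle, reduce_phase) and the sample documents are illustrative assumptions, not part of any particular framework; on a real cluster the map and reduce tasks run in parallel on many machines and the framework itself performs the shuffle between them.

from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in one input record.
    for word in document.lower().split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all intermediate values by key, as the framework
    # would do between the map and reduce stages.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Reduce: combine all values observed for one key into one result.
    return key, sum(values)

documents = ["big data needs big tools", "data beats opinion"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # e.g. {'big': 2, 'data': 2, 'needs': 1, ...}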

Hadoop: Hadoop is an open-source distributed processing framework that manages data processing and storage for big data applications in scalable clusters of computer servers.
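One common way to run such jobs on Hadoop without writing Java is Hadoop Streaming, which pipes input records to an executable mapper on stdin and feeds the mapper’s key-sorted output to an executable reducer. The two small scripts below are a hedged sketch of that contract for the same word-count job; the file names are illustrative and invocation details vary by installation.

#!/usr/bin/env python3
# mapper.py -- read raw text lines on stdin, emit "word<TAB>1" pairs.
import sys

for line in sys.stdin:
    for word in line.strip().lower().split():
        print(word + "\t1")

#!/usr/bin/env python3
# reducer.py -- Hadoop delivers mapper output sorted by key, so all
# lines for one word arrive consecutively; sum the counts per word.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(current_word + "\t" + str(current_count))
        current_word, current_count = word, int(count)
if current_word is not None:
    print(current_word + "\t" + str(current_count))

The same pair of scripts can be tested locally with a shell pipeline such as cat input.txt | ./mapper.py | sort | ./reducer.py, which mimics the map, shuffle, and reduce stages of a cluster run.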

Big Data: A collection of data (both structured and unstructured) so huge and complex that traditional data management tools cannot store or process it efficiently.
