Comparing Big Data Analysis Techniques


Copyright: © 2024 | Pages: 24
DOI: 10.4018/979-8-3693-0413-6.ch009

Abstract

Big data refers to large volumes of data that must be processed to obtain relevant, required, and meaningful information in less time. Processing big data calls for sophisticated methods and techniques. Many such techniques are available, such as regression analysis, time series analysis, sentiment analysis, descriptive analysis, predictive analysis, association analysis with sampling, machine learning, visualization techniques, classification, and qualitative and quantitative analysis. As data sizes keep increasing, the performance of these techniques will need to be enhanced in the future. Many applications rely on one or more of these techniques to process large volumes of data and retrieve the meaningful portion. Big data analysis techniques are expected to filter and process large volumes of data and find the relevant part. Big data analysis is very helpful in many areas, such as business, industry, and other sectors.

1. Introduction

Analyzing big data means processing raw data to obtain meaningful and relevant information. The retrieved information can feed many applications built on different approaches, such as decision making, probabilistic analysis, and deterministic analysis. With the help of existing data analysis techniques, one can form business strategies such as identifying customer segments, improving customer retention, and projecting future business growth. However, the analysis technique should be chosen to match the requirement, that is, the type of input data and the insights to be uncovered. For instance, customer segments can be identified by clustering customers with similar behavior, as sketched below.
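The following is a minimal customer-segmentation sketch in Python, assuming scikit-learn is available; the features (annual spend, visits per month) and the customer figures are hypothetical illustration data, not taken from this chapter.

# Segment customers with k-means clustering (scikit-learn assumed).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy customer records: [annual_spend, visits_per_month] (hypothetical).
customers = np.array([
    [200.0, 1], [250.0, 2], [5000.0, 12],
    [4800.0, 10], [1200.0, 5], [1100.0, 4],
])

# Scale the features so both dimensions contribute comparably.
scaled = StandardScaler().fit_transform(customers)

# Group the customers into three segments by similarity.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
segments = kmeans.fit_predict(scaled)

for row, seg in zip(customers, segments):
    print(f"spend={row[0]:7.1f} visits={row[1]:2.0f} -> segment {seg}")

With this toy data the high spenders, mid-range customers, and low spenders fall into separate segments, which is exactly the kind of grouping a retention strategy would act on.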

As the number of users and applications grows, the volume of data grows with it, and in the future it may exceed current expectations. Future analysis will therefore require machines with good configurations and the best available analysis techniques. Many analysis techniques already exist, including regression analysis, time series analysis, sentiment analysis, descriptive analysis, predictive analysis, diagnostic analysis, association analysis with sampling, ensemble methods, machine learning, visualization techniques, classification, genetic algorithms, and qualitative and quantitative analysis. A small regression example is sketched below.
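As one concrete instance of the techniques listed above, the sketch below fits a linear trend by ordinary least squares using only NumPy; the monthly sales figures are invented for illustration.

# Regression analysis sketch: fit y = slope * x + intercept (NumPy only).
import numpy as np

months = np.arange(1, 13)                   # predictor: month index
sales = np.array([10, 12, 13, 15, 16, 18,
                  19, 21, 22, 24, 25, 27])  # response: units sold (thousands)

# Least-squares fit of a degree-1 polynomial (a straight line).
slope, intercept = np.polyfit(months, sales, deg=1)
print(f"trend: sales = {slope:.2f} * month + {intercept:.2f}")

# Use the fitted line to forecast the next month.
print(f"forecast for month 13: {slope * 13 + intercept:.1f}k units")

The same fit-then-predict pattern underlies predictive analysis generally; only the model changes.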

Big data refers to large volumes of data and is commonly characterized by volume, variety, and velocity (the 3V model of big data). Volume refers to the quantity of data and to provisioning storage to match it. Variety refers to the different kinds of data, structured and unstructured, which can take many forms. Velocity refers to the rate at which data arrives as the number of users and applications keeps increasing.

Big data analysis techniques must also deal with metadata, i.e., data about data. Metadata reveals the quality and characteristics of the data, so analysis techniques are expected to be capable of processing metadata as well. Identifying the ‘relevance’ among data from different segments (different variables) is very important and is known as association rule mining; a small example is sketched below. Analysis techniques are also expected to handle real-time (RT) data as well as qualitative and quantitative data in order to extract the relevant data from a large volume. Big data processing involves many large datasets, and more sophisticated big data analysis techniques can be applied to extract the relevant data with less processing time. Examples of big data include web server logs, hospital patient records, surveillance data, institutional data, large time series, audio and video archives, and, most popularly, datasets from the e-commerce industry.
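To make association rule mining concrete, the following sketch computes the support and confidence of one candidate rule over a toy set of market baskets; production systems would use algorithms such as Apriori or FP-Growth, and the basket data here is hypothetical.

# Association rule mining sketch: support and confidence of one rule.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
]

def support(itemset):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Evaluate the candidate rule {bread} -> {milk}.
antecedent, consequent = {"bread"}, {"milk"}
rule_support = support(antecedent | consequent)   # P(bread and milk)
confidence = rule_support / support(antecedent)   # P(milk | bread)
print(f"support={rule_support:.2f} confidence={confidence:.2f}")

Here the rule holds in 3 of 5 baskets (support 0.60) and in 3 of the 4 baskets containing bread (confidence 0.75), so bread purchases are a reasonably strong signal for milk purchases in this toy data.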

1.1 Challenges in Big Data Analysis

The main challenges in big data analysis are the following:

  • Storage

  • Security

  • Quality

  • Scalability

  • Analysis issues

  • Selection of an appropriate tool to analyze large volumes of data

  • Lack of trained people

Key Terms in this Chapter

Random Sample Partition (RSP) Blocks: Disjoint blocks into which a dataset is partitioned so that each block is itself a random sample of the whole dataset; used in sampling-based analysis.

Clustering: Grouping objects based on their similarity.

Real Time Data: Data that is available for processing as soon as it is generated.

SDLC: Software Development Life Cycle; the sequence of steps that must be followed to develop a good software product.

Parts of Speech: Labeling the words of a sentence as nouns, verbs, adverbs, etc.

Deep Learning: A subset of machine learning used for performing complex tasks.

Natural Language Processing (NLP): As its name suggests, the interpretation, parsing, and analysis of human language.
