An Approach in Big Data Analytics to Improve the Velocity of Unstructured Data Using MapReduce

Sundarakumar M. R., Mahadevan G., Ramasubbareddy Somula, Sankar Sennan, Bharat S. Rawal
Copyright: © 2021 | Pages: 25
DOI: 10.4018/IJSDA.20211001.oa6

Abstract

Big data analytics is an innovative approach for extracting information from high-volume data warehouse systems. It compresses large volumes of data into clusters using MapReduce and HDFS. However, data processing takes considerable time to extract and store data in Hadoop clusters. The proposed system addresses the time delay in the shuffle phase of MapReduce caused by scheduling and sequencing. To improve the velocity of big data, this work uses a Compressed Elastic Search Index (CESI) and a MapReduce-Based Next Generation Sequencing Approach (MRBNGSA). This approach increases the speed of data retrieval from HDFS clusters because of the way data is stored in them: only metadata is kept in HDFS, which consumes less memory at runtime than the full data volume would. The approach reduces the CPU utilization and memory allocation of the resource manager in the Hadoop framework and improves data processing speed, so that the time delay is reduced to minimum latency.
1. Introduction

In this era, big data drives a new revolution in the day-to-day activities of social media, health care, banking, the military, and industry. But the problems of big data appear in accessing its volume, variety, and velocity: nowadays data is generated by humans as well as machines, and controlling it is not an easy job with old techniques, while the sheer variety of formats poses a major problem. Moreover, the speed of data retrieval and access from the data warehouse presents tremendous challenges and issues for stream processing. Big data can be processed in batch, periodic, near-real-time, and real-time modes, which creates conflicts in cluster configuration. Batch processing does not support iterative and multi-pass operations, and storing digital data in structured, semi-structured, and unstructured formats is a challenging environment in itself. While extracting data from these clusters, retrieval time is high using the MapReduce method: the entire input is sent as a single pass, that is, as a group of smaller files. The issues here are that multiple passes and real-time data integration are not possible in MapReduce data processing with old methods. Hadoop Distributed File System (HDFS) storage allows a huge volume of data to be stored in a scale-out architecture. Figure 1 shows the challenges and issues in big data processing.
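The single-pass batch flow described above can be sketched in plain Python. This is a simplified, single-process illustration of the map, shuffle, and reduce phases (a real Hadoop job distributes these phases across cluster nodes; the word-count task and function names here are illustrative assumptions, not the article's method):

```python
from collections import defaultdict

def map_phase(records):
    """Emit (key, value) pairs from each input record (here: word counts)."""
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    """Group all values by key -- the phase where the article locates the delay."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Aggregate each key's grouped values into a single result."""
    return {key: sum(values) for key, values in groups.items()}

records = ["big data velocity", "big data volume"]
counts = reduce_phase(shuffle_phase(map_phase(records)))
# counts == {"big": 2, "data": 2, "velocity": 1, "volume": 1}
```

Note that the reduce phase cannot start until the shuffle has grouped every key, which is why the whole input is consumed in a single pass and why iterative workloads fit this model poorly.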

Figure 1.

Big Data challenges and Solutions


To solve the latency problem in big data, many tools and frameworks have been used, such as Apache Hadoop and Apache Spark. MapReduce and HDFS methods provide solutions to the data retrieval problem in big data sets (McCreadie et al., 2012). But the MapReduce phase takes more time to complete multiple jobs, and because of that, storing and retrieving data from HDFS is also time-consuming. Most data mining concepts and algorithms offer different solutions to this problem; at best, they provide near-real-time processing of data. When real-time data retrieval is required, none of these techniques address the time consumed in retrieval. Eventually, big data analytics can be guided by the CAP (Consistency, Availability, and Partition tolerance) theorem and Shared-Nothing Architecture (SNA) (Duggal & Paul, 2013). When big data is processed with the MapReduce concept, machine learning algorithms are used to segregate tasks based on their metadata. The complete task in the map phase is divided into smaller tasks, each processed by a mapper() function (Alfonseca et al., 2013). Separate keys are assigned to the tasks submitted by clients, and these keys must be shared among the mapped function elements. Different algorithms (Velusamy et al., 2013; R. Somula & Sasikala, 2019) have been used to share those keys with computational logic and other mechanisms for security purposes. Under these scenarios, data processing in HDFS with MapReduce concepts is very clumsy. Data mining techniques such as association rules, k-means, and nearest-neighbor clustering do not provide real-time data retrieval. Apache Spark offers a solution for real-time data processing with in-memory analytics: in Spark, both the database and data warehouse engines are located on the same block, so it performs very fast compared with older techniques (R. Somula et al., 2019).
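The assignment of keys to mapped-function elements described above is commonly done in Hadoop by hash partitioning (the default HashPartitioner assigns a key to reducer `hash(key) mod numReducers`). A minimal sketch in Python, using a deterministic digest in place of Java's hashCode (the bucket count and helper names are illustrative assumptions):

```python
import hashlib

def partition(key: str, num_reducers: int) -> int:
    """Deterministically assign a key to one of num_reducers buckets,
    mimicking the hash-mod-N scheme of Hadoop's default partitioner."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return digest % num_reducers

def shuffle(mapped_pairs, num_reducers):
    """Route each (key, value) pair to the input bucket of its reducer."""
    buckets = [[] for _ in range(num_reducers)]
    for key, value in mapped_pairs:
        buckets[partition(key, num_reducers)].append((key, value))
    return buckets

pairs = [("alpha", 1), ("beta", 1), ("alpha", 1)]
buckets = shuffle(pairs, 2)
# Every pair with the same key lands in the same bucket, so a single
# reducer sees all values for that key.
```

Because the partition function is deterministic, all occurrences of a key are routed to one reducer, which is what makes per-key aggregation correct; it is also why a few hot keys can overload one reducer and stretch the shuffle-phase delay the article targets.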
