An Ensemble of Random Forest Gradient Boosting Machine and Deep Learning Methods for Stock Price Prediction

Lokesh Kumar Shrivastav, Ravinder Kumar
Copyright: © 2022 |Pages: 19
DOI: 10.4018/JITR.2022010102

Abstract

Stochastic time series analysis of high-frequency stock market data is a very challenging task for analysts due to the limited availability of efficient tools and techniques for big data analytics. This has opened the door of opportunity for developers and researchers to build intelligent, machine learning based tools and techniques for data analytics. This paper proposes an ensemble for stock market data prediction built from three of the most prominent machine learning techniques. The stock market dataset has a raw size of 39364 KB with all attributes and a processed size of 11826 KB containing 872435 instances. The proposed work implements an ensemble model comprising Deep Learning, Gradient Boosting Machine (GBM), and Distributed Random Forest techniques of data analytics. The performance of the ensemble model is compared with that of each individual method, i.e., deep learning, Gradient Boosting Machine (GBM), and Random Forest. The ensemble model performs better, achieving the highest accuracy of 0.99 and the lowest error (RMSE) of 0.1.

1. Introduction

Accurate forecasting of stock prices is an issue of special interest to investors and to companies listed on the stock market. The non-stationary and non-linear time-series nature of stock prices makes prediction very complex and challenging (Cavalcante et al., 2016). Financial time series analysis is a very important source of information for stock market prediction (Oztekin et al., 2016). Finding hidden patterns is a prerequisite for analyzing and predicting stock price fluctuations. The well-known Random Walk (Malkiel, 2003; Mankiw & Shapiro, 1985) and Efficient Market (Jensen, 1978) hypotheses presume that it is impossible to predict the behavior of the stock market due to the randomness and nonlinearity present in the data. These assumptions have been tested by many different models over different intervals of time (Atsalakis & Valavanis, 2009). The risk of investing in the stock market lies in the fact that stock price series are very dynamic, non-correlated, chaotic, and noisy in nature. Accurate prediction of stock prices is therefore crucial, from the point of view of both investors and companies, for maximizing gains on investments. Recent advances in the field of soft computing have captured the attention of researchers seeking to analyze and predict the non-linear behavior of the stock market in a highly noisy environment.

Machine learning frameworks are usually deployed to forecast the price of the volatile stock market at an optimal level of accuracy (Kumar et al., 2013a, 2013b). For this purpose, high-frequency big data has been used for the experiments and for estimating accuracy. The volume, velocity, and variety of stock market datasets have been increasing tremendously day by day. It has therefore become necessary to develop a tool or model that can predict the behavior of the stock market under such high volatility. Tree-based ensembles built with machine learning techniques have gained popularity alongside the best available statistical models and the most efficient deep learning models.

This paper proposes an ensemble model comprising the Deep Learning, Gradient Boosting Machine (GBM), and Random Forest (RF) models. It is evident from the literature that Gradient Boosting Machine and Random Forest have already been combined to form ensemble models. The performance of the proposed ensemble model is compared with that of the individual models, as discussed below.

Deep learning is a well-known supervised machine learning approach that provides generalization, trainability, and stability on stochastic big datasets. It is based on a feed-forward neural architecture and yields high prediction accuracy (Rusk, 2015). In this study, a supervised deep learning model is used to optimize the predictive result.
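The feed-forward idea above can be sketched with a minimal one-hidden-layer regression network trained by gradient descent. This is an illustrative toy only: the toy target, network width, learning rate, and epoch count are assumptions, not the paper's configuration.

```python
import numpy as np

# Toy feed-forward regressor: one tanh hidden layer, linear output, full-batch
# gradient descent on mean squared error. All sizes/rates are illustrative.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X)                        # toy target standing in for price data

W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)             # hidden layer with tanh activation
    return h, h @ W2 + b2                # linear output for regression

_, pred0 = forward(X)
mse0 = float(np.mean((pred0 - y) ** 2))  # error before training

for _ in range(3000):
    h, pred = forward(X)
    err = (pred - y) / len(X)            # gradient of 0.5 * mean squared error
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)     # backpropagate through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
mse = float(np.mean((pred - y) ** 2))    # error after training
```

Training should drive the MSE well below its initial value, which is the sense in which the feed-forward architecture "learns" the non-linear mapping.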

The Gradient Boosting Machine (GBM) is an ensemble machine learning technique used to build predictive tree-based models (Friedman, 2002). In gradient boosting, each new model is trained to predict the residuals (errors) of the models built so far, and the models are then added together to make the final prediction.
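The residual-fitting loop can be sketched as follows, using one-split regression trees (stumps) as the weak learners. The synthetic data, stump learner, learning rate, and round count are assumptions for illustration, not the paper's GBM settings.

```python
import numpy as np

# Sketch of gradient boosting for regression: each round fits a weak learner
# to the residuals of the ensemble so far, then adds a damped correction.
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, 300)
y = 2.0 * X + rng.normal(0, 0.5, 300)    # synthetic stand-in for a price series

def fit_stump(X, y):
    """One-split regression tree minimising squared error."""
    best = None
    for t in np.quantile(X, np.linspace(0.05, 0.95, 19)):
        left, right = y[X <= t], y[X > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda x: np.where(x <= t, lv, rv)

pred = np.full_like(y, y.mean())         # initial model: the global mean
lr = 0.3                                 # shrinkage (learning rate)
for _ in range(50):                      # boosting rounds
    stump = fit_stump(X, y - pred)       # fit the current residuals
    pred = pred + lr * stump(X)          # add the correction to the ensemble

rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
```

After enough rounds the ensemble's RMSE approaches the noise level of the data, far below the spread of the raw target.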

Distributed Random Forest (DRF) has gained popularity as a powerful classification and regression tool for stock market data analytics (Khaidem & Dey, 2016). DRF works by generating a forest of regression or classification trees, as opposed to a single tree, with each tree built on a subset of the rows and columns. Each individual tree is a weak learner, and increasing the number of trees reduces the variance of the ensemble. The final prediction is calculated by averaging the predicted values over all the trees.
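The averaging step can be sketched as below: many trees, each trained on a bootstrap sample of the rows, with the final prediction being the mean over all trees. This toy uses stumps as the per-tree learners and omits the per-split column sampling of a full DRF; the data and tree count are assumptions.

```python
import numpy as np

# Sketch of random-forest averaging: bootstrap the rows, fit one weak tree per
# sample, and average all tree predictions to reduce variance.
rng = np.random.default_rng(2)
X = rng.uniform(0, 10, 400)
y = 0.5 * X + rng.normal(0, 0.3, 400)    # synthetic regression target

def fit_stump(X, y):
    """One-split regression tree used as the per-tree weak learner."""
    best = None
    for t in np.quantile(X, np.linspace(0.05, 0.95, 19)):
        left, right = y[X <= t], y[X > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda x: np.where(x <= t, lv, rv)

n_trees = 100
preds = []
for _ in range(n_trees):
    idx = rng.integers(0, len(X), len(X))    # bootstrap sample of the rows
    preds.append(fit_stump(X[idx], y[idx])(X))
forest_pred = np.mean(preds, axis=0)         # average over all the trees

rmse = float(np.sqrt(np.mean((y - forest_pred) ** 2)))
```

Because the bootstrap samples differ, the trees split at slightly different points, and averaging them smooths the prediction, which is the variance reduction the paragraph describes.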

In ensemble machine learning methods, multiple learning algorithms are combined to obtain better prediction performance than any single learning algorithm (Zhang & Ma, 2012). Most of the popular prediction tools available in the literature use an ensemble technique for making predictions. This study implements a unique ensemble of the most prominent models, Deep Learning, Gradient Boosting Machine (GBM), and Random Forest (RF), where GBM and RF are themselves already ensemble learners, using boosting and bagging respectively. An ensemble model maintains a collection of weak learners and combines them into a single strong learner.
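The combination step itself can be as simple as a weighted average of the base models' outputs. The sketch below assumes equal weights and uses made-up example predictions; this excerpt does not state the paper's exact combination rule.

```python
import numpy as np

# Sketch of combining three base models' predictions into one ensemble output.
# The prediction values and the equal weighting are illustrative assumptions.
dl_pred  = np.array([101.2, 99.8, 100.5])   # hypothetical deep learning output
gbm_pred = np.array([100.9, 100.1, 100.7])  # hypothetical GBM output
rf_pred  = np.array([101.0, 99.9, 100.6])   # hypothetical random forest output

weights = np.array([1 / 3, 1 / 3, 1 / 3])   # equal weights as a simple default
ensemble_pred = weights @ np.vstack([dl_pred, gbm_pred, rf_pred])
```

With equal weights this reduces to the plain mean of the three models; unequal weights (e.g. tuned on a validation set) would favor the stronger base learners.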
