Introduction
The problem of imbalanced data pervades the different spheres of the Internet of Things (IoT). The large variety of sensors connected in IoT networks produces a great amount of data that is crucial to understand (Sinha et al., 2017). Training machine learning algorithms directly on the raw data collected from real networks is inadequate because the data are imbalanced (Zolanvari, Teixeira & Jain, 2018; Choi & Lee, 2018; Makki et al., 2019). For the past few years, the Class Imbalance Problem (CIP) has been a major issue in machine learning and data mining. It arises when the instances of one class, the Majority class, are in abundance while those of the other, the Minority class, are scarce (Wang & Yao, 2013; Somasundaram & Reddy, 2016). The issue usually appears in classification problems (He & Garcia, 2009). For example, to attain higher accuracy and minimize the error rate, a classifier may assign all samples to the Majority class, so that every Minority-class sample is misclassified. Such a classifier reports an extremely impressive accuracy while the values of evaluation measures like precision and recall suffer; the constructed model thus exhibits an accuracy paradox. To overcome this problem, balancing the data from IoT is necessary. Nowadays, CIP is common in a large number of diverse fields such as fraud detection, anomaly detection, medical diagnosis, oil-spill detection, facial detection, and more (Nagar et al., 2020; Yong, 2012). These can all be sources of imbalanced data in IoT, which may arise from an attack on a single device or on a set of sensors connected over the network. In this paper, the focus is on handling the CIP in two particular domains of IoT, viz. fraud detection and medical diagnosis. We employ a total of four datasets, two belonging to the fraud-detection domain and two belonging to the field of medicine.
The datasets are highly diverse in their sample sizes, the number and types of attributes, and their imbalance ratios.
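The accuracy paradox described above can be made concrete with a small sketch. The labels below are hypothetical (95 Majority-class samples, 5 Minority-class samples), chosen only to show that a trivial classifier which always predicts the Majority class attains high accuracy while its recall on the Minority class is zero:

```python
def evaluate(y_true, y_pred, positive=1):
    """Return (accuracy, recall) where recall is computed for the
    positive (Minority) class."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    actual_pos = sum(t == positive for t in y_true)
    recall = tp / actual_pos if actual_pos else 0.0
    return accuracy, recall

# Imbalanced labels: 95 Majority (0) samples, 5 Minority (1) samples.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100          # classify everything as Majority

accuracy, recall = evaluate(y_true, y_pred)
print(accuracy, recall)     # 0.95 accuracy, 0.0 Minority-class recall
```

Despite 95% accuracy, the classifier is useless for the Minority class, which is why precision and recall, rather than accuracy alone, are the appropriate evaluation measures under class imbalance.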
Imbalanced datasets from IoT can be processed using various strategies, i.e., algorithm-level learning (Ruff et al., 2017), cost-sensitive learning (Nguyen, Gantner & Schmidt-Thieme, 2010) and data-level learning. The last of these is the focus of this paper. Data-level learning includes sampling algorithms, which comprise two major techniques, Oversampling and Undersampling; both calibrate the class distribution of the dataset. Undersampling works on the Majority class by removing some or many of its instances without compromising its classifying features (Hulse, Khoshgoftaar & Napolitano, 2009); the Majority class is thereby reduced until it is compatible with the Minority class. Oversampling, in contrast, amplifies the Minority class until its size becomes comparable to that of the Majority class. Oversampling therefore avoids the risk of losing useful classifying features that Undersampling carries. In particular, because Undersampling removes Majority-class points, decisive boundary points may be lost even when the points are removed using an efficient method; the resulting classifier is also trained on less data and may not be able to consider all the cases. Due to these advantages of Oversampling over Undersampling, the authors aim to work on Oversampling techniques in this paper. There are various Oversampling techniques proposed in the literature (Nguyen et al., 2010; Xiaolong et al., 2019), such as Random Oversampling (ROS), the Synthetic Minority Oversampling Technique (SMOTE) (Chawla, Bowyer, Hall & Kegelmeyer, 2002; Chawla, Lazarevic, Hall & Bowyer, 2003), the Adaptive Synthetic Sampling technique (ADASYN) (He, Bai, Garcia & Li, 2008), SMOTEBoost (Chawla et al., 2003), AdaBoost (Zhang et al., 2019), etc. One of the most commonly used techniques in the literature is SMOTE.
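The core idea of SMOTE can be sketched as follows: rather than duplicating Minority-class samples (as ROS does), synthetic samples are generated by interpolating between a Minority-class point and one of its nearest Minority-class neighbours. The function and data below are a simplified illustration, not the exact algorithm of Chawla et al. (2002), which differs in how candidate points and neighbours are selected:

```python
import random

def smote_like_oversample(minority, n_new, k=5, seed=0):
    """Generate n_new synthetic Minority-class samples (as tuples) by
    interpolating a randomly chosen Minority point toward one of its
    k nearest Minority-class neighbours (SMOTE-style sketch)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x within the Minority class (excluding x),
        # ranked by squared Euclidean distance.
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Hypothetical 2-D Minority-class points.
minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
new_points = smote_like_oversample(minority, n_new=8)
print(len(new_points))  # 8 synthetic Minority samples
```

Because each synthetic point lies on the line segment between two existing Minority points, the Minority region of the feature space is filled in rather than merely replicated, which is the property that distinguishes SMOTE from Random Oversampling.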