Assessing Hyper Parameter Optimization and Speedup for Convolutional Neural Networks

Sajid Nazir, Shushma Patel, Dilip Patel
DOI: 10.4018/IJAIML.2020070101

Abstract

The increased processing power of graphical processing units (GPUs) and the availability of large image datasets have fostered a renewed interest in extracting semantic information from images. Promising results for complex image categorization problems have been achieved using deep learning, with neural networks comprising many layers. Convolutional neural networks (CNNs) are one such architecture, and they offer further opportunities for image classification. Advances in CNNs enable the development of training models using large labelled image datasets, but the hyper parameters must be specified, which is challenging and complex due to the large number of parameters. Substantial computational power and processing time are required to determine the optimal hyper parameters that define a model yielding good results. This article provides a survey of the hyper parameter search and optimization methods for CNN architectures.

Introduction

The growth of the Internet of Things (IoT) (Bubley, 2016) and the emergence of social, web and mobile applications have provided access to large image datasets, as a result of a move away from text-based towards visual communications. This, coupled with advances in storage and processing technologies, has made it possible to progress from image processing to interpreting images to extract contextual information. Artificial Intelligence (AI) aims to endow machines with capabilities of learning, perception and reasoning similar to those of a human. The question, ‘Can machines think?’ was posed in 1950 (Turing, 1950) through an ‘imitation game.’ Challenges in AI remain, despite substantial progress in learning algorithms (Bengio, 2009). Machine learning is a sub-field of AI that makes it possible for computers to learn without being explicitly programmed (Neetesh, 2017). Machine learning for vision problems comprises techniques that can provide intelligent solutions to the complex problems of interpreting and describing a scene, given sufficient data. Much progress has been made in this area, but improvements are still needed. One technique that has risen to prominence recently is the Artificial Neural Network (ANN), inspired by the biological neuron interconnections and activations of the human brain (Deep Learning tutorial, 2015).

Deep learning is a branch of machine learning (Bhandare & Kaur, 2018) that derives its name from neural networks comprising many layers. Multiple layers are used to model high-level features from complex data, with each successive layer using the outputs of the preceding layer as its input (Benuwa, 2016). An overview of deep learning techniques, with a focus on convolutional neural networks (CNNs) and deep belief networks (DBNs), is provided in (Arel, 2010), together with a discussion of sparsity and dimensionality reduction. Benuwa (2016) reviews deep learning techniques along with algorithm principles and architectures for deep learning. A review of recent advances in deep learning is provided in (Minar, 2018), as well as a taxonomy of deep learning techniques and applications. A review of deep supervised learning, unsupervised learning, and reinforcement learning is provided in (Schmidhuber, 2015), covering developments since 1940.
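The layer-on-layer structure described above can be sketched with a minimal, hypothetical example in plain NumPy (toy random data and filters, not from the article): each convolutional layer consumes the feature map produced by the preceding layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Element-wise ReLU activation."""
    return np.maximum(0.0, x)

def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution of a single-channel image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Each successive layer uses the preceding layer's output as its input.
image = rng.standard_normal((28, 28))   # hypothetical 28x28 input image
k1 = rng.standard_normal((3, 3))        # first-layer filter
k2 = rng.standard_normal((3, 3))        # second-layer filter

h1 = relu(conv2d(image, k1))            # layer 1: 26x26 feature map
h2 = relu(conv2d(h1, k2))               # layer 2: 24x24 feature map
features = h2.flatten()                 # flattened input for a dense classifier layer

print(h1.shape, h2.shape, features.shape)
# → (26, 26) (24, 24) (576,)
```

The filter sizes and input dimensions here are illustrative; real CNNs stack many such layers, with learned filters, pooling, and multiple channels per layer.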

The aim of training neural networks is to find weights that achieve better classification accuracy (Nguyen, 2018). These networks require substantial time, processing power, and data to be trained. After training, a neural network can be used to make predictions on test data (Neetesh, 2017). Deep learning algorithms are complex to develop, train and evaluate. A neural network (Krizhevsky, Sutskever, & Hinton, 2012) with 60 million parameters and 650,000 neurons took a long time to train on ImageNet (Deng et al., 2009) in order to classify 1.2 million images. The increased research interest in neural networks is due to the promising results obtained in the ImageNet competitions (Krizhevsky et al., 2012). CNNs, the leading type of neural network, have been used for classifying large image datasets (Krizhevsky et al., 2012; Szegedy et al., 2014). The application of deep learning to different medical image modalities is described in (Shen, Wu, & Suk, 2017).
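The idea of searching for weights that minimize a loss can be illustrated with a minimal gradient-descent sketch on a toy linear model (synthetic data, not from the article); the learning rate used here is itself one of the hyper parameters this survey is concerned with.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: targets generated by a known weight vector.
X = rng.standard_normal((100, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)     # weights to be learned
lr = 0.1            # learning rate: a hyper parameter chosen before training

for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
    w -= lr * grad                          # gradient descent update

print(np.round(w, 2))   # converges close to w_true
```

Training a CNN follows the same principle, but over millions of weights and with gradients obtained by backpropagation, which is why the processing time and data requirements noted above are so large.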

CNNs have also been applied to combining image information over long-duration videos of up to two minutes (120 frames) to solve classification problems (Ng et al., 2015). A dynamically trained CNN was proposed for object classification in video streams (Yaseen, Anjum, Rana, & Antonopoulos, 2019). Image features from the hidden layers of deep neural networks were extracted for image recognition in (Hayakawa, Oonuma, & Kobayashi, 2017).

Although the fields of artificial intelligence and deep learning are very promising, the techniques are deeply rooted in probabilistic foundations. An important aspect of neural network performance is the hyper parameters, i.e., the configuration settings chosen before training, and their impact on results. This aspect is critical to designing and developing efficient models. CNN architectures are dependent on hyper parameters, and an incorrect choice can have a huge effect on performance (Albelwi & Mahmood, 2016).
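To make the hyper parameter search problem concrete, the following is a minimal random-search sketch over a hypothetical CNN search space (the parameter names, value ranges, and the stand-in `evaluate` function are illustrative assumptions, not taken from the article; a real study would train and validate a model at each trial).

```python
import random

random.seed(42)

# Hypothetical search space for a small CNN.
space = {
    "learning_rate": [1e-1, 1e-2, 1e-3, 1e-4],
    "batch_size":    [32, 64, 128],
    "num_filters":   [16, 32, 64],
    "dropout":       [0.0, 0.25, 0.5],
}

def sample_config(space):
    """Draw one random hyper parameter configuration."""
    return {name: random.choice(values) for name, values in space.items()}

def evaluate(config):
    """Stand-in for training a CNN and returning validation accuracy.
    A real experiment would build, train, and validate the model here."""
    return random.random()

best_config, best_score = None, -1.0
for _ in range(10):                 # 10 random trials
    config = sample_config(space)
    score = evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config)
```

Even this toy space contains 4 × 3 × 3 × 3 = 108 configurations; with realistic ranges and training times of hours per trial, the cost of exhaustive search motivates the optimization methods surveyed in this article.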
