A CONVblock for Convolutional Neural Networks

Hmidi Alaeddine, Malek Jihene
Copyright: © 2021 | Pages: 14
DOI: 10.4018/978-1-7998-5071-7.ch004

Abstract

Reducing the size of convolution filters has been shown to be effective in image classification models: smaller filters reduce computation and the number of parameters used in the convolution layer while increasing the efficiency of the representation. The authors present a deep classification architecture with improved performance. The main objective of this architecture is to improve the key performance measures of the network through a new design based on the CONVblock. The proposal is evaluated on two classification datasets, CIFAR-10 and MNIST. The experimental results demonstrate the effectiveness of the proposed method: the architecture achieves an error of 1.4% on CIFAR-10 and 0.055% on MNIST.

Introduction

In recent years, the Convolutional Neural Network (CNN) has become a well-known deep learning architecture. It has achieved very good performance on a variety of problems, such as visual recognition and detection. In 1959, Hubel and Wiesel (D. H. Hubel, 1968) discovered that cells of the animal visual cortex are responsible for detecting light in receptive fields. Inspired by this discovery, Fukushima proposed the neocognitron in 1980 (K. Fukushima, 1982), which can be considered the first network that deserves to be called deep. About ten years later, Yann LeCun et al. (B. B. Le Cun, 1989) proposed a multilayer artificial neural network capable of classifying handwritten digits, called LeNet-5. LeNet-5 integrates several types of layers (convolution, pooling, fully connected) and can be trained with the back-propagation algorithm (Hecht-Nielsen, 1988). Since 2006, many methods have been proposed to overcome the difficulties encountered in training deep CNNs (X.-X. Niu, 2012), (O. Russakovsky, 2015), (K. Simonyan, 2015), (C. Szegedy, 2015).

Since the success of AlexNet (Alex Krizhevsky, 2012) in 2012, several enhancements have been made to the convolution layers of CNNs, and various proposals have sought to improve their representational capacity. The purpose of the convolution layer is to learn representations of the input features: filters whose weights are learned automatically during training extract different characteristics, and a wide variety of filters can be chosen. Beyond techniques that manage convolution along the two spatial directions (height and width) and its arithmetic, the 1×1 convolution has been described as a way of processing the depth dimension. Its advantages include dimensionality reduction for efficient computation, pooling of features across channels, and the introduction of an additional non-linearity such as ReLU (V. Nair, 2010) after the convolution; this layer is equivalent to a parametric cross-channel pooling operation followed by ReLU. The 1×1 convolution was proposed in Network in Network (NiN) (M. Lin, 2014) and was then heavily exploited in the Google Inception architecture (C. Szegedy, 2015). Szegedy et al. (C. Szegedy, 2015), whose work can be considered a logical outcome of NiN (M. Lin, 2014), introduced the Inception module, which uses filters of different sizes to capture visual patterns at different scales. The module places 1×1 convolutions before the 3×3 and 5×5 convolutions in order to reduce the depth and width of the network. This solution reduces the parameters of the network (C. Szegedy, 2015) to about 5 million, far fewer than AlexNet (Alex Krizhevsky, 2012) (60 million) and ZFNet (M. D. Zeiler, 2014) (75 million).
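To make this reduction pattern concrete, the following is a minimal PyTorch sketch (not the authors' code) of a 1×1 convolution used to shrink channel depth before a larger 3×3 convolution, in the spirit of the Inception module; the channel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ReduceThenConv(nn.Module):
    """1x1 convolution to shrink channel depth before a 3x3 convolution.

    The channel counts (256 -> 64 -> 128) are illustrative, not the chapter's.
    """
    def __init__(self, in_channels=256, reduced_channels=64, out_channels=128):
        super().__init__()
        # 1x1 convolution: parametric cross-channel pooling, cheap depth reduction
        self.reduce = nn.Conv2d(in_channels, reduced_channels, kernel_size=1)
        # The 3x3 convolution now operates on far fewer input channels
        self.conv = nn.Conv2d(reduced_channels, out_channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.reduce(x))   # ReLU non-linearity after the 1x1 convolution
        return self.relu(self.conv(x))

x = torch.randn(1, 256, 32, 32)         # one 32x32 feature map with 256 channels
y = ReduceThenConv()(x)
print(y.shape)                          # torch.Size([1, 128, 32, 32])
```

With these illustrative numbers, the 3×3 convolution sees 64 input channels instead of 256, which cuts its weight count by a factor of four.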
In deep learning, several CNN variants take their name from modifications of the convolution layer. Dilated CNN (F. Yu, 2016) is one such proposal: it introduces an additional hyper-parameter, the dilation rate l, which indicates how much the kernel is expanded. Dilated CNNs have achieved interesting performance in tasks such as speech synthesis (Aaron van den Oord, 2016) and speech recognition (T. Sercu, 2016); the dilated convolution technique was introduced in (F. Yu, 2016) and (L.-C. Chen, 2015). Dilated convolutions “inflate” the network's receptive field and allow it to cover more relevant information by inserting spaces between the elements of the kernel (usually l−1 spaces between adjacent elements). The weight-sharing mechanism of CNNs plays an important role in reducing the number of parameters. Tiled CNN (J. Ngiam, 2010) is a CNN variant that tiles and multiplies feature maps in order to learn rotation- and scale-invariant features; the corresponding experiments were conducted by (J. Ngiam, 2010). In (Z. Wang, 2015), Wang et al. found that Tiled CNN produces better results than the traditional CNN (Zheng, 2014) on small time-series databases. For many network architectures, it is also desirable to perform a transformation in the opposite direction of a normal convolution, which can be seen as the backward pass of a traditional convolution. This operation is known as de-convolution (M. D. Zeiler, 2014), (M. D. Zeiler D. K., 2010), (M. D. Zeiler G. W., 2011), (J. Long, 2017) or fractionally strided convolution (F. Visin, 2015). Unlike the normal convolution, which links several input activations to a single output activation, de-convolution associates a single input activation with several output activations. It has been widely used for visualization (M. D. Zeiler G. W., 2011), recognition (C. Cao, 2015), (J. Zhang, 2016), (Y. Zhang, 2016), semantic segmentation (H. Noh, 2015), and localization (B. Zhou, 2016).
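The short PyTorch sketch below (an illustration under assumed layer sizes, not the chapter's architecture) shows both ideas: a dilated convolution whose 3×3 kernel with dilation rate l = 2 covers a 5×5 receptive field, and a transposed (de-) convolution that maps each input activation to several output activations, here doubling the spatial resolution.

```python
import torch
import torch.nn as nn

# Dilated convolution: a 3x3 kernel with dilation rate l = 2 covers a 5x5
# receptive field (l - 1 = 1 space between kernel elements) with no extra weights.
dilated = nn.Conv2d(16, 16, kernel_size=3, dilation=2, padding=2)

# Transposed ("de-") convolution: each input activation contributes to several
# output activations; stride=2 doubles the spatial resolution.
deconv = nn.ConvTranspose2d(16, 16, kernel_size=4, stride=2, padding=1)

x = torch.randn(1, 16, 32, 32)
print(dilated(x).shape)   # torch.Size([1, 16, 32, 32]) -- size preserved by padding=2
print(deconv(x).shape)    # torch.Size([1, 16, 64, 64]) -- spatially upsampled
```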

Key Terms in this Chapter

s: Stride.

NB: Number of convolution blocks.

NNB: Number of local convolution blocks.

SP: Picture size.

p: Padding.

BN: Batch normalization layer.

L x C: The filter size.

SP0: Size of the pictures in the dataset.
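Using the terms above, the standard relation SP_out = (SP + 2p − L)/s + 1 (assuming a square L x L filter) gives the spatial output size of a convolution layer. The small sketch below is a generic illustration of this formula; the values are examples, not taken from the chapter.

```python
def conv_output_size(sp, l, s, p):
    """Spatial output size of a convolution layer.

    sp : input picture size SP (assumed square)
    l  : filter side length (a square L x L filter is assumed here)
    s  : stride
    p  : padding
    """
    return (sp + 2 * p - l) // s + 1

# Illustrative values only: a 32x32 input with a 3x3 filter, stride 1, padding 1
# keeps the spatial size, while stride 2 halves it.
print(conv_output_size(32, 3, 1, 1))  # 32
print(conv_output_size(32, 3, 2, 1))  # 16
```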
