CNN: A Fundamental Unit of New Age AI

DOI: 10.4018/978-1-6684-9576-6.ch003

Abstract

The field of artificial intelligence (AI) is very promising with the emergence of machine learning and deep learning algorithms. Within deep learning, the rise of the convolutional neural network (CNN) is especially propitious, as it is more accurate and powerful than previously known soft computational models such as artificial neural networks (ANN) and recurrent neural networks (RNN); informally, a CNN can be thought of as an ANN on steroids. These soft computational models are inspired by biological models and give approximate solutions to image-driven pattern recognition problems. The near-perfect precision of CNN models offers a better way to recognize patterns and solve real-world problems; their results have been widely accepted, which has strengthened human trust in these technologies. This chapter aims to provide brief information about CNNs by introducing and discussing recent papers published on this topic and the methods they employ to recognize patterns. It also aims to provide guidance on the dos and don'ts of using different CNN models.
Chapter Preview

Introduction

The human brain consists of more than 100 billion neurons interconnected with one another. These neurons are made up of tissues and chemicals that facilitate the functioning of the brain and control many processes of the human body, some of which occur involuntarily, such as breathing, blinking, and thinking. This collection of interconnected neurons (a single neuron may be connected to up to 200,000 other neurons) works like a microprocessor, albeit 100 times faster. We are born with some neural structures, and many others are formed through experience; each structure responds differently when a signal is passed through it. Images are one of the key means by which humans identify objects. Although images are captured by the eyes, it is the brain that analyzes them using neurons and extracts the features through which we determine the object(s) present in an image.

Although scientists have not yet completely understood how biological neurons work, researchers have succeeded in creating "artificial neurons", which are abstractions of biological neurons. Structures of these artificial neurons are trained to perform useful functions, and from them we can build neural networks with layers of neurons. Each neuron in a neural network has weights associated with it; these weights are adjusted repeatedly during training so that the network converges on a function from which an approximate solution is produced as output. Neural networks can be trained to identify handwriting, tag objects in an image, improve survival rates of heart transplant recipients, and more. Their fields of application keep expanding well beyond STEM: neural networks are employed in finance, medicine, business, literature, and the arts, where they are used extensively to identify plagiarism (Chitra & Rajkumar, 2016).
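To make the idea of a weighted artificial neuron concrete, the following minimal Python sketch computes one neuron's output as a weighted sum of its inputs plus a bias, passed through an activation function. The function names, the choice of a sigmoid activation, and the example numbers are illustrative assumptions rather than material from this chapter.

```python
import numpy as np

def sigmoid(z):
    """Squash the weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def artificial_neuron(inputs, weights, bias):
    """Output of a single artificial neuron for one input vector."""
    z = np.dot(weights, inputs) + bias   # weighted sum of the inputs
    return sigmoid(z)                     # non-linear activation

# Example: three inputs with (initially arbitrary) weights and a bias.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
print(artificial_neuron(x, w, bias=0.2))
```

Training a network amounts to repeatedly adjusting the weight and bias values so that outputs like this one move closer to the desired targets.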

There are many types of neural networks, such as the perceptron, feed-forward networks, and convolutional neural networks, and many more are being developed as we read this article. To keep this article brief, we will discuss only these networks. The perceptron network, shown in Figure 1, is the most basic neural network. It consists of only one neuron: the weighted inputs are summed and an activation function is applied to produce a binary output. It is the oldest known neural network, and it is based on a supervised learning algorithm in which tagged (labeled) data is classified into two classes; it cannot produce correct outputs for problems that are not linearly separable.

Figure 1. Perceptron network
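As a rough companion to Figure 1, the sketch below implements a single-neuron perceptron with a step activation, trained on tagged (labeled) data using the classic perceptron learning rule. The function names, learning rate, and the AND-gate example are illustrative assumptions, not the chapter's own code.

```python
import numpy as np

def step(z):
    """Binary step activation: 1 if the weighted sum is non-negative, else 0."""
    return 1 if z >= 0 else 0

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Perceptron learning rule on labeled data with binary classes 0/1."""
    weights = np.zeros(X.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = step(np.dot(weights, xi) + bias)
            error = target - prediction      # 0 when correct, +/-1 when wrong
            weights += lr * error * xi       # nudge weights toward the target
            bias += lr * error
    return weights, bias

# Example: learn the logical AND function (a linearly separable problem).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([step(np.dot(w, xi) + b) for xi in X])   # expected: [0, 0, 0, 1]
```

Because AND is linearly separable, the single neuron converges to a correct decision boundary; the same loop would never converge on a problem such as XOR, which is why more capable architectures are needed.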

Feed-forward networks are neural networks used mainly in applications such as speech and face recognition; the ANN is one example. A feed-forward network consists of many layers of neurons, and data flows only forward, hence its name. There is an input layer, one or more hidden layers, and an output layer, as shown in Figure 2; in some models, hidden layers may not be present. Because the data flow is unidirectional, there are no feedback connections, and the weights of the neurons remain static once training is complete.

Figure 2. Feed-forward network
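The forward pass of such a network can be sketched in a few lines of Python. The layer sizes, the ReLU activation, and the random initialization below are illustrative assumptions, and the snippet shows only inference, i.e. the unidirectional flow from the input layer through a hidden layer to the output layer, not training.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    """Rectified linear activation for the hidden layer."""
    return np.maximum(0.0, z)

def forward(x, params):
    """One forward pass: input -> hidden -> output, with no feedback paths."""
    W1, b1, W2, b2 = params
    hidden = relu(W1 @ x + b1)    # hidden-layer activations
    output = W2 @ hidden + b2     # output-layer scores
    return output

# Example: 4 inputs, 8 hidden units, 2 outputs, randomly initialized weights.
params = (rng.normal(size=(8, 4)), np.zeros(8),
          rng.normal(size=(2, 8)), np.zeros(2))
print(forward(rng.normal(size=4), params))
```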
