Deep Learning-Enabled Edge Computing and IoT

Amuthan Nallathambi, Kannan Nova
DOI: 10.4018/978-1-6684-6275-1.ch004

Abstract

Deep learning is a new approach to artificial intelligence that enables edge-computing systems to learn from data and make decisions without human intervention. Edge computing is a technique for coping with the increasing demand for streaming data, which is especially important for applications that involve computationally intensive tasks such as driverless cars, autonomous drones, and smart cities. Edge computing is the provision of computing, big data analytics, and storage in such a way that the processing power comes to the data and not vice versa. It relies on a decentralized approach in which computational resources are provided at the edge of networks. Edge computing is an emerging field that is attracting attention from many vendors and researchers. The data generated by IoT devices is usually too large and complex for cloud-based storage and processing; edge computing instead handles data at the source of generation in real time, which speeds up decision making.

Introduction

In this chapter, we explore how deep learning enables edge computing applications at scale, for example by predicting user preferences and supporting real-time analytics. Deep learning is a subset of machine learning that uses algorithms based on artificial neural networks to learn abstract concepts from example data and improve its predictions over time. Deep learning models have an architecture composed of many layers or modules, each designed to process its inputs, extract features from them, and pass the result through multiple processing steps to generate outputs.

Deep learning models can be categorized as follows, based on the number of layers in their architecture:

Single-layer networks: These models are composed of a single layer that performs only one processing step: a set of inputs is mapped directly to a single output. The perceptron is the classic example of a single-layer network; a minimal sketch follows below.
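To make this concrete, the following is a minimal perceptron sketch in Python using NumPy. The AND-gate data, learning rate, and number of epochs are illustrative assumptions rather than details from the chapter.

```python
import numpy as np

# Minimal single-layer perceptron: a set of inputs mapped to a single output
# in one processing step, trained with the classic perceptron update rule.
# The AND-gate data below is an illustrative assumption.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])          # AND-gate targets

w = np.zeros(2)                     # one weight per input
b = 0.0                             # bias
lr = 0.1                            # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if np.dot(w, xi) + b > 0 else 0   # single processing step
        w += lr * (target - pred) * xi             # perceptron weight update
        b += lr * (target - pred)

print(w, b)   # learned weights that separate the two classes
```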

Multi-layer networks: These models have two or more layers in the architecture, each performing a different task. One or more hidden layers are introduced between the input and output layers; how many depends on the optimization and the hyperparameters. This can be achieved by means of a feed-forward architecture, whereby the output of one layer is the input to the next layer, or a recurrent architecture with feedback. A deep neural network is an example of a multilayer network. Three different types of multilayer networks are described below. Feedforward multi-layer perceptron model: This is the simplest form of a multilayer network; a forward-pass sketch is given after this paragraph.
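As a rough illustration of the feed-forward idea, the sketch below passes one example through a hidden layer and an output layer, so that the output of one layer becomes the input of the next. The layer sizes and random weights are assumptions made for illustration.

```python
import numpy as np

def relu(z):
    # Rectified linear activation used in the hidden layer
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # hidden layer -> output layer

x = rng.normal(size=(1, 4))        # one example with 4 input features
h = relu(x @ W1 + b1)              # output of the hidden layer...
scores = h @ W2 + b2               # ...becomes the input of the output layer
print(scores.shape)                # (1, 3): one score per output node
```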

Recurrent neural network model: The recurrent neural network (RNN) is an artificial intelligence technique used to process sequential data and produce a sequence of outputs in which each successive value depends on the values of previous entries. It is a more sophisticated form of the feedforward architecture in which feedback connections carry a hidden state from one time step to the next, and it is trained with a variant of backpropagation known as backpropagation through time. This type of model is commonly used in machine learning to process time-based data. For instance, it can be used to predict the next word of a text or the next impulse (e.g., sound, gesture) that an actor makes in a video sequence. A minimal recurrent cell is sketched below.
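As an illustration, the sketch below steps a minimal recurrent cell through a short sequence, so that each output depends on the hidden state carried over from previous time steps. The sizes and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 3, 5, 7

W_xh = rng.normal(scale=0.1, size=(input_size, hidden_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (feedback)
b_h = np.zeros(hidden_size)

xs = rng.normal(size=(seq_len, input_size))     # a short input sequence
h = np.zeros(hidden_size)                       # initial hidden state
outputs = []
for x_t in xs:                                  # step through time
    h = np.tanh(x_t @ W_xh + h @ W_hh + b_h)    # state feeds back into the next step
    outputs.append(h.copy())

print(np.stack(outputs).shape)                  # (7, 5): one output per time step
```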

Convolutional neural network model: The convolutional neural network (CNN) is a popular deep learning algorithm in which many “neurons” work together, each applying a small filter to part of the input, to solve problems. It is used in image classification, image recognition, object detection, and autonomous driving. The sketch below shows the core convolution operation.
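As an illustration of the underlying operation, the sketch below implements a plain 2-D convolution in NumPy and slides a small vertical-edge filter over a toy image. The image and kernel are illustrative assumptions, not chapter material.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as most CNN libraries compute it)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 6x6 "image" whose left half is dark and right half is bright.
image = np.tile([0, 0, 0, 1, 1, 1], (6, 1)).astype(float)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)   # vertical-edge filter

feature_map = conv2d(image, kernel)   # strong responses where the edge is
print(feature_map)
```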

Deep neural network model: This most complex and powerful type includes many layers, typically with multiple hidden (intermediate) layers between the input and output layers, as illustrated in Figure 1.

Figure 1.

Deep neural network model

Figure 1 shows a network with two hidden layers. The input data is fed forward from the input nodes through the first hidden layer, then through the second hidden layer, and finally to the output nodes, which produce the classification or prediction. During training, this output is compared with the desired target to calculate an error function, and the error is then propagated back through the network in a “backward pass” so that the weights can be adjusted.

The most important principle of neural networks is that each neuron in one layer receives as input the outputs of neurons in the previous layer. The backpropagation algorithm was developed in large part by Geoffrey Hinton, who won the Turing Award in 2018. At its core, it consists of two steps: an error backpropagation step and a gradient descent optimization step. The error backpropagation step iterates backward from the output nodes through the hidden nodes, calculating how much each connection contributed to the error between the predicted output and the actual target. A sketch of one such update is given below.
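Putting these pieces together, the following sketch performs one training update for a network with two hidden layers, like the one in Figure 1: a forward pass, an error backpropagation step, and a gradient descent step. The sizes, data, and squared-error loss are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                 # one input example
y = np.array([[1.0, 0.0]])                  # its target output
W1 = rng.normal(scale=0.5, size=(4, 6))     # input -> hidden layer 1
W2 = rng.normal(scale=0.5, size=(6, 6))     # hidden layer 1 -> hidden layer 2
W3 = rng.normal(scale=0.5, size=(6, 2))     # hidden layer 2 -> output
lr = 0.01                                   # learning rate

# Forward pass through both hidden layers to the output nodes.
h1 = np.tanh(x @ W1)
h2 = np.tanh(h1 @ W2)
y_hat = h2 @ W3

# Error backpropagation: propagate the output error back layer by layer.
d_out = y_hat - y                           # error at the output nodes
d_h2 = (d_out @ W3.T) * (1 - h2 ** 2)       # error reaching hidden layer 2
d_h1 = (d_h2 @ W2.T) * (1 - h1 ** 2)        # error reaching hidden layer 1

# Gradient descent: move each weight matrix against its gradient.
W3 -= lr * h2.T @ d_out
W2 -= lr * h1.T @ d_h2
W1 -= lr * x.T @ d_h1

print(float(0.5 * np.sum((y_hat - y) ** 2)))   # squared error before the update
```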
