InfoSciPedia
A Free Service of IGI Global Publishing House
Below is a list of definitions for the term you selected, drawn from multiple scholarly research resources.

What is Backpropagation

Encyclopedia of Information Science and Technology, Fourth Edition
A learning method for ANNs in which the relative error of a given output or set of outputs (an epoch) is propagated back through the ANN and used to adjust all weights on the connections between neurodes.
Published in Chapter:
Artificial Neural Networks
Steven Walczak (University of South Florida, USA)
Copyright: © 2018 |Pages: 12
DOI: 10.4018/978-1-5225-2255-3.ch011
Abstract
This chapter examines the history of artificial neural networks research through the present day. The components of artificial neural network architectures and both unsupervised and supervised learning methods are discussed. Although a step-by-step tutorial of how to develop artificial neural networks is not included, additional reading suggestions covering artificial neural network development are provided. The advantages and disadvantages of artificial neural networks for research and real-world applications are presented as well as potential solutions to many of the disadvantages. Future research directions for the field of artificial neural networks are presented.
Full Text Chapter Download: US $37.50 Add to Cart
More Results
Using Deep Learning and Big Data Analytics for Managing Cyber-Attacks
A short form for “backward propagation of errors.” It is used to train neural networks by means of the chain rule. In simple terms, after each feed-forward pass the technique makes a backward pass through the network to update the model's parameters (weights and biases).
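The feed-forward/backward-pass cycle described here can be sketched for a single sigmoid neuron with a squared-error loss (a minimal illustration, not taken from the chapter; the variable names are invented for this sketch):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One neuron: y_hat = sigmoid(w*x + b), loss L = (y_hat - y)**2
w, b, lr = 0.5, 0.0, 0.1   # weight, bias, learning rate
x, y = 1.0, 1.0            # a single training example

for _ in range(100):
    # feed-forward pass
    z = w * x + b
    y_hat = sigmoid(z)
    # backward pass: chain rule, dL/dw = dL/dy_hat * dy_hat/dz * dz/dw
    dL_dyhat = 2.0 * (y_hat - y)
    dyhat_dz = y_hat * (1.0 - y_hat)   # derivative of the sigmoid
    dL_dw = dL_dyhat * dyhat_dz * x
    dL_db = dL_dyhat * dyhat_dz
    # update the parameters (weights and biases)
    w -= lr * dL_dw
    b -= lr * dL_db
```

After repeated passes the output moves toward the target, which is the whole point of propagating the error backward.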
Thermal Design of Gas-Fired Cooktop Burners Through ANN
A supervised learning technique used for training artificial neural networks. It is most useful for feed-forward networks (networks that have no feedback, or simply, that have no connections that loop). The term is an abbreviation for “backwards propagation of errors”. Backpropagation requires that the transfer function used by the artificial neurons (or “nodes”) be differentiable.
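The differentiability requirement can be seen by comparing a sigmoid transfer function, whose derivative exists everywhere, with a step function (an illustrative sketch, not drawn from the chapter):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_prime(z):
    # the derivative exists at every z, which is what backpropagation needs
    s = sigmoid(z)
    return s * (1.0 - s)

def step(z):
    # a step transfer function is not differentiable at 0 and has zero
    # slope everywhere else, so no error signal could flow back through it
    return 1.0 if z >= 0 else 0.0
```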
Performance Comparison of Different Intelligent Techniques Applied on Detecting Proportion of Different Component in Manhole Gas Mixture
In the backpropagation algorithm, the error computed at the output layer is propagated back to earlier layers to adjust the synaptic weights so that the sum of squared errors is minimized as the neural network is trained.
Comparing Deep Neural Networks and Gradient Boosting for Pneumonia Detection Using Chest X-Rays
Widely used in training neural networks, backpropagation is an algorithm to compute the gradient of the loss function of a neural network. Backpropagation computes the gradient of the loss function by using the chain rule. The gradient is computed one layer at a time and is iterated backward from the last layer.
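The layer-by-layer, backward application of the chain rule can be sketched on a toy 1-1-1 network with sigmoid activations and a squared-error loss (all names here are assumptions of this example, not notation from the chapter):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Network: h = sigmoid(w1*x), y_hat = sigmoid(w2*h)
# Loss: L = 0.5 * (y_hat - y)**2
def backprop(x, y, w1, w2):
    # forward pass, caching each layer's activation
    h = sigmoid(w1 * x)
    y_hat = sigmoid(w2 * h)
    # backward pass: start at the last layer ...
    delta2 = (y_hat - y) * y_hat * (1.0 - y_hat)   # dL/dz2
    grad_w2 = delta2 * h
    # ... then iterate backward to the earlier layer via the chain rule
    delta1 = delta2 * w2 * h * (1.0 - h)           # dL/dz1
    grad_w1 = delta1 * x
    return grad_w1, grad_w2
```

Each layer's gradient reuses the quantity already computed for the layer after it, which is why the computation proceeds one layer at a time from the output backward.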
Virtual Try-On With Generative Adversarial Networks: A Taxonomical Survey
Neural networks use this technique to propagate the error signal to each of its neurons and evaluate the individual contribution of each neuron to that error.
Is AI in Your Future?: AI Considerations for Scholarly Publishers
In machine learning, backpropagation is a widely used algorithm for training feedforward neural networks. Generalizations of backpropagation exist for other artificial neural networks, and for functions generally. These classes of algorithms are all referred to generically as “backpropagation.”
Neural Networks for Automobile Insurance Pricing
Method for computing the error gradient for a feedforward neural network.
Teeth and Landmarks Detection and Classification Based on Deep Neural Networks
The algorithm used in artificial neural networks to calculate the gradient, the vector of partial derivatives, for subsequent updates of the model parameters. The algorithm is based on the chain rule of differentiation.
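A common sanity check on such a gradient is to compare the chain-rule result against central finite differences on a toy function (an illustrative sketch; `f`, `analytic_grad`, and `numeric_grad` are made-up names, not from the chapter):

```python
def f(w):
    # toy differentiable function treated as a loss: f(w) = (w0*w1 - 1)**2
    return (w[0] * w[1] - 1.0) ** 2

def analytic_grad(w):
    # the gradient as a vector of partial derivatives, via the chain rule
    common = 2.0 * (w[0] * w[1] - 1.0)
    return [common * w[1], common * w[0]]

def numeric_grad(w, eps=1e-6):
    # central finite differences, one coordinate at a time
    g = []
    for i in range(len(w)):
        wp = list(w); wp[i] += eps
        wm = list(w); wm[i] -= eps
        g.append((f(wp) - f(wm)) / (2 * eps))
    return g
```

If the two vectors agree closely, the chain-rule gradient is almost certainly implemented correctly.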
Dramatic Premise and Human Purpose: Has First Cause Intention and Democratic Rule of Law Been Trumped?
A learning process used by neural nets to solve problems. The backpropagation algorithm was originally introduced in the 1970s, but its importance was not fully appreciated until a famous paper by Rumelhart, Hinton, and Williams (1986) described several neural networks in which backpropagation works far faster than earlier approaches to learning, making it possible to use neural nets to solve problems that had previously been insoluble. Today, the backpropagation algorithm is the workhorse of learning in neural networks.