Weights Direct Determination of Feedforward Neural Networks without Iterative BP-Training

Yunong Zhang, Ning Tan
DOI: 10.4018/978-1-61520-757-2.ch010

Abstract

Artificial neural networks (ANN), especially those trained with error back-propagation (BP) algorithms, have been widely investigated and applied in various science and engineering fields. However, BP algorithms are essentially gradient-based iterative methods, which adjust the neural-network weights to bring the network's input/output behavior into a desired mapping by following a gradient-based descent direction. Such iterative neural-network (NN) methods have shown some inherent weaknesses, such as 1) the possibility of being trapped in local minima, 2) the difficulty of choosing appropriate learning rates, and 3) the inability to design the optimal or smallest NN structure. To resolve these weaknesses of BP neural networks, we have asked ourselves a special question: could neural-network weights be determined directly, without iterative BP-training? The answer appears to be YES, as demonstrated in this chapter with three positive but different examples. In other words, we present, analyze, simulate and verify a new type of artificial neural network with linearly-independent or orthogonal activation functions, whose weights and structure can be determined directly and more deterministically (in comparison with conventional BP neural networks).
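
To make the idea of weights direct determination concrete, the following minimal sketch in Python/NumPy (not the authors' exact formulation) shows how, once the hidden-layer activation functions are fixed and linearly independent (here simple power functions; orthogonal polynomials could be substituted), the output weights of a single-hidden-layer feedforward network reduce to a linear least-squares problem and can be obtained in one pseudoinverse step, with no iterative training. The target mapping, the number of hidden neurons and the choice of activation functions are illustrative assumptions.

import numpy as np

# Training data for a hypothetical target mapping (illustrative only).
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(3.0 * x) + 0.5 * x ** 2

# Hidden layer with fixed, linearly-independent activation functions
# (powers x^0, x^1, ..., x^(n-1); orthogonal polynomials such as
# Chebyshev or Legendre polynomials could be used instead).
n_hidden = 8
Phi = np.hstack([x ** j for j in range(n_hidden)])  # 200 x n_hidden activation matrix

# Weights direct determination: one pseudoinverse (least-squares) step,
# with no gradient descent, no learning rate and no iteration.
w = np.linalg.pinv(Phi) @ y

# Verify the fit on the training data.
print("max abs training error:", np.max(np.abs(Phi @ w - y)))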
Chapter Preview

Introduction

Benefiting from their parallel-processing nature, distributed storage, and self-adaptive and self-learning abilities, artificial neural networks (ANN) have been investigated and applied widely in many scientific, engineering and practical fields, such as classification and diagnosis (Hong & Tseng, 1991; Jia & Chong, 1995; Sadeghi, 2000; Wang & Li, 1991), image and signal processing (Steriti & Fiddy, 1993), control system design (Zhang & Wang, 2001, 2002), equation solving (Zhang, Jiang & Wang, 2002; Zhang & Ge, 2005; Zhang & Chen, 2008), robot inverse kinematics (Zhang, Ge & Lee, 2004), and regression and identification (Zhang et al., 2008).

As we may realize, the feedforward neural network (FNN) based on the error back-propagation (BP) training algorithm or its variants is one of the most popular and important neural-network (NN) models, and it has been involved in many theoretical analyses and real-world applications (Hong & Tseng, 1991; Jia & Chong, 1995; Rumelhart, McClelland & PDP Research Group, 1986; Wang & Li, 1991; Yu, Chen & Cheng, 1993; Zhang et al., 2008). In particular, the BP neural network, proposed in the mid-1980s (or even earlier, in 1974), is a kind of multilayer feedforward neural network (Rumelhart, McClelland & PDP Research Group, 1986; Zhang et al., 2008; Zhou & Kang, 2005), whose error back-propagation algorithm can be summarized simply as

w(k+1) = w(k) + ∆w(k) = w(k) − η ∂E/∂w,   (1)

where w denotes a vector or matrix of neural weights (and/or thresholds), k = 0, 1, 2, … denotes the iteration index during the training procedure, ∆w(k) denotes the weights-updating value at the kth iteration, η denotes the learning rate (or learning step size), which should be small enough, and E denotes the error function that monitors and controls the BP-training procedure.
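
For contrast, the following toy sketch in Python/NumPy shows the iterative update of Eq. (1) for a squared-error function E, making the roles of η and of the iteration index k explicit; the data, the linear single-layer model and the chosen learning rate are illustrative assumptions, not taken from the chapter.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative only): targets generated by a known linear map.
X = rng.standard_normal((100, 3))
y = X @ np.array([1.5, -2.0, 0.7])

w = np.zeros(3)   # neural weights w(0)
eta = 0.1         # learning rate (step size), which should be small enough

for k in range(500):
    # Error function E(w) = ||X w - y||^2 / (2 N) and its gradient.
    grad_E = X.T @ (X @ w - y) / len(y)
    # Eq. (1): w(k+1) = w(k) + ∆w(k), with ∆w(k) = -η ∂E/∂w.
    w = w - eta * grad_E

print("learned weights:", w)   # should approach [1.5, -2.0, 0.7]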
