Stochastic Approximation Monte Carlo for MLP Learning

Faming Liang
Copyright: © 2009 | Pages: 8
DOI: 10.4018/978-1-59904-849-9.ch217

Abstract

Over the past several decades, multilayer perceptrons (MLPs) have gained popularity among scientists, engineers, and other professionals as tools for knowledge representation. Unfortunately, no universal architecture is suitable for all problems. Even with the correct architecture, training the connection weights remains frustrating because of the rugged energy landscape of MLPs. The energy function usually refers to the sum-of-squares error function for conventional MLPs and to the negative log-posterior density function for Bayesian MLPs. This article presents a Monte Carlo method that can be used for MLP learning. The main focus is on how to apply the method to train connection weights for MLPs. How to apply the method to choose the optimal architecture and to make predictions for future values, within the Bayesian framework, will also be discussed.

Main Focus Of The Chapter

This article describes how the stochastic approximation Monte Carlo (SAMC) algorithm (Liang et al., 2007) can be used for MLP learning, including training, prediction, and architecture selection.
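The chapter itself develops SAMC for MLP training; as a rough illustration of the idea, the sketch below minimizes a generic energy function with SAMC. The sampler targets a density proportional to exp(-U(x)) reweighted by adaptive log-weights over a partition of the energy range, so that it keeps escaping local minima. The partition bounds, proposal scale, and gain-sequence constant here are illustrative assumptions, not the chapter's settings.

```python
import numpy as np

def samc_minimize(energy, x0, n_iter=20000, step=0.5,
                  n_part=20, e_min=0.0, e_max=10.0, t0=1000.0, seed=0):
    """Minimal SAMC sketch: random-walk sampling from the weighted
    density exp(-U(x) - theta_J(x)), where J(x) indexes the energy
    subregion of x, with theta adapted by stochastic approximation."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(e_min, e_max, n_part + 1)   # energy partition
    theta = np.zeros(n_part)                        # adaptive log-weights
    pi = np.full(n_part, 1.0 / n_part)              # desired visiting frequencies
    x = np.asarray(x0, float)
    u = energy(x)
    best_x, best_u = x.copy(), u

    def region(e):
        # map an energy value to its subregion (energies outside the
        # bounds are clipped into the end regions)
        return int(np.clip(np.searchsorted(edges, e) - 1, 0, n_part - 1))

    for t in range(1, n_iter + 1):
        y = x + rng.normal(scale=step, size=x.shape)  # random-walk proposal
        uy = energy(y)
        i, j = region(u), region(uy)
        # Metropolis ratio for the theta-weighted density
        log_r = (u - uy) + (theta[i] - theta[j])
        if np.log(rng.random()) < log_r:
            x, u = y, uy
        if u < best_u:
            best_x, best_u = x.copy(), u
        # stochastic-approximation update of the log-weights
        gamma = t0 / max(t0, t)                      # gain sequence
        e_vec = np.zeros(n_part)
        e_vec[region(u)] = 1.0
        theta += gamma * (e_vec - pi)
    return best_x, best_u
```

Because the weights inflate the probability of under-visited (typically high-energy) regions, the chain crosses energy barriers far more readily than a fixed-temperature sampler, which is the property that makes SAMC attractive for rugged MLP energy landscapes.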

Key Terms in this Chapter

Multiple Layer Perceptron (MLP): An important class of neural networks, consisting of a set of source nodes that constitute the input layer, one or more layers of computational nodes, and an output layer of computational nodes. The input signal propagates through the network in a forward direction, on a layer-by-layer basis.
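The layer-by-layer propagation described above can be sketched in a few lines. The tanh hidden activation and linear output are common conventions assumed here, not prescribed by the chapter.

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass of an MLP: the signal propagates through the
    layers in a forward direction, one layer at a time."""
    h = np.asarray(x, float)
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(W @ h + b)            # hidden computational nodes
    return weights[-1] @ h + biases[-1]   # linear output layer
```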

Simulated Annealing: A generic probabilistic meta-algorithm used to find true or approximate solutions to global optimization problems.
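A minimal sketch of the idea: uphill moves are accepted with probability exp(-dE/T) while the temperature T is slowly lowered. The geometric cooling schedule and Gaussian neighbor move are illustrative choices, not part of the definition.

```python
import math
import random

def simulated_annealing(energy, x0, neighbor, n_iter=5000,
                        t_init=1.0, cooling=0.999, seed=0):
    """Generic simulated annealing: probabilistically accept worse
    solutions early on, becoming greedier as T decreases."""
    rng = random.Random(seed)
    x, u = x0, energy(x0)
    best_x, best_u = x, u
    t = t_init
    for _ in range(n_iter):
        y = neighbor(x, rng)
        uy = energy(y)
        # always accept downhill moves; accept uphill with prob exp(-dE/T)
        if uy <= u or rng.random() < math.exp((u - uy) / t):
            x, u = y, uy
            if u < best_u:
                best_x, best_u = x, u
        t *= cooling  # geometric cooling schedule (one common choice)
    return best_x, best_u
```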

Genetic Algorithm: A search heuristic used in computing to find true or approximate solutions to global optimization problems.

Metropolis-Hastings Algorithm: A popular MCMC algorithm that accepts a new state y, given the current state x, with probability min{1, [f(y)q(y,x)]/[f(x)q(x,y)]}, where f(·) is the target distribution and q(·,·) is the proposal distribution.
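A minimal sketch of this acceptance rule, assuming a symmetric Gaussian random-walk proposal so that q(y,x)/q(x,y) = 1 and the probability reduces to min{1, f(y)/f(x)} (evaluated on the log scale for numerical stability):

```python
import numpy as np

def metropolis_hastings(log_f, x0, n_samples=5000, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings sampler for a 1-d target
    with log-density log_f."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    samples = np.empty(n_samples)
    for t in range(n_samples):
        y = x + rng.normal(scale=step)          # symmetric proposal
        # accept with probability min{1, f(y)/f(x)}
        if np.log(rng.random()) < log_f(y) - log_f(x):
            x = y
        samples[t] = x
    return samples
```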

Model Evidence: The marginal likelihood of the data under a model, obtained by integrating out the parameters. Its value expresses the preference shown by the data for different models.

Markov Chain Monte Carlo (MCMC): A class of algorithms for sampling from probability distributions by simulating a Markov chain that has the desired distribution as its stationary distribution. The state of the Markov chain after a large number of steps is then used as a sample from the desired distribution.

Stochastic Approximation Algorithm: A probabilistic meta-algorithm suggested by Robbins and Monro (1951) for finding the roots of regression equations from noisy observations.
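The Robbins-Monro iteration can be sketched as theta_{t+1} = theta_t - gamma_t * H(theta_t), where H is a noisy observation of the regression function h and the gains satisfy sum(gamma_t) = infinity, sum(gamma_t^2) < infinity. The gain sequence gamma_t = a/t below is one standard choice, assumed here for illustration.

```python
import random

def robbins_monro(noisy_h, theta0, n_iter=5000, a=1.0, seed=0):
    """Robbins-Monro stochastic approximation: drive a noisy
    observation of h(theta) toward zero with decreasing gains."""
    rng = random.Random(seed)
    theta = theta0
    for t in range(1, n_iter + 1):
        gamma = a / t                   # gains: sum = inf, sum of squares < inf
        theta -= gamma * noisy_h(theta, rng)
    return theta
```

The same decreasing-gain update drives the weight adaptation in SAMC, which is where the algorithm gets its name.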
