Stochastic approximation of artificial neural network-type learning algorithms, a dynamical systems approach

Date

Authors

Ncube, Israel

Advisor

Publisher

University of Waterloo

Abstract

Stochastic approximation is concerned with the characterisation of the long-term behaviour of recursive random algorithms. For example, does the algorithm converge to a unique fixed point for all initial points? This problem is well understood, via the Kushner-Clark theorem, only when the so-called associated ordinary differential equation (ODE) has exactly one locally asymptotically stable equilibrium point. In that case it is known that, under some fairly reasonable assumptions, the random algorithm converges, with probability one, to the equilibrium point of the ODE. However, if the ODE has multiple locally asymptotically stable equilibria, not much is currently known about convergence of the algorithm to any specific one of these equilibria. The primary objective of the thesis is the investigation of this problem, both qualitatively and quantitatively. We study random fields generated by discrete algorithms, and then draw relationships between dynamics on the continuous (associated ODE) and discrete phase spaces. A novel computer algorithm, which estimates the probabilities of convergence of a simple discrete system to particular stable equilibria of the ODE, is introduced. Simulation results suggest that the probabilities so estimated are almost independent of the initialisation of the discrete system. We reformulate the analysis of the evolution of densities of algorithms, under the action of the Frobenius-Perron operator, on a new space, namely the space of normalised positive distributions. Endowed with a suitable metric, the resulting space is shown to be a complete metric space.
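To make the setting concrete, the following minimal Python sketch (illustrative only, not the algorithm developed in the thesis) simulates a one-dimensional Robbins-Monro type recursion whose associated ODE, dx/dt = x - x^3, has two locally asymptotically stable equilibria at +1 and -1, and uses Monte Carlo sampling to estimate the probability of convergence to each of them from several initial points. The drift, noise level, step sizes and projection interval are all assumptions chosen for the sketch.

```python
import numpy as np

# Illustrative sketch only (not the thesis's algorithm): a Robbins-Monro type
# recursion x_{n+1} = x_n + a_n * (h(x_n) + noise), whose associated ODE
# dx/dt = h(x) = x - x^3 has two locally asymptotically stable equilibria
# at x = -1 and x = +1, separated by the unstable equilibrium x = 0.

def h(x):
    """Drift of the associated ODE (chosen here purely for illustration)."""
    return x - x**3

def run_recursion(x0, n_steps=5000, noise_scale=0.5, rng=None):
    """Simulate one trajectory of the stochastic approximation recursion."""
    rng = rng if rng is not None else np.random.default_rng()
    x = x0
    for n in range(1, n_steps + 1):
        a_n = 1.0 / n                              # decreasing step sizes
        x += a_n * (h(x) + noise_scale * rng.standard_normal())
        x = np.clip(x, -2.0, 2.0)                  # projection keeps iterates bounded
    return x

def estimate_convergence_probabilities(x0, n_runs=300, seed=0):
    """Monte Carlo estimate of the probability that the recursion started
    at x0 ends up near each stable equilibrium of the associated ODE."""
    rng = np.random.default_rng(seed)
    finals = np.array([run_recursion(x0, rng=rng) for _ in range(n_runs)])
    p_plus = float(np.mean(np.abs(finals - 1.0) < 0.5))   # settled near +1
    p_minus = float(np.mean(np.abs(finals + 1.0) < 0.5))  # settled near -1
    return p_plus, p_minus

if __name__ == "__main__":
    for x0 in (-0.5, 0.0, 0.5):
        p_plus, p_minus = estimate_convergence_probabilities(x0)
        print(f"x0 = {x0:+.1f}:  P(converge to +1) ~ {p_plus:.2f},  "
              f"P(converge to -1) ~ {p_minus:.2f}")
```

Comparing the estimates across the different initial points in the loop mirrors, in miniature, the kind of experiment described in the abstract; any near-independence of the initial point observed here is specific to this toy drift and noise level.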

Description

LC Subject Headings

Citation