Most neural networks in use today rely on rigid, fixed architectures
and/or slow, gradient-descent-based training algorithms
(e.g., backpropagation). In this paper, we propose a new neural
network learning architecture to address both problems. Namely, we
combine (1) flexible cascade neural networks, which dynamically adjust
the size of the neural network as part of the learning process, and
(2) node-decoupled extended Kalman filtering (NDEKF), a fast
converging alternative to backpropagation. In this paper, we first
describe how learning proceeds in cascade neural networks. We then
show how NDEKF fits seamlessly into the cascade learning framework,
and how cascade learning addresses the poor local minima problem of
NDEKF reported in [Puskorius & Feldkamp, 1991]. We analyze the
computational complexity of our approach and compare it to
fixed-architecture training paradigms. Finally, we report learning
results for continuous function approximation and dynamic system
identification, results that show substantial improvement in
learning speed and error convergence over other neural network
training methods.
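
To make the cascade growth idea concrete, the following is a minimal sketch in Python of how a cascade network adds hidden units one at a time: each new unit receives the original inputs plus the frozen outputs of all previously installed units, and only the output layer is refit after each addition. The helper names (`train_unit`, `cascade_fit`) are hypothetical, and for simplicity each candidate unit is trained with plain gradient descent on the residual error rather than the NDEKF update used in the paper.

    import numpy as np

    def train_unit(inputs, residual, epochs=200, lr=0.01):
        """Train one candidate hidden unit to predict the current residual error."""
        rng = np.random.default_rng(0)
        w = rng.normal(scale=0.1, size=inputs.shape[1])
        for _ in range(epochs):
            h = np.tanh(inputs @ w)                # candidate unit's activation
            grad = inputs.T @ ((h - residual) * (1.0 - h ** 2)) / len(residual)
            w -= lr * grad
        return w

    def cascade_fit(x, y, max_units=5):
        """Grow a cascade network one hidden unit at a time.

        Each new unit sees the original inputs plus the (frozen) outputs of all
        previously installed units; only the newest unit and the output layer change.
        """
        n = len(x)
        features = np.column_stack([np.ones(n), x])    # bias + raw input
        output_w = np.linalg.lstsq(features, y, rcond=None)[0]
        for _ in range(max_units):
            residual = y - features @ output_w         # what the net still gets wrong
            w_new = train_unit(features, residual)     # train the new unit only
            new_col = np.tanh(features @ w_new)        # install unit, freeze its weights
            features = np.column_stack([features, new_col])
            output_w = np.linalg.lstsq(features, y, rcond=None)[0]  # refit output layer
        return features, output_w

    # Usage: approximate a simple 1-D continuous function.
    x = np.linspace(-1, 1, 200).reshape(-1, 1)
    y = np.sin(3 * x).ravel()
    feats, w_out = cascade_fit(x, y)
    print("final RMS error:", np.sqrt(np.mean((feats @ w_out - y) ** 2)))
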