Author: Xu, David
Date: 2016-08-04; 2016-07-21
URI: http://hdl.handle.net/10012/10612

Abstract: A predictive estimator (PE) is a neural microcircuit hypothesized to explain how the brain processes certain types of information. PEs participate in a hierarchy, passing predictions to lower layers, which send back prediction errors. Meanwhile, the network learns connection weights that minimize those prediction errors. This two-way process has been used to model brain mechanisms such as visual information processing. However, the standard implementation of a PE uses the same weight matrix for both feed-forward and feed-back projections, which is not biologically plausible. In this thesis, we investigate the predictive estimator with separate feed-forward and feed-back connection weights. We extend the model and introduce the Symmetric Predictive Estimator (SPE). We investigate the dynamics of an SPE network, analyze its stability, and define a general learning rule. A functional model of the SPE is implemented both analytically and in a neural framework using spiking neurons. Both implementations are built as Python modules that accept generic numerical inputs. In a series of experiments, we demonstrate the SPE's ability to learn non-linear functions and perform a supervised-learning task. This variation on the PE may provide insights into the theory of predictive estimators and their role in the brain.

Language: en
Keywords: predictive coding; neural networks; computational neuroscience; predictive estimators; biological plausibility; perceptual network
Title: Biologically Plausible Neural Learning using Symmetric Predictive Estimators
Type: Master Thesis
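The prediction/error loop summarized in the abstract (predictions sent down, errors sent back up, weights adapted to shrink the error) can be illustrated with separate feed-forward and feed-back weight matrices. The following is a minimal NumPy sketch, not the thesis's actual SPE implementation or learning rule; the variable names, dimensions, and outer-product weight updates are assumptions made for illustration only.

    import numpy as np

    # Minimal, illustrative predictive-estimator loop with separate
    # feed-forward and feed-back weights (hypothetical names and updates).
    rng = np.random.default_rng(0)

    n_input, n_latent = 8, 4
    W_down = rng.normal(scale=0.1, size=(n_input, n_latent))  # feed-back (prediction) weights
    W_up = rng.normal(scale=0.1, size=(n_latent, n_input))    # feed-forward (error) weights
    eta = 0.01                                                 # learning rate

    x = rng.normal(size=n_input)   # input presented to the lower layer
    r = np.zeros(n_latent)         # estimate held by the higher layer

    for _ in range(200):
        prediction = W_down @ r                 # prediction passed to the lower layer
        error = x - prediction                  # prediction error sent back up
        r += eta * (W_up @ error)               # update the estimate from the error signal
        W_down += eta * np.outer(error, r)      # adapt feed-back weights
        W_up += eta * np.outer(r, error)        # adapt feed-forward weights (distinct matrix)

Keeping W_up and W_down as distinct matrices is the point of contrast with the standard PE, which reuses a single weight matrix for both directions.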