Fine-tuning and the stability of recurrent neural networks

dc.contributor.author: MacNeil, David
dc.contributor.author: Eliasmith, Chris
dc.date.accessioned: 2025-07-03T18:09:11Z
dc.date.available: 2025-07-03T18:09:11Z
dc.date.issued: 2011
dc.description: © 2011 MacNeil, Eliasmith. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
dc.description.abstract: A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely tuned. This criticism is severe because proposed rules for learning these weights have been shown to have various limitations to their biological plausibility. Hence it is unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that is able to tune synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results, and is robust under several kinds of perturbation. Furthermore, we show that the rule is able to achieve stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a wide variety of recurrent attractor networks, such as those found in head direction and path integration systems, suggesting that it may be used to tune a wide variety of stable neural systems.
dc.description.sponsorship: Natural Sciences and Engineering Research Council of Canada (NSERC), Discovery Grant || NSERC, Undergraduate Student Research Award (USRA) || Canada Research Chairs Program || Canada Foundation for Innovation (CFI) || Ontario Innovation Trust (OIT), Leaders Opportunity Fund.
dc.identifier.uri: https://doi.org/10.1371/journal.pone.0022885
dc.identifier.uri: https://hdl.handle.net/10012/21951
dc.language.iso: en
dc.publisher: Public Library of Science (PLOS)
dc.relation.ispartofseries: PLoS ONE; 6(9); e22885
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: integrators
dc.subject: neurons
dc.subject: neuronal tuning
dc.subject: eye movements
dc.subject: eyes
dc.subject: neural networks
dc.subject: vector spaces
dc.subject: learning
dc.title: Fine-tuning and the stability of recurrent neural networks
dc.type: Article
dcterms.bibliographicCitation: MacNeil, D., & Eliasmith, C. (2011). Fine-tuning and the stability of recurrent neural networks. PLoS ONE, 6(9), e22885. https://doi.org/10.1371/journal.pone.0022885
uws.contributor.affiliation1: Faculty of Science
uws.contributor.affiliation2: Centre for Theoretical Neuroscience (CTN)
uws.peerReviewStatus: Reviewed
uws.scholarLevel: Faculty
uws.typeOfResource: Text
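
The abstract above describes an error-driven rule that re-tunes the feedback weights of a recurrent integrator using only signals the circuit plausibly has access to. As a minimal illustration of the idea, the Python sketch below builds a rate-based 1-D integrator, perturbs its feedback decoders so the held value drifts, and then re-tunes them online with a Hebbian-style update: local presynaptic activity multiplied by a broadcast drift (slip-like) error. The tuning curves, parameter values, and choice of error signal here are illustrative assumptions, not the paper's exact formulation.

import numpy as np

# Sketch only: rate-based 1-D integrator with mistuned feedback decoders,
# re-tuned online by an error-modulated Hebbian-style update.
rng = np.random.default_rng(0)
N, dt, tau = 200, 1e-3, 0.1             # neurons, time step (s), synaptic tau (s)

enc = rng.choice([-1.0, 1.0], N)        # preferred directions for the held value
gain = rng.uniform(0.5, 2.0, N)
bias = rng.uniform(-1.0, 1.0, N)

def rates(x):
    # Rectified-linear tuning curves (a common simplification).
    return np.maximum(0.0, gain * enc * x + bias)

# Least-squares feedback decoders over sampled states, then perturbed by
# 5% multiplicative noise so the network is no longer finely tuned.
xs = np.linspace(-1.0, 1.0, 101)
A = np.stack([rates(x) for x in xs])                # samples x neurons
dec = np.linalg.lstsq(A, xs, rcond=None)[0]
dec *= 1.0 + 0.05 * rng.standard_normal(N)          # mistuning causes drift

kappa = 1e-2                            # learning rate (assumed value)
x = 0.5                                 # value the network should hold
for step in range(20000):               # 20 s of simulated fixation
    a = rates(x)
    fb = float(a @ dec)                 # recurrent estimate of the held value
    x_new = x + (dt / tau) * (fb - x)   # first-order filtered feedback dynamics
    drift = (x_new - x) / dt            # slip-like error: zero when stable
    dec -= kappa * drift * a * dt       # presynaptic activity times broadcast error
    x = x_new

print(f"held value: {x:.3f}, residual drift: {drift:.2e} units/s")

Because each weight change depends only on the presynaptic rate and a single globally available error signal, updates of this general form are the kind of rule the abstract argues can remain biologically plausible while matching or exceeding the stability of linearly optimal weights.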

Files

Original bundle

Name: file (16).pdf
Size: 1.5 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 4.47 KB
Format: Item-specific license agreed to upon submission