An Optimal Knowledge Retention Framework for Continual Learning in Data Stream Scenarios

dc.contributor.author: Hosseinzadeh, Arvin
dc.date.accessioned: 2025-09-12T14:36:36Z
dc.date.available: 2025-09-12T14:36:36Z
dc.date.issued: 2025-09-12
dc.date.submitted: 2025-09-11
dc.description.abstract: In the field of time series and data stream analysis, neural networks (NNs) have demonstrated excellent performance in predicting current and future states of dynamic systems. However, when an NN is trained on new data, it tends to forget previously learned information, which poses a significant challenge to reliable prediction; this problem is known as catastrophic forgetting (CF). Unfortunately, retraining the model with both historical and new data is often impractical due to computational complexity and storage constraints, particularly in large-scale applications. One of the most prominent examples is automotive systems, where dynamic environments, such as changing road conditions or driving scenarios, require continuously updating the existing information based on new data. The main objective of this thesis is to propose a continual learning method that can efficiently train a neural network model on newly collected data while preserving previously acquired information. A novel framework based on memory-based continual learning approaches is developed, consisting of two critical tasks: optimal sampling of the old data to store in memory, and optimization. First, the proposed method identifies the most relevant and informative memories from the old dataset, which then contribute to future learning to preserve the previously learned information. The proposed method is developed for both univariate and multivariate time series prediction scenarios. Second, an appropriate optimization technique is applied in each training epoch to minimize the loss function by modifying the network parameters, ensuring that the NN successfully integrates new input while maintaining historical information. Additionally, a hybrid state estimation framework is introduced that leverages the selected memory points to detect distribution shifts in real time within the incoming data stream. When the estimator detects unfamiliar patterns that may degrade the predictive performance of the neural network, it adaptively switches to a model-based estimator to ensure robust and reliable estimation under the newly encountered conditions. A variety of neural network models and architectures are explored and compared to provide a comprehensive analysis and to evaluate their effectiveness in state estimation tasks. Furthermore, uncertainty analysis is conducted using conformal prediction, enabling quantification of the neural network’s predictive uncertainty after training on each task and comparison with a conventional batch learning baseline. The proposed framework is applied to both univariate and multivariate scenarios for estimating vehicle longitudinal and lateral velocities, incorporating new driving maneuvers into the previously trained neural network model. The experimental datasets comprise sensor measurements from an electric Equinox vehicle. The effectiveness of the method is evaluated by examining the model's performance when training on new information, as well as the impact of forgetting on previously acquired knowledge as new tasks are incrementally introduced. The findings of this study suggest that the developed continual learning framework can efficiently train the model on new data while preserving prediction accuracy on previous data. The time efficiency of the proposed method is an important advantage, as it enables the neural network to adapt to new tasks quickly without significant computational overhead.
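
For context, the memory-based (replay) continual learning idea summarized in the abstract can be sketched in a few lines of Python. The snippet below is a minimal illustration only, assuming a PyTorch regression model and a list of (x, y) tensor pairs per task; the train_task function, the random memory update, and all hyperparameters are hypothetical stand-ins and are not the optimal sampling and optimization procedure developed in the thesis.

# Illustrative sketch: a generic replay-style continual-learning loop.
# The memory-selection rule here (random subsampling) is a placeholder
# assumption, not the thesis's optimal memory-selection method.
import random
import torch
import torch.nn as nn

def train_task(model, new_data, memory, memory_size=256, epochs=10, lr=1e-3):
    """Train on a new task while replaying stored memories of old tasks.

    new_data : list of (x, y) tensor pairs for the incoming task
    memory   : list of (x, y) tensor pairs retained from earlier tasks
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()

    for _ in range(epochs):
        # Mix new samples with replayed memories each epoch so the network
        # keeps fitting old patterns while learning the new ones.
        batch = new_data + memory
        random.shuffle(batch)
        for x, y in batch:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

    # Update the memory with a subset of the new task's samples
    # (placeholder: random subsampling instead of optimal selection).
    memory = memory + random.sample(new_data, min(len(new_data), memory_size))
    if len(memory) > memory_size:
        memory = random.sample(memory, memory_size)
    return memory

In a replay scheme of this kind, the quality of the retained memory subset largely determines how well old-task accuracy is preserved, which is why the thesis focuses on selecting the most informative memory points rather than sampling them at random as above.
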
dc.identifier.uri: https://hdl.handle.net/10012/22409
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo
dc.subject: state estimation
dc.subject: time series analysis
dc.subject: time series forecasting
dc.subject: continual learning
dc.subject: catastrophic forgetting
dc.subject: vehicle state estimation
dc.subject: machine learning
dc.title: An Optimal Knowledge Retention Framework for Continual Learning in Data Stream Scenarios
dc.type: Doctoral Thesis
uws-etd.degree: Doctor of Philosophy
uws-etd.degree.department: Mechanical and Mechatronics Engineering
uws-etd.degree.discipline: Mechanical Engineering
uws-etd.degree.grantor: University of Waterloo
uws-etd.embargo.terms: 0
uws.contributor.advisor: Khajepour, Amir
uws.contributor.advisor: Chenouri, Shojaeddin
uws.contributor.affiliation1: Faculty of Engineering
uws.peerReviewStatus: Unreviewed
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.scholarLevel: Graduate
uws.typeOfResource: Text

Files

Original bundle

Name: Hosseinzadeh_Arvin.pdf
Size: 8.1 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 6.4 KB
Format: Item-specific license agreed upon to submission