Show simple item record

dc.contributor.authorLloyd, Erik
dc.date.accessioned2019-09-09 13:35:04 (GMT)
dc.date.available2019-09-09 13:35:04 (GMT)
dc.date.issued2019-09-09
dc.date.submitted2019-08-30
dc.identifier.urihttp://hdl.handle.net/10012/15025
dc.description.abstractCurrent prosthetic control systems explored in the literature that use pattern recognition can perform only a limited number of pre-assigned functions, as they must be trained on muscle signals for every movement the user wants to perform. The goal of this study was to explore the development of a prosthetic control system that can classify both trained and novel gestures, for applications in commercial prosthetic arms. The first objective was to evaluate the feasibility of three algorithms in classifying raw sEMG data for trained isometric gestures: a feedforward multi-layer perceptron (FFMLP), a stacked sparse autoencoder (SSAE), and a convolutional neural network (CNN). The second objective was to evaluate the algorithms' ability to classify novel isometric gestures that were not included in the training data set, and to determine the effect of different gesture combinations on classification accuracy. The third objective was to predict the binary (flexed/extended) digit positions without training the networks on kinematic data recorded from the participant's hand. A g.tec USB biosignal amplifier was used to collect data from eight bipolar sEMG channels in 10 able-bodied participants. Participants performed 14 gestures, including rest, involving a variety of discrete finger flexion/extension tasks; forty seconds of data were collected for each gesture at 1200 Hz. The 14 gestures were then organized into 20 unique gesture combinations, each consisting of one subset of gestures used for training and another subset of novel gestures used only to test the algorithms' predictive capabilities. Participants were asked to perform each gesture so that every digit was, to the best of their ability, either fully flexed or fully extended; in this way, each digit position could be labelled as zero or one. The algorithms could therefore be provided with both input data (sEMG) and output labels without the need to record joint kinematics. The outputs of each algorithm were post-processed using two methods: all-or-nothing gesture classification (ANGC) and weighted digit gesture classification (WDGC). All 20 combinations were tested with the FFMLP, SSAE, and CNN in Matlab. For both analysis methods, the CNN outperformed the FFMLP and SSAE. Statistical analysis was not provided for novel-gesture performance under the ANGC method, as the data were highly skewed and not normally distributed owing to the large number of zero-valued classification results for most of the novel gestures. The FFMLP and SSAE showed no significant difference from one another for trained gestures under ANGC, but the FFMLP showed statistically higher performance than the SSAE for both trained and novel WDGC results. The results indicate that the CNN was able to classify most digits with reasonable accuracy, although performance varied between participants; for some participants, this approach may be suitable for prosthetic control applications.
The FFMLP and SSAE were largely unable to classify novel digit positions and achieved significantly lower accuracies than the CNN for novel gestures under both analysis methods. Therefore, the FFMLP and SSAE do not appear suitable for prosthetic control applications using the proposed raw-data input and output architecture.en
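
For illustration only, the following minimal Python sketch shows how the binary digit-position labels described in the abstract could be encoded, and how the two post-processing scores (ANGC and WDGC) could be computed from predicted labels. The thesis itself implemented the networks and analysis in Matlab; the digit ordering, the gesture names, the 0/1 convention, and the equal per-digit weighting assumed for WDGC are illustrative assumptions, not details taken from the thesis.

import numpy as np

# Each gesture is labelled by the binary position of the five digits
# (here, 1 = flexed, 0 = extended; an assumed convention). Example labels:
GESTURE_LABELS = {
    "rest":        np.array([0, 0, 0, 0, 0]),  # all digits extended
    "fist":        np.array([1, 1, 1, 1, 1]),  # all digits flexed
    "index_point": np.array([1, 0, 1, 1, 1]),  # index extended, others flexed
}

def angc_score(pred, true):
    """All-or-nothing gesture classification (ANGC): a prediction counts as
    correct only if every one of the five digit positions matches the label."""
    pred, true = np.atleast_2d(pred), np.atleast_2d(true)
    return float(np.mean(np.all(pred == true, axis=1)))

def wdgc_score(pred, true):
    """Weighted digit gesture classification (WDGC): partial credit for each
    correctly classified digit (equal per-digit weights assumed here)."""
    pred, true = np.atleast_2d(pred), np.atleast_2d(true)
    return float(np.mean(pred == true))

# Example: a thresholded network output compared against the true label.
pred = np.array([1, 0, 1, 1, 0])
true = GESTURE_LABELS["index_point"]
print(angc_score(pred, true))  # 0.0 (one digit is wrong, so the gesture is wrong)
print(wdgc_score(pred, true))  # 0.8 (4 of 5 digits are correct)

Under this sketch, a prediction that misses a single digit contributes nothing to the ANGC score but still receives partial credit under WDGC, which mirrors the distinction between the two analysis methods described above.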
dc.language.isoenen
dc.publisherUniversity of Waterlooen
dc.subjectmyoelectric controlen
dc.subjectsurface electromyographyen
dc.subjectdeep learningen
dc.subjectprostheticen
dc.subjectmulti-layer perceptronen
dc.subjectconvolutional neural networken
dc.subjectstacked sparse autoencoderen
dc.titleApplications of Neural Networks in Classifying Trained and Novel Gestures Using Surface Electromyographyen
dc.typeMaster Thesisen
dc.pendingfalse
uws-etd.degree.departmentSystems Design Engineeringen
uws-etd.degree.disciplineSystems Design Engineeringen
uws-etd.degree.grantorUniversity of Waterlooen
uws-etd.degreeMaster of Applied Scienceen
uws.contributor.advisorJiang, Ning
uws.contributor.affiliation1Faculty of Engineeringen
uws.published.cityWaterlooen
uws.published.countryCanadaen
uws.published.provinceOntarioen
uws.typeOfResourceTexten
uws.peerReviewStatusUnrevieweden
uws.scholarLevelGraduateen

