Applications of Neural Networks in Classifying Trained and Novel Gestures Using Surface Electromyography

Date

2019-09-09

Authors

Lloyd, Erik

Publisher

University of Waterloo

Abstract

Current prosthetic control systems explored in the literature that use pattern recognition can perform only a limited number of pre-assigned functions, as they must be trained on muscle signals for every movement the user wants to perform. The goal of this study was to explore the development of a prosthetic control system that can classify both trained and novel gestures, for applications in commercial prosthetic arms. The first objective was to evaluate the feasibility of three algorithms in classifying raw sEMG data for trained isometric gestures, as well as for novel isometric gestures not included in the training data set. The algorithms used were a feedforward multi-layer perceptron (FFMLP), a stacked sparse autoencoder (SSAE), and a convolutional neural network (CNN). The second objective was to evaluate the algorithms' ability to classify these novel gestures and to determine the effect of different gesture combinations on classification accuracy. The third objective was to predict the binary (flexed/extended) digit positions without training the networks on kinematic data recorded from the participant's hand. A g-tec USB Biosignal Amplifier was used to collect data from eight differential sEMG channels in 10 able-bodied participants. Each participant performed 14 gestures, including rest, involving a variety of discrete finger flexion/extension tasks. Forty seconds of data were collected for each gesture at 1200 Hz from the eight bipolar sEMG channels. The 14 gestures were then organized into 20 unique gesture combinations, where each combination consisted of one sub-set of gestures used for training and another sub-set reserved as the novel gestures, used only to test the algorithms' predictive capabilities. Participants were asked to perform the gestures such that each digit was either fully flexed or fully extended to the best of their ability. In this way, the digit positions for each gesture could be labelled with a value of zero or one according to their binary positions, so the algorithms could be provided with both input data (sEMG) and output labels without the need to record joint kinematics. The outputs of each algorithm were post-processed using two methods: all-or-nothing gesture classification (ANGC) and weighted digit gesture classification (WDGC). All 20 combinations were tested with the FFMLP, SSAE, and CNN in MATLAB. For both analysis methods, the CNN outperformed the FFMLP and SSAE. Statistical analysis was not performed on the novel-gesture ANGC results, as the data were highly skewed and non-normally distributed because of the large number of zero-valued classification results for most of the novel gestures. The FFMLP and SSAE showed no significant difference from one another for the trained ANGC results, but the FFMLP performed significantly better than the SSAE for both trained and novel WDGC results. The results indicate that the CNN was able to classify most digits with reasonable accuracy, although performance varied between participants, and that for some participants this approach may be suitable for prosthetic control applications. The FFMLP and SSAE were largely unable to classify novel digit positions and achieved significantly lower accuracies than the CNN for novel gestures under both analysis methods. Therefore, the FFMLP and SSAE algorithms do not appear to be suitable for prosthetic control applications using the proposed raw-data input and output architecture.
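
The abstract does not spell out how the two post-processing metrics were computed, and the original analysis was carried out in MATLAB. The short Python sketch below is only an illustration, under the assumption that ANGC counts a test sample as correct only when all five predicted digit positions match the target gesture, while WDGC awards partial credit for each correctly predicted digit; the function names and example data are hypothetical.

```python
import numpy as np

# Illustrative sketch of the two scoring schemes described in the abstract.
# Each gesture is represented as five binary digit positions:
# 1 = flexed, 0 = extended.

def angc_accuracy(predicted, target):
    """All-or-nothing gesture classification (assumed form): a prediction
    counts as correct only if all five digit positions match the target."""
    predicted = np.asarray(predicted)
    target = np.asarray(target)
    exact_matches = np.all(predicted == target, axis=1)
    return exact_matches.mean()

def wdgc_accuracy(predicted, target):
    """Weighted digit gesture classification (assumed form): partial credit
    is given for each digit position predicted correctly."""
    predicted = np.asarray(predicted)
    target = np.asarray(target)
    per_digit_matches = (predicted == target).mean(axis=1)
    return per_digit_matches.mean()

# Hypothetical example: three test windows of a gesture whose label is
# "index and middle flexed, all other digits extended" -> [0, 1, 1, 0, 0]
target = np.tile([0, 1, 1, 0, 0], (3, 1))
predicted = np.array([
    [0, 1, 1, 0, 0],   # all five digits correct
    [0, 1, 0, 0, 0],   # middle finger misclassified
    [0, 1, 1, 0, 0],   # all five digits correct
])

print(angc_accuracy(predicted, target))  # 2/3 ≈ 0.667
print(wdgc_accuracy(predicted, target))  # 14/15 ≈ 0.933
```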

Description

Keywords

myoelectric control, surface electromyography, deep learning, prosthetic, multi-layer perceptron, convolutional neural network, stacked sparse autoencoder

LC Keywords

Citation