
Enhancing the Decoding Performance of Steady-State Visual Evoked Potentials based Brain-Computer Interface

Date

2019-08-14

Authors

Ravi, Aravind

Publisher

University of Waterloo

Abstract

Non-invasive brain-computer interfaces (BCIs) based on steady-state visual evoked potential (SSVEP) responses are the most widely used type of BCI. SSVEPs are responses elicited in the visual cortex when a user gazes at an object flickering at a certain frequency. In this thesis, we investigate different BCI system design parameters for enhancing SSVEP detection, such as changes in inter-stimulus distance (ISD), EEG channel selection, detection algorithms, and training methodologies. Closely placed SSVEP stimuli compete for neural representations. This influences the performance and limits the flexibility of the stimulus interface. We therefore study the influence of changing the ISD on the decoding performance of an SSVEP BCI. To overcome this challenge, we propose (i) a user-specific channel selection method and (ii) the use of complex spectrum features as input to a convolutional neural network (C-CNN). We also evaluate the proposed C-CNN method in a user-independent (UI) training scenario, as this leads to a minimal-calibration system and allows inference to run in a plug-and-play mode. The proposed methods were evaluated on a 7-class SSVEP dataset collected from 21 healthy participants (Dataset 1). The UI method was also assessed on a publicly available 12-class dataset collected from 10 healthy participants (Dataset 2). We compared the proposed methods with canonical correlation analysis (CCA) and a CNN classifier using magnitude spectrum features (M-CNN). We demonstrated that the user-specific channel set (UC) is robust to changes in ISD (viewing angles of 5.24°, 8.53°, and 12.23°) compared to the classic 3-channel set (3C: O1, O2, Oz) and 6-channel set (6C: PO3, PO4, POz, O1, O2, Oz). A significant improvement in accuracy of over 5% (p=0.001) and a reduction in variation of 56% (p=0.035) were achieved across ISDs using the UC set compared to the 3C and 6C sets. Second, the proposed C-CNN method obtained significantly higher classification accuracy across ISDs and window lengths than the M-CNN and CCA. For the closest ISD, the average accuracy of the C-CNN was over 12.8% higher than that of CCA and over 6.5% higher than that of the M-CNN across all window lengths. Third, the C-CNN method achieved the highest accuracy in both user-dependent (UD) and UI training scenarios on both the 7-class and 12-class SSVEP datasets. The overall accuracies of the different methods for a 1 s window length on Dataset 1 were: CCA: 69.1±10.8%, UI-M-CNN: 73.5±16.1%, UI-C-CNN: 81.6±12.3%, UD-M-CNN: 87.8±7.6%, and UD-C-CNN: 92.5±5%. On Dataset 2 they were: CCA: 62.7±21.5%, UI-M-CNN: 70.5±22%, UI-C-CNN: 81.6±18%, UD-M-CNN: 82.8±16.7%, and UD-C-CNN: 92.3±11.1%. In summary, with complex spectrum features the C-CNN likely learned to use both frequency- and phase-related information to classify SSVEP responses. Therefore, the CNN can be trained independently of the ISD, resulting in a model that generalizes to other ISDs. This suggests that the proposed methods are robust to changes in inter-stimulus distance for SSVEP detection and provide increased flexibility in user interface design for commercial SSVEP BCI applications. Finally, the UI method provides a virtually calibration-free approach to SSVEP BCIs.
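
To make the two methods named in the abstract more concrete, the sketch below (Python/NumPy, not the thesis code) illustrates how a CCA baseline scores candidate flicker frequencies against sinusoidal references, and how complex spectrum features (real and imaginary FFT components) could be extracted as CNN input. The frequency band, FFT length, and number of harmonics are illustrative assumptions rather than values taken from the thesis.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_ssvep_classify(eeg_window, stim_freqs, fs, n_harmonics=2):
    """Pick the stimulus frequency whose sinusoidal reference set is
    maximally correlated with the EEG window (standard CCA baseline).

    eeg_window : array of shape (n_samples, n_channels)
    stim_freqs : list of candidate flicker frequencies in Hz
    fs         : sampling rate in Hz
    """
    n_samples = eeg_window.shape[0]
    t = np.arange(n_samples) / fs
    scores = []
    for f in stim_freqs:
        # Sine/cosine references at the fundamental and its harmonics
        refs = []
        for h in range(1, n_harmonics + 1):
            refs.append(np.sin(2 * np.pi * h * f * t))
            refs.append(np.cos(2 * np.pi * h * f * t))
        refs = np.column_stack(refs)
        cca = CCA(n_components=1)
        x_c, y_c = cca.fit_transform(eeg_window, refs)
        scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
    return stim_freqs[int(np.argmax(scores))], scores

def complex_spectrum_features(eeg_window, fs, f_lo=3.0, f_hi=35.0, nfft=512):
    """Stack real and imaginary FFT components per channel, the kind of
    complex-spectrum input a C-CNN could be trained on (band edges and
    FFT length here are assumed, not from the thesis)."""
    spec = np.fft.rfft(eeg_window, n=nfft, axis=0)      # (n_bins, n_channels)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    # Shape: (n_channels, 2 * n_selected_bins)
    return np.concatenate([spec[band].real, spec[band].imag], axis=0).T
```

In this sketch, magnitude spectrum features (as in the M-CNN comparison) would keep only np.abs(spec); retaining the real and imaginary parts preserves phase information, which is the distinction the abstract attributes to the C-CNN's improved performance.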

Keywords

brain-computer interfaces, convolutional neural networks, electroencephalography, steady-state visual evoked potential
