The Contribution of Visual & Somatosensory Input to Target Localization During the Performance of a Precision Grasping & Placement Task


Date

2017-01-31

Authors

Tugac, Naime

Publisher

University of Waterloo

Abstract

Objective: Binocular vision provides the most accurate and precise depth information; however, many people have impairments in binocular visual function. It is currently unknown whether depth information from another modality can improve depth perception during action planning and execution. Therefore, the goal of this thesis was to assess whether somatosensory input improves target localization during the performance of a precision placement task. It was hypothesized that somatosensory input regarding target location would improve task performance.

Methods: Thirty visually normal participants performed a bead-threading task with their right hand during binocular and monocular viewing. Upper limb kinematics and eye movements were recorded using the Optotrak and EyeLink 2 while participants picked up the beads and placed them on a vertical needle. In study 1, both somatosensory and visual feedback provided input about needle location (i.e., participants could see their left hand holding the needle). In study 2, only somatosensory feedback was provided (i.e., the view of the left hand holding the needle was blocked, and practice trials were standardized). The main outcome variables were placement time, peak acceleration, and the mean position and variability of the limb along the trajectory. A repeated measures analysis of variance with two factors, Viewing Condition (binocular/left eye monocular/right eye monocular) and Modality (vision/somatosensory), was used to test the hypothesis.

Results: Results from study 1 were in accordance with the hypothesis, showing a significant interaction between viewing condition and modality for placement time (p=0.0222). Specifically, when somatosensory feedback was provided, placement time was >150 ms shorter in both monocular viewing conditions compared with the vision-only condition. In contrast, somatosensory feedback did not significantly affect placement time during binocular viewing. There was no evidence that motor planning improved when somatosensory input about end target location was provided. The limb trajectory deviated toward the needle location along the azimuth at various kinematic markers during movement execution when somatosensory feedback was provided. Results from study 2 showed a main effect of modality for placement time (p=0.0288); however, the interaction between modality and viewing condition was not significant. The results also showed that somatosensory input was associated with faster movement times and higher peak accelerations. As in study 1, the limb trajectory deviated toward the needle location at various kinematic markers during movement execution when somatosensory feedback was provided.

Conclusions: This study demonstrated that information from another modality can improve the planning and execution of reaching movements under certain conditions. Somatosensory input may be less effective when no practice is administered. It is important to note that, despite the improved performance when somatosensory input was provided, performance did not reach the level found during binocular viewing. These findings provide new knowledge about multisensory integration during the performance of a high-precision manual task, and this information can be useful when designing new training regimens for people with abnormal binocular vision.
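
For readers who want a concrete picture of the analysis design, the sketch below sets up a 3 x 2 repeated measures ANOVA of the kind described (Viewing Condition x Modality, with placement time as the dependent variable) using statsmodels' AnovaRM in Python. The data frame here is filled with synthetic placeholder values, and the column names (subject, viewing, modality, placement_time) are illustrative assumptions, not the thesis's actual dataset or code.

```python
# Minimal sketch of a 3 x 2 repeated measures ANOVA (Viewing Condition x Modality).
# All data below are synthetic placeholders, not values from the thesis.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
viewing_levels = ["binocular", "monocular_left", "monocular_right"]
modality_levels = ["vision", "somatosensory"]

# One observation per participant per cell (30 participants x 3 x 2 cells)
rows = []
for subject in range(1, 31):
    for viewing in viewing_levels:
        for modality in modality_levels:
            rows.append({
                "subject": subject,
                "viewing": viewing,
                "modality": modality,
                # placeholder placement time in ms
                "placement_time": rng.normal(1500, 150),
            })
df = pd.DataFrame(rows)

# Two within-subject factors: Viewing Condition and Modality
result = AnovaRM(df, depvar="placement_time", subject="subject",
                 within=["viewing", "modality"]).fit()
print(result)
```

With real data, the term of interest for study 1 would be the viewing-by-modality interaction, and for study 2 the main effect of modality.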

Keywords

Binocular Vision, Multisensory Integration, Prehension
