Contributions of vision and haptics to grasping
Date
2001
Authors
Dubrowski, Adam
Publisher
University of Waterloo
Abstract
Four experiments are reported that examine the integration of vision and haptics during prehension movements. Prehension is defined as the act of reaching for, grasping, and lifting an object. While vision can be used to guide the hand toward the object and to shape finger aperture so that the object can be grasped, haptic information about the object's mass is necessary to know how much force should be applied for a successful lift. Reach and grasp formation and force generation operate during different time phases. However, to initiate grip force generation without haptic information, the mass of the object must be anticipated in advance. Several models of sensorimotor integration have been proposed to describe the control of grasp. Their main feature is that the system must anticipate the mass of the object based on other modalities, such as vision. This anticipatory programming of grip forces is based on memory associations between pre- and post-contact characteristics of sensory information. However, these models fail to describe how such multimodal integration develops. The main purpose of this thesis is to characterize the formation and nature of the integration of visual and haptic information as it pertains to the generation and control of grip forces.
Experiment 1 aimed to describe prehension movements in the absence of haptics in a virtual environment. It has been shown that in such environments vision is important for hand transport and grasp formation in much the same way as when grasping real objects. Experiment 2 was concerned with the development of visual and haptic integration when both sources of information are present. The results suggest that the integration of vision and haptics when generating grasping movements depends on which cues were available during practice. Experiment 3 examined the integration of vision and haptics in the on-line control of grasp in a dynamic setting, where participants were asked to intercept moving objects. The results showed that, with practice, visual pre-contact information and haptic post-contact information can be combined to produce an anticipatory model of the apparent mass of the object as it is stopped and grasped by the fingers. At the same time, haptic information about object torque can be used in an on-line fashion. Thus both sources of information can be used concurrently to form a higher-order representation of object behavior, while each sensory modality can also contribute independently to the on-line control of grasp. Finally, Experiment 4 assessed the ability to use vision and haptics when the motor system is disrupted. Specifically, an individual with unilateral basal ganglia damage due to a stroke was studied. It was shown that with damage to this part of the brain, the integration of these two sources of information is suppressed.
Collectively, these studies show that the integration of vision and haptics is a flexible process. Although it has been previously suggested that visual information dominates haptics when both are present, this dominance can change with training. It also appears that the two sources of sensory information can be combined to form a higher-order representation as well as be used independently. Finally, the basal ganglia have been identified as an important neural structure in the process of sensory integration. These findings provide insight into the formation of internal models of object behavior. It is proposed that the integration of vision and haptics is guided by a weighting function that depends on error detection and correction during the movement.
Keywords
Harvested from Collections Canada