dc.description.abstract | In this thesis, a divided attention paradigm was used to infer the representational codes used by words and pictures in long-term memory. Semantically categorized lists of words (Expt. 1) or pictures (Expts. 2, 3, 4, and 5) were studied or retrieved while participants simultaneously made size judgments on a concurrently presented set of distractor words (Expts. 1 and 2) or pictures (Expts. 3, 4, and 5). We manipulated (within subjects) the semantic relatedness and visual similarity (Expts. 4 and 5) of distractors to target items. Recognition accuracy for words was poorer when distractors were semantically related to target items. Recognition accuracy for pictures was equivalent with semantically related and unrelated distractors, but poorer when picture distractors were both semantically related and visually similar to the target items. These findings suggest that long-term episodic memory for both words and pictures requires access to semantically based representations, but that picture memory also requires access to visuo-spatial representations for optimal performance. | en |