Author: Fishbein, Jonathan Michael
Date: 2008-07-10
URI: http://hdl.handle.net/10012/3819
Abstract: Current representation schemes for automatic text classification treat documents as syntactically unstructured collections of words (Bag-of-Words) or concepts (Bag-of-Concepts). Past attempts to encode syntactic structure have treated part-of-speech information as just another word-like feature, but these have been shown to be less effective than non-structural approaches. We propose a new representation scheme that uses Holographic Reduced Representations (HRRs) to encode both semantic and syntactic structure, though by very different mechanisms. This method is unique in the literature in that it encodes syntactic structure across all features of the document vector while preserving text semantics. Our method does not increase the dimensionality of the document vectors, allowing for efficient computation and storage. We present the results of several Support Vector Machine classification experiments that demonstrate the superiority of this method over Bag-of-Concepts representations and an improvement over Bag-of-Words in certain classification contexts.
Language: en
Subjects: Holographic Reduced Representations; Vector Space Model; Text Classification; Part-of-Speech Tagging; Random Indexing; Support Vector Machines; Syntactic Structure; Semantics
Title: Integrating Structure and Meaning: Using Holographic Reduced Representations to Improve Automatic Text Classification
Type: Master Thesis
Department: System Design Engineering
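
The abstract does not spell out the encoding mechanism, but HRRs in general bind vectors with circular convolution, and Random Indexing assigns near-orthogonal random vectors to terms. The sketch below is a rough, hypothetical illustration of how a fixed-dimensional document vector combining word identity and part-of-speech roles could be built; it uses dense Gaussian vectors instead of Random Indexing's sparse ternary vectors, and the function names and parameters are illustrative, not the thesis's implementation.

import numpy as np

def random_index_vector(dim, seed):
    """Near-orthogonal random vector for a word or tag (stand-in for Random Indexing)."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(dim) / np.sqrt(dim)

def bind(a, b):
    """HRR binding: circular convolution, computed via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def document_vector(tagged_tokens, dim=1024):
    """Sum of word vectors bound with their part-of-speech role vectors.

    The result has the same dimensionality as its components, regardless
    of document length.
    """
    doc = np.zeros(dim)
    for word, tag in tagged_tokens:
        # hash() is convenient for a sketch; a stable hash would be used in practice.
        w = random_index_vector(dim, hash(word) % (2**32))
        t = random_index_vector(dim, hash(tag) % (2**32))
        doc += bind(w, t)
    return doc / (np.linalg.norm(doc) or 1.0)

# Example: a tiny POS-tagged sentence
vec = document_vector([("cats", "NNS"), ("chase", "VBP"), ("mice", "NNS")])
print(vec.shape)  # (1024,) -- fixed dimensionality, suitable as an SVM feature vector

Because binding distributes each word-tag pair across every component of the vector, structural information is spread over all features rather than appended as extra dimensions, which is consistent with the dimensionality claim in the abstract.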