Show simple item record

dc.contributor.author: Karimi, Amir-Hossein
dc.date.accessioned: 2018-05-01 13:29:26 (GMT)
dc.date.available: 2018-08-30 04:50:07 (GMT)
dc.date.issued: 2018-05-01
dc.date.submitted: 2018-04-26
dc.identifier.uri: http://hdl.handle.net/10012/13220
dc.description.abstract: The story of this work is dimensionality reduction. Dimensionality reduction is a method that takes as input a point-set P of n points in R^d, where d is typically large, and finds a lower-dimensional representation of the dataset in order to ease the burden on downstream algorithms. In today's landscape of machine learning, researchers and practitioners work with datasets that have a very large number of samples and/or high-dimensional samples, so dimensionality reduction is applied as a pre-processing technique, primarily to overcome the curse of dimensionality. In general, dimensionality reduction reduces the time and storage space required to process the point-set, removes multi-collinearity and redundancy where different features depend on one another, and can enable simple 2-D and 3-D visualizations that make the relationships in the data easy for humans to comprehend. Dimensionality reduction methods come in many shapes and sizes: Principal Component Analysis (PCA), Multi-dimensional Scaling, IsoMap, and Locally Linear Embedding are among the most commonly used methods in this family of algorithms. However, the choice of dimensionality reduction method proves critical in many applications, as there is no one-size-fits-all solution, and different datasets and tasks demand special care. Furthermore, these popular methods are data-dependent and commonly rely on computing either the kernel (Gram) matrix or the covariance matrix of the dataset. These matrices grow with the number of samples and the number of data dimensions, respectively, and are consequently poor choices in today's landscape of big-data applications. It is therefore pertinent to develop new dimensionality reduction methods that can be applied efficiently to large and high-dimensional datasets, by either reducing the dependency on the data or side-stepping it altogether, and that perform on par with, or better than, traditional methods such as PCA. To achieve this goal, we turn to a simple and powerful method called random projections: an efficient, data-independent method for stably embedding a point-set P of n points in R^d into R^k, where d is typically large and k is on the order of log n. Random projections have a long and successful history in the dimensionality reduction literature. In this work, we build on the ideas of random projection theory and extend the framework into a powerful new setup of random projections for large, high-dimensional datasets, with performance comparable to state-of-the-art data-dependent and nonlinear methods. Furthermore, we study the use of random projections in domains other than dimensionality reduction, including prediction, and show the competitive performance of such methods in small-dataset regimes. (A minimal illustrative sketch of the basic data-independent embedding appears after the record fields below.)
dc.language.iso: en
dc.publisher: University of Waterloo
dc.subject: Random Projections
dc.subject: Dimensionality Reduction
dc.subject: Nonlinear Random Projections
dc.subject: Biologically Inspired Random Projections
dc.subject: Supervised Random Projections
dc.subject: Deep Learning
dc.subject: Random-weighted Neural Networks
dc.title: Exploring New Forms of Random Projections for Prediction and Dimensionality Reduction in Big-Data Regimes
dc.type: Master Thesis
dc.pending: false
uws-etd.degree.department: David R. Cheriton School of Computer Science
uws-etd.degree.discipline: Computer Science
uws-etd.degree.grantor: University of Waterloo
uws-etd.degree: Master of Mathematics
uws-etd.embargo.terms: 4 months
uws.contributor.advisor: Ghodsi, Ali
uws.contributor.advisor: Wong, Alexander
uws.contributor.affiliation1: Faculty of Mathematics
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.typeOfResource: Text
uws.peerReviewStatus: Unreviewed
uws.scholarLevel: Graduate
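To make the embedding described in the abstract concrete, the following is a minimal sketch of a plain Gaussian random projection, the classical Johnson-Lindenstrauss-style baseline the thesis builds on. It is an illustration under stated assumptions, not the thesis's nonlinear, supervised, or biologically inspired constructions; the sizes n and d and the constant 8 in the choice of k are arbitrary illustrative values.

    import numpy as np

    # Illustrative Gaussian random projection (a JL-style baseline,
    # not the thesis's specific constructions). Sizes are toy values.
    rng = np.random.default_rng(seed=0)

    n, d = 1000, 5000                  # point-set P: n points in R^d, d large
    k = int(np.ceil(8 * np.log(n)))    # target dimension on the order of log n

    P = rng.standard_normal((n, d))               # the point-set P
    R = rng.standard_normal((d, k)) / np.sqrt(k)  # data-independent projection matrix

    P_low = P @ R   # embed P from R^d into R^k with one matrix multiply

    # Pairwise distances are approximately preserved under the embedding:
    orig = np.linalg.norm(P[0] - P[1])
    proj = np.linalg.norm(P_low[0] - P_low[1])
    print(f"||p0 - p1|| original: {orig:.2f}  projected: {proj:.2f}")

Because R is drawn independently of the data, the embedding never forms the n-by-n kernel (Gram) matrix or the d-by-d covariance matrix that data-dependent methods require, which is the scaling argument the abstract makes.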

