Show simple item record

dc.contributor.author: Xin, Lu
dc.date.accessioned: 2009-04-30 19:32:56 (GMT)
dc.date.available: 2009-04-30 19:32:56 (GMT)
dc.date.issued: 2009-04-30T19:32:56Z
dc.date.submitted: 2009-04-30
dc.identifier.uri: http://hdl.handle.net/10012/4369
dc.description.abstract: Ensemble methods such as AdaBoost, Bagging, and Random Forest have attracted much attention in the statistical learning community over the last 15 years. Zhu and Chipman (2006) proposed the idea of using ensembles for variable selection; their implementation used a parallel genetic algorithm (PGA). In this thesis, I propose a stochastic stepwise ensemble for variable selection, which improves upon PGA. Traditional stepwise regression (Efroymson 1960) combines forward and backward selection: one step of forward selection is followed by one step of backward selection. In the forward step, each variable not already included is added to the current model, one at a time, and the one that best improves the objective function is retained. In the backward step, each variable already included is deleted from the current model, one at a time, and the one whose removal best improves the objective function is discarded. The algorithm continues until no improvement can be made by either the forward or the backward step. Instead of adding or deleting one variable at a time, the Stochastic Stepwise algorithm (STST) adds or deletes a group of variables at a time, where the group size is decided at random. In traditional stepwise selection the group size is one and every candidate variable is assessed; when the group size is larger than one, as is often the case for STST, the total number of possible variable groups can be very large, so instead of evaluating all of them, only a few randomly selected groups are assessed and the best one is chosen. From a methodological point of view, the improvement of the STST ensemble over PGA comes from constructing the ensemble in a more structured way, which gives better control over the strength-diversity tradeoff established by Breiman (2001); PGA has no mechanism to control this fundamental tradeoff. Empirically, the improvement is most prominent when a true variable in the model has a small coefficient relative to the other true variables; I show empirically that PGA has a much higher probability of missing such a variable.
dc.language.iso: en
dc.publisher: University of Waterloo
dc.subject: Stochastic Stepwise
dc.subject: Ensemble
dc.subject: Parallel Genetic Algorithm
dc.subject: Variable Selection
dc.subject: statistical learning
dc.title: Stochastic Stepwise Ensembles for Variable Selection
dc.type: Master Thesis
dc.pending: false
dc.subject.program: Statistics
uws-etd.degree.department: Statistics and Actuarial Science
uws-etd.degree: Master of Mathematics
uws.typeOfResource: Text
uws.peerReviewStatus: Unreviewed
uws.scholarLevel: Graduate
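
The abstract describes the stochastic stepwise (STST) search in enough detail to sketch its shape. The Python sketch below is only an illustration built from that description, not the author's implementation: the objective function (Gaussian AIC for an ordinary least squares fit), the uniform group-size distribution, the number of candidate groups tried per move, and the names aic, stst_select, and stst_ensemble are all assumptions rather than details taken from the thesis.

```python
import numpy as np


def aic(X, y, subset):
    """Gaussian AIC of an OLS fit on the variables in `subset` (smaller is
    better). The abstract only says "objective function"; AIC is used here
    purely as an illustrative stand-in."""
    n = len(y)
    cols = [np.ones(n)] + [X[:, j] for j in sorted(subset)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    rss = float(resid @ resid)
    return n * np.log(rss / n) + 2 * (len(subset) + 1)


def stst_select(X, y, n_groups=5, max_iter=50, seed=None):
    """One stochastic stepwise (STST) run, as sketched in the abstract:
    alternate forward and backward moves, but add or delete a randomly
    sized *group* of variables, scoring only `n_groups` randomly drawn
    candidate groups per move instead of enumerating all of them."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    current = set()
    best = aic(X, y, current)

    for _ in range(max_iter):
        improved = False
        for direction in ("forward", "backward"):
            pool = set(range(p)) - current if direction == "forward" else set(current)
            if not pool:
                continue
            trials = []
            for _ in range(n_groups):
                # Random group size; the abstract does not specify the
                # distribution, so uniform over 1..|pool| is an assumption.
                size = int(rng.integers(1, len(pool) + 1))
                group = set(rng.choice(sorted(pool), size=size, replace=False).tolist())
                trials.append(current | group if direction == "forward"
                              else current - group)
            # Evaluate the sampled groups and keep the best one, but only
            # if it actually improves the objective.
            scores = [aic(X, y, t) for t in trials]
            i = int(np.argmin(scores))
            if scores[i] < best:
                current, best, improved = trials[i], scores[i], True
        if not improved:  # neither a forward nor a backward move helped
            break
    return current


def stst_ensemble(X, y, n_members=20, **kwargs):
    """Ensemble variable selection: run STST many times and report, for
    each variable, the fraction of members that selected it."""
    counts = np.zeros(X.shape[1])
    for b in range(n_members):
        for j in stst_select(X, y, seed=b, **kwargs):
            counts[j] += 1
    return counts / n_members


if __name__ == "__main__":
    # Toy check: 10 candidate variables, 3 truly in the model, one of them
    # with a deliberately small coefficient.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(size=200)
    print(np.round(stst_ensemble(X, y), 2))
```

Ranking variables by their selection frequency across ensemble members is one simple way to use such an ensemble for variable selection; the weak-coefficient variable in the toy example mirrors the scenario in which the abstract reports PGA being most likely to miss a true variable.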

