Training Reject-Classifiers for Out-of-distribution Detection via Explicit Boundary Sample Generation

dc.contributor.author: Vernekar, Sachin
dc.date.accessioned: 2020-01-24T20:07:31Z
dc.date.available: 2020-01-24T20:07:31Z
dc.date.issued: 2020-01-24
dc.date.submitted: 2020-02-23
dc.description.abstract: Discriminatively trained neural classifiers can be trusted only when the input data comes from the training distribution (in-distribution). Therefore, detecting out-of-distribution (OOD) samples is very important to avoid classification errors. In the context of OOD detection for image classification, one recent approach proposes training a classifier called a "confident-classifier" by minimizing the standard cross-entropy loss on in-distribution samples while minimizing the KL divergence between the predictive distribution of OOD samples in the low-density "boundary" of the in-distribution and the uniform distribution (i.e., maximizing the entropy of the outputs). Samples can then be flagged as OOD if they have low confidence or high entropy. In this work, we analyze this setting both theoretically and experimentally. We also propose a novel algorithm to generate the "boundary" OOD samples used to train a classifier with an explicit "reject" class for OOD samples. We show that this approach is effective in reducing high-confidence mispredictions on OOD samples while maintaining the test error and high confidence on in-distribution samples, compared to standard training. We compare our approach against several recent classifier-based OOD detectors, including confident-classifiers, on the MNIST and FashionMNIST datasets. Overall, the proposed approach consistently performs better than the others across most of the experiments.
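The two ideas in the abstract can be sketched in code: the baseline "confident-classifier" objective (cross-entropy on in-distribution samples plus a KL-to-uniform penalty on boundary OOD samples), and the thesis's decision rule with an explicit reject class. This is a minimal illustrative sketch, not the thesis's implementation: the function names and the scalar weight `beta` are assumptions, and it operates on plain logit lists rather than a neural network.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def confident_classifier_loss(logits_in, labels_in, logits_ood, beta=1.0):
    """Confident-classifier objective (sketch): cross-entropy on
    in-distribution samples plus KL(p || Uniform) on OOD samples.
    Minimizing the KL term pushes OOD predictions toward maximum entropy."""
    ce = 0.0
    for logits, y in zip(logits_in, labels_in):
        p = softmax(logits)
        ce -= math.log(p[y] + 1e-12)
    ce /= len(logits_in)

    k = len(logits_ood[0])  # number of in-distribution classes
    kl = 0.0
    for logits in logits_ood:
        p = softmax(logits)
        # KL(p || U) = sum_i p_i * log(k * p_i) = log(k) - H(p)
        kl += sum(pi * math.log(k * pi + 1e-12) for pi in p)
    kl /= len(logits_ood)

    return ce + beta * kl

def predict_with_reject(logits_k_plus_1):
    """Reject-classifier decision (sketch): the last of the k+1 logits
    is an explicit 'reject' class; picking it flags the input as OOD."""
    j = max(range(len(logits_k_plus_1)), key=lambda i: logits_k_plus_1[i])
    if j == len(logits_k_plus_1) - 1:
        return ("ood", None)
    return ("in", j)
```

As a sanity check, the confident-classifier loss is lower when the OOD samples already receive near-uniform predictions than when they are predicted confidently, since the KL term vanishes at the uniform distribution.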
dc.identifier.uri: http://hdl.handle.net/10012/15582
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo
dc.relation.uri: https://github.com/sverneka/ICLR2020
dc.subject: Out-of-distribution detection
dc.subject: Reject-Classifier
dc.subject: Variational Autoencoder
dc.subject: Manifold
dc.subject.lcsh: Machine learning
dc.subject.lcsh: Neural networks (Computer science)
dc.title: Training Reject-Classifiers for Out-of-distribution Detection via Explicit Boundary Sample Generation
dc.type: Master Thesis
uws-etd.degree: Master of Mathematics
uws-etd.degree.department: David R. Cheriton School of Computer Science
uws-etd.degree.discipline: Computer Science
uws-etd.degree.grantor: University of Waterloo
uws.comment.hidden: The initial title was "A Classifier-based Approach for Out-of-distribution Detection" and it was changed to "Training Reject-Classifiers for Out-of-distribution Detection via Explicit Boundary Sample Generation"
uws.contributor.advisor: Czarnecki, Krzysztof
uws.contributor.affiliation1: Faculty of Mathematics
uws.peerReviewStatus: Unreviewed
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.scholarLevel: Graduate
uws.typeOfResource: Text

Files

Original bundle
Name: Vernekar_Sachin.pdf
Size: 2.69 MB
Format: Adobe Portable Document Format
Description: Thesis

License bundle
Name: license.txt
Size: 6.4 KB
Format: Item-specific license agreed upon to submission
Description: