Ananthakrishnan, Nivasini (2021-06-10). Identifying regions of trusted prediction. Master Thesis. Deposited 2021-07-20. http://hdl.handle.net/10012/17153

Abstract: Quantifying the probability that a label prediction is correct on a given test point or a given sub-population enables users to better decide how to use, and when to trust, machine-learning-derived predictors. In this work, combining aspects of prior work on conformal prediction and selective classification, we provide a unifying framework for confidence requirements that distinguishes between various sources of uncertainty in the learning process as well as various region specifications. We then consider a set of common prior assumptions on the data-generation process and show how these allow learning justifiably trusted predictors.
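As background for the conformal-prediction component mentioned in the abstract, the following is a minimal sketch of split conformal prediction for a toy 1-D regression task. All data, the least-squares predictor, and the 90% coverage level are illustrative assumptions, not taken from the thesis; the thesis itself develops a more general framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed for illustration): y = 2x + Gaussian noise.
x = rng.uniform(0, 1, 500)
y = 2 * x + rng.normal(0, 0.1, 500)

# Split into a proper training half and a calibration half.
x_tr, y_tr = x[:250], y[:250]
x_cal, y_cal = x[250:], y[250:]

# "Train" a simple predictor: least-squares slope through the origin.
slope = np.sum(x_tr * y_tr) / np.sum(x_tr ** 2)

def predict(x_new):
    return slope * x_new

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - predict(x_cal))

# Conformal quantile for 90% marginal coverage,
# with the standard finite-sample correction.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

# Prediction interval for a new point. Under exchangeability of the
# calibration and test data, it covers the true label with
# probability at least 1 - alpha (marginally).
x_new = 0.5
lo, hi = predict(x_new) - q, predict(x_new) + q
```

The key design point, echoed in the abstract's framing, is that the coverage guarantee comes from the calibration step alone: the underlying predictor can be arbitrary, and the interval width `2 * q` reflects how uncertain that predictor is on the calibration data.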