Title: Trade-Offs between Fairness, Interpretability, and Privacy in Machine Learning
Author: Agarwal, Sushant
Type: Master Thesis
Dates: 2020-05-14; 2020-05-14; 2020-05-14; 2020-04-22
URI: http://hdl.handle.net/10012/15861
Language: en
Keywords: machine learning; theory; fairness; interpretability; differential privacy; ethics; algorithms

Abstract:
Algorithms have increasingly been deployed to make consequential decisions, and many ethical questions have been raised about how these algorithms function. The three ethical considerations we examine in this work are fairness, interpretability, and privacy. These concerns have received a lot of attention in the research community recently, but have primarily been studied in isolation. In this work, we look at cases where we want to satisfy more than one of these properties simultaneously, and analyse how they interact. The underlying message of this work is that these requirements come at a cost, and it is necessary to make trade-offs between them. We present two theoretical results to demonstrate this. The first main result shows that there is a tension between the requirements of fairness and interpretability of classifiers. More specifically, we consider a formal framework for building simple classifiers as a means to attain interpretability, and show that each simple classifier is strictly improvable, in the sense that every simple classifier can be replaced by a more complex classifier that strictly improves both fairness and accuracy. The second main result considers the issue of compatibility between fairness and differential privacy of learning algorithms. In particular, we prove an impossibility theorem which shows that even in simple binary classification settings, one cannot design an accurate learning algorithm that is both ε-differentially private and fair (even approximately, according to any reasonable notion of fairness).
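For reference, the privacy and fairness notions named in the abstract admit standard textbook definitions; the sketch below states ε-differential privacy and one common example of a fairness notion (demographic parity), which may differ from the precise fairness definitions used in the thesis itself.

% Standard definition of epsilon-differential privacy for a randomized mechanism M:
% for all neighbouring datasets S, S' (differing in one record) and all output sets T,
\[
\Pr[M(S) \in T] \;\le\; e^{\varepsilon} \, \Pr[M(S') \in T].
\]
% Demographic parity, one common (approximate-able) fairness notion for a classifier h
% with sensitive attribute A taking values 0 and 1:
\[
\Pr[h(X) = 1 \mid A = 0] \;=\; \Pr[h(X) = 1 \mid A = 1].
\]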