Making Decisions with Incomplete and Inaccurate Information

Date

2021-08-25

Authors

Menon, Vijay

Advisor

Larson, Kate

Publisher

University of Waterloo

Abstract

From assigning students to public schools to arriving at divorce settlements, there are many settings where preferences expressed by a set of stakeholders are used to make decisions that affect them. Due to their numerous applications and the range of questions involved, such settings have received considerable attention in fields ranging from philosophy to political science, and particularly from economics and, more recently, computer science. Although there is a significant body of literature studying such settings, much of the work in this space makes the assumption that stakeholders provide complete and accurate preference information to these decision-making procedures. However, this may not always be feasible, for instance because of the high cognitive burden involved or because of privacy concerns. The goal of this thesis is to explicitly address these limitations. We do so by building on previous work on decision-making with incomplete information, and by introducing solution concepts and notions that support the design of algorithms and mechanisms that can handle incomplete and inaccurate information in different settings. We present our results in two parts.

In Part I we look at decision-making in the presence of incomplete information. We focus on two broad themes, both from the perspective of an algorithm or mechanism designer. Informally, the first theme studies the following question: Given incomplete preferences, how does one design algorithms that are 'robust', i.e., ones that produce solutions that are "good" with respect to the underlying complete preferences? We look at this question in the context of two well-studied problems, namely, (i) a version of the two-sided matching problem and (ii) a version of the facility location problem, and show how one can design approximately robust algorithms in such settings. Following this, we look at the second theme, which considers the following question: Given incomplete preferences, how can one ask the agents for some more information in order to aid in the design of robust algorithms? We study this question in the context of the one-sided matching problem and show how even a very small amount of extra information can be used to obtain much better outcomes overall.

In Part II we turn our attention to decision-making in the presence of inaccurate information and look at the following question: How can one design 'stable' algorithms, i.e., ones that do not produce vastly different outcomes as long as there are only small inaccuracies in a stakeholder's reported preferences? We study this in the context of the fair allocation of indivisible goods among two agents and show how, in contrast to popular fair allocation algorithms, there are alternative algorithms that are fair and approximately stable.

Keywords

computational social choice, algorithmic game theory
