
Standards for the control of algorithmic bias in the Canadian administrative context

Date

2022-08-30

Authors

Heisler, Natalie

Publisher

University of Waterloo

Abstract

Governments around the world use machine learning in automated decision-making systems for a broad range of functions, including the administration and delivery of healthcare services, education, and housing benefits; surveillance; and policing and criminal justice. Algorithmic bias in machine learning can result in automated decisions that produce disparate impact, compromising Charter guarantees of substantive equality. The regulatory landscape for automated decision-making, in Canada and across the world, is far from settled. Legislative and policy models are emerging, and the role of standards is evolving to support regulatory objectives. This thesis seeks to answer the question: what standards should be applied to machine learning to mitigate disparate impact in automated decision-making? While acknowledging the contributions of leading standards development organizations, I argue that the rationale for standards must come from the law, and that implementing such standards would not only help to reduce future complaints but, more importantly, would proactively enable human rights protections for those subject to automated decision-making. Drawing on the principles of administrative law and the Supreme Court of Canada’s substantive equality decision in Fraser v. Canada (Attorney General), this research derives a proposed standards framework that includes standards to mitigate the creation of biased predictions, standards for the evaluation of predictions, and standards for the measurement of disparity in predictions. Recommendations are provided for implementing the proposed framework in the context of Canada’s Directive on Automated Decision-Making.

Keywords

artificial intelligence, policy, administrative law, standards, machine learning, disparate impact, government, Directive on Automated Decision-Making, Charter of Rights and Freedoms, human rights, substantive equality, algorithmic bias
