Standards for the control of algorithmic bias in the Canadian administrative context
Governments around the world use machine learning in automated decision-making systems for a broad range of functions, including the administration and delivery of healthcare, education, and housing benefits; surveillance; and policing and criminal justice. Algorithmic bias in machine learning can result in automated decisions that produce disparate impact, compromising Charter guarantees of substantive equality. The regulatory landscape for automated decision-making, in Canada and around the world, is far from settled. Legislative and policy models are emerging, and the role of standards in supporting regulatory objectives is evolving. This thesis seeks to answer the question: what standards should be applied to machine learning to mitigate disparate impact in automated decision-making? While acknowledging the contributions of leading standards development organizations, I argue that the rationale for standards must come from the law, and that implementing such standards would not only help reduce future complaints but, more importantly, would proactively enable human rights protections for those subject to automated decision-making. Drawing on the principles of administrative law and the Supreme Court of Canada’s substantive equality decision in Fraser v. Canada (Attorney General), this research derives a proposed standards framework comprising: standards to mitigate the creation of biased predictions; standards for the evaluation of predictions; and standards for the measurement of disparity in predictions. Recommendations are provided for implementing the proposed framework in the context of Canada’s Directive on Automated Decision-Making.
Cite this version of the work
Natalie Heisler (2022). Standards for the control of algorithmic bias in the Canadian administrative context. UWSpace. http://hdl.handle.net/10012/18677