Standards for the control of algorithmic bias in the Canadian administrative context
Field | Value | Language
dc.contributor.author | Heisler, Natalie |
dc.date.accessioned | 2022-08-30T17:29:22Z |
dc.date.available | 2022-08-30T17:29:22Z |
dc.date.issued | 2022-08-30 |
dc.date.submitted | 2022-08-05 |
dc.description.abstract | Governments around the world use machine learning in automated decision-making systems for a broad range of functions, including the administration and delivery of healthcare services, education, and housing benefits; for surveillance; and within policing and criminal justice systems. Algorithmic bias in machine learning can result in automated decisions that produce disparate impact, compromising Charter guarantees of substantive equality. The regulatory landscape for automated decision-making, in Canada and across the world, is far from settled. Legislative and policy models are emerging, and the role of standards is evolving to support regulatory objectives. This thesis seeks to answer the question: what standards should be applied to machine learning to mitigate disparate impact in automated decision-making? While acknowledging the contributions of leading standards development organizations, I argue that the rationale for standards must come from the law, and that implementing such standards would not only reduce future complaints but, more importantly, proactively enable human rights protections for those subject to automated decision-making. Drawing on the principles of administrative law and the Supreme Court of Canada’s substantive equality decision in Fraser v. Canada (Attorney General), this research derives a proposed standards framework that includes: standards to mitigate the creation of biased predictions; standards for the evaluation of predictions; and standards for the measurement of disparity in predictions (an illustrative sketch of one such disparity measure follows this record). Recommendations are provided for implementing the proposed standards framework in the context of Canada’s Directive on Automated Decision-Making. | en
dc.identifier.uri | http://hdl.handle.net/10012/18677 |
dc.language.iso | en | en |
dc.pending | false |
dc.publisher | University of Waterloo | en |
dc.subject | artificial intelligence | en |
dc.subject | policy | en |
dc.subject | administrative law | en |
dc.subject | standards | en |
dc.subject | machine learning | en |
dc.subject | disparate impact | en |
dc.subject | government | en |
dc.subject | Directive on Automated Decision-Making | en |
dc.subject | Charter of Rights and Freedoms | en |
dc.subject | human rights | en |
dc.subject | substantive equality | en |
dc.subject | algorithmic bias | en |
dc.title | Standards for the control of algorithmic bias in the Canadian administrative context | en |
dc.type | Master Thesis | en |
uws-etd.degree | Master of Arts | en |
uws-etd.degree.department | Political Science | en |
uws-etd.degree.discipline | Political Science | en |
uws-etd.degree.grantor | University of Waterloo | en |
uws-etd.embargo.terms | 0 | en |
uws.contributor.advisor | Macfarlane, Emmett |
uws.contributor.advisor | Grossman, Maura |
uws.contributor.affiliation1 | Faculty of Arts | en |
uws.peerReviewStatus | Unreviewed | en |
uws.published.city | Waterloo | en |
uws.published.country | Canada | en |
uws.published.province | Ontario | en |
uws.scholarLevel | Graduate | en |
uws.typeOfResource | Text | en |
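
The abstract's third category, standards for the measurement of disparity in predictions, can be made concrete with a selection-rate comparison. The abstract does not specify a metric, so this is a minimal sketch under assumptions: it computes a disparate impact ratio (the rate of favourable decisions for a protected group divided by that of a reference group), and the 0.8 "four-fifths" threshold in the comment is a convention from US employment-discrimination practice, used here purely for illustration rather than as Canadian law. The function name and group labels are hypothetical.

```python
from collections import Counter

def disparate_impact_ratio(outcomes, groups, positive=1, protected="B", reference="A"):
    """Ratio of favourable-decision rates between a protected and a reference group.

    outcomes: iterable of automated decisions (e.g., 1 = benefit granted)
    groups:   iterable of group labels, aligned element-wise with outcomes
    Returns the protected group's selection rate divided by the reference group's.
    """
    totals = Counter(groups)  # how many decisions each group received
    positives = Counter(g for g, y in zip(groups, outcomes) if y == positive)
    rate = lambda g: positives[g] / totals[g]
    return rate(protected) / rate(reference)

# Example: group A is granted the benefit 80% of the time, group B 40%.
# The ratio is 0.5, below the 0.8 ("four-fifths") threshold sometimes used
# as a prima facie signal of disparate impact.
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
labels    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(round(disparate_impact_ratio(decisions, labels), 2))  # -> 0.5
```

A ratio of selection rates is only one candidate measure; where ground-truth outcomes are available, gaps in error rates across groups (differences in false positive or false negative rates) are common alternatives for evaluating predictions.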