
dc.contributor.author: Heisler, Natalie
dc.date.accessioned: 2022-08-30 17:29:22 (GMT)
dc.date.available: 2022-08-30 17:29:22 (GMT)
dc.date.issued: 2022-08-30
dc.date.submitted: 2022-08-05
dc.identifier.uri: http://hdl.handle.net/10012/18677
dc.description.abstract: Governments around the world use machine learning in automated decision-making systems for a broad range of functions, including the administration and delivery of healthcare services, education, and housing benefits; for surveillance; and within policing and criminal justice systems. Algorithmic bias in machine learning can result in automated decisions that produce disparate impact, compromising Charter guarantees of substantive equality. The regulatory landscape for automated decision-making, in Canada and across the world, is far from settled. Legislative and policy models are emerging, and the role of standards is evolving to support regulatory objectives. This thesis seeks to answer the question: what standards should be applied to machine learning to mitigate disparate impact in automated decision-making? While acknowledging the contributions of leading standards development organizations, I argue that the rationale for standards must come from the law, and that implementing such standards would not only help to reduce future complaints but, more importantly, would proactively enable human rights protections for those subject to automated decision-making. Drawing on the principles of administrative law and the Supreme Court of Canada’s substantive equality decision in Fraser v. Canada (Attorney General), this research derives a proposed standards framework that includes: standards to mitigate the creation of biased predictions; standards for the evaluation of predictions; and standards for the measurement of disparity in predictions. Recommendations are provided for implementing the proposed standards framework in the context of Canada’s Directive on Automated Decision-Making.
dc.language.iso: en
dc.publisher: University of Waterloo
dc.subject: artificial intelligence
dc.subject: policy
dc.subject: administrative law
dc.subject: standards
dc.subject: machine learning
dc.subject: disparate impact
dc.subject: government
dc.subject: Directive on Automated Decision-Making
dc.subject: Charter of Rights and Freedoms
dc.subject: human rights
dc.subject: substantive equality
dc.subject: algorithmic bias
dc.title: Standards for the control of algorithmic bias in the Canadian administrative context
dc.type: Master Thesis
dc.pending: false
uws-etd.degree.department: Political Science
uws-etd.degree.discipline: Political Science
uws-etd.degree.grantor: University of Waterloo
uws-etd.degree: Master of Arts
uws-etd.embargo.terms: 0
uws.contributor.advisor: Macfarlane, Emmett
uws.contributor.advisor: Grossman, Maura
uws.contributor.affiliation1: Faculty of Arts
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.typeOfResource: Text
uws.peerReviewStatus: Unreviewed
uws.scholarLevel: Graduate
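The abstract's third standards category, measurement of disparity in predictions, is often operationalized in practice as a ratio of favourable-outcome rates between groups (the "four-fifths rule" heuristic from US employment-selection guidelines). The sketch below is an illustration of that general technique, not the thesis's own method; the group labels, decisions, and threshold are hypothetical.

```python
# Minimal sketch of a disparate impact ratio: the rate at which a protected
# group receives a favourable automated decision, divided by the rate for a
# reference group. Values below ~0.8 are commonly flagged for review.
# All data below is hypothetical, for illustration only.

def disparate_impact_ratio(decisions, groups, protected, reference):
    """Ratio of favourable-outcome rates (protected group / reference group)."""
    def favourable_rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    return favourable_rate(protected) / favourable_rate(reference)

# 1 = benefit granted, 0 = benefit denied (hypothetical decisions)
decisions = [1, 0, 1, 0, 1, 1, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

ratio = disparate_impact_ratio(decisions, groups, protected="a", reference="b")
print(f"{ratio:.2f}")  # prints 0.67 — below the common 0.8 threshold
```

A single ratio like this is only one possible disparity metric; a standards framework of the kind the abstract proposes would also need to specify which groups, outcomes, and thresholds apply in a given administrative context.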




