CAMEO: Explaining Consensus and Expertise Across MOdels

Date

2024-05-02

Authors

Yu, Andy

Publisher

University of Waterloo

Abstract

Explainable AI methods have been proposed to help interpret complex models, e.g., by assigning importance scores to model features or perturbing the features in a way that changes the prediction. These methods apply to one model at a time, but in practice, engineers usually select from many candidate models and hyperparameters. To assist with this task, we formulate a space of comparison operations for multiple models and demonstrate CAMEO: a web-based tool that explains consensus and expertise among multiple models. Users can interact with CAMEO using a variety of models and datasets to explore 1) consensus patterns, such as subsets of the test dataset or intervals within feature domains where models disagree, 2) data perturbations that would make conflicting models agree (and consistent models disagree), and 3) expertise patterns, such as subsets of the test dataset where a particular model performs surprisingly well or poorly compared with other models.
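The "consensus pattern" idea from the abstract — finding test instances where candidate models disagree — can be sketched as follows. This is a minimal illustration, not CAMEO's actual API; the toy threshold models and dataset are hypothetical stand-ins.

```python
# Minimal sketch: given several trained models, find the subset of test
# instances where their predictions differ (a "consensus pattern").
# The models here are hypothetical single-feature threshold classifiers,
# not CAMEO's actual implementation.

def disagreement_indices(models, X):
    """Return indices of test points where the models' predictions differ."""
    preds = [[m(x) for x in X] for m in models]
    return [i for i in range(len(X)) if len({p[i] for p in preds}) > 1]

# Two toy "models" that classify a single numeric feature by threshold.
model_a = lambda x: int(x > 0.5)
model_b = lambda x: int(x > 0.7)

X_test = [0.2, 0.6, 0.8, 0.65]
print(disagreement_indices([model_a, model_b], X_test))  # -> [1, 3]
```

Points falling between the two thresholds (0.6 and 0.65 here) form the disagreement region — in CAMEO's terms, an interval within the feature domain where the models conflict.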

Keywords

explainability, comparisons, model performance, model bias
