Author: Golzadeh, Kiarash
Date: 2024-05-06
Date: 2024-05-01
URI: http://hdl.handle.net/10012/20541
Language: en
Title: Explaining Expert Search and Team Formation Systems with ExES
Type: Master Thesis

Abstract: Expert search and team formation systems operate on collaboration networks, with nodes representing individuals, labeled with their skills, and edges denoting collaboration relationships. Given a query corresponding to a set of desired skills, these systems identify experts or teams that best match the query. However, state-of-the-art solutions to this problem lack transparency and interpretability. To address this issue, we propose ExES, an interactive tool designed to explain black-box expert search systems. ExES leverages saliency and counterfactual methods from the field of explainable artificial intelligence (XAI). It enables users to understand why individuals were or were not included in the query results, and what those individuals could do, in terms of perturbing their skills or connections, to be included in or excluded from the results. Through several experiments on real-world datasets, we verify the quality and efficiency of our explanation generation methods. We show that ExES takes a significant step toward interactivity, achieving an average latency reduction of 50% compared to an exhaustive approach, while maintaining over 82% precision in producing saliency explanations and over 70% precision in identifying optimal counterfactual explanations.