A Framework for Explaining LLM Reasoning with Knowledge Graphs


Advisor

Golab, Lukasz

Publisher

University of Waterloo

Abstract

Large Language Models (LLMs) have demonstrated remarkable question-answering (QA) capabilities, yet their decision processes and outputs often remain opaque and prone to factual inconsistencies. While existing methods evaluate or ground LLM outputs after generation, they typically lack mechanisms for aligning LLM reasoning with external knowledge sources. This thesis introduces AprèsCoT, a lightweight, model-agnostic framework that validates LLM reasoning by grounding it in an external knowledge graph (KG). AprèsCoT operates through three main components: Subgraph Retrieval, which extracts a KG subgraph relevant to a given query; Triple Extraction and Parsing, which converts the LLM's output into factual triples; and Matching, which aligns these triples with entities and relations in the extracted KG subgraph. Together, these modules align LLM reasoning with structured knowledge, producing traceable, structured explanations alongside model outputs. We evaluate alternative retrieval and matching strategies, analyze their trade-offs, and demonstrate how AprèsCoT helps users surface reasoning gaps, hallucinations, and missing facts. Experiments across multiple domains, including large-scale KGs, highlight AprèsCoT's effectiveness in advancing trustworthy and explainable AI.
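The three-stage pipeline described above can be sketched as follows. This is a minimal illustrative sketch only: the function names, the pipe-delimited triple format, and the toy knowledge graph are all assumptions for exposition, not the thesis's actual implementation.

```python
# Hypothetical sketch of the AprèsCoT stages: Subgraph Retrieval,
# Triple Extraction and Parsing, and Matching. All names and data
# structures here are illustrative assumptions.

# A knowledge graph represented as a set of (subject, relation, object) triples.
KG = {
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
}

def retrieve_subgraph(kg, query_entities):
    """Subgraph Retrieval: keep triples that touch any query entity."""
    return {t for t in kg if t[0] in query_entities or t[2] in query_entities}

def extract_triples(llm_output):
    """Triple Extraction and Parsing: parse 'subject | relation | object'
    lines (an assumed LLM output format) into factual triples."""
    triples = set()
    for line in llm_output.strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triples.add(tuple(parts))
    return triples

def match(claimed, subgraph):
    """Matching: split the LLM's claimed triples into those supported by
    the retrieved subgraph and those unsupported (potential hallucinations)."""
    supported = claimed & subgraph
    return supported, claimed - supported

# Example: the second claimed fact is not in the KG, so it is flagged.
llm_output = """
Marie Curie | won | Nobel Prize in Physics
Marie Curie | born_in | Paris
"""
subgraph = retrieve_subgraph(KG, {"Marie Curie"})
supported, unsupported = match(extract_triples(llm_output), subgraph)
```

In this toy run, the supported set contains the Nobel Prize triple, while the incorrect birthplace claim lands in the unsupported set, illustrating how matching against the retrieved subgraph surfaces hallucinated facts.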
