The Limited Effectiveness of Neural Networks for Simple Question Answering on Knowledge Graphs
Simple factoid question answering (QA) is a task in which questions can be answered by looking up a single fact in a knowledge base (KB). The task is nevertheless difficult, since retrieving a single supporting fact requires searching over many alternatives given a query expressed in natural language. We take a retrieval-based approach to QA and decompose the problem into four sub-problems: entity detection, entity linking, relation prediction, and evidence integration. Entity detection and linking identify the entities mentioned in a question and link them to candidate entities in the KB. Relation prediction classifies a question as one of the relation types in the KB. Finally, evidence integration combines the scores from entity linking and relation prediction to predict an (entity, relation) pair that answers the question. Much of the research community has explored complex neural network architectures for this task without establishing baselines against which "non-neural-network" approaches can be compared. We explore several models for entity detection and relation prediction, as well as several scoring functions for entity linking and evidence integration. Our findings show that deep learning does help on this QA task, but not as much as the research community has portrayed. We also present two simple yet very competitive baselines: one based on a simple neural network architecture and one that uses no neural networks at all.
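The evidence-integration step described above can be sketched as follows. This is a minimal illustration, not the thesis's exact method: the function name, the candidate data, and the multiplicative score combination are all assumptions made for the example.

```python
def integrate_evidence(entity_scores, relation_scores):
    """Rank (entity, relation) pairs by combining per-component scores.

    entity_scores:   dict mapping a candidate KB entity -> linking score
    relation_scores: dict mapping a candidate KB relation -> prediction score
    Returns the (entity, relation) pair with the highest combined score.
    The product is one simple choice of scoring function; the thesis
    compares several alternatives.
    """
    best_pair, best_score = None, float("-inf")
    for entity, e_score in entity_scores.items():
        for relation, r_score in relation_scores.items():
            combined = e_score * r_score  # simple multiplicative combination
            if combined > best_score:
                best_pair, best_score = (entity, relation), combined
    return best_pair

# Hypothetical candidates for a question like "Who wrote Dune?"
entity_scores = {"Dune_(novel)": 0.9, "Dune_(film)": 0.4}
relation_scores = {"author": 0.8, "director": 0.3}
print(integrate_evidence(entity_scores, relation_scores))
```

Here the pair ("Dune_(novel)", "author") wins because its combined score (0.72) exceeds every other pairing of the candidates.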
Cite this version of the work
Salman Mohammed (2017). The Limited Effectiveness of Neural Networks for Simple Question Answering on Knowledge Graphs. UWSpace. http://hdl.handle.net/10012/12689