Debugging Relational Declarative Models with Discriminating Examples

Date

2017-02-01

Authors

Montaghami, Vajihollah

Advisor

Rayside, Derek

Publisher

University of Waterloo

Abstract

Models, especially those with mathematical or logical foundations, have proven valuable to engineering practice in a wide range of disciplines, including software engineering. Models, sometimes also referred to as logical specifications in this context, enable software engineers to focus on essential abstractions while eliding less important details of their software design. Like any human-created artifact, a model might have imperfections at certain stages of the design process: it might have internal inconsistencies, or it might not properly express the engineer's design intentions. Validating that the model is a true expression of the engineer's intent is an important and difficult problem. One of the key challenges is that there is typically no other written artifact to compare the model to: the engineer's intention is a mental object.

One successful approach to this challenge has been automated example-generation tools, such as the Alloy Analyzer. These tools produce examples (satisfying valuations of the model) for the engineer to accept or reject. These examples, along with the engineer's judgments of them, serve as crucial written artifacts of the engineer's true intentions.

Examples, like test cases for programs, are more valuable when they reveal a discrepancy between the expressed model and the engineer's design intentions. We propose discriminating examples for this purpose. A discriminating example is synthesized from a combination of the engineer's expressed model and a machine-generated hypothesis of the engineer's true intentions. It either satisfies the model but not the hypothesis, or satisfies the hypothesis but not the model; in either case, it exhibits the difference between the model and the hypothesized alternative.

The key to producing high-quality discriminating examples is to generate high-quality hypotheses. This dissertation explores three general forms of such hypotheses: that the mistake lies near a border of the model, that the expressed model is stronger than the engineer intends, or that it is weaker than the engineer intends. We additionally propose a number of heuristics to guide the hypothesis-generation process.

We demonstrate the usefulness of discriminating examples and our hypothesis-generation techniques through a case study of an Alloy model of Dijkstra's Dining Philosophers problem. This model was written by Alloy experts and shipped with the Alloy Analyzer for several years. Previous researchers discovered a bug in it, but no prior published account explains how to fix it, nor has any prior tool been shown effective for assisting an engineer with this task.

Generating high-quality discriminating examples and their underlying hypotheses is computationally demanding. This dissertation shows how to make it feasible.
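As a concrete illustration of the central idea (a minimal sketch, not taken from the dissertation), the following Alloy fragment uses hypothetical predicates: Model stands for an engineer's expressed model and Hypothesis for a machine-generated strengthening of it. Each run command asks the Alloy Analyzer for an instance satisfying one predicate but not the other, which is exactly a discriminating example.

-- Hypothetical signatures and predicates for illustration only;
-- this is not the dissertation's Dining Philosophers case study.
sig Fork {}
sig Philosopher { holds: set Fork }

-- The (hypothetical) expressed model: each philosopher holds at most one fork.
pred Model { all p: Philosopher | lone p.holds }

-- A (hypothetical) machine-generated hypothesis that strengthens the model:
-- additionally, no fork is held by more than one philosopher.
pred Hypothesis { Model and (all f: Fork | lone holds.f) }

-- A discriminating example satisfies exactly one of the two predicates.
pred ModelNotHypothesis { Model and not Hypothesis }
pred HypothesisNotModel { Hypothesis and not Model }

run ModelNotHypothesis for 3
run HypothesisNotModel for 3

Here the first command would yield an instance in which two philosophers share a fork; by accepting or rejecting that instance, the engineer reveals whether the model or the hypothesis better matches their intent. The second command is unsatisfiable in this sketch because Hypothesis implies Model.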

Keywords

Software engineering, Formal methods, Alloy, Alloy Analyzer, Software model-checking, Debug patterns, Discriminating examples, Non-example, Near-miss example, Near-hit example, Pattern-based debugging, Partial-instance
