Please use this identifier to cite or link to this item: http://hdl.handle.net/10012/4485

Title: Applications of Description Logic and Causality in Model Checking
Authors: Ben-David, Shoham
Keywords: Model Checking; Description Logic; Causality
Approved Date: 19-Jun-2009
Date Submitted: 2009
Abstract: Model checking is an automated technique for the verification of finite-state systems that is widely used in practice. In model checking, a model M is verified against a specification $\varphi$, exhaustively checking that the tree of all computations of M satisfies $\varphi$. When $\varphi$ fails to hold in M, the negative result is accompanied by a counterexample: a computation in M that demonstrates the failure. State-of-the-art model checkers apply Binary Decision Diagrams (BDDs) as well as satisfiability solvers to this task. However, both methods suffer from the state explosion problem, which restricts the application of model checking to modestly sized systems. The importance of model checking makes it worthwhile to explore alternative technologies, in the hope of broadening the applicability of the technique to a wider class of systems. Description Logic (DL) is a family of knowledge representation formalisms based on decidable fragments of first-order logic. DL is used mainly for designing ontologies in information systems. In recent years several DL reasoners have been developed, demonstrating an impressive capability to cope with very large ontologies. This work consists of two parts. In the first, we harness the growing ability of DL reasoners to solve model checking problems. We show how DL can serve as a natural setting for representing and solving a model checking problem, and present a variety of encodings that translate such problems into consistency queries in DL. Experimental results, using the Description Logic reasoner FaCT++, demonstrate that for some systems and properties our method can outperform existing ones. In the second part we address a different aspect of model checking.
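To illustrate the counterexample mechanism the abstract describes, here is a minimal explicit-state sketch (not the BDD-, SAT-, or DL-based machinery the thesis studies): a breadth-first search checks an invariant over a toy transition system and, when the invariant fails, reconstructs the violating computation as a counterexample path. The model, property, and function names are hypothetical and chosen for illustration only.

```python
from collections import deque

def check_invariant(init, transitions, invariant):
    """Explicit-state BFS: check that `invariant` holds in every state
    reachable from `init`; on failure, return a counterexample path."""
    parent = {s: None for s in init}
    queue = deque(init)
    while queue:
        s = queue.popleft()
        if not invariant(s):
            # Reconstruct the path from an initial state to the bad state.
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return list(reversed(path))  # the counterexample
        for t in transitions.get(s, []):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None  # invariant holds in all reachable states

# Toy model: a counter that is supposed never to reach 3.
trans = {0: [1], 1: [2], 2: [3], 3: [0]}
print(check_invariant([0], trans, lambda s: s != 3))  # → [0, 1, 2, 3]
```

The returned path plays the role of the counterexample: a concrete computation of the model demonstrating the failure of the property.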
When a specification fails to hold in a model and a counterexample is presented to the user, the counterexample may itself be complex and difficult to understand. We propose an automatic technique to find the computation steps, and their associated variable values, that are of particular importance in generating the counterexample. We use the notion of causality to formally define a set of causes for the failure of the specification on the given counterexample. We give a linear-time algorithm to detect the causes, and we demonstrate how these causes can be presented to the user as a visual explanation of the failure.
Program: Computer Science
Department: School of Computer Science
Degree: Doctor of Philosophy
URI: http://hdl.handle.net/10012/4485
Appears in Collections: Electronic Theses and Dissertations (UW); Faculty of Mathematics Theses and Dissertations
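The cause-finding idea can be sketched with a brute-force counterfactual test. This is a simplification: the thesis gives a formal causality-based definition and a linear-time detection algorithm, whereas the sketch below simply reports a (step, variable) pair as a cause whenever flipping that single boolean value makes the failed property hold on the altered trace. The trace, property, and names are illustrative assumptions, not taken from the thesis.

```python
def find_causes(trace, prop):
    """Counterfactual cause detection (simplified): a pair (step, var)
    is reported as a cause of the failure when flipping that single
    boolean value makes the property hold on the altered trace."""
    assert not prop(trace), "expected a failing trace"
    causes = []
    for i, state in enumerate(trace):
        for var, val in state.items():
            altered = [dict(s) for s in trace]  # copy, then flip one value
            altered[i][var] = not val
            if prop(altered):
                causes.append((i, var))
    return causes

# Failed property: "every req is eventually followed by an ack".
trace = [{"req": True, "ack": False},
         {"req": False, "ack": False},
         {"req": False, "ack": False}]
prop = lambda t: all(not s["req"] or any(u["ack"] for u in t[i:])
                     for i, s in enumerate(t))
print(find_causes(trace, prop))  # → [(0, 'req'), (0, 'ack'), (1, 'ack'), (2, 'ack')]
```

Highlighting exactly these (step, variable) pairs on the trace is the kind of visual explanation of the failure that the second part of the thesis proposes.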

Files in This Item:

File: thesis.pdf
Size: 665.93 kB
Format: Adobe PDF

This item is protected by original copyright.

