Approximation of some AI problems
Date
1998
Authors
Verbeurgt, Karsten A.
Publisher
University of Waterloo
Abstract
The work of this thesis is motivated by the apparent computational difficulty of practical problems from artificial intelligence. Herein, we study two particular AI problems: the constraint satisfaction problem of coherence, and the machine learning problem of learning a sub-class of monotone DNF formulas from examples. For both of these problems, we apply approximation techniques to obtain near-optimal solutions in polynomial time, thus trading off solution quality for computational tractability.
The constraint satisfaction problem we study is the coherence problem, which is a restricted version of binary constraint satisfaction. For this problem, we apply semidefinite programming techniques to derive a 0.878-approximation algorithm. We also show extensions of this result to the problem of settling a neural network to a stable state.
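The abstract does not spell out the algorithm itself, so the sketch below only illustrates the general technique behind 0.878-approximation guarantees: a semidefinite relaxation followed by random-hyperplane rounding, in the style of Goemans and Williamson. It uses MAX-CUT as a stand-in problem; the small example graph, the use of the cvxpy modelling library, and all variable names are assumptions for illustration, not the thesis's formulation of coherence.

import numpy as np
import cvxpy as cp

# Hypothetical instance: MAX-CUT on a small triangle graph (weights W).
W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
n = W.shape[0]

# Semidefinite relaxation: X is positive semidefinite with unit diagonal;
# X[i, j] plays the role of the inner product <v_i, v_j> of unit vectors
# relaxing the +/-1 assignment variables.
X = cp.Variable((n, n), PSD=True)
objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)
problem = cp.Problem(objective, [cp.diag(X) == 1])
problem.solve()

# Recover vectors v_i with V @ V.T approximately equal to X
# (eigendecomposition; clip tiny negative eigenvalues from numerical error).
vals, vecs = np.linalg.eigh(X.value)
V = vecs @ np.diag(np.sqrt(np.clip(vals, 0, None)))

# Random-hyperplane rounding: set x_i = sign(<v_i, r>) for a random direction r.
r = np.random.randn(n)
x = np.sign(V @ r)
cut_value = np.sum(W * (1 - np.outer(x, x))) / 4
print("SDP bound:", problem.value, "  rounded cut:", cut_value)

In expectation, the rounded cut achieves at least about 0.878 times the SDP optimum, which is the source of the approximation ratio quoted above; how the thesis adapts this machinery to coherence and to settling neural networks is detailed in the thesis itself.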
The approximation model we use for the machine learning problem is the Probably Approximately Correct (PAC) model, due to Valiant [Val 84]. This is a theoretical model for concept learning from examples, where the examples are drawn at random from a fixed probability distribution. Within this model, we consider the learnability of sub-classes of monotone DNF formulas under the uniform distribution. We introduce the classes of one-read-once monotone DNF formulas and factorable read-once monotone DNF formulas, both of which generalize the well-studied read-once DNF formulas, and give learnability results for these classes.
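For reference, since the abstract does not restate it, the standard PAC criterion can be summarized as follows (the notation here is mine, not the thesis's): a concept class C over {0,1}^n is PAC-learnable under a distribution D if there is an algorithm that, for every target concept c in C and all ε, δ in (0,1), when given labelled examples (x, c(x)) with x drawn independently from D, outputs with probability at least 1 - δ a hypothesis h satisfying

    Pr_{x ~ D}[ h(x) ≠ c(x) ] ≤ ε,

using time and a number of examples polynomial in n, 1/ε, and 1/δ. The uniform-distribution results described above fix D to the uniform distribution on {0,1}^n, rather than requiring the distribution-free guarantee of the general model.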
Keywords
Harvested from Collections Canada