Title: Approximation of some AI problems
Author: Verbeurgt, Karsten A.
Type: Doctoral Thesis
Date: 1998
URI: http://hdl.handle.net/10012/347
Format: application/pdf
Language: English
Rights: Copyright 1998, Karsten A. Verbeurgt. All rights reserved.
Provenance: Harvested from Collections Canada

Abstract:

The work of this thesis is motivated by the apparent computational difficulty of practical problems from artificial intelligence. Herein, we study two particular AI problems: the constraint satisfaction problem of coherence, and the machine learning problem of learning a sub-class of monotone DNF formulas from examples. For both of these problems, we apply approximation techniques to obtain near-optimal solutions in polynomial time, thus trading off solution quality for computational tractability.

The constraint satisfaction problem we study is the coherence problem, a restricted version of binary constraint satisfaction. For this problem, we apply semidefinite programming techniques to derive a 0.878-approximation algorithm. We also extend this result to the problem of settling a neural network into a stable state.

The approximation model we use for the machine learning problem is the Probably Approximately Correct (PAC) model, due to Valiant [Val 84]. This is a theoretical model for concept learning from examples, where the examples are drawn at random from a fixed probability distribution. Within this model, we consider the learnability of sub-classes of monotone DNF formulas under the uniform distribution. We introduce the classes of one-read-once monotone DNF formulas and factorable read-once monotone DNF formulas, both of which are generalizations of the well-studied read-once DNF formulas, and give learnability results for these classes.
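
For reference, the PAC criterion that the abstract invokes can be stated formally. The following is the standard formulation of Valiant's model, not wording taken from the thesis itself, and the symbols (concept class $\mathcal{C}$, target $c$, distribution $D$, hypothesis $h$, parameters $\varepsilon$ and $\delta$) are generic placeholders:

A class $\mathcal{C}$ is PAC-learnable if there is an algorithm that, for every target $c \in \mathcal{C}$, every distribution $D$ over the instance space, and every $\varepsilon, \delta \in (0,1)$, when given random labeled examples $(x, c(x))$ with $x \sim D$, outputs a hypothesis $h$ satisfying
\[
\Pr\Bigl[\ \Pr_{x \sim D}\bigl[\,h(x) \neq c(x)\,\bigr] \le \varepsilon \ \Bigr] \ \ge\ 1 - \delta,
\]
using time and a number of examples polynomial in $1/\varepsilon$, $1/\delta$, and the size of $c$. The thesis studies this criterion in the special case where $D$ is the uniform distribution and $\mathcal{C}$ is a sub-class of monotone DNF formulas.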