Error Detection in Number-Theoretic and Algebraic Algorithms
Date
2008-08-26T18:46:06Z
Authors
Vasiga, Troy Michael John
Publisher
University of Waterloo
Abstract
CPUs are unreliable: at any point in a computation, a bit may be altered with some (small) probability. This probability may seem negligible, but for large calculations (i.e., months of CPU time), the likelihood of an error being introduced becomes increasingly significant. Motivated by this fact, this thesis defines a statistical measure called robustness, and measures the robustness of several number-theoretic and algebraic algorithms.
Consider an algorithm A that implements a function f, such that f has range O and A has range O', where O ⊆ O'. That is, the algorithm may produce results that are not in the possible range of the function. Specifically, this thesis classifies the output of A into one of three categories (illustrated in the sketch after the list):
1. Correct and feasible -- the algorithm computes the correct result (which necessarily lies in O);
2. Incorrect and feasible -- the algorithm computes an incorrect result, but this output lies in O;
3. Incorrect and infeasible -- the algorithm computes an incorrect result, and this output lies in O' \ O.
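For concreteness, the three categories can be phrased as a small decision procedure. The Python sketch below is illustrative only and is not taken from the thesis; `true_value` (standing in for f(x)) and `is_feasible` (a membership test for the range O) are hypothetical parameters.

```python
# Illustrative sketch only: classify one observed output of an algorithm A
# that is supposed to compute f(x).  `true_value` plays the role of f(x) and
# `is_feasible` is a membership test for the function's range O; both are
# hypothetical stand-ins, not part of the thesis.
from enum import Enum
from typing import Any, Callable

class Outcome(Enum):
    CORRECT_FEASIBLE = "correct and feasible"          # output == f(x), so it lies in O
    INCORRECT_FEASIBLE = "incorrect and feasible"      # wrong, but still in O
    INCORRECT_INFEASIBLE = "incorrect and infeasible"  # wrong, and in O' \ O

def classify(output: Any, true_value: Any,
             is_feasible: Callable[[Any], bool]) -> Outcome:
    if output == true_value:
        return Outcome.CORRECT_FEASIBLE
    if is_feasible(output):
        return Outcome.INCORRECT_FEASIBLE
    return Outcome.INCORRECT_INFEASIBLE

# Example: a primality test whose feasible outputs are booleans; anything
# else (e.g. a corrupted sentinel value) falls in O' \ O.
print(classify(True, True, lambda o: isinstance(o, bool)))   # correct and feasible
print(classify(False, True, lambda o: isinstance(o, bool)))  # incorrect and feasible
print(classify(42, True, lambda o: isinstance(o, bool)))     # incorrect and infeasible
```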
Using probabilistic measures, we apply this classification scheme to quantify the robustness of algorithms for computing primality (i.e., the Lucas-Lehmer and Pepin tests), group order and quadratic residues.
Moreover, we show that typically there will be an "error threshold" above which the algorithm is unreliable (that is, it will rarely give the correct result).
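As one concrete example of the algorithms named above, the Lucas-Lehmer test decides whether the Mersenne number M_p = 2^p - 1 is prime, for an odd prime p, by iterating s_{k+1} = s_k^2 - 2 (mod M_p) from s_0 = 4 and checking whether s_{p-2} ≡ 0 (mod M_p). The sketch below is only the textbook test; it does not include the fault model or the robustness analysis developed in the thesis.

```python
# Textbook Lucas-Lehmer primality test for Mersenne numbers, included only to
# illustrate the kind of long iterative computation whose robustness the
# thesis analyses; no fault injection or robustness measurement is modelled.

def lucas_lehmer(p: int) -> bool:
    """Return True iff M_p = 2**p - 1 is prime (p must be an odd prime)."""
    m = (1 << p) - 1            # the Mersenne number M_p
    s = 4                       # s_0 = 4
    for _ in range(p - 2):      # p - 2 squarings: s <- s^2 - 2 (mod M_p)
        s = (s * s - 2) % m
    return s == 0               # M_p is prime iff s_{p-2} ≡ 0 (mod M_p)

# M_7 = 127 is prime; M_11 = 2047 = 23 * 89 is not.
assert lucas_lehmer(7)
assert not lucas_lehmer(11)
```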
Keywords
algorithm analysis, error detection, primality testing