Computer Science
http://hdl.handle.net/10012/9930
Sun, 22 Oct 2017 02:43:05 GMT
http://hdl.handle.net/10012/12557
Halfway to Halfspace Testing
Harms, Nathaniel
In this thesis I study the problem of testing halfspaces under arbitrary probability distributions, using only random samples. A halfspace, or linear threshold function, is a boolean function f : Rⁿ → {±1} defined as the sign of a linear function; that is,
f(x) = sign(Σᵢ wᵢxᵢ - θ)
where we refer to w ∈ Rⁿ as a weight vector and θ ∈ R as a threshold. These functions have been studied intensively since the middle of the 20th century; they appear in many places, including social choice theory (the theory of voting rules), circuit complexity theory, machine learning theory, hardness of approximation, and the analysis of boolean functions.
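As a concrete illustration (my own sketch, not part of the thesis), a halfspace can be evaluated directly from its weight vector and threshold; the majority function from social choice theory is the canonical example, obtained with unit weights and threshold zero. Here sign(0) is taken as +1 by convention.

```python
def halfspace(w, theta):
    """Return the boolean function f(x) = sign(sum_i w_i * x_i - theta),
    mapping R^n to {+1, -1}, with sign(0) = +1 by convention."""
    def f(x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) - theta >= 0 else -1
    return f

# Majority on {+1, -1}^3: a halfspace with w = (1, 1, 1), theta = 0.
maj = halfspace([1, 1, 1], 0)
print(maj([1, 1, -1]))   # +1: two of three votes are +1
print(maj([-1, -1, 1]))  # -1: two of three votes are -1
```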
The problem of testing halfspaces, in the sense of property testing, is to design an algorithm that, with high probability, decides whether an unknown function f is a halfspace or far from every halfspace, using as few labelled examples (x, f(x)) as possible. In this work I focus on the problem of testing halfspaces using only random examples drawn from an arbitrary distribution, so the algorithm cannot choose the points it receives. This is in contrast with previous work on the problem, where the algorithm could query points of its choice and the distribution was assumed to be uniform over the boolean hypercube.
Towards a solution to this problem I present an algorithm that works for rotationally invariant probability distributions (under reasonable conditions), using roughly O(√n) random examples, which is close to the known lower bound of Ω(√n/√log n). I further develop the algorithm to work for mixtures of two such rotationally invariant distributions and provide a partial analysis. I also survey related machine learning results, and conclude with a survey of the theory of halfspaces over the boolean hypercube, which has recently received much attention.
Wed, 18 Oct 2017 00:00:00 GMT
http://hdl.handle.net/10012/12532
Quotient Complexity of Bifix-, Factor-, and Subword-Free Regular Languages
Brzozowski, Janusz A.; Jirásková, Galina; Li, Baiyu; Smith, Joshua
A language $L$ is prefix-free if whenever words $u$ and $v$ are in $L$ and $u$ is a prefix of $v$, then $u = v$. Suffix-, factor-, and subword-free languages are defined similarly, where by "subword" we mean "subsequence", and a language is bifix-free if it is both prefix- and suffix-free. These languages have important applications in coding theory. The quotient complexity of an operation on regular languages is defined as the number of left quotients of the result of the operation as a function of the numbers of left quotients of the operands. The quotient complexity of a regular language is the same as its state complexity, which is the number of states in the complete minimal deterministic finite automaton accepting the language. The state/quotient complexity of operations in the classes of prefix- and suffix-free languages has been studied before. Here, we study the complexity of operations in the classes of bifix-, factor-, and subword-free languages. We find tight upper bounds on the quotient complexity of intersection, union, difference, symmetric difference, concatenation, star, and reversal in these three classes of languages.
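The freeness conditions above are straightforward to check directly on a finite language; a minimal Python sketch (my own illustration, not part of the paper):

```python
def is_prefix_free(lang):
    """u, v in L with u a prefix of v implies u == v."""
    return not any(u != v and v.startswith(u) for u in lang for v in lang)

def is_suffix_free(lang):
    """u, v in L with u a suffix of v implies u == v."""
    return not any(u != v and v.endswith(u) for u in lang for v in lang)

def is_bifix_free(lang):
    """Bifix-free: both prefix-free and suffix-free."""
    return is_prefix_free(lang) and is_suffix_free(lang)

print(is_prefix_free({"ab", "ba"}))  # True: neither word is a prefix of the other
print(is_prefix_free({"a", "ab"}))   # False: "a" is a proper prefix of "ab"
print(is_bifix_free({"ab", "ba"}))   # True
```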
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/10012/12530
Complexity of Right-Ideal, Prefix-Closed, and Prefix-Free Regular Languages
Brzozowski, Janusz A.; Sinnamon, Corwin
A language L over an alphabet Σ is prefix-convex if, for any words x, y, z ∈ Σ*, whenever x and xyz are in L, then so is xy. Prefix-convex languages include right-ideal, prefix-closed, and prefix-free languages as special cases. We examine complexity properties of these special prefix-convex languages. In particular, we study the quotient/state complexity of boolean operations, product (concatenation), star, and reversal, the size of the syntactic semigroup, and the quotient complexity of atoms. For binary operations we use arguments with different alphabets when appropriate; this leads to higher tight upper bounds than those obtained with equal alphabets. We exhibit right-ideal, prefix-closed, and prefix-free languages that meet the complexity bounds for all the measures listed above.
Sun, 01 Jan 2017 00:00:00 GMT
http://hdl.handle.net/10012/12531
Quotient Complexities of Atoms in Regular Ideal Languages
Brzozowski, Janusz A.; Davies, Sylvie
A (left) quotient of a language L by a word w is the language w⁻¹L = {x | wx ∈ L}. The quotient complexity of a regular language L is the number of quotients of L; it is equal to the state complexity of L, which is the number of states in a minimal deterministic finite automaton accepting L. An atom of L is an equivalence class of the relation in which two words are equivalent if for each quotient, they either are both in the quotient or both not in it; hence it is a non-empty intersection of complemented and uncomplemented quotients of L. A right (respectively, left and two-sided) ideal is a language L over an alphabet Σ that satisfies L = LΣ* (respectively, L = Σ*L and L = Σ*LΣ*). We compute the maximal number of atoms and the maximal quotient complexities of atoms of right, left and two-sided regular ideals.
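For a finite language the distinct left quotients can be enumerated by brute force, which directly yields the quotient/state complexity. A small Python sketch under that restriction (my own illustration; the paper itself concerns regular ideal languages, which are infinite):

```python
from itertools import product

def left_quotient(lang, w):
    """w^(-1)L = { x : wx in L }, for a finite language given as a set of strings."""
    return frozenset(x[len(w):] for x in lang if x.startswith(w))

def quotient_complexity(lang, alphabet):
    """Count the distinct left quotients of a finite language.

    Every quotient by a word longer than the longest word in L is empty,
    so enumerating words up to that length plus one (to be sure the empty
    quotient is reached) finds all distinct quotients."""
    max_len = max(map(len, lang), default=0) + 1
    quotients = set()
    for n in range(max_len + 1):
        for w in product(alphabet, repeat=n):
            quotients.add(left_quotient(lang, "".join(w)))
    return len(quotients)

L = {"ab", "abb", "b"}
print(left_quotient(L, "a"))        # frozenset({'b', 'bb'})
print(quotient_complexity(L, "ab")) # 5: the minimal complete DFA has 5 states
```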
Thu, 01 Jan 2015 00:00:00 GMT