University of Waterloo > Electronic Theses and Dissertations (UW)
Title: Symmetry Induction in Computational Intelligence
Authors: Ventresca, Mario
Keywords: Symmetry Induction
Approved Date: 6-Nov-2009
Date Submitted: 2009
Abstract:

Symmetry has been a very useful tool for researchers in various scientific fields. At its most
basic, symmetry refers to the invariance of an object under some transformation, or set of
transformations. Usually one searches for an existing symmetry within given data, structures or
concepts, and uses that information to improve algorithm performance or compress the search space.

This thesis examines the effects of imposing or inducing symmetry on a search space. That is, the
question being asked is whether only existing symmetries can be useful, or whether inducing a
symmetry, based on an intuitive definition of symmetry over the evaluation function, can also be
of use. Within the context of optimization, symmetry induction as defined in this thesis has the
effect of equating the evaluations of a given set of objects.
Group theory is employed to explore possible symmetrical structures inherent in a search space.
Additionally, conditions under which a symmetry can be induced on the search space are examined.
The notion of a neighborhood structure then leads to opposition-based computing, which aims to
induce a symmetry of the evaluation function; in this context, the search space can be seen as
having a symmetry imposed on it. To be useful, it is shown that an opposite map must be defined
such that it equates elements of the search space which have a relatively large difference in
their respective evaluations. Using this idea, a general framework for employing opposition-based
ideas is proposed. To show the efficacy of these ideas, the framework is applied to popular
computational intelligence algorithms in the areas of Monte Carlo optimization, estimation of
distribution algorithms and neural network learning.
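To make the notion of an opposite map concrete, here is a minimal Python sketch (an illustration, not code from the thesis): on a box-constrained real search space, one commonly used opposite map pairs each point x in [a, b] with a + b - x, and an algorithm may evaluate both members of the pair and keep the better one. The function names and the example evaluation function are hypothetical.

```python
# Minimal sketch of an opposite map on a box-constrained real search
# space (illustrative only; the thesis develops general conditions on
# when such a map is useful). Each point x in [a, b] is paired with
# its opposite a + b - x, and the better of the pair is kept.

def opposite(x, a, b):
    """Component-wise opposite of x within the box [a, b]."""
    return [ai + bi - xi for xi, ai, bi in zip(x, a, b)]

def shifted_sphere(x):
    """Hypothetical evaluation function: sum of (x_i - 2)^2."""
    return sum((xi - 2.0) ** 2 for xi in x)

a, b = [-5.0, -5.0], [5.0, 5.0]
x = [4.0, -3.0]
x_op = opposite(x, a, b)                 # [-4.0, 3.0]
best = min(x, x_op, key=shifted_sphere)  # keep the better of the pair
```

Note that a map like this is only useful when paired points tend to differ substantially in evaluation, which is exactly the condition the thesis identifies.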
The first example application focuses on simulated annealing, a popular Monte Carlo optimization
algorithm. At each iteration, a temporary symmetry is induced over the neighborhood region by
considering opposite neighbors. This simple algorithm is benchmarked on common real-valued
optimization problems and compared against traditional simulated annealing as well as a randomized
version. The results highlight improvements in accuracy, reliability and convergence rate. An
application to image thresholding further confirms these results.
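The opposite-neighbor idea can be sketched as follows (a hypothetical illustration; the exact definition of opposite neighbors and other algorithmic details in the thesis may differ). At each iteration, a random neighbor and its reflection through the current point are both evaluated, and the better of the two becomes the Metropolis candidate:

```python
import math
import random

def sa_with_opposite_neighbors(f, x0, lo, hi, step=0.5, T0=1.0,
                               alpha=0.95, iters=2000, seed=0):
    """Simulated annealing with opposite neighbors (sketch). Each
    iteration also evaluates the opposite of the proposed neighbor,
    obtained by reflecting it through the current point, and the
    better of the pair becomes the Metropolis candidate."""
    rng = random.Random(seed)
    x, fx, T = list(x0), f(x0), T0
    best, fbest = list(x), fx
    for _ in range(iters):
        # Random neighbor inside the step-size box, clipped to [lo, hi].
        y = [min(hi, max(lo, xi + rng.uniform(-step, step))) for xi in x]
        # Opposite neighbor: reflect y through the neighborhood center x.
        y_op = [min(hi, max(lo, 2.0 * xi - yi)) for xi, yi in zip(x, y)]
        cand = min(y, y_op, key=f)       # better of the pair
        fc = f(cand)
        # Standard Metropolis acceptance with geometric cooling.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        T *= alpha
    return best, fbest

# Example: minimize a shifted sphere on [-5, 5]^2.
sol, val = sa_with_opposite_neighbors(
    lambda v: sum((vi - 1.0) ** 2 for vi in v), [4.0, -4.0], -5.0, 5.0)
```

The reflection makes the induced symmetry temporary: it is defined relative to the current point and disappears once the chain moves on.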
Another example application, population-based incremental learning, is rooted in estimation of
distribution algorithms. A major problem with these techniques is a rapid loss of diversity within
the samples after a relatively low number of iterations. The opposite sample is introduced as a
remedy to this problem. After proving an increased diversity, a new probability-update procedure
is designed. This opposition-based version of the algorithm is benchmarked on common binary
optimization problems exhibiting the deceptiveness and attractive basins characteristic of
difficult real-world problems. Experiments reveal improvements in diversity, accuracy, reliability
and convergence rate over the traditional approach. Ten instances of the traveling salesman
problem and six image thresholding problems are used to further confirm these improvements.
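A rough sense of the opposite-sample idea can be given with a toy PBIL loop (hypothetical code; the thesis designs a more involved probability-update procedure). Alongside each sampled bit string, its bitwise complement — one natural opposite — is also evaluated, so each generation covers both a sample and its far side of the space:

```python
import random

def pbil_with_opposite_samples(f, n_bits, pop=20, lr=0.1,
                               iters=100, seed=1):
    """PBIL sketch with opposite samples: alongside each sampled bit
    string s, its bitwise complement (one natural opposite) is also
    evaluated, and the probability vector moves toward the best string
    of the generation. Illustrative only."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                        # probability vector
    best, fbest = None, float("-inf")
    for _ in range(iters):
        gen_best, gen_fbest = None, float("-inf")
        for _ in range(pop):
            s = [1 if rng.random() < pi else 0 for pi in p]
            for cand in (s, [1 - bit for bit in s]):  # sample + opposite
                fc = f(cand)
                if fc > gen_fbest:
                    gen_best, gen_fbest = cand, fc
        # Standard PBIL update toward the generation's best string.
        p = [(1.0 - lr) * pi + lr * bi for pi, bi in zip(p, gen_best)]
        if gen_fbest > fbest:
            best, fbest = gen_best, gen_fbest
    return best, fbest

# Example: maximize OneMax (count of ones) on 16 bits.
sol, val = pbil_with_opposite_samples(lambda s: sum(s), 16)
```

Because a string and its complement are maximally distant in Hamming space, evaluating both keeps the sampled population from collapsing onto one region even as the probability vector converges.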
Finally, gradient-based learning for feedforward neural networks is improved using
opposition-based ideas. The opposite transfer function is presented as a simple adaptive neuron
which allows for efficient jumps in weight space. It is shown that each possible opposite network
represents a unique input-output mapping, each having an associated effect on the numerical
conditioning of the network. Experiments confirm the potential of opposite networks during the
pre- and early training stages. A heuristic for efficiently selecting one opposite network per
epoch is presented. Benchmarking focuses on common classification problems and reveals
improvements in accuracy, reliability, convergence rate and generalization ability over common
backpropagation variants. To further show the potential, the heuristic is applied to resilient
propagation, where similar improvements are also found.
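One way to see why an opposite network amounts to a jump in weight space: for an odd activation such as tanh, a natural opposite transfer function is its negation, and applying it at a hidden unit reproduces the output of a network whose incoming weights to that unit have been negated — no weight update required. A small numerical check (hypothetical code, not from the thesis; the thesis' exact definition of the opposite transfer function may differ):

```python
import math
import random

def mlp_forward(x, W, v, opposite=()):
    """Forward pass of a one-hidden-layer tanh network. Hidden units
    whose index is in `opposite` use the opposite transfer function
    -tanh (one natural choice for an odd activation; illustrative)."""
    h = []
    for j, wj in enumerate(W):
        a = math.tanh(sum(wi * xi for wi, xi in zip(wj, x)))
        h.append(-a if j in opposite else a)
    return sum(vj * hj for vj, hj in zip(v, h))

rng = random.Random(0)
x = [0.3, -1.2]
W = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
v = [rng.uniform(-1, 1) for _ in range(3)]

# Using the opposite transfer function at hidden unit 1 matches the
# output of a network with that unit's incoming weights negated --
# i.e. evaluating an opposite network is a jump to a different point
# in weight space, obtained for free.
W_neg = [[-w for w in wj] if j == 1 else wj for j, wj in enumerate(W)]
y_op = mlp_forward(x, W, v, opposite={1})
y_jump = mlp_forward(x, W_neg, v)
```

With one binary choice per hidden unit, a network with H hidden units admits 2^H such opposite configurations, which is why a per-epoch selection heuristic is needed rather than exhaustive evaluation.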
Program: Systems Design Engineering
Department: Systems Design Engineering
Degree: Doctor of Philosophy
Appears in Collections: Faculty of Engineering Theses and Dissertations; Electronic Theses and Dissertations (UW)
All items in UWSpace are protected by copyright, with all rights reserved.