Systems Design Engineering
Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/9914
This is the collection for the University of Waterloo's Department of Systems Design Engineering.
Research outputs are organized by type (e.g. Master's Thesis, Article, Conference Paper).
Waterloo faculty, students, and staff can contact us or visit the UWSpace guide to learn more about depositing their research.
Browsing Systems Design Engineering by Issue Date
Now showing 1 - 20 of 754
Item: Color Image Edge Detection and Segmentation: A Comparison of the Vector Angle and the Euclidean Distance Color Similarity Measures (University of Waterloo, 1999)
Wesolkowski, Slawomir Bogumil
This work is based on Shafer's Dichromatic Reflection Model as applied to color image formation. The color spaces RGB, XYZ, CIELAB, CIELUV, rgb, l1l2l3, and the new h1h2h3 color space are discussed from this perspective. Two color similarity measures are studied: the Euclidean distance and the vector angle. The work in this thesis is motivated from a practical point of view by several shortcomings of current methods. The first problem is the inability of all known methods to properly segment objects from the background without interference from object shadows and highlights. The second shortcoming is that the vector angle has not been examined as a distance measure capable of directly evaluating hue similarity without considering intensity, especially in RGB. Finally, there is inadequate research on combining hue- and intensity-based similarity measures to improve color similarity calculations, given the advantages of each color distance measure. These distance measures were used for two image understanding tasks: edge detection, and one strategy for color image segmentation, namely color clustering. Edge detection algorithms using Euclidean distance and vector angle similarity measures, as well as their combinations, were examined. The list of algorithms comprises the modified Roberts operator, the Sobel operator, the Canny operator, the vector gradient operator, and the 3x3 difference vector operator. Pratt's Figure of Merit is used for a quantitative comparison of edge detection results. Color clustering was examined using the k-means (based on the Euclidean distance) and Mixture of Principal Components (based on the vector angle) algorithms. A new quantitative image segmentation evaluation procedure is introduced to assess the performance of both algorithms. Quantitative and qualitative results on many color images (artificial, staged scenes and natural scene images) indicate good edge detection performance using a vector version of the Sobel operator on the h1h2h3 color space. The results using combined hue- and intensity-based difference measures show a slight qualitative improvement over using each measure independently in RGB. Quantitative and qualitative results for image segmentation on the same set of images suggest that the best image segmentation results are obtained using the Mixture of Principal Components algorithm on the RGB, XYZ and rgb color spaces. Finally, poor color clustering results in the h1h2h3 color space suggest that some assumptions in deriving a simplified version of the Dichromatic Reflection Model might have been violated.
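A minimal sketch of the two colour similarity measures compared above, assuming plain RGB vectors; the function names and the shadow example are illustrative, not drawn from the thesis:

```python
# Euclidean distance vs. vector angle as colour similarity measures.
# The vector angle compares hue (direction in RGB space) and largely
# ignores intensity, which is why it is less sensitive to shadows.
import numpy as np

def euclidean_distance(c1, c2):
    """Intensity-sensitive dissimilarity between two RGB vectors."""
    return np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float))

def vector_angle(c1, c2):
    """Angle between colour vectors: hue dissimilarity, intensity-invariant."""
    a, b = np.asarray(c1, float), np.asarray(c2, float)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

# The same surface in light and in shadow (same hue, halved intensity):
lit, shadow = (200, 100, 50), (100, 50, 25)
print(euclidean_distance(lit, shadow))  # large: reacts to the shadow
print(vector_angle(lit, shadow))        # ~0 rad: ignores the shadow
```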
Item: A Predictive Control Method for Human Upper-Limb Motion: Graph-Theoretic Modelling, Dynamic Optimization, and Experimental Investigations (University of Waterloo, 2000)
Seth, Ajay
Optimal control methods are applied to mechanical models in order to predict the control strategies in human arm movements. Optimality criteria are used to determine unique controls for a biomechanical model of the human upper-limb with redundant actuators. The motivation for this thesis is to provide a non-task-specific method of motion prediction as a tool for movement researchers and for controlling human models within virtual prototyping environments.
The current strategy is based on determining the muscle activation levels (control signals) necessary to perform a task that optimizes several physical determinants of the model, such as muscular and joint stresses, as well as performance timing. Currently, the initial and final location, orientation, and velocity of the hand define the desired task. Several models of the human arm were generated using a graph-theoretical method in order to take advantage of similar system topology through the evolution of arm models. Within this framework, muscles were modelled as non-linear actuator components acting between origin and insertion points on rigid body segments. Activation levels of the muscle actuators are considered the control inputs to the arm model. Optimization of the activation levels is performed via a hybrid genetic algorithm (GA) and sequential quadratic programming (SQP) technique, which provides a globally optimal solution without sacrificing numerical precision, unlike traditional genetic algorithms. Advantages of the underlying genetic algorithm approach are that it does not require any prior knowledge of what might be a 'good' approximation in order for the method to converge, and it enables several objectives to be included in the evaluation of the fitness function. Results indicate that this approach can predict optimal strategies when compared to benchmark minimum-time maneuvers of a robot manipulator. The formulation and integration of the aforementioned components into a working model, and the simulation of reaching and lifting tasks, represent the bulk of the thesis. Results are compared to motion data collected in the laboratory from a test subject performing the same tasks. Discrepancies in the results are primarily due to model fidelity. However, more complex models are not evaluated due to the additional computational time required. The theoretical approach provides an excellent foundation, but further work is required to increase the computational efficiency of the numerical implementation before proceeding to more complex models.
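A minimal sketch of the hybrid GA-plus-SQP strategy described above; the objective function, population settings, and bounds are illustrative stand-ins for the thesis's muscle-activation fitness function:

```python
# Global-then-local optimisation: a small genetic algorithm coarsely
# searches the activation space, and SQP refines the best candidate.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def cost(x):
    """Placeholder fitness: the thesis uses stress and timing terms."""
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.sin(5 * x) ** 2)

def genetic_search(n_dim=6, pop=40, gens=50):
    P = rng.uniform(0, 1, (pop, n_dim))            # activations in [0, 1]
    for _ in range(gens):
        f = np.apply_along_axis(cost, 1, P)
        parents = P[np.argsort(f)[: pop // 2]]     # truncation selection
        children = parents[rng.integers(0, len(parents), pop // 2)]
        children = np.clip(children + rng.normal(0, 0.05, children.shape), 0, 1)
        P = np.vstack([parents, children])         # elitist replacement
    return P[np.argmin(np.apply_along_axis(cost, 1, P))]

x0 = genetic_search()                              # global stage (GA)
res = minimize(cost, x0, method="SLSQP",           # local stage (SQP)
               bounds=[(0, 1)] * len(x0))
print(res.x, res.fun)
```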
The synthesis and visions continue the trend away from formal sources towards experiences and beliefs. Engineering education research is in its infancy and shows few signs of maturing. There is no documented, common framing of engineering education, nor have there been any efforts in this regard. Few sources address broad issues, and those that do lack theoretical rigour. The visions for engineering education are simple amalgams of visions for the profession and for general higher education. The Department of Systems Design Engineering has enjoyed great past successes because of its unique vision that combines the theories of systems, complexity, and design with the discipline of engineering. Its recent decay can be traced to its faculty having collectively lost this vision. The original vision for Systems Design Engineering holds promise as a means to reinvent and reinvigorate both the engineering profession and engineering education. For this renaissance to be successful, a theoretically rigorous research programme assessing the past, present, and future of engineering and engineering education must be developed.

Item: Modelling Hysteresis in the Bending of Fabrics (University of Waterloo, 2002)
Lahey, Timothy
This thesis presents a model of fabric bending hysteresis. The hysteresis model is designed to reproduce the fabric bending measurements taken by the Kawabata Evaluation System (KES), and the model parameters can be derived directly from these property measurements. The advantage of using this technique is that it provides the ability to simulate a continuum of property curves. Results of the model and its components are compared and contrasted with experimental results for fabrics composed of different weaves and yarn types. An attempt to incorporate the bending model as part of a fabric drape simulation is also made.

Item: Evolutionary Design for Computational Visual Attention (University of Waterloo, 2003)
Bruce, Neil
A new framework for simulating the visual attention system in primates is introduced. The proposed architecture is an abstraction of existing approaches influenced by the work of Koch and Ullman, and Tompa. Each stage of the attentional hierarchy is chosen with consideration for both psychophysics and mathematical optimality. A set of attentional operators is derived that act on basic image channels of intensity, hue and orientation to produce maps representing the perceptual importance of each image pixel. The development of such operators is realized within the context of a genetic optimization. The model includes the notion of an information domain, where feature maps are transformed to a domain that more closely corresponds to the human visual system. A careful analysis of various issues including feature extraction, density estimation and data fusion is presented within the context of the visual attention problem.
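As one concrete example of an attentional operator of the kind the preceding abstract describes, here is a minimal centre-surround contrast map on an intensity channel; the kernel sizes and normalisation are assumptions, not the operators evolved in the thesis:

```python
# Centre-surround saliency: pixels whose small neighbourhood differs
# strongly from their larger surround are marked as important.
import numpy as np
from scipy.ndimage import uniform_filter

def center_surround_saliency(gray, center=3, surround=15):
    """Per-pixel importance: |mean over centre - mean over surround|."""
    g = gray.astype(float)
    c = uniform_filter(g, size=center)
    s = uniform_filter(g, size=surround)
    m = np.abs(c - s)
    return m / (m.max() + 1e-12)            # normalise to [0, 1]

img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0                     # a bright patch on darkness
sal = center_surround_saliency(img)
print(sal.max(), sal.mean())                # the patch region stands out
```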
Item: Dynamic Model of a Piano Action Mechanism (University of Waterloo, 2004)
Hirschkorn, Martin C.
While some attempts have been made to model the behaviour of the grand piano action (the mechanism that translates a key press into a hammer striking a string), most researchers have reduced the system to a simple model with little relation to the components of a real action. While such models are useful for certain applications, they are not appropriate as design tools for piano makers, since the model parameters have little physical meaning and must be calibrated from the behaviour of a real action. A new model for a piano action is proposed in this thesis.
The model treats each of the five main action components (key, whippen, jack, repetition lever, and hammer) as a rigid body. The action model also incorporates a contact model to determine the normal and friction forces at 13 locations between the contacting bodies. All parameters in the model are directly measured from the physical properties of individual action components, allowing the model to be used as a prototyping tool for actions that have not yet been built. To test whether the model can accurately predict the behaviour of a piano action, an experimental apparatus was built. Based around a keyboard from a Boston grand piano, the apparatus uses an electric motor to actuate the key, a load cell to measure applied force, and optical encoders and a high-speed video camera to measure the positions of the bodies. The apparatus was found to produce highly repeatable, reliable measurements of the action. The behaviour of the action model was compared to the measurements from the experimental apparatus for several types of key blows from a pianist. A qualitative comparison showed that the model could very accurately reproduce the behaviour of a real action for high-force blows. When the forces were lower, the behaviour of the action model was still reasonable, but some discrepancy from the experimental results could be seen. In order to reduce the discrepancy, several improvements to the action model were recommended. Rigid bodies, most importantly the key and hammer, should be replaced with flexible bodies. The normal contact model should be modified to account for the speed-independent behaviour of felt compression. Felt bushings that are modelled as perfect revolute joints should instead be modelled as flexible contact surfaces.
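A minimal sketch of a compliant contact law of the kind used at felt interfaces in piano-action models. The Hunt-Crossley-style force below, with a compression-rate term, is an illustrative assumption rather than the law identified in the thesis; indeed, the thesis recommends making felt compression largely speed-independent:

```python
# Nonlinear compliant contact with regularised Coulomb friction.
def felt_normal_force(x, xdot, k=1.0e6, p=1.5, c=0.5):
    """Normal force for compression x (m) and compression rate xdot (m/s)."""
    if x <= 0.0:                    # bodies not in contact
        return 0.0
    return k * x**p * (1.0 + c * xdot)

def coulomb_friction(fn, vt, mu=0.3, eps=1e-3):
    """Friction opposing tangential velocity vt, smoothed near vt = 0."""
    return -mu * fn * (vt / (abs(vt) + eps))

fn = felt_normal_force(1e-3, 0.05)  # 1 mm compression, closing at 5 cm/s
print(fn, coulomb_friction(fn, 0.02))
```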
Item: Determinants of Increased Energy Cost in Prosthetic Gait (University of Waterloo, 2004)
Peasgood, Michael
The physiological energy requirements of prosthetic gait in lower-limb amputees have been observed to be significantly greater than those of able-bodied subjects. However, existing models of energy flow in walking have not been very successful in explaining the reasons for this additional energy cost, and existing mechanical models fail to capture all of the components of energy cost involved in human walking. In this thesis, a new model is developed that estimates the physiological cost of walking for an able-bodied individual; the same cost of walking is then computed using a variation of the model that represents a bilateral below-knee amputee. The results indicate a higher physiological cost for the amputee model, suggesting that the model more accurately represents the relative metabolic costs of able-bodied and amputee walking gait. The model is based on a two-dimensional multi-body mechanical model that computes the joint torques required for a specified pattern of joint kinematics. In contrast to other models, the mechanical model includes a balance controller component that dynamically maintains the stability of the model during the walking simulation. This allows for analysis of many consecutive steps, and includes in the metabolic cost estimation the energy required to maintain balance. A muscle-stress-based calculation is used to determine the optimal muscle force distribution required to achieve the joint torques computed by the mechanical model; this calculation also serves as the measure of the metabolic energy cost of the walking simulation.
Finally, an optimization algorithm is applied to the joint kinematic patterns to find the optimal walking motion for the model. This approach allows the simulation to find the most energy-efficient gait for the model, mimicking the natural human tendency to walk with the most efficient stride length and speed.
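A minimal sketch of the muscle-stress-based force distribution step described above: given a joint torque from inverse dynamics, choose non-negative muscle forces that reproduce it while minimising summed squared muscle stress. The moment arms and cross-sectional areas are invented numbers:

```python
# Static optimisation of redundant muscle forces about one joint.
import numpy as np
from scipy.optimize import minimize

r = np.array([0.04, 0.03, -0.035])    # moment arms (m); extensor negative
pcsa = np.array([12.0, 8.0, 15.0])    # cross-sectional areas (cm^2)
tau_required = 20.0                   # joint torque from inverse dynamics (Nm)

def stress_cost(f):
    """Summed squared muscle stress, a proxy for metabolic cost."""
    return np.sum((f / pcsa) ** 2)

res = minimize(stress_cost, x0=np.ones(3), method="SLSQP",
               constraints=[{"type": "eq",
                             "fun": lambda f: r @ f - tau_required}],
               bounds=[(0, None)] * 3)   # muscles can only pull
print(res.x)                             # the optimal force distribution
```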
Item: E-Intelligence Form Design and Data Preprocessing in Health Care (University of Waterloo, 2004)
Pedarla, Padmaja
Clinical data systems continue to grow as a result of the proliferation of features that are collected and stored. Demands for accurate and well-organized clinical data have intensified due to the increased focus on cost-effectiveness and continuous quality improvement for better clinical diagnosis and prognosis. Clinical organizations have opportunities to use the information they collect, and their oversight role, to enhance health safety. Due to the continuous growth in the number of parameters accumulated in large databases, the capability of interactively mining patient clinical information is an increasingly urgent need in the clinical domain for providing accurate and efficient health care. Simple database queries fail to address this concern, in part because they cannot exploit the knowledge contained in these extremely complex databases. Data mining addresses this problem by analyzing the databases and making decisions based on the hidden patterns. The collection of data from multiple locations in clinical organizations leads to the loss of data in data warehouses. Data preprocessing is the part of knowledge discovery where the data is cleaned and transformed to produce accurate and efficient data mining results. Missing values in the databases result in the loss of useful data. Handling missing values and reducing noise in the data is necessary to obtain better quality mining results. This thesis explores the idea of either rejecting inappropriate values at the data entry level or suggesting various methods of handling missing values in the databases. The E-Intelligence form is designed to perform the data preprocessing tasks at different levels of the knowledge discovery process. Here, the minimum data set for mental health and a breast cancer data set are used as case studies. Once the missing values are handled, decision trees are used as the data mining tool to perform the classification of the diagnosis in the databases and to analyze the results. Given the ever-increasing use of mobile devices and the internet in health care, the analysis here also addresses issues relevant to hand-held computers, communication devices, and web-based applications for quick and better access.

Item: Multi-resolution Image Segmentation using Geometric Active Contours (University of Waterloo, 2004)
Tsang, Po-Yan
Image segmentation is an important step in image processing, with many applications such as pattern recognition, object detection, and medical image analysis. It is a technique that separates objects of interest from the background in an image. The geometric active contour is a recent image segmentation method that overcomes previous problems with snakes. It is an attractive method for medical image segmentation, as it is able to capture the object of interest in one continuous curve. The theory and implementation details of geometric active contours are discussed in this work. The robustness of the algorithm is tested through a series of tests, involving both synthetic images and medical images.
Curve leaking past boundaries is a common problem in cases of non-ideal edges. Noise is also problematic for the advancement of the curve. Smoothing and parameter selection are discussed as ways to help solve these problems. This work also explores the incorporation of the multi-resolution method of Gaussian pyramids into the algorithm. Multi-resolution methods, used extensively in the areas of denoising and edge selection, can help capture the spatial structure of an image. Results show that, as with multi-resolution methods applied to parametric active contours, the multi-resolution approach can greatly reduce the computation without sacrificing performance. In fact, results show that with successive smoothing and sub-sampling, performance often improves. Although smoothing and parameter adjustment help improve the performance of geometric active contours, the edge-based approach is still localized and the improvement is limited. Region-based approaches are recommended for further work on active contours.
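A minimal sketch of the Gaussian-pyramid construction behind the multi-resolution approach above: smooth, then subsample by two, so a contour can be evolved coarse-to-fine; the filter width and level count are arbitrary choices:

```python
# Gaussian pyramid: each level is a smoothed, half-resolution copy.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(img, levels=3, sigma=1.0):
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        smoothed = gaussian_filter(pyr[-1], sigma)  # suppress noise/aliasing
        pyr.append(smoothed[::2, ::2])              # subsample by two
    return pyr                                      # fine -> coarse

img = np.random.rand(128, 128)
for level in gaussian_pyramid(img):
    print(level.shape)        # (128, 128), (64, 64), (32, 32)
```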
Item: Reinforcement Learning for Parameter Control of Image-Based Applications (University of Waterloo, 2004)
Taylor, Graham
The significant amount of data contained in digital images presents barriers to methods of learning from the information they hold. Noise and the subjectivity of image evaluation further complicate such automated processes. In this thesis, we examine a particular area in which these difficulties are experienced. We attempt to control the parameters of a multi-step algorithm that processes visual information. A framework for approaching the parameter selection problem using reinforcement learning agents is presented as the main contribution of this research. We focus on the generation of the state and action spaces, as well as task-dependent reward. We first discuss the automatic determination of fuzzy membership functions as a specific case of the above problem. The entropy of a fuzzy event is used as a reinforcement signal. Membership functions representing brightness have been automatically generated for several images. The results show that the reinforcement learning approach is superior to an existing simulated-annealing-based approach. The framework has also been evaluated by optimizing ten parameters of the text detection for semantic indexing algorithm proposed by Wolf et al. Image features are defined and extracted to construct the state space. Generalization to reduce the state space is performed with the fuzzy ARTMAP neural network, offering much faster learning than the previous tabular implementation, despite a much larger state and action space. Difficulties in using a continuous action space are overcome by employing the DIRECT method for global optimization without derivatives. The chosen parameters are evaluated using metrics of recall and precision, and are shown to be superior to the parameters previously recommended. We further discuss the interplay between intermediate and terminal reinforcement.

Item: Application of Data Mining in Medical Applications (University of Waterloo, 2004)
Eapen, Arun George
Data mining is a relatively new field of research whose major objective is to acquire knowledge from large amounts of data. In medical and health care areas, due to regulations and the availability of computers, a large amount of data is becoming available.
On the one hand, practitioners are expected to use all this data in their work; at the same time, such a large amount of data cannot be processed by humans in a short time to make diagnoses, prognoses, and treatment schedules. A major objective of this thesis is to evaluate data mining tools in medical and health care applications and to develop a tool that can help make timely and accurate decisions. Two medical databases are considered, one for describing the various tools and the other as the case study. The first database is related to breast cancer and the second to the minimum data set for mental health (MDS-MH). The breast cancer database consists of 10 attributes and the MDS-MH data set of 455 attributes. As there are many data mining algorithms and tools available, we consider only a few tools to evaluate on these applications and develop classification rules that can be used in prediction. Our results indicate that for the major case study, namely the mental health problem, 70 to 80% accurate results are possible. A further extension of this work is to make classification rules available on mobile devices such as PDAs. Patient information is entered directly on the PDA, and the classification of these values takes place based on the rules stored on the PDA, providing real-time assistance to practitioners.
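A minimal sketch of the rule-extraction workflow the preceding abstract describes, using scikit-learn's bundled breast-cancer data as a stand-in for the thesis's databases; the depth limit is an arbitrary choice:

```python
# Train a decision tree and print its if/then rules, which are compact
# enough to be stored and applied on a hand-held device.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xtr, ytr)
print("accuracy:", tree.score(Xte, yte))
print(export_text(tree))    # the readable classification rules
```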
Item: Preserving Texture Boundaries for SAR Sea Ice Segmentation (University of Waterloo, 2004)
Jobanputra, Rishi
Texture analysis has been used extensively in the computer-assisted interpretation of SAR sea ice imagery. The provision of maps which distinguish relevant ice types is significant for monitoring global warming and for ship navigation. Due to the abundance of SAR imagery available, there exists a need to develop an automated approach for SAR sea ice interpretation. Grey level co-occurrence probability (GLCP) texture features are very popular for SAR sea ice classification. Although these features are used extensively in the literature, they have a tendency to erode and misclassify texture boundaries. Proposed is an advancement to the GLCP method which preserves texture boundaries during image segmentation. This method exploits the relationship a pixel has with its closest neighbors and weights the texture measurement accordingly. These texture features are referred to as WGLCP (weighted GLCP) texture features. In this research, the WGLCP and GLCP feature sets are compared in terms of boundary preservation, unsupervised segmentation ability, robustness to increasing boundary density, and computation time. The WGLCP method outperforms the GLCP method in all aspects except for computation time, where it suffers. From the comparative analysis, an inconsistency with the GLCP correlation statistic was observed, which motivated an investigative study into using this statistic for image segmentation. As the overall goal of the thesis is to improve SAR sea ice segmentation accuracy, the concepts developed from the study are applied to the image segmentation problem. The results indicate that for images with high-contrast boundaries, the GLCP correlation statistical feature decreases segmentation accuracy.
When comparing WGLCP and GLCP features for segmentation, the WGLCP features provide higher segmentation accuracy.
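A minimal sketch of unweighted GLCP texture features: estimate a grey-level co-occurrence matrix for one displacement, normalise it to probabilities, and derive a few standard statistics. The quantisation and displacement are arbitrary, and the WGLCP boundary weighting is not shown:

```python
# Co-occurrence probabilities and derived texture statistics.
import numpy as np

def glcp(img, levels=8, dx=1, dy=0):
    """Co-occurrence probabilities for displacement (dx, dy)."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    C = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            C[q[y, x], q[y + dy, x + dx]] += 1
    return C / C.sum()

def texture_stats(P):
    i, j = np.indices(P.shape)
    return {"contrast": np.sum(P * (i - j) ** 2),
            "energy": np.sum(P ** 2),
            "entropy": -np.sum(P[P > 0] * np.log(P[P > 0]))}

img = np.random.randint(0, 255, (32, 32))   # stand-in for a SAR patch
print(texture_stats(glcp(img)))
```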
Item: Unsupervised Clustering and Automatic Language Model Generation for ASR (University of Waterloo, 2004)
Podder, Sushil
The goal of an automatic speech recognition system is to enable the computer to understand human speech and act accordingly. In order to realize this goal, language modeling plays an important role, working as a knowledge source that mimics the human comprehension mechanism in understanding the language. Among many other approaches, the statistical language modeling technique is widely used in automatic speech recognition systems. However, the generation of a reliable and robust statistical model is a very difficult task, especially for a large vocabulary system. For a large vocabulary system, the performance of such a language model degrades as the vocabulary size increases. Hence, the performance of the speech recognition system also degrades, due to the increased complexity and mutual confusion among the candidate words in the language model. In order to solve these problems, reduction of the language model size as well as minimization of the mutual confusion between words are required. In our work, we have employed clustering techniques, using a self-organizing map, to build topical language models. Moreover, in order to capture the inherent semantics of sentences, a lexical dictionary, WordNet, has been used in the clustering process. This thesis work focuses on various aspects of clustering, language model generation, extraction of task-dependent acoustic parameters, and their implementation within the framework of the CMU Sphinx3 speech engine decoder. The preliminary results presented in this thesis show the effectiveness of the topical language models.

Item: Modeling Continuous Emotional Appraisals of Music Using System Identification (University of Waterloo, 2004)
Korhonen, Mark
The goal of this project is to apply system identification techniques to model people's perception of emotion in music as a function of time. Emotional appraisals of six selections of classical music are measured from volunteers who continuously quantify emotion using the dimensions valence and arousal. Also, features that communicate emotion are extracted from the music as a function of time. By treating the features as inputs to a system and the emotional appraisals as outputs of that system, linear models of the emotional appraisals are created. The models are validated by predicting a listener's emotional appraisals of a musical selection (song) unfamiliar to the system. The results of this project show that system identification provides a means to improve previous models for individual songs by allowing them to generalize emotional appraisals for a genre of music. The average R² statistic of the best model structure in this project is 7.7% for valence and 75.1% for arousal, which is comparable to the R² statistics for models of individual songs.

Item: Image Models for Wavelet Domain Statistics (University of Waterloo, 2005)
Azimifar, Seyedeh-Zohreh
Statistical models for the joint statistics of image pixels are of central importance in many image processing applications. However, the high dimensionality stemming from large problem size and the long-range spatial interactions make statistical image modeling particularly challenging. Commonly this modeling is simplified by a change of basis, most often using a wavelet transform.
Indeed, the wavelet transform has widely been used as an approximate whitener of statistical time series. It has, however, long been recognized that the wavelet coefficients are neither Gaussian, in terms of the marginal statistics, nor white, in terms of the joint statistics. The question of wavelet joint models is complicated and admits many possibilities, with statistical structures within subbands, across orientations, and across scales. Although a variety of joint models have been proposed and tested, few models appear to be directly based on empirical studies of wavelet coefficient cross-statistics. Rather, they are based on intuitive or heuristic notions of wavelet neighborhood structures. Without an examination of the underlying statistics, such heuristic approaches necessarily leave unanswered questions of neighborhood sufficiency and necessity. This thesis presents an empirical study of joint wavelet statistics for textures and other imagery, including dependencies across scale, space, and orientation. There is a growing realization that modeling wavelet coefficients as independent, or at best correlated only across scales, may be a poor assumption. While recent developments in wavelet-domain hidden Markov models (notably HMT-3S) account for within-scale dependencies, we find that wavelet spatial statistics are strongly orientation dependent, structures which are surprisingly not considered by state-of-the-art wavelet modeling techniques. To demonstrate the effectiveness of the studied wavelet correlation models, a novel non-linear correlated empirical Bayesian shrinkage algorithm based on the wavelet joint statistics is proposed. In comparison with popular nonlinear shrinkage algorithms, it improves the denoising results.
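For contrast with the correlated Bayesian shrinkage proposed above, a minimal sketch of plain coefficient-wise soft-threshold shrinkage using PyWavelets; the wavelet, level count, and threshold rule are conventional choices, not the thesis's:

```python
# Baseline wavelet denoising: soft-threshold each detail coefficient
# independently, ignoring the cross-scale/orientation correlations that
# the thesis's algorithm exploits.
import numpy as np
import pywt

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

coeffs = pywt.wavedec2(noisy, "db4", level=3)
thr = 0.1 * np.sqrt(2 * np.log(noisy.size))     # universal threshold
den = [coeffs[0]] + [tuple(pywt.threshold(c, thr, "soft") for c in band)
                     for band in coeffs[1:]]
restored = pywt.waverec2(den, "db4")

# Shrinkage should reduce the mean squared error against the clean image:
print(np.mean((restored - clean) ** 2), np.mean((noisy - clean) ** 2))
```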
Item: A Study on Urban Water Reuse Management Modeling (University of Waterloo, 2005)
Zhang, Changyu
This research deals with urban water reuse planning and management modeling in the context of sustainable development. Rapid urbanization and population growth have presented a great challenge to urban water resources management. As water reuse may alleviate pollution loads and enhance water supply sources, it is being recognized as a sustainable urban water management strategy and is becoming increasingly attractive in urban water resources management. An efficient water reuse planning and management model is of significance in promoting water reuse practices. This thesis introduces an urban water reuse management and planning model using optimization methods, with an emphasis on modeling the uncertainty issues associated with water demand and water quality. The model is developed in conjunction with the overall urban water system, with considerations over water supply, water demand, water distribution, water quality, and wastewater treatment and discharge. The objective of the model is to minimize the overall cost of the system subject to technological, societal and environmental constraints. Uncertainty issues associated with water demand and treatment quality are modeled by introducing stochastic programming methods, namely two-stage stochastic recourse programming and chance-constrained programming. The model is capable of identifying and evaluating water reuse in urban water systems to optimize the allocation of urban water resources with regard to uncertainties. It thus provides essential information for planning and managing urban water reuse systems towards a more sustainable urban water resources management.
An application is presented in order to demonstrate the modeling process and to analyze the impact of uncertainties.
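A minimal sketch of the two-stage stochastic recourse idea applied above to uncertain demand: a first-stage reuse capacity is chosen now, and costlier freshwater covers the shortfall in each demand scenario. All costs, demands, and probabilities are invented for illustration:

```python
# Two-stage stochastic program as a linear program:
# minimise c_reuse*x + sum_s p_s * c_fresh * y_s  s.t.  x + y_s >= d_s.
from scipy.optimize import linprog

p = [0.3, 0.5, 0.2]           # scenario probabilities
d = [60.0, 80.0, 110.0]       # demand scenarios
c_reuse, c_fresh = 1.0, 2.5   # unit costs: reuse capacity vs. recourse water

# Variables: [x, y1, y2, y3].
cost = [c_reuse] + [ps * c_fresh for ps in p]
A_ub = [[-1, -1, 0, 0],       # x + y1 >= d1  rewritten as  -x - y1 <= -d1
        [-1, 0, -1, 0],
        [-1, 0, 0, -1]]
b_ub = [-ds for ds in d]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x)   # first-stage capacity plus per-scenario recourse purchases
```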
Item: Transparent Decision Support Using Statistical Evidence (University of Waterloo, 2005)
Hamilton-Wright, Andrew
An automatically trained, statistically based, fuzzy inference system that functions as a classifier is produced. The hybrid system is designed specifically to be used as a decision support system. This hybrid system has several features which are of direct and immediate utility in the field of decision support, including: a mechanism for the discovery of domain knowledge in the form of explanatory rules through the examination of training data; the evaluation of such rules using a simple probabilistic weighting mechanism; the incorporation of input uncertainty using the vagueness abstraction of fuzzy systems; and the provision of a strong confidence measure to predict the probability of system failure.
Analysis of the hybrid fuzzy system and its constituent parts allows commentary on the weighting scheme and performance of the "Pattern Discovery" system on which it is based.
Comparisons against other well known classifiers provide a benchmark of the performance of the hybrid system as well as insight into the relative strengths and weaknesses of the compared systems when functioning within continuous and mixed data domains.
Classifier reliability and confidence in each labelling are examined, using a selection of both synthetic data sets as well as some standard real-world examples.
An implementation of the work-flow of the system when used in a decision support context is presented, and the means by which the user interacts with the system is evaluated.
When measured as a classifier, the final system performs as well as or better than other classifiers. This provides a robust basis for making suggestions in the context of decision support.
The adaptation of the underlying statistical reasoning made by casting it into a fuzzy inference context provides a level of transparency which is difficult to match in decision support. The resulting linguistic support and decision exploration abilities make the system useful in a variety of decision support contexts.
Included in the analysis are case studies of heart and thyroid disease data, both drawn from the University of California, Irvine Machine Learning repository.

Item: Towards a Versatile System for the Visual Recognition of Surface Defects (University of Waterloo, 2005)
Koprnicky, Miroslav
Automated visual inspection is an emerging multi-disciplinary field with many challenges; it combines different aspects of computer vision, pattern recognition, automation, and control systems. There does not exist a large body of work dedicated to the design of generalized visual inspection systems, that is, those that might easily be made applicable to different product types. This is an important oversight, in that many improvements in design and implementation times, as well as costs, might be realized with a system that could easily be made to function in different production environments.
This thesis proposes a framework for generalizing and automating the design of the defect classification stage of an automated visual inspection system. It involves using an expandable set of features which are optimized along with the classifier operating on them in order to adapt to the application at hand. The particular implementation explored involves optimizing the feature set in disjoint sets logically grouped by feature type to keep search spaces reasonable. Operator input is kept at a minimum throughout this customization process, since it is limited only to those cases in which the existing feature library cannot adequately delineate the classes at hand, at which time new features (or pools) may have to be introduced by an engineer with experience in the domain.
Two novel methods are put forward which fit well within this framework: cluster-space and hybrid-space classifiers. They are compared in a series of tests against both standard benchmark classifiers and mean and majority vote multi-classifiers, on feature sets comprising just the logical feature subsets, as well as the entire feature sets formed by their union. The proposed classifiers and the benchmarks are optimized with both a progressive combinatorial approach and a genetic algorithm. Experimentation was performed on true colour industrial lumber defect images, as well as binary hand-written digits.
Based on the experiments conducted in this work, it was found that the sequentially optimized multi hybrid-space methods are capable of matching the performances of the benchmark classifiers on the lumber data, with the exception of the mean-rule multi-classifiers, which dominated most experiments by approximately 3% in classification accuracy. The genetic-algorithm-optimized hybrid-space multi-classifier achieved the best performance, however: an accuracy of 79.2%.
The numeral dataset results were less promising; the proposed methods could not equal benchmark performance. This is probably because the numeral feature sets were much more conducive to good class separation, with standard benchmark accuracies approaching 95% not uncommon. This indicates that the cluster-space transform inherent to the proposed methods appears to be most useful in highly dependent or confusing feature spaces, a hypothesis supported by the outstanding performance of the single hybrid-space classifier in the difficult texture feature subspace: 42.6% accuracy, a 6% increase over the best benchmark performance.
The generalized framework proposed appears promising, because classifier performance over feature sets formed by the union of independently optimized feature subsets regularly met and exceeded that of classifiers operating on feature sets optimized in their entirety. This finding corroborates earlier work with similar results [3, 9], and is an aspect of pattern recognition that should be examined further.
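A minimal sketch of the mean-rule and majority-vote combination schemes benchmarked above, with each member classifier trained on one logical feature subset; the data, the subsets, and the choice of logistic-regression members are stand-ins:

```python
# Multi-classifier fusion: average the members' class probabilities
# (mean rule) or take a majority vote over their hard decisions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=12, random_state=0)
subsets = [slice(0, 4), slice(4, 8), slice(8, 12)]   # logical feature groups
members = [LogisticRegression(max_iter=500).fit(X[:, s], y) for s in subsets]

probs = np.stack([m.predict_proba(X[:, s]) for m, s in zip(members, subsets)])
mean_rule = probs.mean(axis=0).argmax(axis=1)        # mean rule
votes = np.stack([m.predict(X[:, s]) for m, s in zip(members, subsets)])
majority = (votes.mean(axis=0) > 0.5).astype(int)    # majority vote
print((mean_rule == y).mean(), (majority == y).mean())
```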
Item: Cleanup Memory in Biologically Plausible Neural Networks (University of Waterloo, 2005)
Singh, Raymon
During the past decade, a new class of knowledge representation has emerged known as structured distributed representation (SDR). A number of schemes for encoding and manipulating such representations have been developed; e.g. Pollack's Recursive Auto-Associative Memory (RAAM), Kanerva's Binary Spatter Code (BSC), Gayler's MAP encoding, and Plate's Holographically Reduced Representations (HRR). All such schemes encode structural information throughout the elements of high dimensional vectors, and are manipulated with rudimentary algebraic operations.
Most SDRs are very compact; components and compositions of components are all represented as fixed-width vectors. However, such compact compositions are unavoidably noisy. As a result, resolving constituent components requires a cleanup memory. In its simplest form, cleanup is performed with a list of vectors that are sequentially compared using a similarity metric. The closest match is deemed the cleaned codevector.
While SDR schemes were originally designed to perform cognitive tasks, none of them have been demonstrated in a neurobiologically plausible substrate. Potentially, mathematically proven properties of these systems may not be neurally realistic. Using Eliasmith and Anderson's (2003) Neural Engineering Framework, I construct various spiking neural networks to simulate a general cleanup memory that is suitable for many schemes.
Importantly, previous work has not taken advantage of parallelization or the high-dimensional properties of neural networks, nor has it considered the effect of noise within these systems. As well, additional improvements to the cleanup operation may be possible by more efficiently structuring the memory itself. In this thesis I address these lacunae, provide an analysis of system accuracy, capacity, scalability, and robustness to noise, and explore ways to improve the search efficiency.
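A minimal sketch of the cleanup operation itself, stripped of the spiking-neuron implementation the thesis develops: compare a degraded composite against stored codevectors with a similarity metric (cosine similarity here) and return the best match. The dimensionality and the degradation are illustrative:

```python
# List-based cleanup memory over high-dimensional codevectors.
import numpy as np

rng = np.random.default_rng(1)
D = 512                                    # codevector dimensionality
memory = rng.standard_normal((20, D))      # the stored clean codevectors
memory /= np.linalg.norm(memory, axis=1, keepdims=True)

def cleanup(x, memory):
    """Return the index and similarity of the closest stored codevector."""
    x = x / np.linalg.norm(x)
    sims = memory @ x                      # cosine similarity to every item
    best = int(np.argmax(sims))
    return best, float(sims[best])

# A composition degraded by a second vector and a little noise:
noisy = memory[7] + 0.4 * memory[3] + 0.02 * rng.standard_normal(D)
print(cleanup(noisy, memory))              # recovers item 7
```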
Item: Cooperative Water Resources Allocation among Competing Users (University of Waterloo, 2005)
Wang, Lizhong
A comprehensive model named the Cooperative Water Allocation Model (CWAM) is developed for modeling equitable and efficient water allocation among competing users at the basin scale, based on a multiperiod node-link river basin network. The model integrates water rights allocation, efficient water allocation and equitable income distribution, subject to hydrologic constraints comprising both water quantity and quality considerations. CWAM allocates water resources in two steps: initial water rights are first allocated to water uses based on legal rights systems or agreements, and then water is reallocated to achieve efficient use of water through water transfers. The associated net benefits of stakeholders participating in a coalition are allocated by using cooperative game theoretic approaches.
The first phase of the CWAM methodology includes three methods for deriving an initial water rights allocation among competing water uses: priority-based multiperiod maximal network flow (PMMNF) programming, modified riparian water rights allocation (MRWRA), and lexicographic minimax water shortage ratios (LMWSR). PMMNF is a very flexible approach and is applicable under prior, riparian and public water rights systems, with priorities determined by different criteria. MRWRA is essentially a special form of PMMNF adapted for allocation under the riparian regime. LMWSR is designed for application under a public water rights system, and adopts the lexicographic minimax fairness concept. The second step comprises three sub-models. The irrigation water planning model (IWPM) derives benefit functions of irrigation water. The hydrologic-economic river basin model (HERBM) is the core component of the coalition analysis; it searches for the values of various coalitions of stakeholders and the corresponding optimal water allocation schemes, based on initial water rights, monthly net benefit functions of demand sites, and the ownership of water uses. The cooperative reallocation game (CRG) sub-model adopts cooperative game solution concepts, including the nucleolus, weak nucleolus, proportional nucleolus, normalized nucleolus and Shapley value, to perform an equitable reallocation of the net benefits of stakeholders participating in the grand coalition. The economically efficient use of water under the grand coalition is achieved through water transfers based on initial water rights.
Sequential and iterative solution algorithms utilizing the primal simplex method are developed to solve the linear PMMNF and LMWSR problems, respectively, which include only linear water quantity constraints. Algorithms for the nonlinear PMMNF and LMWSR problems adopt a two-stage approach, which allows nonlinear reservoir area-storage and elevation-storage relations and may include nonlinear water quality constraints. In the first stage, the corresponding linear problems, excluding nonlinear constraints, are solved by a sequential or iterative algorithm. The global optimal solution obtained by the linear programming is then combined with estimated initial values of pollutant concentrations and used as the starting point for the sequential or iterative nonlinear programs of the nonlinear PMMNF or LMWSR problem. As HERBM adopts constant price-elasticity water demand functions to derive the net benefit functions of municipal and industrial demand sites and hydropower stations, and quadratic gross benefit functions to find the net benefit functions of agricultural water uses, stream flow demands and reservoir storages, it is a large-scale nonlinear optimization problem even when the water quality constraints are not included. An efficient algorithm is built for coalition analysis, utilizing a combination of a multistart global optimization technique and a gradient-based nonlinear programming method to solve a HERBM for each possible coalition.
Throughout the study, both the feasibility and the effectiveness of incorporating equity concepts into conventional economic optimal water resources management modeling are addressed. The applications of CWAM to the Amu Darya River Basin in Central Asia and the South Saskatchewan River Basin in western Canada demonstrate the applicability of the model. It is argued that CWAM can be utilized as a tool for promoting the understanding and cooperation of water users to achieve maximum welfare in a river basin and minimize the damage caused by water shortages, through water rights allocation, and water and net benefit transfers among water users under the regulated water market or administrative allocation mechanism.
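A minimal sketch of the Shapley value used by the CRG sub-model: each stakeholder's share is its average marginal contribution over all orders of joining the coalition. The three-player characteristic function below is an illustrative stand-in for coalition values that HERBM would compute:

```python
# Shapley value by enumeration of all player orderings.
from itertools import permutations

v = {frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20,
     frozenset("C"): 30, frozenset("AB"): 45, frozenset("AC"): 50,
     frozenset("BC"): 60, frozenset("ABC"): 90}   # coalition values

players = ["A", "B", "C"]
shapley = {p: 0.0 for p in players}
orders = list(permutations(players))
for order in orders:
    coalition = set()
    for p in order:
        before = v[frozenset(coalition)]
        coalition.add(p)
        # Average the marginal contribution v(S + p) - v(S) over orderings.
        shapley[p] += (v[frozenset(coalition)] - before) / len(orders)

print(shapley)   # the shares sum to v(ABC) = 90
```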