Reinforcement Learning for Parameter Control of Image-Based Applications
dc.contributor.author | Taylor, Graham | en |
dc.date.accessioned | 2006-08-22T13:49:53Z | |
dc.date.available | 2006-08-22T13:49:53Z | |
dc.date.issued | 2004 | en |
dc.date.submitted | 2004 | en |
dc.description.abstract | The significant amount of data contained in digital images presents barriers to methods of learning from the information they hold. Noise and the subjectivity of image evaluation further complicate such automated processes. In this thesis, we examine a particular area in which these difficulties are experienced: we attempt to control the parameters of a multi-step algorithm that processes visual information. A framework for approaching the parameter selection problem using reinforcement learning agents is presented as the main contribution of this research. We focus on the generation of the state and action spaces, as well as task-dependent reward. We first discuss the automatic determination of fuzzy membership functions as a specific case of the above problem. The entropy of a fuzzy event is used as the reinforcement signal. Membership functions representing brightness have been automatically generated for several images. The results show that the reinforcement learning approach is superior to an existing simulated annealing-based approach. The framework has also been evaluated by optimizing ten parameters of the text detection algorithm for semantic indexing proposed by Wolf et al. Image features are defined and extracted to construct the state space. Generalization to reduce the state space is performed with the fuzzy ARTMAP neural network, offering much faster learning than the previous tabular implementation, despite a much larger state and action space. Difficulties in using a continuous action space are overcome by employing the DIRECT method for global optimization without derivatives. The chosen parameters are evaluated using recall and precision metrics, and are shown to be superior to the parameters previously recommended. We further discuss the interplay between intermediate and terminal reinforcement. | en |
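The abstract states that the entropy of a fuzzy event serves as the reinforcement signal when tuning a brightness membership function. The following is a minimal illustrative sketch of one common reading of that idea, not the thesis's exact formulation: a standard S-function membership for "bright" is scored by the Shannon entropy of the probability of the fuzzy event (in Zadeh's sense), and that score could be fed back to a learning agent as reward. The names s_membership and fuzzy_event_entropy, the S-function parameterization, and the synthetic histogram are assumptions for illustration.

```python
# Hypothetical sketch: entropy of the fuzzy event "bright" as a reward signal
# for evaluating a candidate brightness membership function.
import numpy as np

def s_membership(gray_levels, a, c):
    """Standard S-function membership for 'bright' with crossover b = (a + c) / 2.
    Assumed form; the thesis may parameterize its membership functions differently."""
    b = (a + c) / 2.0
    mu = np.zeros_like(gray_levels, dtype=float)
    left = (gray_levels > a) & (gray_levels <= b)
    right = (gray_levels > b) & (gray_levels < c)
    mu[left] = 2.0 * ((gray_levels[left] - a) / (c - a)) ** 2
    mu[right] = 1.0 - 2.0 * ((gray_levels[right] - c) / (c - a)) ** 2
    mu[gray_levels >= c] = 1.0
    return mu

def fuzzy_event_entropy(hist, mu):
    """Shannon entropy H = -P ln P - (1 - P) ln(1 - P) of the fuzzy event,
    where P = sum(mu * p) is the probability of the fuzzy event (Zadeh)."""
    p = hist / hist.sum()
    P = float(np.sum(mu * p))
    eps = 1e-12  # guard against log(0)
    return -(P * np.log(P + eps) + (1.0 - P) * np.log(1.0 - P + eps))

# Example: score one candidate (a, c) pair against a synthetic 8-bit histogram.
levels = np.arange(256)
hist = np.random.default_rng(0).integers(1, 100, size=256).astype(float)
reward = fuzzy_event_entropy(hist, s_membership(levels, a=60, c=200))
print(f"reward (entropy of fuzzy event): {reward:.4f}")
```

In an actual agent, this scalar would be computed per candidate parameter setting and used as the reinforcement signal driving the search over membership-function parameters; the histogram would come from the image under consideration rather than synthetic data.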
dc.format | application/pdf | en |
dc.format.extent | 1048508 bytes | |
dc.format.mimetype | application/pdf | |
dc.identifier.uri | http://hdl.handle.net/10012/832 | |
dc.language.iso | en | en |
dc.pending | false | en |
dc.publisher | University of Waterloo | en |
dc.rights | Copyright: 2004, Taylor, Graham. All rights reserved. | en |
dc.subject | Systems Design | en |
dc.subject | reinforcement learning | en |
dc.subject | artificial neural networks | en |
dc.subject | image processing | en |
dc.subject | computer vision | en |
dc.subject | text detection | en |
dc.subject | artificial intelligence | en |
dc.subject | machine learning | en |
dc.subject | parameter control | en |
dc.subject | optimization | en |
dc.subject | Markov decision processes | en |
dc.subject | fuzzy ARTMAP | en |
dc.title | Reinforcement Learning for Parameter Control of Image-Based Applications | en |
dc.type | Master Thesis | en |
uws-etd.degree | Master of Applied Science | en |
uws-etd.degree.department | Systems Design Engineering | en |
uws.peerReviewStatus | Unreviewed | en |
uws.scholarLevel | Graduate | en |
uws.typeOfResource | Text | en |