Show simple item record

dc.contributor.author: Hu, Xiaodan
dc.date.accessioned: 2019-04-18 18:31:47 (GMT)
dc.date.available: 2020-04-18 04:50:09 (GMT)
dc.date.issued: 2019-04-18
dc.date.submitted: 2019-04-15
dc.identifier.uri: http://hdl.handle.net/10012/14540
dc.description.abstract: For any projection system, a central goal is to maximize the quality of the projected imagery at minimal hardware cost, a challenging engineering problem. Experience with applying different image filters and enhancements to projected video suggests quite clearly that the quality of an enhanced projected video depends strongly on the content of the video itself. That is, to first order, whether the content is moving or still plays an important role in perceived quality, since the human visual system tolerates considerably more blur in moving imagery while being highly sensitive to the flickering and aliasing caused by moving sharp textures. Furthermore, the spatial and statistical characteristics of text and non-text images are quite distinct. We therefore assert that the text-like, moving, and background pixels of a given video stream should be enhanced differently, using class-dependent video enhancement filters, to achieve maximum visual quality. In this thesis, we present a novel text-dependent content enhancement scheme, a novel motion-dependent content enhancement scheme, and a novel content-adaptive resolution enhancement scheme based on a text-like / non-text-like classification and a pixel-wise moving / non-moving classification, with the actual enhancement obtained via class-dependent Wiener deconvolution filtering. Given an input image, text and motion detection methods are used to generate binary masks indicating the location of text and moving regions in the video stream. Enhanced images are then obtained by applying a set of class-dependent enhancement filters, with text-like regions sharpened more than the background and moving regions sharpened less. The resulting enhanced images are combined into a composite output image according to the corresponding feature masks. Finally, a higher-resolution projected video stream is produced by controlling one or more projectors to project the output frame streams in rapid, overlapping succession. Experimental results on test images and videos show that the proposed schemes all offer improved visual quality over projection without enhancement, as well as over a recent state-of-the-art enhancement method. In particular, the proposed content-adaptive resolution enhancement scheme increases PSNR by at least 18.2% and decreases MSE by at least 25%. [en]
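
The abstract outlines a mask-driven, class-dependent enhancement pipeline: detect text-like and moving pixels, sharpen each class with a differently tuned Wiener deconvolution filter, and composite the results through the binary masks. The Python sketch below is an illustrative reading of that pipeline, not the thesis implementation; the blur PSF, the regularization constants (k_text, k_background, k_moving), and the toy masks are assumptions introduced here purely for demonstration.

# Minimal sketch (assumed, not the author's code) of class-dependent Wiener
# deconvolution with mask-based compositing, as described in the abstract.
import numpy as np

def wiener_deconvolve(frame, psf, k):
    """FFT-based Wiener deconvolution; larger k regularizes more (less sharpening)."""
    # PSF is assumed centered at the origin; a real implementation would
    # account for the circular shift introduced by zero-padding.
    H = np.fft.fft2(psf, s=frame.shape)           # projector blur transfer function
    W = np.conj(H) / (np.abs(H) ** 2 + k)         # classic Wiener inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(frame) * W))

def enhance_frame(frame, psf, text_mask, motion_mask,
                  k_text=0.002, k_background=0.01, k_moving=0.05):
    """Composite class-dependent enhancements using binary masks.

    Text-like regions are sharpened most (smallest k), moving regions least
    (largest k), matching the perceptual argument made in the abstract.
    """
    text = wiener_deconvolve(frame, psf, k_text)
    background = wiener_deconvolve(frame, psf, k_background)
    moving = wiener_deconvolve(frame, psf, k_moving)

    out = np.where(motion_mask, moving, background)       # moving pixels: gentlest sharpening
    out = np.where(text_mask & ~motion_mask, text, out)   # still text: strongest sharpening
    return np.clip(out, 0.0, 1.0)

if __name__ == "__main__":
    # Toy usage with a separable binomial blur kernel and random masks.
    rng = np.random.default_rng(0)
    frame = rng.random((64, 64))
    kernel_1d = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
    psf = np.outer(kernel_1d, kernel_1d)
    psf /= psf.sum()
    text_mask = rng.random((64, 64)) > 0.8
    motion_mask = rng.random((64, 64)) > 0.9
    print(enhance_frame(frame, psf, text_mask, motion_mask).shape)

The ordering of the two np.where calls reflects the abstract's compositing rule: motion overrides the background filter everywhere it is detected, and only still text-like pixels receive the most aggressive sharpening.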
dc.language.iso: en [en]
dc.publisher: University of Waterloo [en]
dc.subject: projector resolution enhancement [en]
dc.title: Content-Adaptive Non-Stationary Projector Resolution Enhancement [en]
dc.type: Master Thesis [en]
dc.pending: false
uws-etd.degree.department: Systems Design Engineering [en]
uws-etd.degree.discipline: System Design Engineering [en]
uws-etd.degree.grantor: University of Waterloo [en]
uws-etd.degree: Master of Applied Science [en]
uws-etd.embargo.terms: 1 year [en]
uws.contributor.advisor: Fieguth, Paul
uws.contributor.affiliation1: Faculty of Engineering [en]
uws.published.city: Waterloo [en]
uws.published.country: Canada [en]
uws.published.province: Ontario [en]
uws.typeOfResource: Text [en]
uws.peerReviewStatus: Unreviewed [en]
uws.scholarLevel: Graduate [en]

