Perceptual Video Quality Assessment and Enhancement

dc.contributor.author: Zeng, Kai
dc.date.accessioned: 2013-08-21T18:20:20Z
dc.date.available: 2013-08-21T18:20:20Z
dc.date.issued: 2013-08-21T18:20:20Z
dc.date.submitted: 2013-08-12
dc.description.abstract: With the rapid development of network visual communication technologies, digital video has become ubiquitous and indispensable in our everyday lives. Video acquisition, communication, and processing systems introduce various types of distortions, which may have a major impact on the video quality perceived by human observers. Effective and efficient objective video quality assessment (VQA) methods that can predict perceptual video quality are highly desirable in modern visual communication systems for performance evaluation, quality control, and resource allocation purposes. Moreover, perceptual VQA measures may also be employed to optimize a wide variety of video processing algorithms and systems for best perceptual quality. This thesis explores several novel ideas in the areas of video quality assessment and enhancement. Firstly, by considering a video signal as a 3D volume image, we propose a 3D structural similarity (SSIM) based full-reference (FR) VQA approach, which also incorporates local information content and local distortion-based pooling methods. Secondly, a reduced-reference (RR) VQA scheme is developed by tracing the evolution of local phase structures over time in the complex wavelet domain. Furthermore, we propose a quality-aware video system that combines spatial and temporal quality measures with a robust video watermarking technique, such that RR-VQA can be performed without transmitting RR features via an ancillary lossless channel. Finally, a novel strategy for enhancing video denoising algorithms, namely poly-view fusion, is developed by examining a video sequence as a 3D volume image from multiple (front, side, top) views. This leads to significant and consistent gains in both peak signal-to-noise ratio (PSNR) and SSIM performance, especially at high noise levels.
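For reference, the SSIM index named in the abstract is the standard structural similarity measure of Wang et al.; a minimal statement of the per-window index, assuming the usual notation (local means \mu_x, \mu_y, local variances \sigma_x^2, \sigma_y^2, cross-covariance \sigma_{xy}, and stabilizing constants C_1, C_2), is

\[ \mathrm{SSIM}(\mathbf{x}, \mathbf{y}) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} \]

In the 3D formulation described in the abstract, these local statistics are presumably computed over volumetric (spatio-temporal) neighborhoods of the reference and distorted videos rather than 2D spatial windows, with the resulting local quality map combined by the information-content and distortion-based pooling mentioned above.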
dc.identifier.uri: http://hdl.handle.net/10012/7720
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo
dc.subject: Video Quality Assessment
dc.subject: Video Denoising
dc.subject.program: Electrical and Computer Engineering
dc.title: Perceptual Video Quality Assessment and Enhancement
dc.type: Doctoral Thesis
uws-etd.degree: Doctor of Philosophy
uws-etd.degree.department: Electrical and Computer Engineering
uws.peerReviewStatus: Unreviewed
uws.scholarLevel: Graduate
uws.typeOfResource: Text

Files

Original bundle
Name: Zeng_Kai.pdf
Size: 3 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 242 B
Format: Item-specific license agreed upon to submission