Enhancing YOLO through Multi-Task Learning: Joint Detection, Reconstruction, and Classification of Distorted Text Images

Date

2025-06-12

Advisor

Naik, Kshirasagar

Publisher

University of Waterloo

Abstract

Robust recognition of alphanumeric text mounted on vehicle surfaces presents significant challenges under real-world conditions such as motion blur, out-of-focus imagery, illumination variation, and compression artifacts. Existing automatic license plate recognition (ALPR) pipelines usually separate detection, enhancement, and recognition into distinct stages, relying either on explicit deblurring networks or on extensive augmentation for generalization; each approach incurs latency, error propagation, or a performance ceiling on severely degraded inputs. This study introduces YOLO CRNet, a unified end-to-end multi-task framework built upon the YOLO object detector, designed to simultaneously localize characters, enhance text regions, and perform optical character recognition (OCR) within a single network. We integrate two specialized heads into the YOLO backbone: a reconstruction head that restores degraded text regions, and a classification head that directly recognizes alphanumeric characters. Shared feature representations are extracted from multiple depths of the core YOLO network for synergistic learning across complementary tasks. To inform feature selection for the classifier head, we extract per-character embeddings from five different layer combinations of the YOLO network (ranging from early backbone to deep neck layers) and visualize class separability via t-SNE. This analysis reveals that Configuration A, which combines early backbone layers (1, 2, 4) with neck layers (10, 13, 16), yields the most distinct clusters for the alphanumeric character classes. The YOLO CRNet classifier head trained on Configuration A achieves 95.2% accuracy and a 94.97% F1-score on a held-out set of 10,100 sharp character crops, outperforming alternative layer configurations by up to 18%. Extensive experiments on blurred text datasets demonstrate that YOLO CRNet's combined reconstruction-then-classification pipeline significantly outperforms both the baseline YOLO detector and the YOLO CRNet classification head used alone. In particular, the combined configuration improves classification accuracy by 23.5 percentage points (from 44.5% to 68.0%) and F1-score by 0.155 (from 0.550 to 0.705). By integrating detection, enhancement, and recognition into a single network guided by t-SNE-based feature selection, YOLO CRNet reduces latency, mitigates error propagation, and explicitly handles image distortions. This work lays a foundation for real-time, robust vehicle text detection and illustrates the power of multi-task learning and data-driven feature analysis in fine-grained text recognition tasks.
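
The abstract describes two specialized heads that tap feature maps at multiple depths of the YOLO network and are trained jointly with the detector. The sketch below illustrates one way such a multi-task wrapper could be wired in PyTorch; the tapped channel counts, head designs, crop size, and loss weights are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch (assumptions, not the thesis code): a multi-task module that
# fuses feature maps tapped from several depths of a YOLO-style backbone/neck
# and feeds them to a reconstruction head and a character-classification head.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskHeads(nn.Module):
    def __init__(self, tap_channels=(64, 128, 256), num_classes=36, crop_size=64):
        super().__init__()
        fused = sum(tap_channels)
        # Reconstruction head: fuse tapped features and predict a restored RGB crop.
        self.reconstruct = nn.Sequential(
            nn.Conv2d(fused, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 1),
        )
        # Classification head: pooled embedding -> 36 alphanumeric classes (0-9, A-Z).
        self.classify = nn.Sequential(
            nn.Linear(fused, 256), nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),
        )
        self.crop_size = crop_size

    def forward(self, taps):
        # `taps`: list of feature maps taken from different YOLO depths.
        # Resize all taps to a common spatial size, then concatenate channels.
        size = taps[0].shape[-2:]
        fused = torch.cat(
            [F.interpolate(t, size=size, mode="bilinear", align_corners=False)
             for t in taps],
            dim=1,
        )
        restored = F.interpolate(
            self.reconstruct(fused), size=(self.crop_size, self.crop_size),
            mode="bilinear", align_corners=False,
        )
        logits = self.classify(fused.mean(dim=(2, 3)))  # global-average-pooled embedding
        return restored, logits


def multitask_loss(det_loss, restored, clean_target, logits, labels,
                   w_rec=1.0, w_cls=1.0):
    # Joint objective: detection loss from the base YOLO model plus
    # reconstruction (L1) and classification (cross-entropy) terms.
    # The weights w_rec and w_cls are placeholders.
    rec = F.l1_loss(restored, clean_target)
    cls = F.cross_entropy(logits, labels)
    return det_loss + w_rec * rec + w_cls * cls
```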
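The feature-selection study extracts per-character embeddings from five layer combinations and compares their class separability with t-SNE. The sketch below shows how such a comparison could be run with scikit-learn and matplotlib; `extract_embeddings`, the configuration names, and the silhouette-score proxy are hypothetical stand-ins for the analysis described in the abstract.

```python
# Illustrative sketch (not the thesis pipeline): project per-character
# embeddings pooled from different layer combinations with t-SNE and compare
# class separability. `extract_embeddings(layers)` is a hypothetical helper
# returning an (N, D) array of pooled features for the given YOLO layer indices.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics import silhouette_score
import matplotlib.pyplot as plt

LAYER_CONFIGS = {
    "A (backbone 1,2,4 + neck 10,13,16)": [1, 2, 4, 10, 13, 16],
    "B (neck only 10,13,16)": [10, 13, 16],
    # ... remaining configurations
}


def compare_configs(extract_embeddings, labels):
    fig, axes = plt.subplots(1, len(LAYER_CONFIGS),
                             figsize=(6 * len(LAYER_CONFIGS), 5))
    for ax, (name, layers) in zip(np.atleast_1d(axes), LAYER_CONFIGS.items()):
        emb = extract_embeddings(layers)          # (N, D) pooled features
        # perplexity must stay below the number of samples N.
        proj = TSNE(n_components=2, perplexity=30, init="pca",
                    random_state=0).fit_transform(emb)
        score = silhouette_score(proj, labels)    # rough separability proxy
        ax.scatter(proj[:, 0], proj[:, 1], c=labels, s=4, cmap="tab20")
        ax.set_title(f"{name}\nsilhouette={score:.2f}")
    plt.tight_layout()
    plt.show()
```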

Keywords

reconstruction, classification, YOLO, multi-task learning, t-SNE
