Recognizing Magnification Levels in Microscopic Snapshots using Machine Learning
State-of-the-art computer vision research has driven technological evolution in the field of medical imaging. The primary achievement of the imaging algorithms developed is the extraction of expressive features from digital images. The real advantage of this progress can be observed when these features are used for Content-Based Image Retrieval (CBIR) and for building tissue classification systems that confirm the diagnostic findings of medical images. Digital Pathology (DP) is a branch of medical imaging focused on digital images acquired from histopathology specimens (i.e., biopsy samples). The camera-mounted microscope, introduced in the late 1960s, remains one of the most popular, convenient, and effective tools for generating a digital footprint of the tissue on a glass slide. The introduction of Whole Slide Imaging (WSI) technology has transformed Light Microscopy (LM). Existing datasets acquired with microscopic camera systems receive little attention from the research community because of i) missing relevant information, i.e., magnification levels and annotations, ii) low-resolution images, and iii) the ready availability of WSI slides or patches that do carry this information. With the increasing demand for accurate diagnosis of diseases such as cancer, there is a pressing need to exploit the knowledge not only in WSI images but also in microscopic snapshots by applying state-of-the-art Machine Learning (ML) techniques. This thesis is an empirical study of methods for recognizing the magnification level of microscopic images so that they can be used in such tasks. Additional experiments investigate the influence of the primary site (i.e., the organ) on recognizing magnification levels.
Qualitative assessments of feature extraction algorithms, such as Local Binary Patterns (LBP) and several pretrained Convolutional Neural Network (CNN) architectures, are provided. These algorithms serve as feature extractors that capture knowledge at each magnification level from microscopic snapshots of histopathology images. Classification is performed by three traditional classifiers, Support Vector Machine (SVM), Random Forest (RF), and K-Nearest Neighbors (K-NN), which learn the magnification level associated with each snapshot from the traditional computer vision and Deep Learning (DL) features. Three datasets were used in the experiments, evaluated with total accuracy, patient or primary-site accuracy, and F1-score. The total accuracies and F1-scores were 93.26% and 0.94 on the KIMIA-MAG-5 dataset, 91.50% and 0.93 on the BreakHis dataset, and 87.11% and 0.87 on the OMAX dataset, respectively. An insight from the primary-site analysis is that recognizing magnification levels in images of the pleura, lungs, and breast is straightforward.
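The handcrafted branch of this pipeline can be sketched in a few lines of NumPy: compute a basic 8-neighbor LBP histogram per snapshot, then classify it with 1-NN, one of the three traditional classifiers named above. This is a minimal illustrative sketch, not the thesis implementation: the synthetic "snapshots" (block-upsampled noise standing in for a lower magnification level, plain noise for a higher one), the image size, and the L1 nearest-neighbor rule are all assumptions for demonstration.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbor LBP: each interior pixel receives an 8-bit code
    whose bits mark which neighbors are >= the center pixel."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img):
    """Normalized 256-bin histogram of LBP codes: the texture descriptor."""
    hist, _ = np.histogram(lbp_codes(img), bins=256, range=(0, 256), density=True)
    return hist

rng = np.random.default_rng(0)

def make_snapshot(coarse):
    """Synthetic 64x64 grayscale patch (illustrative assumption): block-
    upsampled noise mimics a lower magnification, plain noise a higher one."""
    if coarse:
        return np.kron(rng.random((8, 8)), np.ones((8, 8)))
    return rng.random((64, 64))

X = np.array([lbp_histogram(make_snapshot(c)) for c in [True] * 20 + [False] * 20])
y = np.array([0] * 20 + [1] * 20)

# Hold out every fourth sample and classify it with 1-NN on L1
# histogram distance; the held-in samples act as the training set.
test_idx = np.arange(0, 40, 4)
train_idx = np.setdiff1d(np.arange(40), test_idx)
preds = []
for i in test_idx:
    d = np.abs(X[train_idx] - X[i]).sum(axis=1)
    preds.append(y[train_idx][np.argmin(d)])
acc = np.mean(np.array(preds) == y[test_idx])
print(f"1-NN accuracy on synthetic magnification task: {acc:.2f}")
```

In the real experiments the descriptors are computed from histopathology snapshots and also fed to SVM and RF classifiers; the 1-NN rule is used here only because it fits in a few self-contained lines.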
Cite this version of the work
Manit Zaveri (2020). Recognizing Magnification Levels in Microscopic Snapshots using Machine Learning. UWSpace. http://hdl.handle.net/10012/16376