A Self-Supervised Contrastive Learning Approach for Whole Slide Image Representation in Digital Pathology
Date
2022-05-16
Authors
Ashrafi Fashi, Parsa
Advisor
Tizhoosh, Hamid
Babaie, Morteza
Publisher
University of Waterloo
Abstract
Digital pathology has recently expanded the field of medical image processing for diagnostic purposes. Whole slide images (WSIs) of histopathology are often accompanied by information on the location and type of the diseases and cancers they display. Digital scanning has made it possible to create high-quality WSIs from tissue slides quickly, so hospitals and clinics now maintain growing WSI archives, and rapid WSI analysis is necessary to meet the demands of the modern pathology workflow. These advantages have made computerized image analysis and diagnosis increasingly popular.
The recent development of artificial neural networks has changed the field of digital pathology. Deep learning can help pathologists segment and categorize regions and nuclei and search among WSIs for comparable morphology. However, because of the large data size of WSIs, representing digitized pathology slides has proven difficult. Furthermore, the morphological differences between diagnoses may be slim, making WSI representation problematic. Convolutional neural networks (CNNs) are currently used to generate a single vector representation from a WSI. Multiple instance learning is one way to tackle the problem of gigapixel image representation: all patches in a slide are combined to create a single vector representation.
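The attention-based pooling commonly used for multiple instance learning over WSI patches can be sketched as below. This is a minimal illustration of the general technique (a learned, softmax-normalized weight per patch embedding), not the exact architecture or parameterization used in the thesis; the function name, parameter shapes, and hidden size are assumptions.

```python
import numpy as np

def attention_mil_pool(patches, V, w):
    """Aggregate N patch embeddings (N x d) into one slide-level vector.

    V (k x d) and w (k,) are learnable parameters of the attention head
    (hypothetical shapes for this sketch). Each patch gets a scalar
    attention score; the slide vector is the attention-weighted sum.
    """
    scores = np.tanh(patches @ V.T) @ w            # (N,) one score per patch
    scores = scores - scores.max()                 # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum()   # softmax over patches
    slide_vector = attn @ patches                  # (d,) weighted sum
    return slide_vector, attn

# Illustrative usage with random "patch embeddings"
rng = np.random.default_rng(0)
patches = rng.normal(size=(5, 8))   # 5 patches, 8-dim embeddings
V = rng.normal(size=(4, 8))         # hidden size 4 (arbitrary choice)
w = rng.normal(size=4)
slide_vector, attn = attention_mil_pool(patches, V, w)
```

The softmax makes the aggregation permutation-invariant over patches, which matters because a slide's patches carry no meaningful order.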
Self-supervised learning has also shown impressive generalization in recent years. In self-supervised learning, a model is trained with pseudo-labels on a pretext task to improve accuracy on the main target task. Contrastive learning is a recent self-supervision scheme that helps the model produce more robust representations. In this thesis, we describe a self-supervised approach that utilizes the anatomic-site information recorded for each WSI during tissue preparation and digitization. We exploit an attention-based multiple instance learning setup along with supervised contrastive learning. Furthermore, we show that using supervised contrastive learning in the pretext stage improves embedding quality in both classification and search tasks. We test our model on image search over the TCGA repository dataset, a lung cancer classification task, and a Lung-Kidney-Stomach immunofluorescence WSI dataset.
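The supervised contrastive objective referred to above can be sketched as follows: anchors are pulled toward all same-label samples and pushed from the rest. This follows the standard SupCon formulation; the function name, temperature value, and toy data are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over embeddings z (N x d) with
    integer labels (N,). Embeddings are L2-normalized; tau is the
    temperature (0.1 here is an arbitrary illustrative choice)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = (z @ z.T) / tau                          # pairwise similarities
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    # log-softmax for each anchor over all other samples (self excluded)
    logits = sim - sim.max(axis=1, keepdims=True)  # stability; cancels out
    exp = np.exp(logits)
    exp[eye] = 0.0
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~eye
    # average negative log-prob over each anchor's positives
    per_anchor = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()

labels = np.array([0, 0, 1, 1])
z_clustered = np.array([[1., 0.], [1., .1], [0., 1.], [.1, 1.]])  # classes separated
z_mixed = np.array([[1., 0.], [0., 1.], [1., 0.], [0., 1.]])      # classes entangled
loss_good = supcon_loss(z_clustered, labels)
loss_bad = supcon_loss(z_mixed, labels)
```

Same-label slides (here, same anatomic site) act as positives, so the loss drops as the embedding clusters by label; the mixed arrangement is penalized more heavily.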
Keywords
digital pathology, representation learning, computational pathology, self-supervised learning, image search, multiple instance learning, supervised contrastive learning