A Self-Supervised Contrastive Learning Approach for Whole Slide Image Representation in Digital Pathology
Digital pathology has recently expanded the field of medical image processing for diagnostic purposes. Whole slide images (WSIs) of histopathology are often accompanied by information on the location and type of the diseases and cancers they display. Digital scanning has made it possible to create high-quality WSIs from tissue slides quickly, so hospitals and clinics now maintain growing WSI archives, and rapid WSI analysis is necessary to meet the demands of the modern pathology workflow. These advantages have increased the popularity of computerized image analysis and diagnosis. The recent development of artificial neural networks has transformed digital pathology: deep learning can help pathologists segment and categorize regions and nuclei, and search among WSIs for comparable morphology. However, because of the large data size of WSIs, representing digitized pathology slides has proven difficult. Furthermore, the morphological differences between diagnoses may be slim, making WSI representation problematic. Convolutional neural networks (CNNs) are currently used to generate a single vector representation from a WSI. Multiple instance learning is one solution to the problem of gigapixel image representation: all patches in a slide are combined to create a single vector representation. Self-supervised learning has also shown impressive generalization results in recent years; in self-supervised learning, a model is trained with pseudo-labels on a pretext task to improve accuracy on the main target task. Contrastive learning is a recent self-supervision scheme that helps the model produce more robust representations. In this thesis, we describe a self-supervised approach that utilizes the anatomic-site information recorded for each WSI during tissue preparation and digitization. We exploit an attention-based multiple instance learning setup along with supervised contrastive learning.
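The attention-based multiple instance learning setup mentioned above can be illustrated with a minimal sketch: each patch embedding receives a learned attention weight, and the slide-level vector is the weighted sum of patch embeddings. The parameter names (`V`, `w`) and dimensions are illustrative assumptions, not the thesis's actual architecture.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(patches, V, w):
    """Pool patch embeddings into one slide-level vector.

    patches: (n_patches, d) embeddings, e.g. from a CNN backbone
    V:       (d, h) attention projection (hypothetical parameter)
    w:       (h,)   attention scoring vector (hypothetical parameter)
    Returns the (d,) slide vector and the (n_patches,) attention weights.
    """
    scores = np.tanh(patches @ V) @ w   # one scalar score per patch
    alpha = softmax(scores)             # weights are non-negative, sum to 1
    return alpha @ patches, alpha       # attention-weighted average
```

In this scheme the attention weights indicate which patches contribute most to the slide representation, which is one reason attention pooling is popular for interpretability in pathology.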
Furthermore, we show that using supervised contrastive learning in the pretext stage improves the quality of the model's embeddings in both classification and search tasks. We test our model on an image-search task over the TCGA repository dataset, a lung cancer classification task, and a Lung-Kidney-Stomach immunofluorescence WSI dataset.
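The supervised contrastive objective used in the pretext stage can be sketched as follows: embeddings sharing a label (here, standing in for the anatomic site) are pulled together while all others are pushed apart. This is a plain-numpy sketch of the general supervised contrastive loss, not the thesis's exact implementation; the temperature `tau` is an assumed hyperparameter.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over a batch of embeddings.

    z:      (n, d) embeddings; normalized to the unit sphere below
    labels: (n,) integer labels; same label = positive pair
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau                      # temperature-scaled cosine similarity
    n = len(labels)
    loss = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue                         # anchors with no positive are skipped
        denom = sum(np.exp(sim[i, j]) for j in range(n) if j != i)
        loss += -np.mean([np.log(np.exp(sim[i, p]) / denom) for p in positives])
    return loss / n
```

As expected, the loss is lower when embeddings of the same label are already clustered together than when labels are assigned across clusters, which is the signal that shapes the embedding space during pretext training.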
Cite this version of the work
Parsa Ashrafi Fashi (2022). A Self-Supervised Contrastive Learning Approach for Whole Slide Image Representation in Digital Pathology. UWSpace. http://hdl.handle.net/10012/18279
Showing items related by title, author, creator and subject.
Vandenhof, Colin (University of Waterloo, 2020-05-15) Reinforcement learning (RL) is a powerful tool for developing intelligent agents, and the use of neural networks makes RL techniques more scalable to challenging real-world applications, from task-oriented dialogue systems ...
Song, Haobei (University of Waterloo, 2019-09-12) The exploration/exploitation dilemma is a fundamental but often computationally intractable problem in reinforcement learning. The dilemma also impacts data efficiency which can be pivotal when the interactions between the ...
Sucholutsky, Ilia (University of Waterloo, 2021-06-15) The tremendous recent growth in the fields of artificial intelligence and machine learning has largely been tied to the availability of big data and massive amounts of compute. The increasingly popular approach of training ...