KimiaNet: Training a Deep Network for Histopathology using High-Cellularity

Date

2020-09-11

Authors

Riasatian, Abtin

Advisor

Tizhoosh, Hamid

Publisher

University of Waterloo

Abstract

With the recent progress in deep learning, a common approach to representing images is to extract deep features. A primitive way to do this is to use off-the-shelf models; however, these features can be improved by fine-tuning or even training a network from scratch on domain-specific images. This desirable step is hindered by the lack of annotated or labeled images in the field of histopathology. In this thesis, a new network, namely KimiaNet, is proposed that uses an existing dense topology tailored to generate informative and discriminative deep features for representing histopathology images. The model is built on the existing DenseNet-121 architecture but trained with more than 240,000 image patches of 1000 × 1000 pixels acquired at 20× magnification. Because the high cost of histopathology image annotation makes manual labeling impractical at large scale, a high-cellularity mosaic approach is suggested as a weak or soft labeling method. The patches used to train KimiaNet are extracted from 7,126 whole slide images of formalin-fixed paraffin-embedded (FFPE) biopsy samples, spanning 30 cancer sub-types and publicly available through The Cancer Genome Atlas (TCGA) repository. The quality of the features generated by KimiaNet is tested via two types of image search: (i) given a query slide, searching among all slides to find those with a tissue type similar to the query's, and (ii) searching among slides within the query slide's tumor type to find those with the same cancer sub-type as the query slide. Compared to the pre-trained DenseNet-121 and its fine-tuned versions, KimiaNet achieved predominantly the best results in both search modes. To gain an intuition of how effective training from scratch is for the expressiveness of the deep features, the deep features of randomly selected patches from each cancer sub-type are extracted using both KimiaNet and the pre-trained DenseNet-121 and visualized after reducing their dimensionality with t-distributed Stochastic Neighbor Embedding (tSNE). The visualization illustrates that with KimiaNet the instances of each class can easily be distinguished from the others, while with the pre-trained DenseNet the instances of almost all classes are mixed together. This comparison is further evidence of how discriminative training with domain-specific images has made the features. In addition, four simpler networks, made up of repetitions of convolutional, batch-normalization, and Rectified Linear Unit (ReLU) layers (CBR networks), are implemented and compared against KimiaNet to check whether the network design could be simplified further. The experiments demonstrate that KimiaNet features are by far better than those of the CBR networks, which validates DenseNet-121 as a good candidate for KimiaNet's architecture.
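The patch-level feature extraction described above can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the thesis's released model: it loads a stock torchvision DenseNet-121 with ImageNet weights as a stand-in for KimiaNet's trained weights (which are distributed separately), and the patch file name is a hypothetical placeholder. The convolutional trunk followed by global average pooling yields the 1024-dimensional deep feature that an image-search pipeline would index.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms
from torchvision.models import densenet121, DenseNet121_Weights

# ImageNet-pretrained DenseNet-121 as a stand-in for KimiaNet's weights.
model = densenet121(weights=DenseNet121_Weights.IMAGENET1K_V1).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A 1000 x 1000 patch acquired at 20x magnification (hypothetical file name).
patch = Image.open("patch_1000px_20x.png").convert("RGB")
x = preprocess(patch).unsqueeze(0)          # shape: (1, 3, 1000, 1000)

with torch.no_grad():
    # The trunk is fully convolutional, so it accepts 1000 x 1000 inputs;
    # global average pooling then gives one 1024-dimensional vector per patch.
    fmap = F.relu(model.features(x))
    feature = F.adaptive_avg_pool2d(fmap, 1).flatten(1)   # shape: (1, 1024)
```

Searching then reduces to nearest-neighbor comparison of these vectors (e.g., by Euclidean or cosine distance) between a query patch and the indexed slides.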
