A New Intra Fine-Tuning Method Between Histopathological Datasets in Deep Learning


Nassima Dif, Zakaria Elberrichi
DOI: 10.4018/IJSSMET.2020040102

Abstract

This article presents a new fine-tuning framework for histopathological image analysis. In contrast to the most common solutions, where ImageNet models are reused for image classification, this research sets out to perform intra-domain fine-tuning between models trained on histopathological images. The purpose is to take advantage of the hypothesis that transfer learning is more efficient between non-distant datasets and to examine this suggestion for the first time on histopathological images. The Inception-v3 convolutional neural network architecture, six histopathological source datasets, and four target datasets were used as base modules in this article. The obtained results reveal the advantage of pre-trained histopathological models over the ImageNet model. In particular, the ICIAR 2018-A dataset provided a high-quality source model for the various target tasks due to its generalization capacity. Finally, a comparative study with other literature results shows that the proposed method achieved the best results on both the CRC (95.28%) and KIMIA-PATH (98.18%) datasets.

1. Introduction

The purpose of computer vision is to enable computers to analyze images based on an appropriate machine learning algorithm. In deep learning, convolutional neural networks (CNNs) are inspired by the cat's visual cortex (Humphrey et al., 1985), where the developed hierarchical model is composed of simple cells (S) and complex cells (C). These cells are activated when detecting simple and complex forms, respectively. Early studies extended this hierarchical model to both unsupervised (Fukushima & Miyake, 1982) and supervised (LeCun et al., 1998) classification. In 2012, the AlexNet architecture achieved a surprisingly low error rate on the ImageNet benchmark by training on graphics processing units (GPUs) (Krizhevsky et al., 2012). This success provided important insights into the efficiency of CNNs in computer vision. Hence, CNNs became a common trend in computer vision, and various optimized architectures have been proposed to reduce overfitting problems (Simonyan et al., 2015). The performance of these architectures depends on large training sets to adjust millions of parameters. Despite their success, CNNs are prone to overfitting on small volumes of data, such as medical datasets.

Moreover, training from scratch is time-consuming and computationally demanding. Transfer learning is among the most widely used techniques to address these issues. Unlike traditional machine learning algorithms, the hierarchical nature of deep neural networks (DNNs) favors the exploitation of transfer learning across various domains. This strategy is categorized into computational intelligence-based transfer learning, neural network-based transfer learning, and Bayes-based transfer learning (Lu et al., 2015). Transferring knowledge between CNNs belongs to the deep transfer learning category, where the network's layers are transferred from a source task to a target task. There are then three ways to use the generated model: (a) freezing all layers and reusing the CNN model as a feature extractor (Khan et al., 2019), (b) freezing a subset of the first layers and fine-tuning the last layers (Tajbakhsh et al., 2016), and (c) fine-tuning all layers. The study presented here specifically explores the second strategy, where the first layers are frozen because low-level, general features such as Gabor filters or color blobs are similar across heterogeneous tasks (Ng et al., 2015). In deeper layers, on the other hand, features are specific to the source task, so fine-tuning is necessary to adjust them to the new target task.
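
To make strategy (b) concrete, the following is a minimal Keras/TensorFlow sketch of freezing the first layers of an ImageNet-pre-trained Inception-v3 (the architecture used in this work) and fine-tuning only the last layers on a new task. The cut-off index, learning rate, and class count are illustrative placeholders, not the exact settings of this study.

```python
# Minimal sketch of strategy (b): freeze the first layers of an ImageNet
# pre-trained Inception-v3 and fine-tune only the last layers on a new task.
# The cut-off index, learning rate, and class count are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

NUM_CLASSES = 8      # e.g., number of tissue classes in the target dataset (placeholder)
FROZEN_LAYERS = 200  # how many early layers to keep frozen (placeholder)

# Load Inception-v3 pre-trained on ImageNet, without its 1000-way classifier.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))

# Freeze the first layers (general features such as edges and color blobs)
# and leave the deeper, task-specific layers trainable.
for layer in base.layers[:FROZEN_LAYERS]:
    layer.trainable = False
for layer in base.layers[FROZEN_LAYERS:]:
    layer.trainable = True

# Attach a new classification head for the target task.
x = layers.GlobalAveragePooling2D()(base.output)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs=base.input, outputs=outputs)

model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # target histopathological data
```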

The purpose of this paper is to exploit transfer learning and fine-tuning for histopathological image analysis, motivated by the small volumes of data available compared to the ImageNet benchmark. In contrast to previous investigations, which transfer knowledge from ImageNet models to histopathological classification tasks, this study provides the first extensive examination of transferability between two histopathological tasks. Overall, this research sets out to explore the influence of transfer learning between histopathological tasks, to test the hypothesis that transfer learning is more efficient between non-distant datasets (Yosinski et al., 2016), and to compare transfer from ImageNet to a histopathological task with transfer between two histopathological tasks.
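
As a rough illustration of this intra-domain idea, the sketch below starts from a model already fine-tuned on a source histopathological dataset and adapts it to a target dataset, instead of starting from ImageNet weights. The file name, layer index, and counts are hypothetical placeholders; the paper does not release such artifacts.

```python
# Sketch of intra-domain fine-tuning: start from an Inception-v3 model already
# fine-tuned on a *source* histopathological dataset (e.g., ICIAR 2018-A) and
# adapt it to a *target* histopathological dataset, instead of starting from
# ImageNet weights. The file name, layer index, and counts below are
# hypothetical placeholders, not artifacts released with the paper.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

TARGET_CLASSES = 4   # number of classes in the target dataset (placeholder)
FROZEN_LAYERS = 200  # how many early layers to keep frozen (placeholder)

# Load the model saved after the first (source) fine-tuning stage.
source_model = tf.keras.models.load_model("inceptionv3_source_histo.h5")

# Reuse everything except the source-specific softmax head. This assumes the
# saved model ends with a pooling layer followed by a Dense classifier, so
# that layers[-2] outputs the pooled feature vector.
penultimate = source_model.layers[-2].output
outputs = layers.Dense(TARGET_CLASSES, activation="softmax",
                       name="target_head")(penultimate)
target_model = models.Model(inputs=source_model.input, outputs=outputs)

# Freeze the early, general layers and fine-tune only the deeper ones,
# exactly as in the ImageNet-to-histopathology case.
for layer in target_model.layers[:FROZEN_LAYERS]:
    layer.trainable = False

target_model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
                     loss="categorical_crossentropy", metrics=["accuracy"])
# target_model.fit(target_train_ds, validation_data=target_val_ds, epochs=...)
```

In the comparative experiments, the same freezing and fine-tuning scheme is applied; only the origin of the source weights (ImageNet versus a histopathological dataset) changes.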

The remainder of the paper proceeds as follows: Section 2 presents the related works, Section 3 describes the exploited datasets, Section 4 explains the proposed approach, Section 5 presents and discusses the obtained results, and the last section concludes this work.
