Research of Self-Attention in Image Segmentation


Fude Cao, Chunguang Zheng, Limin Huang, Aihua Wang, Jiong Zhang, Feng Zhou, Haoxue Ju, Haitao Guo, Yuxia Du
Copyright: © 2022 | Pages: 12
DOI: 10.4018/JITR.298619

Abstract

Although the traditional convolutional neural network has been applied to image segmentation successfully, it has a notable limitation: long-range context information in the image is not well captured. Following the success of the self-attention mechanism in the field of natural language processing (NLP), researchers have tried to introduce the attention mechanism into computer vision, and it turns out that self-attention can indeed alleviate this long-range dependency problem. This paper summarizes the application of self-attention to image segmentation over the past two years and considers whether the self-attention module can replace the convolution operation in this field in the future.

I. Introduction

In the field of computer vision, image segmentation is a fundamental research direction. In general, image segmentation divides the pixels of an image into different parts (assigns them different labels) according to certain rules. Common variants include superpixel segmentation, semantic segmentation, instance segmentation, and panoptic segmentation (Ren and Malik, 2003). This paper mainly concerns semantic segmentation and instance segmentation. The former assigns a category label to each pixel in the image (for example, cars are blue, buildings are brown, etc.), so even different people are represented by the same color, without distinguishing individuals within the same class. The latter, instance segmentation, is similar to object detection, but its output is a mask instead of a bounding box. Instance segmentation does not need to label every pixel; it only needs to find the contour of each object of interest and distinguish individuals, as illustrated by the sketch below.
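As a minimal illustration of these two output formats (not taken from the paper), the sketch below contrasts a semantic label map, which stores one class ID per pixel, with instance masks, which store one binary mask per detected object; the array sizes and class IDs are arbitrary assumptions.

```python
import numpy as np

# Hypothetical 4x4 image with two classes: 0 = background, 1 = person.
# Semantic segmentation: a single label map with one class ID per pixel.
# Two adjacent people collapse into one undifferentiated "person" region.
semantic_map = np.array([
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
])

# Instance segmentation: one binary mask per object, so the two people
# stay separate even though they belong to the same class.
instance_masks = np.stack([
    np.array([[0, 1, 0, 0],
              [0, 1, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]]),  # person #1
    np.array([[0, 0, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0]]),  # person #2
])

print(semantic_map.shape)    # (4, 4)    -> one label per pixel
print(instance_masks.shape)  # (2, 4, 4) -> one mask per instance
```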

The starting point of image segmentation with deep learning is FCN (Shelhamer et al., 2017). The idea is to modify a classification convolutional neural network (such as ResNet or VGG) into a fully convolutional network: the input image passes through a series of convolution and pooling operations that produce a coarse feature map, which is then upsampled to obtain the final per-pixel prediction. Because this network consists entirely of convolutional layers, it is called a fully convolutional network. However, a network consisting entirely of convolutional layers has a serious problem: even a large convolution kernel covers only a small receptive field in practice, whereas segmentation tasks require a very large receptive field. To effectively enlarge the receptive field, dilated convolutional networks (DeepLab v1 (Chen et al., 2017), v2 (Chen et al., 2018), v3 (Chen et al., 2017), and v3+ (Chen et al., 2018)) and multi-scale pooling networks such as PSPNet (Zhao et al., 2017) were proposed. However, neither of these approaches truly establishes a connection between every pair of pixels in the image, especially between long-distance pixels.
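As a rough sketch (not the authors' code), the snippet below shows the fully convolutional pattern in PyTorch: a small downsampling backbone, a 1x1 classifier convolution in place of fully connected layers, and bilinear upsampling back to the input resolution. The channel counts and the dilated convolution used to enlarge the receptive field without further downsampling are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    """Minimal FCN-style model: downsample, classify per location, upsample."""
    def __init__(self, num_classes=21):
        super().__init__()
        # Small stand-in for a classification backbone (overall stride 4).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            # Dilated convolution: same 3x3 kernel, larger receptive field,
            # no further loss of resolution (the DeepLab idea).
            nn.Conv2d(128, 128, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
        )
        # A 1x1 convolution replaces the fully connected classifier.
        self.classifier = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feat = self.backbone(x)         # coarse feature map
        logits = self.classifier(feat)  # per-location class scores
        # Upsample back to the input resolution for per-pixel prediction.
        return F.interpolate(logits, size=(h, w), mode="bilinear",
                             align_corners=False)

x = torch.randn(1, 3, 64, 64)
print(TinyFCN()(x).shape)  # torch.Size([1, 21, 64, 64])
```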

At this time, the attention mechanism had already achieved very good results in NLP, so researchers thought of introducing it into computer vision. The attention mechanism was originally designed to imitate the internal process of human observation, a mechanism that aligns internal experience with external sensation and thereby increases the observation precision of certain regions. The basic idea is to let the system learn to focus only on important information while ignoring irrelevant information, so that it can quickly extract the important features of sparse data (Hu, 2020). It is therefore widely used in NLP tasks, especially machine translation. The self-attention mechanism is an improvement of the attention mechanism that reduces the dependence on external information and effectively captures the internal correlations of features. Although self-attention was first proposed elsewhere, it became widely used in machine translation through the paper “Attention Is All You Need”, whose architecture built on self-attention is known as the Transformer (Vaswani et al., 2017). In computer vision, self-attention performs autonomous learning over feature maps and automatically assigns weights (Mnih et al., 2014). Because context information is critical in image segmentation, self-attention provides a useful and effective way to model context, especially the context of distant pixels, as sketched below.
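To make the idea concrete, here is a minimal sketch (not from the paper) of self-attention applied to a convolutional feature map, in the spirit of a non-local or position-attention block: query, key, and value are 1x1 convolutions, and every pixel aggregates information from every other pixel through a softmax-normalized affinity matrix. The channel-reduction factor, the learnable residual scale, and the tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureMapSelfAttention(nn.Module):
    """Self-attention over a (B, C, H, W) feature map: every position
    attends to every other position, capturing long-range context."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        inner = channels // reduction
        self.query = nn.Conv2d(channels, inner, kernel_size=1)
        self.key   = nn.Conv2d(channels, inner, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Learnable scale so the block can start as an identity mapping.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, N, C')
        k = self.key(x).flatten(2)                    # (B, C', N)
        v = self.value(x).flatten(2)                  # (B, C, N)
        # Affinity between all pairs of pixels, normalized per row.
        attn = F.softmax(torch.bmm(q, k), dim=-1)     # (B, N, N)
        out = torch.bmm(v, attn.transpose(1, 2))      # (B, C, N)
        out = out.view(b, c, h, w)
        return self.gamma * out + x                   # residual connection

feat = torch.randn(2, 64, 32, 32)
print(FeatureMapSelfAttention(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```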
