An Efficient Self-Refinement and Reconstruction Network for Image Denoising

Jinqiang Xue, Qin Wu
DOI: 10.4018/IJITSA.321456

Abstract

Recent works tend to design effective but deep and complex denoising networks, which usually ignore the industrial requirement of efficiency. In this paper, an effective and efficient self-refinement and reconstruction network (SRRNet) is proposed for image denoising. It is based on the encoder-decoder architecture, and three improvements are introduced to solve the problem. Specifically, four novel residual connections of different types are proposed as building blocks to maintain original contextual details. A high-resolution reconstruction module is introduced to connect cross-level encoders and the corresponding decoders, so as to boost information flow and produce realistic clear images. Multiscale dual attention is used to suppress noise and enhance beneficial dependencies. SRRNet achieves PSNRs of 39.83 dB and 39.96 dB on SIDD and DND, respectively, with higher accuracy and lower complexity than comparable works. Extensive experiments on real-world image denoising and Gaussian noise removal show that SRRNet achieves a better balance between performance and temporal cost.
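The PSNR figures quoted for SIDD and DND follow the standard peak signal-to-noise ratio definition, which measures reconstruction quality in decibels from the mean squared error between the clean and denoised images. A minimal sketch (function name and peak value are illustrative, not from the paper):

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no error
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher values indicate a denoised output closer to the ground-truth clean image; a 0.1 dB gap on SIDD or DND is already considered meaningful in this literature.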

Introduction

The image denoising task aims to remove noise from a given degraded image and restore a clear image close to the real scene. As a dense prediction task with pixel-wise output and infinitely many plausible outcomes for the complex noise found in real scenes, image denoising is inherently challenging. With the successful development of convolutional neural networks (CNNs) and deep learning, recent outstanding approaches employ CNNs to adaptively capture the essential correlation between noisy and clean images from large-scale datasets and apply the learned priors to reconstruct clear images from noisy inputs.

In order to expand the receptive field and better extract contextual details, many works (Chang et al., 2020; Guo et al., 2019) designed U-shaped encoder–decoder architectures (Ronneberger et al., 2015; Isola et al., 2017) to hierarchically extract deep feature maps and reconstruct a clear image from coarse to fine. Other works (Zamir et al., 2020a; Zamir et al., 2020b; Anwar & Barnes, 2019) preferred to maintain high-resolution details and process feature maps at the original resolution, rather than using downsampling to expand the receptive field. Recently, a design that stacks several subnetworks into a multistage network (Zamir et al., 2021) was proposed to progressively restore the clear image stage by stage.
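The U-shaped pattern described above can be sketched in a few lines: the encoder repeatedly downsamples to enlarge the receptive field, the decoder upsamples back, and skip connections reinject the encoder feature at each scale so contextual detail is not lost. This is only the generic scheme, not SRRNet's actual blocks; the pooling/upsampling choices below are illustrative stand-ins for learned convolutions.

```python
import numpy as np

def downsample(x):
    """2x2 average pooling: halves spatial resolution (encoder step)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour 2x upsampling (decoder step)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_like_pass(x, levels=2):
    """Hierarchical encode/decode with skip connections: the feature saved
    before each downsampling is added back at the matching decoder scale."""
    skips = []
    for _ in range(levels):
        skips.append(x)                    # remember feature at this scale
        x = downsample(x)
    for _ in range(levels):
        x = upsample(x) + skips.pop()      # skip connection restores detail
    return x
```

Without the `+ skips.pop()` term, the decoder would have to recover fine detail from the coarsest feature alone, which is exactly the information loss the skip connections avoid.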

On the one hand, the essential properties of image-denoising tasks have been explored and specific denoisers designed (Cheng et al., 2021). On the other hand, thanks to the success of self-attention in natural language processing (NLP), the convolution block has been replaced with the shifted-windows (Swin) Transformer block (Wang et al., 2021; Liu et al., 2021) to capture long-range dependencies and construct generalized denoisers.
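The long-range dependency mechanism these Transformer-based denoisers rely on is scaled dot-product self-attention: every position mixes information from every other position, weighted by similarity. The sketch below shows only this core operation with identity projections; the actual Swin block additionally uses learned query/key/value projections and windowed, shifted attention.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of feature vectors.
    x: (n, d) array of n tokens; projections are omitted for brevity."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ x                              # mix values globally
```

Because every output token attends to all inputs, the cost grows quadratically with the number of tokens, which is why full-image attention is expensive and windowed variants like Swin exist.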

However, encoder–decoder-based methods are efficient but comparatively inaccurate, while the other methods have proved effective but very time-consuming. With the development of industrial cameras and mobile phones, the demand for recovering clear images at little temporal cost is growing rapidly, so balancing performance and temporal cost urgently needs to be addressed. The motivation and objective of this study were therefore to improve the traditional encoder–decoder architecture and explore effective and efficient modules that make up for its deficiencies in accuracy, achieving a balance between performance and temporal requirements. This study is expected to encourage further research into effective and efficient denoising algorithms that consider deployment in applied products.

To solve this problem, this study reinforces the interaction of information flow on the basis of the traditional encoder-decoder structure. Specifically, cross-level encoders progressively extract self-refined features from coarse to fine, and the corresponding decoders, equipped with high-resolution reconstruction modules, restore clear images hierarchically without losing the original characteristics. Multiscale dual attention blocks then discriminate noise from signal at deep levels without destroying the structure.

As illustrated in Figure 1, the proposed self-refinement and reconstruction network (SRRNet) achieves excellent denoising accuracy at little temporal cost. The primary contributions of this paper are as follows:

  • A fast encoder–decoder-based self-refinement and reconstruction network (SRRNet) is proposed for image denoising, balancing performance and temporal cost.

  • A contextual self-refinement block (CSRB) is designed as the building block, which boosts information exchange and enables self-refinement of contextual details.

  • A high-resolution reconstruction module (HRRM) is explored to reconstruct clear and high-resolution features under the guidance of a shallow information flow.

  • A multiscale dual attention block (MDAB) is introduced to capture cross-scale information and concentrate on useful local details in different dimensions.

Extensive comparative and ablation experiments confirm the efficiency and effectiveness of SRRNet in both real-world image denoising and synthetic Gaussian denoising (Zhou et al., 2020).
