1. Introduction
Diabetic retinopathy (DR) is one of the most dangerous consequences of diabetes: the retina is damaged and blindness can result (Wang et al., 2014). The disease causes the blood vessels in the retinal tissue to leak fluid, distorting vision. According to statistics from the United States, the United Kingdom, and Singapore, DR is among the most common blinding conditions, alongside diseases such as cataracts and glaucoma (NCHS, 2019; SNEC, 2019).
Diabetic retinopathy progresses through four stages: mild non-proliferative retinopathy, the earliest stage; moderate non-proliferative retinopathy, in which blood vessels begin to lose their ability to transport blood; severe non-proliferative retinopathy, in which increasing blockage of more blood vessels deprives the retina of blood supply; and finally, proliferative diabetic retinopathy, the most advanced stage (Mrinal, 2015). Because each stage has its own traits and properties, doctors may overlook some of them and reach an inaccurate diagnosis, which motivates the development of an automated DR detection system. With prompt treatment and regular eye monitoring, at least 56% of new cases of this condition could be avoided (Roychowdhury et al., 2014). However, the early stages of this illness show no warning signs, making early detection extremely difficult. Furthermore, even highly experienced practitioners are occasionally unable to visually analyse and stage a patient's fundus from diagnostic photographs (Melinscak et al., 2015), although doctors will almost always agree once lesions are clearly visible. Current diagnostic practice is also inefficient, given the time required and the number of ophthalmologists involved in resolving each patient's case. Such sources of disagreement lead to incorrect diagnoses and a shaky ground truth for the automated solutions intended to aid diagnosis.
Automated DR detection methods soon began to appear. The earliest algorithms were based on traditional computer-vision techniques such as thresholding (Wang et al., 2014). In recent years, however, deep learning systems (Zhiguang et al., 2019) have demonstrated their superiority over other algorithms in classification and object-detection tasks (Gardner et al., 1996). Convolutional neural networks (CNNs) have been applied successfully in a variety of fields, including the detection of diabetic retinopathy.
In the existing system, the model is trained with a binary cross-entropy loss function, the RAdam optimizer, and a cosine annealing learning-rate scheduler. The ResNet50 architecture was used as the baseline model in this paper. ResNet50, DenseNet121, EfficientNetB5, VGG16, and InceptionV3 were evaluated for DR detection, and DenseNet121 and EfficientNetB5 gave the best accuracy and the lowest loss. In this study, a multi-task learning strategy was employed to detect and classify diabetic retinopathy, and the inferences are discussed.
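The two numerical components named above can be illustrated directly. The following is a minimal sketch of the binary cross-entropy loss and the cosine annealing learning-rate schedule; the base rate, minimum rate, and annealing period shown are illustrative values, not the settings used in the paper.

```python
import math

def binary_cross_entropy(p, y, eps=1e-12):
    """BCE loss for a predicted probability p in (0, 1) against label y in {0, 1}."""
    p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

def cosine_annealed_lr(step, total_steps, lr_max=1e-3, lr_min=1e-6):
    """Cosine annealing: decay the learning rate from lr_max to lr_min
    along a half cosine over total_steps (hypothetical hyperparameters)."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * step / total_steps))

# A confident correct prediction incurs a small loss:
loss = binary_cross_entropy(0.99, 1)
# The schedule starts at lr_max and decays smoothly toward lr_min:
lr_start = cosine_annealed_lr(0, 100)
lr_end = cosine_annealed_lr(100, 100)
```

In practice the scheduler and optimizer would come from the deep learning framework itself rather than being hand-rolled; this sketch only makes the underlying formulas concrete.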
Multi-task learning is a subfield of machine learning in which a shared model learns several tasks at the same time. Advantages of such techniques include improved data efficiency, reduced overfitting through shared representations, and faster learning that exploits auxiliary information. However, learning many tasks simultaneously introduces new design and optimization issues, and deciding which tasks should be learned together is a difficult problem in itself.
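The shared-model idea can be sketched in miniature. In the toy example below, one shared encoder feeds two task-specific heads (a binary DR detection head and a five-way severity-grading head), and training would minimise a weighted sum of the per-task losses. Every function, feature, and weight here is illustrative; this is not the architecture used in the paper, where the shared encoder would be a deep CNN such as ResNet50.

```python
import math

def shared_encoder(pixels):
    """Stand-in for a shared backbone: maps an input image to a feature
    vector reused by all task heads (toy features: mean and variance)."""
    mean = sum(pixels) / len(pixels)
    var = sum((x - mean) ** 2 for x in pixels) / len(pixels)
    return [mean, var]

def detection_head(features, w=(0.8, 0.2), b=0.0):
    """Task 1: binary DR present/absent, as a logistic unit over shared features."""
    z = sum(wi * fi for wi, fi in zip(w, features)) + b
    return 1.0 / (1.0 + math.exp(-z))

def grading_head(features, n_classes=5):
    """Task 2: five-way severity grade, as a softmax over toy per-class scores."""
    scores = [(i + 1) * features[0] - features[1] for i in range(n_classes)]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def multitask_loss(p_detect, y_detect, p_grade, y_grade, w1=1.0, w2=1.0):
    """Weighted sum of the per-task losses (BCE for detection,
    cross-entropy for grading); w1 and w2 trade the tasks off."""
    bce = -(y_detect * math.log(p_detect) + (1 - y_detect) * math.log(1.0 - p_detect))
    ce = -math.log(p_grade[y_grade])
    return w1 * bce + w2 * ce

image = [0.1 * i for i in range(16)]           # a toy "image"
f = shared_encoder(image)                       # computed once, shared by both heads
loss = multitask_loss(detection_head(f), 1, grading_head(f), 2)
```

The design point the sketch captures is that the encoder is computed once and both heads branch from it, so gradients from both losses update the shared representation.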