FKUI Scopus Publications 2021, as of 31 August 2021 (582 articles)

Authors: Amalia R., Bustamam A., Yudantha A.R., Victor A.A.
Author ID(s): 57226194312; 36815737800; 55489644900; 57191055282
Title: Diabetic retinopathy detection and captioning based on lesion features using deep learning approach
Year: 2021
Source title: Communications in Mathematical Biology and Neuroscience
Volume: 2021
Article number: 59
Affiliations: Department of Mathematics, Faculty of Mathematics and Natural Sciences, Universitas Indonesia, Depok, 16424, Indonesia; Department of Ophthalmology, Faculty of Medicine, Universitas Indonesia Cipto Mangunkusumo National General Hospital, Jakarta Pusat, 10430, Indonesia
Authors with affiliations: Amalia, R., Department of Mathematics, Faculty of Mathematics and Natural Sciences, Universitas Indonesia, Depok, 16424, Indonesia; Bustamam, A., Department of Mathematics, Faculty of Mathematics and Natural Sciences, Universitas Indonesia, Depok, 16424, Indonesia; Yudantha, A.R., Department of Ophthalmology, Faculty of Medicine, Universitas Indonesia Cipto Mangunkusumo National General Hospital, Jakarta Pusat, 10430, Indonesia; Victor, A.A., Department of Ophthalmology, Faculty of Medicine, Universitas Indonesia Cipto Mangunkusumo National General Hospital, Jakarta Pusat, 10430, Indonesia
Abstract: Diabetic Retinopathy (DR) can lead to vision loss if the patient does not receive effective treatment based on their condition. Early detection is needed to determine an effective treatment for these patients. To assist ophthalmologists, computer-based DR detection methods have been developed; ophthalmologists can use their results as a consideration when diagnosing the DR class. One powerful method is deep learning. The proposed method uses two deep learning architectures, a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN), for DR detection: the CNN detects DR lesion features, and the RNN generates captions based on those lesion features. We used three pre-trained CNN models (AlexNet, VGGNet, and GoogLeNet) and Long Short-Term Memory (LSTM) as the RNN model. In image preprocessing, we applied contrast enhancement using Contrast Limited Adaptive Histogram Equalization (CLAHE) and compared the results with those obtained without CLAHE. We ran the training and testing process with different proportions of data. The experimental results show that the proposed method can detect lesion features and generate captions, with the highest average accuracy of 96.12% for GoogLeNet with LSTM, CLAHE preprocessing, and a 70% training / 30% testing data split. © 2021 the author(s).
Author keywords: Convolutional neural network (CNN); Deep learning; Diabetic retinopathy; Long short-term memory (LSTM)
Publisher: SCIK Publishing Corporation
ISSN: 2052-2541
Document type: Article
Journal quartile: Q4
189
20081
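
The abstract above outlines a concrete pipeline: CLAHE contrast enhancement, a pre-trained CNN used as a lesion-feature encoder, and an LSTM that generates captions from those features. The following is a minimal sketch of that pipeline, not the authors' implementation: it assumes OpenCV and TensorFlow/Keras, uses InceptionV3 as a stand-in for GoogLeNet (Keras does not ship Inception v1), and the vocabulary size, caption length, and embedding width are illustrative placeholders.

# Minimal sketch of the pipeline described in the abstract: CLAHE contrast
# enhancement, a frozen pre-trained CNN as lesion-feature encoder, and an
# LSTM decoder that predicts caption tokens from those features.
# Assumptions (not from the source): OpenCV + TensorFlow/Keras, InceptionV3
# standing in for GoogLeNet, and illustrative hyperparameters.

import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 5000     # assumed caption vocabulary size
MAX_CAPTION_LEN = 20  # assumed maximum caption length
EMBED_DIM = 256       # assumed embedding / LSTM width

def clahe_enhance(bgr_image: np.ndarray) -> np.ndarray:
    """Apply CLAHE to the lightness channel so lesion contrast is boosted
    without distorting colour, a common choice for fundus preprocessing."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

def build_feature_encoder() -> Model:
    """Pre-trained CNN (InceptionV3 as a stand-in for GoogLeNet), frozen and
    used only to extract a global feature vector per fundus image."""
    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, pooling="avg")
    base.trainable = False
    return base  # outputs a 2048-d feature vector per image

def build_caption_decoder(feature_dim: int = 2048) -> Model:
    """LSTM decoder in the common 'merge' captioning layout: the image
    feature and the encoded partial caption are fused before the softmax
    over the next word."""
    img_in = layers.Input(shape=(feature_dim,), name="image_features")
    img_emb = layers.Dense(EMBED_DIM, activation="relu")(img_in)

    cap_in = layers.Input(shape=(MAX_CAPTION_LEN,), name="caption_tokens")
    cap_emb = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(cap_in)
    cap_enc = layers.LSTM(EMBED_DIM)(cap_emb)

    merged = layers.add([img_emb, cap_enc])
    merged = layers.Dense(EMBED_DIM, activation="relu")(merged)
    next_word = layers.Dense(VOCAB_SIZE, activation="softmax")(merged)

    model = Model([img_in, cap_in], next_word)
    model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
    return model

if __name__ == "__main__":
    # Toy end-to-end check on a synthetic image; real use would load a
    # labelled fundus dataset and hold out 30% of it for testing, matching
    # the 70%/30% split that gave the best reported accuracy.
    image = (np.random.rand(299, 299, 3) * 255).astype(np.uint8)
    enhanced = clahe_enhance(image)
    batch = tf.keras.applications.inception_v3.preprocess_input(
        enhanced[np.newaxis].astype(np.float32))
    features = build_feature_encoder().predict(batch)      # (1, 2048)
    decoder = build_caption_decoder()
    dummy_caption = np.zeros((1, MAX_CAPTION_LEN), dtype=np.int32)
    dummy_caption[:, 0] = 1                                 # stand-in <start> token
    probs = decoder.predict([features, dummy_caption])     # (1, VOCAB_SIZE)
    print(probs.shape)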