Automated Macular Disease Classification from OCT Images using Feature Attention Fusion Network
© 2024 by IJETT Journal
Volume-72 Issue-3
Year of Publication : 2024
Author : V. Latha, Sreeni K G
DOI : 10.14445/22315381/IJETT-V72I3P134
How to Cite?
V. Latha, Sreeni K G, "Automated Macular Disease Classification from OCT Images using Feature Attention Fusion Network," International Journal of Engineering Trends and Technology, vol. 72, no. 3, pp. 391-401, 2024. Crossref, https://doi.org/10.14445/22315381/IJETT-V72I3P134
Abstract
Optical Coherence Tomography (OCT) is a promising and essential tool for retinopathy diagnosis. Ophthalmologists use OCT images to identify, treat, and track macular diseases. Manually analysing and interpreting the enormous volume of OCT images to diagnose these illnesses takes considerable time and effort. The Convolutional Neural Network (CNN), a powerful deep learning method, has proven exceptionally accurate in image classification, making it well suited for computer-assisted diagnosis. This paper introduces a CNN-based feature extraction and attention fusion network, the FAF-Net, to classify common macular diseases. This network enhances the flexibility and accuracy of conventional CNN classification systems. The attention module strengthens pre-trained CNN models by emphasising significant features related to anatomical defects in the retina while reducing the importance of irrelevant regions. Combining deep pre-trained models with attention mechanisms further improves classification accuracy. This study used the pre-trained VGG16 and ResNet50 models to diagnose macular disorders from OCT data. The proposed approach was assessed on the UCSD dataset, a publicly available OCT imaging dataset, and achieved a classification accuracy of 98.40%. Additionally, Gradient-weighted Class Activation Mapping (Grad-CAM) was employed as a visualisation technique to assess the efficacy of the FAF-Net. These results demonstrate that the proposed FAF-Net approach substantially improves classifier performance. The method has a promising future in the medical field and provides a robust conceptual framework for diagnosing macular diseases.
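To make the feature-attention fusion idea concrete, the sketch below shows one plausible way to combine attention-weighted VGG16 and ResNet50 features in Keras. It is a minimal illustration under stated assumptions, not the authors' FAF-Net: the SE-style attention block, the four-class output (matching the UCSD/Kermany OCT categories), and all function names and hyperparameters here are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, ResNet50

def se_attention(x, ratio=16):
    """Squeeze-and-excitation style channel attention over a feature map (illustrative)."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)                      # squeeze: per-channel statistics
    s = layers.Dense(channels // ratio, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)         # excitation: channel weights in [0, 1]
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])                            # re-weight channels of the feature map

def build_fusion_classifier(input_shape=(224, 224, 3), num_classes=4):
    inputs = layers.Input(shape=input_shape)
    # Two frozen ImageNet backbones act as feature extractors (per-backbone preprocessing omitted).
    vgg = VGG16(include_top=False, weights="imagenet", input_tensor=inputs)
    res = ResNet50(include_top=False, weights="imagenet", input_tensor=inputs)
    vgg.trainable = False
    res.trainable = False
    # Attention re-weights each backbone's feature map before the features are fused.
    f_vgg = layers.GlobalAveragePooling2D()(se_attention(vgg.output))
    f_res = layers.GlobalAveragePooling2D()(se_attention(res.output))
    fused = layers.Concatenate()([f_vgg, f_res])                # feature fusion by concatenation
    outputs = layers.Dense(num_classes, activation="softmax")(fused)
    return Model(inputs, outputs)

model = build_fusion_classifier()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

Concatenating pooled, attention-weighted features from both backbones is one common fusion choice; the paper's actual fusion strategy may differ in where and how the features are merged.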
Keywords
CNN, Deep learning, Feature extraction, Image classification.
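The abstract also reports using Grad-CAM to visualise where the classifier looks. The snippet below is a generic Grad-CAM sketch against the Keras model sketched above, not the authors' implementation; the target layer name "conv5_block3_out" (the final ResNet50 convolutional block in Keras) is an illustrative choice.

```python
import tensorflow as tf

def grad_cam(model, image, conv_layer_name="conv5_block3_out"):
    """Return a normalised heatmap of the regions that drove the predicted class."""
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(conv_layer_name).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])          # add a batch dimension
        class_index = int(tf.argmax(preds[0]))                  # explain the top prediction
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)                      # d(score) / d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))                # channel importance weights
    cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)      # weighted sum of feature maps
    cam = tf.nn.relu(cam)                                       # keep only positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```

The resulting low-resolution map is typically resized to the input dimensions and overlaid on the OCT B-scan to check that the network attends to the pathological region rather than the background.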