Affiliations 

  • 1 Faculty of Science and Engineering, School of Computer Science, University of Nottingham, Jalan Broga, 43500 Semenyih, Selangor Darul Ehsan, Malaysia. hcxwl1@nottingham.edu.my
  • 2 Faculty of Science and Engineering, School of Computer Science, University of Nottingham, Jalan Broga, 43500 Semenyih, Selangor Darul Ehsan, Malaysia
Med Biol Eng Comput, 2022 Mar;60(3):633-642.
PMID: 35083634 DOI: 10.1007/s11517-021-02487-8

Abstract

Diabetic retinopathy (DR) is a chronic eye condition whose prevalence is growing rapidly alongside that of diabetes. Shortages of ophthalmologists, healthcare resources, and facilities mean that many patients cannot be offered appropriate eye screening services. As a result, deep learning (DL) has the potential to serve as a powerful automated diagnostic tool in ophthalmology, particularly for the early detection of DR, where it compares favourably with traditional detection techniques. Although widely adopted, DL models are black boxes: they offer no explanation of how the model learns representations or why it makes a particular prediction. This opacity makes it difficult for intended end users such as ophthalmologists to understand how the models function, which hinders acceptance of the models for clinical use. Recently, several studies have been published on the interpretability of DL methods applied to DR-related tasks such as DR classification and segmentation. The goal of this paper is to provide a detailed overview of the interpretability strategies used in DR-related tasks. The paper also presents the authors' insights and future directions in the field of DR to help the research community overcome open research problems.
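For orientation, the sketch below illustrates one post-hoc interpretability strategy commonly applied to DR classifiers of the kind this review surveys: Grad-CAM, which highlights the image regions most responsible for a prediction. It is an illustrative assumption, not the method of any specific study in the review; the ResNet-18 backbone, the hypothetical 5-grade DR classification head, and the random tensor standing in for a fundus image are all placeholders.

```python
# Minimal Grad-CAM sketch for a hypothetical DR grading model (PyTorch).
# Backbone, 5-class head, and input are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)                          # stand-in backbone
model.fc = torch.nn.Linear(model.fc.in_features, 5)     # hypothetical 5 DR grades
model.eval()

feats, grads = {}, {}
layer = model.layer4[-1]                                # last conv block

def fwd_hook(module, inputs, output):
    feats["a"] = output.detach()                        # feature maps

def bwd_hook(module, grad_input, grad_output):
    grads["g"] = grad_output[0].detach()                # gradients w.r.t. maps

layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                         # placeholder fundus image
logits = model(x)
logits[0, logits.argmax()].backward()                   # gradient of top prediction

# Grad-CAM: weight each feature map by its average gradient, sum, then ReLU.
w = grads["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
print(cam.shape)  # torch.Size([1, 1, 224, 224]); overlayable on the input image
```

In a clinical setting, the resulting heatmap would be overlaid on the fundus photograph so an ophthalmologist can check whether the model attended to clinically meaningful lesions (e.g., microaneurysms or haemorrhages) rather than artefacts.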
