Affiliations 

  • 1 Department of Telecommunications Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran
  • 2 Faculty of Industrial Engineering, Urmia University of Technology, Urmia, Iran
  • 3 School of Computing, Faculty of Engineering and Computing, Dublin City University, Ireland
  • 4 Department of Electrical Engineering, Islamic Azad University, South Tehran Branch, Tehran, Iran
  • 5 Department of Mechanical and Manufacturing Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 Bangi Selangor, Malaysia
  • 6 Institute of Visual Informatics, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
  • 7 Department of Electronics and Telecommunications, Polytechnic University, Turin, Italy
Biomed Res Int, 2021;2021:5544742.
PMID: 33954175 DOI: 10.1155/2021/5544742

Abstract

The COVID-19 pandemic is a global, national, and local public health concern that has caused significant outbreaks across countries and regions worldwide, affecting both males and females. Automated detection of lung infections and their boundaries from medical images offers great potential to augment healthcare strategies for treating patients and tackling COVID-19 and its impacts. Detecting this disease from lung CT scan images is perhaps one of the fastest ways to diagnose patients. However, detecting infected tissues and segmenting them from CT slices faces numerous challenges, including similar adjacent tissues, vague boundaries, and erratically shaped infections. To overcome these obstacles, we propose a two-route convolutional neural network (CNN) that extracts global and local features for detecting and classifying COVID-19 infection from CT images. Each pixel in the image is classified as normal or infected tissue. To improve classification accuracy, we use two strategies, fuzzy c-means clustering and local directional pattern (LDN) encoding, to produce alternative representations of the input image. This allows us to find more complex patterns in the image. To mitigate overfitting due to the small number of samples, a data augmentation approach is utilized. The results demonstrated that the proposed framework achieved a precision of 96%, a recall of 97%, F score, an average surface distance (ASD) of 2.8 ± 0.3 mm, and a volume overlap error (VOE) of 5.6 ± 1.2%.
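The two alternative input representations mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration, assuming the standard fuzzy c-means update equations and the common Kirsch-mask formulation of the local directional pattern; the exact variants, parameters, and mask ordering used by the authors are not given in the abstract, so this is an assumption, not their implementation.

```python
import numpy as np

def fuzzy_c_means(pixels, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Cluster 1-D pixel intensities with fuzzy c-means.

    Returns (membership matrix U of shape [n_pixels, n_clusters], centers).
    Standard textbook updates; m is the fuzzifier (m > 1).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(pixels, dtype=float).reshape(-1, 1)
    # Random membership init, rows normalised to sum to 1.
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        # Weighted cluster centers.
        centers = (um.T @ x) / um.sum(axis=0).reshape(-1, 1)
        # Distances of each pixel to each center (small epsilon avoids /0).
        d = np.abs(x - centers.T) + 1e-12
        # Membership update: u_ij proportional to d_ij^(-2/(m-1)).
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return u, centers.ravel()

# Eight Kirsch compass masks (E, NE, N, NW, W, SW, S, SE).
KIRSCH = [np.array(mask) for mask in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
)]

def ldp_code(patch3x3, k=3):
    """Local directional pattern code for the centre pixel of a 3x3 patch:
    set one bit for each of the k strongest Kirsch edge responses."""
    p = np.asarray(patch3x3, dtype=float)
    responses = np.array([np.abs((mask * p).sum()) for mask in KIRSCH])
    code = 0
    for i in np.argsort(responses)[-k:]:  # indices of the k largest responses
        code |= 1 << int(i)
    return code
```

Applied per pixel (membership values from `fuzzy_c_means`, an 8-bit code from `ldp_code`), these yield the extra image representations that, per the abstract, are fed to the two-route CNN alongside the raw intensities.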

* Title and MeSH Headings from MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine.