Affiliations 

  • 1 Center for Research & Innovation, CoRI, Universiti Kuala Lumpur, Kuala Lumpur, Malaysia
  • 2 Computer Sciences Program, Turabah University College, Taif University, Taif, Saudi Arabia
  • 3 Research Chair of Pervasive and Mobile Computing, Information Systems Department, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
  • 4 Faculty of Pharmacy, Gomal University, Dera Ismail Khan, Pakistan
Front Public Health. 2022;10:855254.
PMID: 35321193 DOI: 10.3389/fpubh.2022.855254

Abstract

Deep neural networks have made tremendous strides in the classification of facial images in recent years. However, efficient face image classification with deep convolutional neural networks remains challenging because of feature complexity, large image/frame sizes, and severe inhomogeneity in the image data. As data volumes continue to grow, the effective classification of face images in a mobile context using advanced deep learning techniques is therefore becoming increasingly important. Several Deep Learning (DL) approaches for face image recognition have been proposed in the recent past, many of them based on convolutional neural networks (CNNs). To address the problem of face mask recognition in facial images, we propose a Depthwise Separable Convolution Neural Network based on MobileNet (DWS-based MobileNet). The proposed network uses depthwise separable convolution layers in place of standard 2D convolution layers. By adopting this lightweight design, the DWS-based MobileNet decreases the number of trainable parameters while enhancing learning performance, and it performs exceptionally well even on limited datasets. When tested on benchmark datasets, our technique outperformed the existing state of the art. Compared with full-convolution MobileNet and baseline methods, the results of this study show that adopting depthwise separable convolution-based MobileNet significantly improves performance (accuracy = 93.14, precision = 92, recall = 92, F-score = 92).
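The parameter savings of replacing a standard 2D convolution with a depthwise separable convolution (the substitution the abstract describes) can be sketched with a simple count. The layer shapes below (32 input channels, 64 output channels, 3x3 kernels) are illustrative assumptions, not the exact configuration of the DWS-based MobileNet:

```python
def standard_conv_params(c_in, c_out, k):
    """Trainable weights of a k x k standard 2D convolution (no bias)."""
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k):
    """Depthwise separable convolution: a k x k depthwise filter per
    input channel, followed by a 1 x 1 pointwise convolution that
    mixes channels (the MobileNet building block)."""
    depthwise = k * k * c_in   # one k x k spatial filter per input channel
    pointwise = c_in * c_out   # 1 x 1 conv combining channels
    return depthwise + pointwise

c_in, c_out, k = 32, 64, 3
std = standard_conv_params(c_in, c_out, k)   # 3*3*32*64 = 18432
sep = separable_conv_params(c_in, c_out, k)  # 3*3*32 + 32*64 = 2336
print(std, sep, sep / std)  # ratio approx. 1/c_out + 1/k**2
```

For this layer the separable variant needs roughly an eighth of the weights, which is why swapping convolutions this way shrinks the trainable-parameter count in mobile-oriented networks.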

* Title and MeSH Headings from MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine.