Affiliations 

  • 1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia; Graduate School of Biomedical Engineering, Faculty of Engineering, UNSW Sydney, New South Wales 2052, Australia; Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, 43000 Kajang, Selangor Darul Ehsan, Malaysia
  • 2 Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, 50603 Kuala Lumpur, Malaysia
  • 3 Graduate School of Biomedical Engineering, Faculty of Engineering, UNSW Sydney, New South Wales 2052, Australia
  • 4 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
  • 5 Department of Medicine, Faculty of Medicine, University of Malaya, 50603 Kuala Lumpur, Malaysia; Department of Medical Sciences, Faculty of Healthcare and Medical Sciences, Sunway University, 47500 Bandar Sunway, Malaysia
  • 6 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia. Electronic address: einly_lim@um.edu.my
Comput Methods Programs Biomed, 2020 Nov;196:105596.
PMID: 32580054 DOI: 10.1016/j.cmpb.2020.105596

Abstract

BACKGROUND AND OBJECTIVES: Continuous monitoring of physiological signals such as photoplethysmography (PPG) has attracted increased interest due to advances in wearable sensors. However, PPG recordings are susceptible to various artifacts, which reduce the reliability of PPG-derived parameters such as oxygen saturation, heart rate, blood pressure and respiration. This paper proposes a one-dimensional convolutional neural network (1-D-CNN) to classify five-second PPG segments as clean or artifact-affected, avoiding data-dependent pulse segmentation techniques and heavy manual feature engineering.

METHODS: Continuous raw PPG waveforms were blindly partitioned into equal-length (5 s) segments without leveraging any pulse location information, and each segment was normalized using Z-score normalization. A 1-D-CNN was designed to automatically learn the intrinsic features of the PPG waveform and perform the required classification. Training hyperparameters (initial learning rate and gradient threshold) were varied to investigate their effect on network performance. The proposed network was trained and validated on 30 subjects and tested on eight subjects from our local dataset. In addition, two independent datasets downloaded from the PhysioNet MIMIC II database were used to evaluate the robustness of the proposed network.
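A minimal sketch of the blind segmentation and per-segment Z-score normalization described above is shown below; the sampling rate and the helper name are assumptions made for illustration, as the abstract does not specify them:

```python
import numpy as np

def segment_and_normalize(ppg, fs=100, segment_seconds=5):
    """Blindly split a raw PPG recording into equal-length segments and
    Z-score normalize each segment independently.

    The sampling rate fs is an assumption; the abstract does not state it.
    """
    seg_len = int(fs * segment_seconds)
    n_segments = len(ppg) // seg_len                      # drop the incomplete tail
    segments = np.reshape(ppg[:n_segments * seg_len], (n_segments, seg_len))
    mu = segments.mean(axis=1, keepdims=True)             # per-segment mean
    sigma = segments.std(axis=1, keepdims=True)           # per-segment std
    sigma[sigma == 0] = 1.0                               # guard against flat segments
    return (segments - mu) / sigma                        # Z-score: (x - mean) / std
```

Each normalized segment is then fed to the network as a single-channel input, so no pulse detection or onset alignment is required.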

RESULTS: A 13-layer 1-D-CNN model was designed. On our local dataset, the proposed network achieved a testing accuracy of 94.9%. The two independent datasets were also classified with satisfactory accuracies of 93.8% and 86.7%, respectively. Our model achieved performance comparable with most reported works and showed potential for good generalization, as it was evaluated on multiple cohorts (overall accuracy of 94.5%).
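The abstract does not describe the composition of the 13-layer model, so the sketch below is purely illustrative: an assumed stack of 1-D convolution, batch-normalization, ReLU and pooling layers ending in a fully connected binary classifier. Layer-counting conventions vary, so the exact configuration shown should not be read as the authors' architecture.

```python
import torch
import torch.nn as nn

class PPGArtifactCNN(nn.Module):
    """Illustrative 1-D-CNN for clean vs. artifact-affected PPG segments.

    The layer configuration is an assumption; only the binary output and the
    5-second, single-channel input are taken from the abstract.
    """
    def __init__(self, segment_length=500):               # 5 s at an assumed 100 Hz
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (segment_length // 4), 2),      # clean vs. artifact
        )

    def forward(self, x):                                  # x: (batch, 1, segment_length)
        return self.classifier(self.features(x))

# Example forward pass on a batch of normalized 5-s segments
model = PPGArtifactCNN()
logits = model(torch.randn(8, 1, 500))                     # -> shape (8, 2)
```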

CONCLUSION: This paper demonstrated the feasibility and effectiveness of applying blind signal processing and deep learning techniques to PPG motion artifact detection, avoiding manual feature thresholding while achieving high generalization ability.

* Title and MeSH Headings from MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine.