Displaying all 5 publications

  1. Podder KK, Chowdhury MEH, Tahir AM, Mahbub ZB, Khandakar A, Hossain MS, et al.
    Sensors (Basel), 2022 Jan 12;22(2).
    PMID: 35062533 DOI: 10.3390/s22020574
    A real-time Bangla Sign Language interpreter could help more than 200,000 hearing- and speech-impaired people in Bangladesh join the mainstream workforce. Bangla Sign Language (BdSL) recognition and detection is a challenging topic in computer vision and deep learning research because recognition accuracy may vary with skin tone, hand orientation, and background. This research used deep learning models for accurate and reliable recognition of BdSL alphabets and numerals using two well-suited and robust datasets. The dataset prepared in this study comprises the largest image database for BdSL alphabets and numerals, built to reduce inter-class similarity while covering diverse backgrounds and skin tones. The paper compared classification with and without background images to determine the best-performing model for BdSL alphabet and numeral interpretation. The CNN models trained on images with backgrounds proved more effective than those trained without backgrounds. In the segmentation approach, hand detection must become more accurate to boost overall sign-recognition accuracy. It was found that ResNet18 performed best, with 99.99% accuracy, precision, F1 score, and sensitivity, and 100% specificity, outperforming prior work on BdSL alphabet and numeral recognition. The dataset is made publicly available to support and encourage further research on Bangla Sign Language interpretation so that hearing- and speech-impaired individuals can benefit from this research.
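
    For a concrete picture of the approach, the sketch below fine-tunes a pretrained ResNet18 (the paper's best-performing model) for sign-image classification in PyTorch. This is an illustration, not the authors' code: the class count, dataset path, and hyperparameters are placeholders.

      import torch
      import torch.nn as nn
      from torchvision import models, transforms, datasets

      NUM_CLASSES = 59  # placeholder: total BdSL alphabet + numeral classes

      # Start from ImageNet weights and swap in a new classification head.
      model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
      model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

      preprocess = transforms.Compose([
          transforms.Resize((224, 224)),
          transforms.ToTensor(),
          transforms.Normalize(mean=[0.485, 0.456, 0.406],
                               std=[0.229, 0.224, 0.225]),
      ])

      # "bdsl_images/train" is a hypothetical folder with one subfolder per sign.
      train_set = datasets.ImageFolder("bdsl_images/train", transform=preprocess)
      loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

      optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
      criterion = nn.CrossEntropyLoss()

      model.train()
      for images, labels in loader:  # one epoch of fine-tuning
          optimizer.zero_grad()
          loss = criterion(model(images), labels)
          loss.backward()
          optimizer.step()
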
  2. Tahir AM, Chowdhury MEH, Khandakar A, Al-Hamouz S, Abdalla M, Awadallah S, et al.
    Sensors (Basel), 2020 Feb 11;20(4).
    PMID: 32053914 DOI: 10.3390/s20040957
    Gait analysis is a systematic study of human locomotion, which can be utilized in various applications, such as rehabilitation, clinical diagnostics, and sports activities. The limitations of current gait analysis techniques, such as cost, non-portability, long setup time, and post-processing time, have made them unfeasible for individual use. This has led to growing research interest in smart insoles, in which wearable sensors detect vertical ground reaction forces (vGRF) and other gait variables. Smart insoles are flexible, portable, and comfortable for gait analysis, and can monitor plantar pressure frequently through embedded sensors that convert the applied pressure to an electrical signal that can be displayed and analyzed further. Several research teams are still working to improve insole features such as size, sensor sensitivity, durability, and intelligence, so that insoles can monitor and control subjects' gait by detecting various complications and providing recommendations to enhance walking performance. Even though systematic sensor calibration approaches have been followed by different teams to calibrate insole sensors, expensive calibration devices were used, such as universal testing machines or the infrared motion capture cameras equipped in motion analysis labs. This paper provides a systematic design and characterization procedure for three different pressure sensors: force-sensitive resistors (FSRs), ceramic piezoelectric sensors, and flexible piezoelectric sensors that can be used for detecting vGRF with a smart insole. A simple calibration method based on a load cell is presented as an alternative to the expensive calibration techniques. In addition, to evaluate the performance of the different sensors as components of the smart insole, the vGRF acquired from the different insoles was used to compare them. The results showed that the FSR is the most effective of the three sensors for smart insole applications, whereas the piezoelectric sensors can be utilized to detect the start and end of the gait cycle. This study will be useful for any research group replicating the design of a customized smart insole for gait analysis.
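
    The load-cell calibration the paper proposes amounts to fitting a per-sensor curve that maps raw sensor readings to the reference force reported by the load cell. A minimal sketch of such a fit is shown below; the paired readings are illustrative placeholders, not measured data.

      import numpy as np

      # Paired measurements: raw FSR ADC counts vs. reference force (N)
      # reported by the load cell. Values are illustrative placeholders.
      adc_counts  = np.array([120, 310, 520, 700, 850, 960])
      load_cell_n = np.array([5.0, 50.0, 150.0, 300.0, 450.0, 600.0])

      # Fit a low-order polynomial as the per-sensor calibration curve.
      coeffs = np.polyfit(adc_counts, load_cell_n, deg=2)
      calibrate = np.poly1d(coeffs)

      # Convert a new raw reading into an estimated force in newtons.
      print(f"Estimated force: {calibrate(640):.1f} N")
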
  3. Rahman A, Chowdhury MEH, Khandakar A, Tahir AM, Ibtehaz N, Hossain MS, et al.
    Comput Biol Med, 2022 Mar;142:105238.
    PMID: 35077938 DOI: 10.1016/j.compbiomed.2022.105238
    Harnessing the inherent anti-spoofing quality of electroencephalogram (EEG) signals has become a promising field of research in recent years. Although several studies have been conducted, vital challenges remain in deploying EEG-based biometrics that are stable and capable of handling real-world scenarios. One of the key challenges is the large variability of EEG signals recorded on different days or sessions, which significantly impedes the performance of biometric systems. To address this issue, this paper proposes a session-invariant, multimodal Self-organized Operational Neural Network (Self-ONN) based ensemble model combining EEG and keystroke dynamics. The model was tested successfully over a large number of sessions (10 recording days) with many challenging noisy and variable environments for both identification and authentication tasks. In most previous studies, training and testing were performed either over a single recording session (same day) or without appropriately splitting the data across multiple recording days. Unlike those studies, this work rigorously split the data so that the train and test sets share no data from the same recording day. The proposed multimodal Self-ONN based ensemble model achieved an identification accuracy of 98% under this rigorous validation and outperformed an equivalent ensemble of deep CNN models. A novel Self-ONN Siamese network is also proposed to measure the similarity of templates during the authentication task, in place of the commonly used simple distance measures. The multimodal Siamese network reduces the Equal Error Rate (EER) to 1.56% under rigorous authentication. These results indicate that the proposed multimodal Self-ONN model can automatically extract session-invariant, unique non-linear features to identify and authenticate users with high accuracy.
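
    To illustrate the authentication idea, the sketch below implements a small Siamese network that scores the similarity of two templates. Plain 1-D convolutions stand in for the paper's Self-ONN layers, and the channel counts, segment lengths, and embedding size are assumptions made only for this illustration.

      import torch
      import torch.nn as nn

      class Encoder(nn.Module):
          """Shared encoder; plain Conv1d layers stand in for Self-ONN layers."""
          def __init__(self, in_channels=4, embed_dim=64):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv1d(in_channels, 16, kernel_size=7, stride=2), nn.Tanh(),
                  nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.Tanh(),
                  nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                  nn.Linear(32, embed_dim),
              )
          def forward(self, x):
              return self.net(x)

      class Siamese(nn.Module):
          """Scores template similarity with a learned head instead of a
          fixed distance measure, as the abstract describes."""
          def __init__(self):
              super().__init__()
              self.encoder = Encoder()
              self.head = nn.Linear(64, 1)
          def forward(self, a, b):
              ea, eb = self.encoder(a), self.encoder(b)
              return torch.sigmoid(self.head(torch.abs(ea - eb)))

      model = Siamese()
      score = model(torch.randn(1, 4, 256), torch.randn(1, 4, 256))
      print(score.item())  # near 1 => same user, near 0 => impostor
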
  4. Tahir AM, Qiblawey Y, Khandakar A, Rahman T, Khurshid U, Musharavati F, et al.
    Cognit Comput, 2022;14(5):1752-1772.
    PMID: 35035591 DOI: 10.1007/s12559-021-09955-1
    Novel coronavirus disease (COVID-19) is an extremely contagious and quickly spreading coronavirus infection. Severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS), whose outbreaks occurred in 2002 and 2012, respectively, belong to the same family of coronaviruses as the current COVID-19 pandemic. This work aims to classify COVID-19, SARS, and MERS chest X-ray (CXR) images using deep convolutional neural networks (CNNs). To the best of our knowledge, this classification scheme has never been investigated in the literature. A unique database, QU-COVID-family, was created, consisting of 423 COVID-19, 144 MERS, and 134 SARS CXR images. In addition, a robust COVID-19 recognition system was proposed that identifies lung regions using a CNN segmentation model (U-Net) and then classifies the segmented lung images as COVID-19, MERS, or SARS using a pre-trained CNN classifier. Furthermore, the Score-CAM visualization method was utilized to visualize the classification output and understand the reasoning behind the decisions of the deep CNNs. Several deep learning classifiers were trained and tested; the four best-performing were SqueezeNet, ResNet18, InceptionV3, and DenseNet201. Original and preprocessed images were used individually and all together as the input(s) to the networks. Two recognition schemes were considered: plain CXR classification and segmented CXR classification. For plain CXRs, InceptionV3 outperformed the other networks with a 3-channel scheme, achieving sensitivities of 99.5%, 93.1%, and 97% for classifying COVID-19, MERS, and SARS images, respectively. For segmented CXRs, InceptionV3 again performed best, using the original CXR dataset, with sensitivities of 96.94%, 79.68%, and 90.26% for COVID-19, MERS, and SARS, respectively. Classification performance degrades with segmented CXRs compared to plain CXRs; however, the results are more reliable because the network learns from the main region of interest, avoiding irrelevant non-lung areas (heart, bones, or text), as confirmed by the Score-CAM visualization. All networks showed high COVID-19 detection sensitivity (>96%) with the segmented lung images, indicating a radiographic signature of COVID-19 that is visible to AI but often challenging for medical doctors to discern.
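
    The recognition system's two-stage structure can be summarized in a few lines: a segmentation network produces a lung mask, the mask strips non-lung pixels, and a classifier labels the result. The sketch below is a schematic of that pipeline, not the authors' released code; `unet` is assumed to be an already-trained U-Net returning per-pixel logits of shape (N, 1, H, W).

      import torch
      from torchvision import models

      CLASSES = ["COVID-19", "MERS", "SARS"]

      def classify_cxr(cxr, unet, classifier):
          with torch.no_grad():
              mask = (torch.sigmoid(unet(cxr)) > 0.5).float()  # binary lung mask
              segmented = cxr * mask                           # zero out non-lung pixels
              logits = classifier(segmented)
          return CLASSES[logits.argmax(dim=1).item()]

      # Untrained stand-in for one of the reported classifiers; InceptionV3
      # expects 3 x 299 x 299 inputs.
      classifier = models.inception_v3(weights=None, num_classes=3, aux_logits=False)
      classifier.eval()
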
  5. Mahmud S, Ibtehaz N, Khandakar A, Tahir AM, Rahman T, Islam KR, et al.
    Sensors (Basel), 2022 Jan 25;22(3).
    PMID: 35161664 DOI: 10.3390/s22030919
    Cardiovascular diseases are the most common causes of death around the world. Detecting and treating heart-related diseases requires continuous blood pressure (BP) monitoring along with many other parameters. Several invasive and non-invasive methods have been developed for this purpose, but most methods used in hospitals for continuous BP monitoring are invasive. Conversely, cuff-based methods, which can measure systolic blood pressure (SBP) and diastolic blood pressure (DBP), cannot be used for continuous monitoring. Several studies have attempted to predict BP from non-invasively collectible signals, such as photoplethysmograms (PPG) and electrocardiograms (ECG), which can be used for continuous monitoring. In this study, we explored the applicability of autoencoders in predicting BP from PPG and ECG signals. The investigation was carried out on 12,000 instances from 942 patients of the MIMIC-II dataset, and it was found that a very shallow, one-dimensional autoencoder can extract the relevant features to predict SBP and DBP with state-of-the-art performance on a very large dataset. An independent test set drawn from the MIMIC-II dataset yielded a mean absolute error (MAE) of 2.333 and 0.713 mmHg for SBP and DBP, respectively. On an external dataset of 40 subjects, the model trained on the MIMIC-II dataset yielded an MAE of 2.728 and 1.166 mmHg for SBP and DBP, respectively. In both cases, the results met British Hypertension Society (BHS) Grade A criteria and surpassed studies in the current literature.
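
    As a rough illustration of the modeling idea, the sketch below pairs a very shallow 1-D convolutional autoencoder, which learns a compact representation of a PPG+ECG segment, with a small regression head that maps that representation to SBP and DBP. Channel counts, kernel sizes, and segment length are illustrative assumptions, not the paper's configuration.

      import torch
      import torch.nn as nn

      class ShallowAE(nn.Module):
          """One-layer 1-D autoencoder; the latent code feeds the BP regressor."""
          def __init__(self, in_channels=2, latent=32):
              super().__init__()
              self.encoder = nn.Sequential(
                  nn.Conv1d(in_channels, latent, kernel_size=9, stride=4, padding=4),
                  nn.ReLU(),
              )
              self.decoder = nn.ConvTranspose1d(
                  latent, in_channels, kernel_size=9, stride=4,
                  padding=4, output_padding=3)

          def forward(self, x):
              z = self.encoder(x)
              return self.decoder(z), z

      # Small head mapping the latent code to two outputs: SBP and DBP (mmHg).
      regressor = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                nn.Linear(32, 2))

      x = torch.randn(8, 2, 1024)        # batch of paired PPG + ECG segments
      recon, z = ShallowAE()(x)          # reconstruction loss trains the encoder
      sbp_dbp = regressor(z)
      print(recon.shape, sbp_dbp.shape)  # (8, 2, 1024), (8, 2)
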