Displaying all 4 publications

  1. Karnati M, Seal A, Sahu G, Yazidi A, Krejcar O
    Appl Soft Comput, 2022 Aug;125:109109.
    PMID: 35693544 DOI: 10.1016/j.asoc.2022.109109
    The COVID-19 pandemic has posed an unprecedented threat to the global public health system; the virus primarily infects airway epithelial cells in the respiratory tract. Chest X-ray (CXR) imaging is widely available, fast, and inexpensive, so it is preferred for monitoring the lungs in COVID-19 diagnosis over other techniques such as molecular tests, antigen tests, antibody tests, and chest computed tomography (CT). As the pandemic continues to reveal the limitations of current health ecosystems, researchers are pooling their knowledge and experience to develop new systems to tackle it. In this work, an end-to-end IoT infrastructure is designed and built to diagnose patients remotely during a pandemic, limiting COVID-19 dissemination while also improving measurement science. The proposed framework comprises six steps. In the last step, a model is designed to interpret CXR images and intelligently measure the severity of COVID-19 lung infections using a novel deep neural network (DNN). The proposed DNN employs multi-scale sampling filters to extract reliable and noise-invariant features from a variety of image patches. Experiments are conducted on five publicly available databases, COVIDx, COVID-19 Radiography, COVID-XRay-5K, COVID-19-CXR, and COVIDchestxray, with classification accuracies of 96.01%, 99.62%, 99.22%, 98.83%, and 100%, and testing times of 0.541, 0.692, 1.28, 0.461, and 0.202 s, respectively. The results show that the proposed model surpasses fourteen baseline techniques. The newly developed model could therefore be used to evaluate treatment efficacy, particularly in remote locations.
    (An illustrative multi-scale feature-extraction sketch appears after this publication list.)
  2. Jain S, Seal A, Ojha A, Krejcar O, Bureš J, Tachecí I, et al.
    Comput Biol Med, 2020 Dec;127:104094.
    PMID: 33152668 DOI: 10.1016/j.compbiomed.2020.104094
    One of the most recent non-invasive technologies for examining the gastrointestinal tract is wireless capsule endoscopy (WCE). As there are thousands of endoscopic images in an 8-15 h long video, an evaluator has to pay constant attention for a relatively long time (60-120 min). The possibility that pathological findings appear in only a few images (each displayed for just a few seconds) therefore carries a significant risk of missing the pathology, with all the negative consequences for the patient. Hence, manually reviewing a video to identify abnormal images is not only a tedious, time-consuming task that overwhelms human attention but is also error-prone. In this paper, a method is proposed for the automatic detection of abnormal WCE images. The differential box counting method is used to extract the fractal dimension (FD) of WCE images, and a random forest ensemble classifier is used to identify abnormal frames. FD is a well-known descriptor of texture, smoothness, and roughness. In this paper, FDs are extracted from pixel blocks of WCE images and fed to the classifier to identify images with abnormalities. To determine a suitable pixel-block size for FD feature extraction, various block sizes are fed separately into six commonly used classifiers, and a block size of 7×7 is empirically found to give the best performance. The random forest ensemble classifier is selected through the same empirical study. The performance of the proposed method is evaluated on two datasets containing WCE frames. Results demonstrate that the proposed method outperforms some state-of-the-art methods, with AUCs of 85% and 99% on Dataset-I and Dataset-II, respectively.
    (A differential box counting and random forest sketch appears after this publication list.)
  3. Seal A, Reddy PPN, Chaithanya P, Meghana A, Jahnavi K, Krejcar O, et al.
    Comput Math Methods Med, 2020;2020:8303465.
    PMID: 32831902 DOI: 10.1155/2020/8303465
    Human emotion recognition has been a major field of research in recent decades owing to its noteworthy academic and industrial applications. However, most state-of-the-art methods identify emotions by analyzing facial images; emotion recognition using electroencephalogram (EEG) signals has received less attention, even though EEG signals can capture genuine emotion. Moreover, very few EEG signal databases are publicly available for affective computing. In this work, we present a database of EEG signals from 44 volunteers, 23 of whom are female. A 32-channel CLARITY EEG traveler sensor is used to record four emotional states (happy, fear, sad, and neutral) elicited by showing 12 videos, so 3 videos are devoted to each emotion. Each recording is labelled with the emotion the participant reported feeling after watching the video. The recorded EEG signals are then used to classify the four emotions based on the discrete wavelet transform and an extreme learning machine (ELM), providing an initial benchmark classification performance. The ELM algorithm is used for channel selection followed by subband selection. The proposed method performs best when features are extracted from the gamma subband of the FP1-F7 channel, reaching 94.72% accuracy. The presented database will be made available to researchers for affective recognition applications.
    (A DWT-plus-ELM sketch appears after this publication list.)
  4. Jain S, Seal A, Ojha A, Yazidi A, Bures J, Tacheci I, et al.
    Comput Biol Med, 2021 Oct;137:104789.
    PMID: 34455302 DOI: 10.1016/j.compbiomed.2021.104789
    Wireless capsule endoscopy (WCE) is one of the most efficient methods for the examination of gastrointestinal tracts. Computer-aided intelligent diagnostic tools alleviate the challenges faced during manual inspection of long WCE videos. Several approaches have been proposed in the literature for the automatic detection and localization of anomalies in WCE images. Some focus on specific anomalies such as bleeding, polyps, and lesions. However, relatively few generic methods have been proposed that detect all these common anomalies simultaneously. In this paper, a deep convolutional neural network (CNN)-based model, 'WCENet', is proposed for anomaly detection and localization in WCE images. The model works in two phases. In the first phase, a simple and efficient attention-based CNN classifies an image into one of four categories: polyp, vascular, inflammatory, or normal. If the image is classified into one of the abnormal categories, it is processed in the second phase for anomaly localization. A fusion of Grad-CAM++ and a custom SegNet is used to segment the anomalous region in the abnormal image. The WCENet classifier attains an accuracy of 98% and an area under the receiver operating characteristic curve of 99%. The WCENet segmentation model obtains a frequency-weighted intersection over union of 81% and an average Dice score of 56% on the KID dataset. WCENet outperforms nine different state-of-the-art conventional machine learning and deep learning models on the KID dataset. The proposed model demonstrates potential for clinical applications.
    (A two-phase classify-then-localise sketch appears after this publication list.)
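
Illustrative sketch for publication 1 (Karnati et al., 2022): the abstract mentions multi-scale sampling filters applied to CXR image patches but does not specify the DNN architecture, so the block below is only a generic multi-scale (parallel-kernel) convolution layer in PyTorch. The kernel sizes, channel counts, and single-channel 224x224 input are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel convolutions with different kernel sizes, concatenated channel-wise.
    Kernel sizes and channel counts are illustrative, not the paper's settings."""
    def __init__(self, in_ch, out_ch_per_branch=32):
        super().__init__()
        # 3x3, 5x5, and 7x7 filters sample image patches at different scales;
        # padding keeps the spatial dimensions of all branches aligned.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch_per_branch, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)
        ])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([b(x) for b in self.branches], dim=1))

# Usage: a batch of four single-channel CXR images (assumed 224x224 input size).
x = torch.randn(4, 1, 224, 224)
features = MultiScaleBlock(in_ch=1)(x)   # -> shape (4, 96, 224, 224)
```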
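Illustrative sketch for publication 2 (Jain et al., 2020): differential box counting (DBC) fractal-dimension features computed per pixel block of a WCE frame and classified with a random forest. Only the 7x7 block size and the FD-plus-random-forest pipeline come from the abstract; the DBC scales, gray-level range, frame size, and toy data below are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dbc_fractal_dimension(patch, scales=(2, 3, 4)):
    """Estimate the fractal dimension of a square grayscale patch via DBC."""
    M = patch.shape[0]
    G = 256                                   # assumed number of gray levels
    log_n, log_inv_r = [], []
    for s in scales:
        h = max(1.0, s * G / M)               # box height on the intensity axis
        count = 0
        for i in range(0, M - s + 1, s):
            for j in range(0, M - s + 1, s):
                block = patch[i:i + s, j:j + s]
                # number of intensity boxes needed to cover this spatial block
                count += int(np.ceil(block.max() / h) - np.ceil(block.min() / h)) + 1
        log_n.append(np.log(count))
        log_inv_r.append(np.log(M / s))
    # FD is the slope of log N_r against log(1/r)
    return np.polyfit(log_inv_r, log_n, 1)[0]

def frame_features(frame, block=7):
    """Concatenate per-block FDs of a grayscale WCE frame into one feature vector."""
    H, W = frame.shape
    return np.array([dbc_fractal_dimension(frame[y:y + block, x:x + block])
                     for y in range(0, H - block + 1, block)
                     for x in range(0, W - block + 1, block)])

# Toy usage with random "frames"; real use would load labelled WCE images.
rng = np.random.default_rng(0)
X = np.stack([frame_features(rng.integers(0, 256, (56, 56)).astype(float))
              for _ in range(20)])
y = rng.integers(0, 2, 20)                    # 0 = normal, 1 = abnormal
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```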
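Illustrative sketch for publication 3 (Seal et al., 2020): DWT subband features from one EEG channel classified with a basic ELM. The wavelet family, decomposition level, feature statistics, hidden-layer size, and toy signals are assumptions; the abstract only states that DWT subband features (best: gamma subband of the FP1-F7 channel) are classified with an ELM.

```python
import numpy as np
import pywt

def gamma_subband_features(signal, wavelet="db4", level=4):
    """Decompose one EEG channel and summarise the finest detail coefficients
    (used here as a stand-in for the gamma subband) with simple statistics."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    d1 = coeffs[-1]                       # highest-frequency detail coefficients
    return np.array([d1.mean(), d1.std(), np.abs(d1).max(), (d1 ** 2).sum()])

class ELM:
    """Single-hidden-layer ELM: random input weights, least-squares output weights."""
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                          # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        self.beta = np.linalg.pinv(self._hidden(X)) @ T   # output weights
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

# Toy usage: 44 "recordings" of one channel, four emotion labels (0-3).
rng = np.random.default_rng(1)
X = np.stack([gamma_subband_features(rng.normal(size=4096)) for _ in range(44)])
y = rng.integers(0, 4, 44)
model = ELM().fit(X, y)
preds = model.predict(X)
```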
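Illustrative sketch for publication 4 (Jain et al., 2021): the two-phase WCENet workflow, classify first and localise only abnormal frames. The classifier below is a generic placeholder rather than the paper's attention-based CNN, and a plain Grad-CAM heatmap stands in for the Grad-CAM++/SegNet fusion; the threshold and input shape are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

CLASSES = ["polyp", "vascular", "inflammatory", "normal"]

class TinyClassifier(nn.Module):
    """Placeholder CNN; the paper's attention-based classifier is not reproduced here."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        f = self.features(x)                        # (N, 32, H, W) feature maps
        return self.head(f.mean(dim=(2, 3))), f     # logits and feature maps

def analyse_frame(model, image):
    """Phase 1: classify the frame; Phase 2: localise only if it is abnormal."""
    logits, fmap = model(image.unsqueeze(0))
    cls = logits.argmax(dim=1).item()
    if CLASSES[cls] == "normal":
        return CLASSES[cls], None
    # Phase 2: a plain Grad-CAM heatmap (stand-in for Grad-CAM++/SegNet fusion).
    grads = torch.autograd.grad(logits[0, cls], fmap)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = F.relu((weights * fmap).sum(dim=1)).squeeze(0)
    cam = cam / (cam.max() + 1e-8)
    return CLASSES[cls], (cam > 0.5)                 # coarse anomaly mask

# Toy usage on a random RGB frame; real use would load a WCE image tensor.
model = TinyClassifier().eval()
label, mask = analyse_frame(model, torch.rand(3, 64, 64))
```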