METHODS: An improved Dempster-Shafer evidence theory (DST) method based on Wasserstein distance and Deng entropy was proposed to reduce conflicts among classification results by combining the credibility degree between pieces of evidence with the uncertainty degree of each piece of evidence. To validate the effectiveness of the proposed method, examples were analyzed and the method was applied to baby cry recognition. Whale optimization algorithm-variational mode decomposition (WOA-VMD) was used to optimally decompose the baby cry signals. Deep features of the decomposed components were extracted using the VGG16 model, and long short-term memory (LSTM) models were used to classify the baby cry signals. The improved DST decision method was then used to fuse the classification results.
RESULTS: The proposed fusion method achieves an accuracy of 90.15% in classifying three types of baby cries, an improvement of between 2.90% and 4.98% over existing DST fusion methods. Recognition accuracy improved by between 5.79% and 11.53% compared with the latest methods used in baby cry recognition.
CONCLUSION: The proposed method optimally decomposes the baby cry signal, effectively reduces the conflict among the results of the deep learning models, and improves the accuracy of baby cry recognition.
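The fusion step described above builds on Dempster's rule of combination. A minimal sketch of the classical rule follows; the cry-class names and mass values are illustrative only, and the Wasserstein-distance credibility and Deng-entropy weighting of the proposed method are omitted.

```python
# Classical Dempster's rule of combination for two mass functions defined
# over the same frame of discernment (here: three hypothetical cry classes).
def dempster_combine(m1, m2):
    """Combine two mass functions keyed by frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    # Normalise the surviving mass by 1 - K, where K is the conflict degree.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

# Illustrative singleton masses from two classifiers (not study data).
m1 = {frozenset({"hunger"}): 0.6, frozenset({"pain"}): 0.3, frozenset({"sleepy"}): 0.1}
m2 = {frozenset({"hunger"}): 0.5, frozenset({"pain"}): 0.4, frozenset({"sleepy"}): 0.1}
fused, K = dempster_combine(m1, m2)
```

A large K signals the kind of high-conflict evidence that motivates credibility- and uncertainty-weighted variants such as the one proposed here.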
METHODS: A nonlinear autoregressive with exogenous input (NARX) model is used to reconstruct airway pressure distorted by the presence of spontaneous breathing (SB) effort in mechanically ventilated (MV) patients, and the incidence of SB is then estimated. The study uses a total of 10,000 breathing cycles collected from 10 ARDS patients at IIUM Hospital in Kuantan, Malaysia. Two training-validation splits are used: first a 60:40 ratio, corresponding to 600 breath cycles for training and the remaining 400 for testing, and then a 70:30 ratio.
RESULTS AND DISCUSSION: The mean residual error between the original and reconstructed airway pressure is taken as the magnitude of effort. The median and interquartile range of the mean residual error across all patients are 0.0557 [0.0230 - 0.0874] and 0.0534 [0.0219 - 0.0870] for the two ratios, respectively. The results also show that Patient 2 has the highest percentage of SB incidence and Patient 10 the lowest, demonstrating that the NARX model performs well both when the incidence of SB effort is high and when SB effort is largely absent.
CONCLUSION: The model is able to produce the SB incidence rate based on a 10% threshold. Hence, the proposed NARX model is potentially useful for estimating and identifying patient-specific SB effort, which could further assist clinical decisions and optimize MV settings.
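The reconstruction-and-residual idea above can be sketched with a linear-in-parameters NARX-style regressor on synthetic data; the study's actual model structure, lag orders, and data are not specified here, so every detail below is an illustrative assumption.

```python
import numpy as np

# Sketch: predict airway pressure p(t) from its own lags and lags of an
# exogenous input u (e.g. flow), fitted by least squares. The mean absolute
# residual plays the role of the "magnitude of effort" metric in the text.
def fit_narx(p, u, na=2, nb=2):
    rows, target = [], []
    for t in range(max(na, nb), len(p)):
        rows.append(np.concatenate([p[t - na:t], u[t - nb:t], [1.0]]))
        target.append(p[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(target), rcond=None)
    return theta

def predict_narx(theta, p, u, na=2, nb=2):
    return np.array([np.concatenate([p[t - na:t], u[t - nb:t], [1.0]]) @ theta
                     for t in range(max(na, nb), len(p))])

# Synthetic demo signal generated by a known lag model (not patient data).
rng = np.random.default_rng(0)
u = rng.normal(size=300)
p = np.zeros(300)
for t in range(2, 300):
    p[t] = 0.6 * p[t - 1] - 0.2 * p[t - 2] + 0.5 * u[t - 1]

theta = fit_narx(p, u)
mean_residual = float(np.mean(np.abs(p[2:] - predict_narx(theta, p, u))))
```

Because the synthetic system lies exactly in the model class, the residual is near zero; on real MV data, SB effort would show up as an elevated residual.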
METHODS: A convolutional auto-encoder (CAE) based nonlinear compression structure is implemented to reduce the signal size of arrhythmic beats. Long short-term memory (LSTM) classifiers are employed to automatically recognize arrhythmias from the ECG features deeply encoded by the CAE network.
RESULTS: Using the coded ECG signals, both the storage requirement and the classification time were considerably reduced. In experimental studies on the MIT-BIH arrhythmia database, ECG signals were compressed to an average percentage root-mean-square difference (PRD) of 0.70%, and an accuracy of over 99.0% was observed.
CONCLUSIONS: One significant contribution of this study is that the proposed approach substantially reduces the time required when using LSTM networks for data analysis. Thus, a novel and effective approach is proposed for both ECG signal compression and high-performance automatic recognition, at very low computational cost.
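The PRD figure quoted above is the standard distortion metric for ECG compression; a minimal sketch of the zero-reference PRD variant (the toy waveform below stands in for a real beat):

```python
import numpy as np

# Percentage root-mean-square difference (PRD): distortion between an
# original signal and its reconstruction, as a percentage (lower is better).
def prd(original, reconstructed):
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

x = np.sin(np.linspace(0, 2 * np.pi, 360))            # toy "ECG" beat
x_hat = x + 0.001 * np.cos(np.linspace(0, 20, 360))   # small reconstruction error
```

Note that some papers use a mean-subtracted PRD variant; which one a study reports materially affects the quoted number.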
METHODS: We investigated the existing body of evidence and applied the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to search records in the IEEE, Google Scholar, and PubMed databases. We identified 65 papers published from 2013 to 2022, covering 67 different studies. The review process was structured according to the medical data used for disease detection. We identified six main categories, namely air flow, genetics, imaging, signals, medical records, and miscellaneous. For each of these categories, we report both the disease detection methods and their performance.
RESULTS: We found that medical imaging was used as the data for automated obstructive airway disease detection in 14 of the reviewed studies. Genetics and physiological signals were each used in 13 studies. Medical records and air flow were used in 9 and 7 studies, respectively. Most papers were published in 2020, and we found three times more work on Machine Learning (ML) than on Deep Learning (DL). Statistical analysis shows that DL techniques achieve higher Accuracy (ACC) than ML. The Convolutional Neural Network (CNN) is the most common DL classifier, and the Support Vector Machine (SVM) is the most widely used ML classifier. During our review, we discovered only two publicly available asthma and COPD datasets. Most studies used private clinical datasets, so data size and data composition are inconsistent.
CONCLUSIONS: Our review indicates that Artificial Intelligence (AI) can improve both the decision quality and the efficiency of health professionals during COPD and asthma diagnosis. However, we found several limitations, such as a lack of dataset consistency, limited dataset sizes, and insufficient exploration of remote monitoring. We appeal to society to accept and trust computer-aided diagnosis of obstructive airway diseases, and we encourage health professionals to work closely with AI scientists to promote automated detection in clinical practice and hospital settings.
METHODS: The pterygium screening system was tested on two normal eye databases (UBIRIS and MILES) and two pterygium databases (Australia Pterygium and Brazil Pterygium). This system comprises four modules: (i) a preprocessing module to enhance the pterygium tissue using HSV-Sigmoid; (ii) a segmentation module to differentiate the corneal region and the pterygium tissue; (iii) a feature extraction module to extract corneal features using circularity ratio, Haralick's circularity, eccentricity, and solidity; and (iv) a classification module to identify the presence or absence of pterygium. System performance was evaluated using a support vector machine (SVM) and an artificial neural network.
RESULTS: A three-step frame differencing technique was introduced in the corneal segmentation module. The output image successfully covered the region of interest with an average accuracy of 0.9127. The proposed system using SVM achieved the most promising results: 88.7% sensitivity, 88.3% specificity, and 95.6% area under the curve.
CONCLUSION: A basic platform for computer-aided pterygium screening was successfully developed using the proposed modules. The proposed system can classify pterygium and non-pterygium cases reasonably well. In future work, a standard grading system will be developed to identify the severity of pterygium cases. This system is expected to increase awareness of pterygium among communities in rural areas.
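The shape descriptors named in the feature extraction module are simple functions of a binary mask. A sketch of one of them, the circularity ratio, on a toy mask follows; the discrete crack-edge perimeter used here is a rough approximation, and the study's exact definitions may differ.

```python
import numpy as np

# Circularity ratio 4*pi*A / P^2 of a binary corneal mask: 1 for an ideal
# continuous disc, smaller for less circular shapes.
def area(mask):
    return int(mask.sum())

def crack_perimeter(mask):
    # Count unit edges between foreground and background pixels.
    padded = np.pad(mask, 1).astype(int)
    return int(np.abs(np.diff(padded, axis=0)).sum()
               + np.abs(np.diff(padded, axis=1)).sum())

def circularity_ratio(mask):
    p = crack_perimeter(mask)
    return 4.0 * np.pi * area(mask) / (p * p)

square = np.ones((20, 20), dtype=bool)   # toy mask, not a real cornea
```

For the filled square, A = 400 and P = 80, so the ratio is exactly pi/4, reflecting its departure from a perfect disc.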
BACKGROUND AND OBJECTIVE: Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator of the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues due to the uncertainties of human judgement.
METHODS: An automated quantification system for accurately measuring the amount of interstitial fibrosis in renal biopsy images is presented as a consistent basis of comparison among pathologists. The system identifies the renal tissue structures through knowledge-based rules employing colour space transformations and structural feature extraction from the images. In particular, renal glomerulus identification is based on multiscale textural feature analysis and a support vector machine. The regions of the biopsy representing interstitial fibrosis are deduced by eliminating non-interstitial-fibrosis structures from the biopsy area. The experiments evaluate the system in terms of quantification accuracy, intra- and inter-observer variability in visual quantification by pathologists, and the effect the automated quantification system has on the pathologists' diagnoses.
RESULTS: A 40-image ground truth dataset was manually prepared in consultation with an experienced pathologist to validate the segmentation algorithms. Experiments involving experienced pathologists demonstrated an average error of 9 percentage points between the automated system's quantification and the pathologists' visual evaluation. Experiments investigating inter-pathologist variability on samples from 70 kidney patients also showed the automated quantification error rate to be on par with the average intra-observer variability in the pathologists' quantification.
CONCLUSIONS: The accuracy of the proposed quantification system has been validated against the ground truth dataset and compared with the pathologists' quantification results. The correlation between different pathologists' estimates of the interstitial fibrosis area improved significantly, demonstrating the effectiveness of the quantification system as a diagnostic aid.
METHODS: We proposed a new feature extraction method that replaces the fully-connected layer with a global average pooling (GAP) layer. A comparative analysis was conducted to compare the efficacy of 16 different convolutional neural network (CNN) feature extractors and three machine learning classifiers.
RESULTS: Experimental results revealed the potential of CNN feature extractors for multitask diagnosis. The optimal model consisted of the VGG16-GAP feature extractor and a KNN classifier. This model not only outperformed the other tested models but also outperformed state-of-the-art methods, with higher balanced accuracy, higher Cohen's kappa, higher F1, and lower mean squared error (MSE) in predicting seven OA features.
CONCLUSIONS: The proposed model demonstrates pain prediction on plain radiographs, as well as prediction of eight OA-related bony features. Future work should focus on exploring additional potential radiological manifestations of OA and their relation to therapeutic interventions.
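The GAP substitution described above is mechanically simple: each channel of the final convolutional feature maps is averaged over its spatial extent, giving a fixed-length vector regardless of input resolution. A minimal numpy sketch (the tiny 2x2x2 tensor is illustrative, not a VGG16 output):

```python
import numpy as np

# Global average pooling: collapse feature maps of shape (C, H, W) to a
# C-dimensional vector by averaging each channel over its H x W grid.
def global_average_pool(feature_maps):
    return feature_maps.mean(axis=(1, 2))

fmap = np.arange(2 * 2 * 2, dtype=float).reshape(2, 2, 2)  # toy (C=2, H=2, W=2)
vec = global_average_pool(fmap)
```

Unlike a fully-connected layer, GAP introduces no trainable parameters, which is what makes the pooled CNN activations reusable as generic features for downstream classifiers such as KNN.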
METHODS: This paper presents two hybrid methodologies that combine optimal control theory with multi-objective swarm and evolutionary algorithms, and compares their performance with that of multi-objective swarm intelligence algorithms such as MOEA/D, MODE, MOPSO, and M-MOPSO. The hybrid and conventional methodologies are compared on constrained multi-objective optimization problems (CMOOPs).
RESULTS: The minimized tumor and drug concentration results show that the hybrid methodologies are not only superior to pure swarm intelligence or evolutionary algorithm methodologies but also consume far less computational time. Furthermore, the second-order sufficient condition (SSC) is used to verify and validate the optimality of the solutions to the constrained multi-objective problem.
CONCLUSION: The proposed methodologies reduce chemo-medicine administration while maintaining effective tumor killing. This should help oncologists find optimal chemotherapy dose schedules that reduce the tumor cells while keeping the patient's health at a safe level.
METHODS: A total of 1447 ultrasound images, comprising 767 benign masses and 680 malignant masses, were acquired from a tertiary hospital. A semi-supervised GAN model was developed to augment the breast ultrasound images. The synthesized images were subsequently used to classify breast masses with a convolutional neural network (CNN). The model was validated using 5-fold cross-validation.
RESULTS: The proposed GAN architecture generated high-quality breast ultrasound images, as verified by two experienced radiologists. Semi-supervised learning improved the quality of the synthetic data compared with the baseline method. With our synthetic data augmentation we achieved more accurate breast mass classification (accuracy 90.41%, sensitivity 87.94%, specificity 85.86%) than other state-of-the-art methods.
CONCLUSION: The proposed radiomics model has demonstrated a promising potential to synthesize and classify breast masses on ultrasound in a semi-supervised manner.
OBJECTIVES: This study aimed to segment the breath cycles from pulmonary acoustic signals using the newly developed adaptive neuro-fuzzy inference system (ANFIS) based on breath phase detection and to subsequently evaluate the performance of the system.
METHODS: The normalised averaged power spectral density for each segment was fuzzified, and a set of fuzzy rules was formulated. The ANFIS was developed to detect the breath phases and subsequently perform breath cycle segmentation. To evaluate the performance of the proposed method, the root mean square error (RMSE) and correlation coefficient values were calculated and analysed, and the proposed method was then validated using data collected at KIMS Hospital and the RALE standard dataset.
RESULTS: The analysis of the correlation coefficient of the neuro-fuzzy model, which was performed to evaluate its performance, revealed a correlation strength of r = 0.9925, and the RMSE for the neuro-fuzzy model was found to equal 0.0069.
CONCLUSION: The proposed neuro-fuzzy model performs better than the fuzzy inference system (FIS) in detecting the breath phases and segmenting the breath cycles, and it requires fewer rules than the FIS.
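The two evaluation metrics reported above, RMSE and the correlation coefficient, are worth pinning down since different definitions exist. A sketch using the usual root-mean-square error and Pearson correlation, on toy values rather than the KIMS/RALE data:

```python
import numpy as np

# RMSE and Pearson correlation between a reference segmentation signal and
# a model's estimate (toy values; not the study's data).
def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def pearson_r(y_true, y_pred):
    return float(np.corrcoef(y_true, y_pred)[0, 1])

ref = np.array([1.0, 2.0, 3.0, 4.0])   # reference breath-phase boundaries
est = np.array([1.1, 1.9, 3.1, 3.9])   # hypothetical ANFIS estimates
```

An RMSE of 0.0069 with r = 0.9925, as reported, indicates estimates tracking the reference almost exactly on the scale of the normalised signal.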
METHODS: Eighteen volunteer participants were recruited from the Canterbury and Otago regions of New Zealand to take part in a Dynamic Insulin Sensitivity and Secretion Test (DISST) clinical trial. A total of 46 DISST datasets were collected; however, 4 had to be removed owing to ambiguity and inconsistency. Analysis was performed using MATLAB 2020a.
RESULTS AND DISCUSSION: Results show that, with the 42 gathered datasets, the ANN generates higher gains, ∅P = 20.73 [12.21, 28.57] mU·L·mmol⁻¹·min⁻¹ and ∅D = 60.42 [26.85, 131.38] mU·L·mmol⁻¹, than the linear least-squares method, ∅P = 19.67 [11.81, 28.02] mU·L·mmol⁻¹·min⁻¹ and ∅D = 46.21 [7.25, 116.71] mU·L·mmol⁻¹. The average insulin sensitivity (SI) from the ANN, SI = 16 × 10⁻⁴ L·mU⁻¹·min⁻¹, is lower than that from the linear least-squares method, SI = 17 × 10⁻⁴ L·mU⁻¹·min⁻¹.
CONCLUSION: Although the ANN analysis provided a lower SI value, its results were more dependable than those of the linear least-squares model because the ANN yielded better model-fitting accuracy, with a residual error below 5%. This ANN architecture produces minimal error during the optimization process, particularly when dealing with outlying data. The findings may provide extra information to clinicians, allowing a better understanding of the heterogeneous aetiology of diabetes and of therapeutic intervention options.
METHOD: For this purpose, we employ fractal theory and analyze the variations in the fractal dimension of GSR and EEG signals when subjects are exposed to different olfactory stimuli in the form of pleasant odors.
RESULTS: The results show that the complexity of the GSR signal changes with the complexity of the EEG signal across stimuli: as the molecular complexity of the olfactory stimulus increases, the complexity of both the EEG and GSR signals increases. Statistical analysis showed a significant effect of stimulation on the variations in GSR signal complexity. In addition, effect size analysis showed that the fourth odor, which has the greatest molecular complexity, had the greatest effect on the variations in EEG and GSR signal complexity.
CONCLUSION: It can therefore be said that the human skin reaction changes with variations in the activity of the human brain. These results can be used to build a model relating the activities of the human skin and brain, enabling prediction of the skin reaction to different stimuli.
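The abstract does not name its fractal dimension estimator, so as an assumed stand-in, here is a sketch of Higuchi's method, one widely used complexity estimator for physiological time series. A white-noise signal should score near 2 and a straight line near 1.

```python
import numpy as np

# Higuchi's fractal dimension (sketch): build subsampled curve lengths L(k)
# for lags k = 1..kmax, then take minus the slope of log L(k) vs log k.
def higuchi_fd(x, kmax=8):
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_lengths = []
    ks = range(1, kmax + 1)
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)  # curve-length normalisation
            lk.append(dist * norm / k)
        log_lengths.append(np.log(np.mean(lk)))
    slope, _ = np.polyfit(np.log(list(ks)), log_lengths, 1)
    return -slope

rng = np.random.default_rng(1)
fd_noise = higuchi_fd(rng.normal(size=2000))    # uncorrelated noise: FD near 2
fd_line = higuchi_fd(np.linspace(0, 1, 2000))   # smooth trend: FD near 1
```

In the study's terms, a stimulus that raises signal complexity would push this estimate upward for the corresponding EEG or GSR recording.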
METHODS: In total, 154 patients (wild-type EGFR, 72 patients; Del19 mutation, 45 patients; and L858R mutation, 37 patients) were retrospectively enrolled and randomly divided into 92 training and 62 test cases. Two support vector machine (SVM) models, one distinguishing wild-type from mutant EGFR (mutation [M] classification) and one distinguishing the Del19 from the L858R subtype (subtype [S] classification), were trained using three-dimensional Betti number (3DBN) features. These features were computed from 3DBN maps using histogram and texture analyses. The 3DBN maps were generated from computed tomography (CT) images based on the Čech complex constructed on sets of points in the images, defined by the coordinates of voxels with CT values above several threshold values. The M classification model was built using image features and the demographic parameters of sex and smoking status. The SVM models were evaluated by their classification accuracies. The feasibility of the 3DBN model was compared with that of conventional radiomic models based on pseudo-3D BN (p3DBN), two-dimensional BN (2DBN), and CT and wavelet-decomposition (WD) images. Validation of the models was repeated with 100 random samplings.
RESULTS: The mean test accuracies for M classification with 3DBN, p3DBN, 2DBN, CT, and WD images were 0.810, 0.733, 0.838, 0.782, and 0.799, respectively. The mean test accuracies for S classification with 3DBN, p3DBN, 2DBN, CT, and WD images were 0.773, 0.694, 0.657, 0.581, and 0.696, respectively.
CONCLUSION: 3DBN features, which showed a radiogenomic association with the characteristics of the EGFR Del19/L858R mutation subtypes, yielded higher accuracy for subtype classification than conventional features.
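As a toy analogue of the topological features above: the zeroth Betti number of a thresholded image is simply its connected-component count. The sketch below counts 4-connected components of a 2D image at a given intensity threshold; the study itself works in 3D via the Čech complex, so this is only an intuition aid with illustrative data.

```python
import numpy as np
from collections import deque

# Betti number b0 of a thresholded 2D image = number of 4-connected
# foreground components (foreground = pixels above the threshold).
def betti0(image, threshold):
    fg = np.asarray(image) > threshold
    seen = np.zeros_like(fg, dtype=bool)
    count = 0
    for i in range(fg.shape[0]):
        for j in range(fg.shape[1]):
            if fg[i, j] and not seen[i, j]:
                count += 1                      # new component found
                q = deque([(i, j)])
                seen[i, j] = True
                while q:                        # flood fill the component
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < fg.shape[0] and 0 <= nx < fg.shape[1]
                                and fg[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count

img = np.zeros((6, 6))
img[0:2, 0:2] = 1.0   # first blob
img[4:6, 4:6] = 1.0   # second blob
```

Sweeping the threshold and recording how such counts change is the basic idea behind Betti-number maps as image features.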
OBJECTIVE: This paper proposes a novel technique for reorganisation of opinion order to interval levels (TROOIL) to prioritise patients with multiple chronic diseases (MCDs) in a real-time remote health-monitoring system.
METHODS: The proposed TROOIL technique comprises six steps for prioritisation of patients with MCDs: (1) conversion of actual data into intervals; (2) rule generation; (3) rule ordering; (4) expert rule validation; (5) data reorganisation; and (6) criteria weighting and ranking alternatives within each rule. The secondary dataset of 500 patients from the most relevant study in a remote prioritisation area was adopted. The dataset contains three diseases, namely, chronic heart disease, high blood pressure (BP) and low BP.
RESULTS: The proposed TROOIL is an effective technique for prioritising patients with MCDs. In the objective validation, remarkable differences were recognised among the groups' scores, indicating identical ranking results. In the evaluation across all scenarios, the proposed framework has an advantage of 22.95% over the benchmark framework.
DISCUSSION: Patients with the most severe MCDs were treated first on the basis of their highest priority levels, whereas treatment for patients with less severe cases was delayed relative to the others.
CONCLUSIONS: The proposed TROOIL technique can deal with multiple decision-making (DM) problems in the prioritisation of patients with MCDs.