METHODS: An improved Dempster-Shafer evidence theory (DST) method based on Wasserstein distance and Deng entropy was proposed to reduce conflicts among results by combining the credibility degree between pieces of evidence and the uncertainty degree of each piece of evidence. To validate the effectiveness of the proposed method, examples were analyzed and the method was applied to baby cry recognition. Whale Optimization Algorithm-Variational Mode Decomposition (WOA-VMD) was used to optimally decompose the baby cry signals. The deep features of the decomposed components were extracted using the VGG16 model. Long Short-Term Memory (LSTM) models were used to classify the baby cry signals. The improved DST decision method was then used to obtain the decision fusion.
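The evidence-combination step described above can be illustrated with a minimal sketch of Dempster's rule plus a credibility-weighted averaging of evidence. The weighting here is supplied directly rather than derived from Wasserstein distance and Deng entropy, and the function names are illustrative, not the authors' implementation.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (BPAs) via Dempster's rule.
    BPAs map frozenset focal elements to masses summing to 1."""
    combined = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: evidence cannot be combined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

def weighted_average_evidence(bpas, weights):
    """Discount each BPA by a weight and average; in the paper's method the
    weights come from Wasserstein-distance credibility and Deng-entropy
    uncertainty, but here they are given directly (an assumption)."""
    total = sum(weights)
    avg = {}
    for m, w in zip(bpas, weights):
        for a, v in m.items():
            avg[a] = avg.get(a, 0.0) + (w / total) * v
    return avg
```

In weighted-averaging schemes of this kind, the averaged BPA for n pieces of evidence is typically combined with itself n-1 times using Dempster's rule to obtain the fused decision.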
RESULTS: The proposed fusion method achieves an accuracy of 90.15% in classifying three types of baby cry. An improvement of between 2.90% and 4.98% was obtained over existing DST fusion methods, and recognition accuracy was improved by between 5.79% and 11.53% compared to the latest methods used in baby cry recognition.
CONCLUSION: The proposed method optimally decomposes baby cry signals, effectively reduces the conflict among the results of deep learning models, and improves the accuracy of baby cry recognition.
METHODS: To verify this hypothesis, a computational model was developed to simulate the thermochemical processes involved during TCA with sequential injection. Four major processes that take place during TCA were considered, i.e., the flow of acid and base, their neutralisation, the release of exothermic heat, and the formation of thermal damage inside the tissue. Equimolar acid and base at 7.5 M were injected into the tissue intermittently. Six injection intervals, namely 3, 6, 15, 20, 30 and 60 s, were investigated.
RESULTS: Shortening the injection interval led to the enlargement of the coagulation volume. If one considers only the coagulation volume as the determining factor, then a 15 s injection interval was found to be optimum. Conversely, if one places priority on safety, then a 3 s injection interval results in the lowest amount of reagent residue inside the tissue after treatment. With a 3 s injection interval, the coagulation volume was found to be larger than that of simultaneous injection with the same treatment parameters. Moreover, the volume also surpassed that of radiofrequency ablation (RFA), a conventional thermal ablation technique commonly used for liver cancer treatment.
CONCLUSION: The numerical results verified the hypothesis that shortening the injection interval leads to the formation of a larger thermal coagulation zone during TCA with sequential injection. More importantly, a 3 s injection interval was found to be optimum for both efficacy (large coagulation volume) and safety (least amount of reagent residue).
METHODS: We proposed a new feature extraction method by replacing the fully-connected layer with a global average pooling (GAP) layer. A comparative analysis was conducted to compare the efficacy of 16 different convolutional neural network (CNN) feature extractors and three machine learning classifiers.
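The GAP step described above collapses each channel of a convolutional feature map to its spatial mean, yielding a fixed-length vector regardless of input resolution. A minimal numpy sketch (not the authors' exact pipeline; the 7x7x512 shape assumes the last VGG16 convolutional block with a 224x224 input):

```python
import numpy as np

def global_average_pool(feature_map):
    """Collapse an (H, W, C) CNN feature map to a C-dim vector by
    averaging each channel over its spatial dimensions, as a GAP layer does."""
    return feature_map.mean(axis=(0, 1))

# The last VGG16 conv block outputs a (7, 7, 512) map for a 224x224 input;
# GAP turns it into a 512-dim feature vector for the downstream classifier.
fmap = np.arange(7 * 7 * 512, dtype=float).reshape(7, 7, 512)
features = global_average_pool(fmap)
assert features.shape == (512,)
```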
RESULTS: Experimental results revealed the potential of CNN feature extractors in conducting multitask diagnosis. The optimal model consisted of the VGG16-GAP feature extractor and a KNN classifier. This model not only outperformed the other tested models, it also outperformed state-of-the-art methods with higher balanced accuracy, higher Cohen's kappa, higher F1, and lower mean squared error (MSE) in predicting seven OA features.
CONCLUSIONS: The proposed model demonstrates prediction of pain, as well as eight OA-related bony features, on plain radiographs. Future work should focus on exploring additional potential radiological manifestations of OA and their relation to therapeutic interventions.
METHODS: We investigated the existing body of evidence and applied the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to search records in the IEEE, Google Scholar, and PubMed databases. We identified 65 papers that were published from 2013 to 2022; these papers cover 67 different studies. The review process was structured according to the medical data used for disease detection. We identified six main categories, namely air flow, genetics, imaging, signals, medical records, and miscellaneous. For each of these categories, we report both the disease detection methods and their performance.
RESULTS: We found that medical imaging was used as data for automated obstructive airway disease detection in 14 of the reviewed studies. Genetics and physiological signals were used in 13 studies. Medical records and air flow were used in 9 and 7 studies, respectively. Most papers were published in 2020, and we found three times more work on Machine Learning (ML) than on Deep Learning (DL). Statistical analysis shows that DL techniques achieve higher Accuracy (ACC) than ML. The Convolutional Neural Network (CNN) is the most common DL classifier and the Support Vector Machine (SVM) is the most widely used ML classifier. During our review, we discovered only two publicly available asthma and COPD datasets. Most studies used private clinical datasets, so data size and composition are inconsistent.
CONCLUSIONS: Our review results indicate that Artificial Intelligence (AI) can improve both decision quality and the efficiency of health professionals during COPD and asthma diagnosis. However, we found several limitations in this review, such as inconsistent datasets, limited dataset sizes, and insufficient exploration of remote monitoring. We appeal to society to accept and trust computer-aided diagnosis of obstructive airway diseases, and we encourage health professionals to work closely with AI scientists to promote automated detection in clinical practice and hospital settings.
METHODS: The dataset used in this study consists of ECG data collected from 45 ADHD, 62 ADHD+CD, and 16 CD patients at the Child Guidance Clinic in Singapore. The ECG data were segmented into 2 s epochs and used directly to train our 1-dimensional (1D) convolutional neural network (CNN) model.
RESULTS: The proposed model yielded 96.04% classification accuracy, 96.26% precision, 95.99% sensitivity, and 96.11% F1-score. The Gradient-weighted class activation mapping (Grad-CAM) function was also used to highlight the important ECG characteristics at specific time points that most impact the classification score.
CONCLUSION: In addition to achieving strong model performance with our suggested DL method, the Grad-CAM implementation also offers vital temporal information that clinicians and other mental healthcare professionals can use to make informed medical judgments. We hope that this pilot study will encourage larger-scale research with a larger biosignal dataset, allowing biosignal-based computer-aided diagnostic (CAD) tools to be implemented in healthcare and ambulatory settings, as ECG can be easily obtained via wearable devices such as smartwatches.
METHODS: A stochastic model was developed using respiratory elastance (Ers) data from two clinical cohorts and averaged over 30-minute time intervals. The stochastic model was used to generate future Ers data based on current Ers values with added normally distributed random noise. Self-validation of the VPs was performed via Monte Carlo simulation and retrospective Ers profile fitting. A stochastic VP cohort of temporal Ers evolution was synthesised and then compared to an independent retrospective patient cohort data in a virtual trial across several measured patient responses, where similarity of profiles validates the realism of stochastic model generated VP profiles.
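The generation step described above can be sketched minimally: each 30-minute step maps the current Ers value to a future one and adds normally distributed noise scaled to a percentage of the current value. The identity transition and the default 5% noise level are illustrative placeholders for the stochastic model's fitted conditional distribution, not the authors' model.

```python
import numpy as np

def generate_ers_profile(ers0, n_intervals, noise_pct=0.05, rng=None):
    """Generate a virtual-patient Ers trajectory. Each step maps the current
    Ers to a future value (identity here, as a placeholder for the fitted
    stochastic mapping) plus Gaussian noise proportional to the current value."""
    rng = np.random.default_rng(rng)
    ers = [float(ers0)]
    for _ in range(n_intervals - 1):
        current = ers[-1]
        noise = rng.normal(0.0, noise_pct * current)
        ers.append(max(current + noise, 0.1))  # keep elastance physically positive
    return np.array(ers)
```

A 3-hour profile at 30-minute intervals corresponds to `n_intervals=6`; repeating the generation many times (the study uses 200,000 iterations) yields the diverse VP cohort.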
RESULTS: A total of 120,000 3-hour VPs for pressure control (PC) and volume control (VC) ventilation modes were generated using stochastic simulation. Optimisation of the stochastic simulation process yielded an ideal noise percentage of 5-10% and 200,000 simulation iterations, allowing the simulation of a realistic and diverse set of Ers profiles. Self-validation showed the retrospective Ers profiles were recreated accurately, with a mean squared error of only 0.099 [0.009-0.790]% for the PC cohort and 0.051 [0.030-0.126]% for the VC cohort. A virtual trial demonstrated the ability of the stochastic VP cohort to capture Ers trends within and beyond the retrospective patient cohort, providing cohort-level validation.
CONCLUSION: VPs capable of temporal evolution demonstrate feasibility for use in designing, developing, and optimising bedside MV guidance protocols through in-silico simulation and validation. Overall, the temporal VPs developed using stochastic simulation alleviate the need for lengthy, resource-intensive, high-cost clinical trials while facilitating statistically robust virtual trials, ultimately leading to improved patient care and outcomes in mechanical ventilation.
METHODS: The investigated dataset was obtained via long-term measurements in retirement homes and intensive care units (ICU). Data were measured unobtrusively using a measuring pad equipped with piezoceramic sensors. The proposed approach focused on the processing methods of the measured ballistocardiographic signals, Cartan curvature (CC), and Euclidean arc length (EAL).
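Of the two signal descriptors named above, the Euclidean arc length admits a compact sketch: it is the summed length of the line segments joining consecutive samples in the (time, amplitude) plane, so samples with more oscillatory (aberrant) content trace longer paths. This is an illustrative implementation, not the authors' code.

```python
import numpy as np

def euclidean_arc_length(signal, dt=1.0):
    """Euclidean arc length of a uniformly sampled 1-D signal: the total
    length of the polyline through the points (k*dt, signal[k])."""
    dy = np.diff(np.asarray(signal, dtype=float))
    return float(np.sum(np.sqrt(dt ** 2 + dy ** 2)))
```

For a constant signal the arc length reduces to the elapsed time (n-1)*dt, while amplitude fluctuations strictly increase it, which is what makes it usable as a classification feature.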
RESULTS: For analysis, 218,979 normal and 216,259 aberrant 2-second samples were collected and classified using a convolutional neural network. Experiments using cross-validation with expert threshold and data length revealed the accuracy, sensitivity, and specificity of the proposed method to be 86.51%.
CONCLUSIONS: The proposed method provides a unique approach for the early detection of health concerns in an unobtrusive manner. In addition, the suitability of EAL over CC was determined.
METHODS: 18 volunteer participants were recruited from the Canterbury and Otago regions of New Zealand to take part in a Dynamic Insulin Sensitivity and Secretion Test (DISST) clinical trial. A total of 46 DISST datasets were collected. However, due to ambiguity and inconsistency, 4 datasets had to be removed. Analysis was done using MATLAB 2020a.
RESULTS AND DISCUSSION: Results show that, with the 42 gathered datasets, the ANN generates higher gains, ∅P = 20.73 [12.21, 28.57] mU·L·mmol-1·min-1 and ∅D = 60.42 [26.85, 131.38] mU·L·mmol-1, compared to the linear least squares method, ∅P = 19.67 [11.81, 28.02] mU·L·mmol-1·min-1 and ∅D = 46.21 [7.25, 116.71] mU·L·mmol-1. The average insulin sensitivity (SI) of the ANN is lower, at SI = 16 × 10-4 L·mU-1·min-1, than that of the linear least squares method, SI = 17 × 10-4 L·mU-1·min-1.
CONCLUSION: Although the ANN analysis provided a lower SI value, the results were more dependable than those of the linear least squares model because the ANN approach yielded better model-fitting accuracy, with a residual error of less than 5%. This ANN architecture shows that an ANN is able to produce minimal error during the optimization process, particularly when dealing with outlying data. The findings may provide extra information to clinicians, allowing them to gain a better understanding of the heterogeneous aetiology of diabetes and of therapeutic intervention options.
METHODS: In total, 154 patients (wild-type EGFR, 72 patients; Del19 mutation, 45 patients; and L858R mutation, 37 patients) were retrospectively enrolled and randomly divided into 92 training and 62 test cases. Two support vector machine (SVM) models, one to distinguish between wild-type and mutant EGFR (mutation [M] classification) and one to distinguish between the Del19 and L858R subtypes (subtype [S] classification), were trained using 3DBN features. These features were computed from 3DBN maps using histogram and texture analyses. The 3DBN maps were generated from computed tomography (CT) images based on the Čech complex constructed on sets of points in the images. These points were defined by the coordinates of voxels with CT values higher than several threshold values. The M classification model was built using the image features together with the demographic parameters of sex and smoking status. The SVM models were evaluated by determining their classification accuracies. The feasibility of the 3DBN model was compared with that of conventional radiomic models based on pseudo-3D BN (p3DBN), two-dimensional BN (2DBN), and CT and wavelet-decomposition (WD) images. Model validation was repeated with 100 rounds of random sampling.
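The first stage of the pipeline above, turning a CT volume into threshold-indexed point sets, can be sketched simply; the subsequent Čech-complex filtration and Betti-number computation require a topological data analysis library and are not shown. This is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def threshold_point_sets(ct_volume, thresholds):
    """For each CT-value threshold, return the (z, y, x) coordinates of
    voxels exceeding it. These point sets seed the Cech-complex filtration
    from which the Betti-number (BN) maps are computed."""
    vol = np.asarray(ct_volume)
    return {t: np.argwhere(vol > t) for t in thresholds}
```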
RESULTS: The mean test accuracies for M classification with 3DBN, p3DBN, 2DBN, CT, and WD images were 0.810, 0.733, 0.838, 0.782, and 0.799, respectively. The mean test accuracies for S classification with 3DBN, p3DBN, 2DBN, CT, and WD images were 0.773, 0.694, 0.657, 0.581, and 0.696, respectively.
CONCLUSION: 3DBN features, which showed a radiogenomic association with the characteristics of the EGFR Del19/L858R mutation subtypes, yielded higher accuracy for subtype classifications in comparison with conventional features.
METHODS: Thermomechanical damage metrics, namely the maximum bone temperature, osteonecrosis diameter, osteonecrosis depth, maximum thrust force, and torque, were calculated using the finite element method under various margin heights (0.05-0.25 mm) and widths (0.02-0.26 mm). The simulation results were validated with experimental tests and previous research data.
RESULTS: The effect of margin height in increasing the maximum bone temperature, osteonecrosis diameter, and osteonecrosis depth was at least 19.1%, 41.9%, and 59.6%, respectively. The thrust force and torque are highly sensitive to margin height: a higher margin height (0.21-0.25 mm) reduced the thrust force by 54.0% but increased drilling torque by 142.2%. The bone temperature, osteonecrosis diameter, and depth were 16.5%, 56.5%, and 81.4% lower, respectively, with increasing margin width. The minimum thrust force (11.1 N) and torque (41.9 Nmm) were produced with the highest margin width (0.26 mm). A margin height of 0.05-0.13 mm and a margin width of 0.22-0.26 mm produced the highest sum of weightage.
CONCLUSIONS: A surgical drill bit with a margin height of 0.05-0.13 mm and a margin width of 0.22-0.26 mm can produce minimum thermomechanical damage in cortical bone drilling. The insights regarding the suitable ranges for margin height and width from this study could be adopted in future research devoted to optimizing the margin of the existing surgical drill bit.
METHODOLOGY: The present study investigates the effects of heterogeneous GNR distribution in a typical GNR-PTT setup. Three cases were considered. Case 1 considered the GNRs at the tumour centre, while Case 2 represents a hypothetical scenario where GNRs are distributed at the tumour periphery; these two cases represent intratumoural accumulation with different degrees of GNR spread inside the tumour. Case 3 is achieved when GNRs target the exposed tumoural surface invading the bladder wall, as occurs when they are delivered by intravesical instillation.
RESULTS: Results indicate that, for a laser power of 0.6 W and a GNR volume fraction of 0.01%, Cases 2 and 3 were successful in achieving complete tumour eradication after 330 and 470 s of laser irradiation, respectively. Case 1 failed to produce complete tumour damage when the GNRs were concentrated at the tumour centre but managed to produce complete tumour damage when the spread of GNRs was wider. Results from Case 2 also demonstrated a different heating profile from Case 1, suggesting that thermal ablation during GNR-PTT is dependent on the GNR distribution inside the tumour. Case 3 shows similar results to Case 2, whereby gradual but uniform heating is observed. Cases 2 and 3 show that uniformly heating the tumour can reduce damage to the surrounding tissues.
CONCLUSIONS: The different GNR distributions associated with the different methods of introducing GNRs to the bladder during GNR-PTT affect the treatment outcome of bladder cancer in mice. Insufficient spreading during intratumoural injection of GNRs can render the treatment ineffective, whereas this limitation is avoided when GNRs are administered via intravesical instillation. The GNR distribution achieved through intravesical instillation presents some advantages over intratumoural injection and is worthy of further exploration.
METHODS: The methodology builds deep data analysis into the normalization step. In contrast to previous research, the system does not require a separate feature extraction process, which simplifies the system and reduces its complexity. Data classification is performed by a purpose-designed 8-layer deep convolutional neural network.
RESULTS: Depending on the data used, we achieved accuracy, specificity, and sensitivity of 98%, 98%, and 98.5% on the short-term Bonn EEG dataset, and 96.99%, 96.89%, and 97.06% on the long-term CHB-MIT EEG dataset.
CONCLUSIONS: Through this approach to detection, the system offers an optimized solution for seizure diagnosis. The proposed solution could be implemented in clinical or home environments for decision support.
METHODS: In this study, the drawbacks of DTF and PDC are addressed by proposing a novel technique, termed Efficient Effective Connectivity (EEC), for the estimation of EC between multivariate sources using AR spectral estimation and the Granger causality principle. In EEC, a linear predictive filter with AR coefficients obtained via multivariate EEG is used for signal prediction. This leads to the estimation of full-length signals, which are then transformed into the frequency domain using the Burg spectral estimation method. Furthermore, the newly proposed normalization method addresses the effect on each source in EEC using the sum of maximum connectivity values over the entire frequency range. Lastly, the proposed dynamic thresholding works by subtracting the first moment of the causal effects of all sources on one source from the individual connections present for that source.
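The last two steps, normalization by the sum of per-connection frequency maxima and dynamic thresholding by the mean inflow per source, can be sketched as below. This is one reading of the abstract's description, offered as an assumption rather than the authors' implementation; the connectivity array layout (source, target, frequency) is also assumed.

```python
import numpy as np

def normalise_connectivity(conn):
    """Normalise a (sources, targets, freqs) connectivity array: connections
    onto each target are divided by the sum, over all sources, of each
    connection's maximum value across the whole frequency range."""
    conn = np.asarray(conn, dtype=float)
    denom = conn.max(axis=2).sum(axis=0)          # one scalar per target
    return conn / denom[np.newaxis, :, np.newaxis]

def dynamic_threshold(conn_strength):
    """Subtract from each connection onto a source the first moment (mean)
    of the causal effects of all sources on that source, zeroing
    connections that fall below it."""
    cs = np.asarray(conn_strength, dtype=float)
    mean_in = cs.mean(axis=0, keepdims=True)      # mean inflow per target
    return np.where(cs >= mean_in, cs - mean_in, 0.0)
```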
RESULTS: The proposed method is evaluated using synthetic and real resting-state EEG from 46 healthy controls. A 3D convolutional neural network is trained and tested using the PDC and EEC samples. The results indicate that, compared to PDC, EEC improves EEG eye-state classification accuracy, sensitivity, and specificity by 5.57%, 3.15%, and 8.74%, respectively.
CONCLUSION: The correct identification of all connections in the synthetic data and the improved resting-state classification performance using EEC show that EEC gives a better estimate of directed causality and indicate that it can be used for a reliable understanding of brain mechanisms. The proposed technique may open up new research dimensions for the clinical diagnosis of mental disorders.
METHODS: The aforementioned computational TCA framework for sequential injection was applied and adapted to simulate TCA with simultaneous injection of acid and base at equimolar concentration and equal volume. The developed framework, which describes the flow of acid and base, their neutralisation, the rise in tissue temperature, and the formation of thermal damage, was solved numerically using the finite element method. The framework was used to investigate the effects of injection rate, reagent concentration, volume, and type (weak/strong acid-base combination) on temperature rise and thermal coagulation formation.
RESULTS: A higher injection rate resulted in a higher temperature rise and larger thermal coagulation. A reagent concentration of 7500 mol/m3 was found to be optimum in producing considerable thermal coagulation without the risk of tissue overheating. The thermal coagulation volume was found to be consistently larger than the total volume of acid and base injected into the tissue, which is beneficial as it reduces the risk of chemical burn injury. Three multivariate second-order polynomials that express the targeted coagulation volume as functions of injection rate and reagent volume, for the weak-weak, weak-strong, and strong-strong acid-base combinations, were also derived based on the simulated data.
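Deriving a second-order bivariate polynomial of this kind from simulated data is a standard least-squares fit. The sketch below mirrors the stated functional form; the coefficients come from whatever data is supplied, not from the paper, and the helper names are illustrative.

```python
import numpy as np

def fit_quadratic_surface(rate, volume, coag):
    """Least-squares fit of a second-order bivariate polynomial
    V_coag ~ c0 + c1*r + c2*v + c3*r^2 + c4*r*v + c5*v^2,
    matching the form of the derived design equations."""
    r, v = np.asarray(rate, float), np.asarray(volume, float)
    A = np.column_stack([np.ones_like(r), r, v, r**2, r * v, v**2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(coag, float), rcond=None)
    return coeffs

def predict_quadratic(coeffs, r, v):
    """Evaluate the fitted polynomial at a single (rate, volume) point."""
    return coeffs @ np.array([1.0, r, v, r**2, r * v, v**2])
```

One polynomial would be fitted per acid-base combination (weak-weak, weak-strong, strong-strong), each over its own subset of the simulated data.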
CONCLUSIONS: A guideline for a safe and effective implementation of TCA with simultaneous injection of acid and base was recommended based on the numerical results of the computational model developed. The guideline correlates the coagulation volume with the reagent volume and injection rate, and may be used by clinicians in determining the safe dosage of reagents and optimum injection rate to achieve a desired thermal coagulation volume during TCA.
METHODS: A non-linear autoregressive (NARX) model is used to reconstruct missing airway pressure due to the presence of spontaneous breathing (SB) effort in MV patients. The incidence of SB effort is then estimated. The study uses a total of 10,000 breathing cycles collected from 10 ARDS patients at IIUM Hospital in Kuantan, Malaysia. Two different training-and-validation ratios were examined. First, a 60:40 ratio was used, with 600 breath cycles for training and the remaining 400 breath cycles for testing; the ratio was then varied to 70:30 for training and testing.
RESULTS AND DISCUSSION: The mean residual error between the original airway pressure and the reconstructed airway pressure is denoted as the magnitude of effort. The median and interquartile range of the mean residual error for the two ratios are 0.0557 [0.0230-0.0874] and 0.0534 [0.0219-0.0870], respectively, across all patients. The results also show that Patient 2 had the highest percentage of SB incidence and Patient 10 the lowest, demonstrating that the NARX model performs well both with a high incidence of SB effort and when SB effort is largely absent.
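The effort metric and the incidence estimate described above can be sketched as follows. The absolute-residual form of the mean error and the exact thresholding rule are assumptions based on the abstract's wording (the 10% threshold is stated in the conclusion).

```python
import numpy as np

def effort_magnitude(p_orig, p_recon):
    """Magnitude of spontaneous breathing effort for one breath cycle:
    the mean (absolute) residual between the original and the
    NARX-reconstructed airway pressure."""
    return float(np.mean(np.abs(np.asarray(p_orig) - np.asarray(p_recon))))

def sb_incidence(efforts, threshold=0.10):
    """Fraction of breath cycles whose effort magnitude exceeds the
    threshold; a 10% level is used in the study's incidence estimate."""
    efforts = np.asarray(efforts, dtype=float)
    return float((efforts > threshold).mean())
```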
CONCLUSION: This model is able to produce the SB incidence rate based on a 10% threshold. Hence, the proposed NARX model is potentially useful for estimating and identifying patient-specific SB effort, which could further assist clinical decisions and optimize MV settings.
METHOD: The CAE model was trained using 12,170,655 pairs of simulated SB and normal (NB) flow data. The paired SB and NB flow data were simulated using a Gaussian Effort Model (GEM) with 5 basis functions. When the CAE model is given an SB flow input, it predicts the corresponding NB flow for that input. The magnitude of SB effort (SBEMag) is then quantified as the difference between the SB and NB flows. The CAE model was used to evaluate the SBEMag of 9 pressure control/support datasets. Results were validated using a mean squared error (MSE) fit between clinical and training SB flows.
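The SBEMag quantification step, taking the difference between the SB flow and the CAE-predicted NB flow, can be sketched minimally. The abstract reports SBEMag as a percentage but does not state the normalisation; expressing the difference relative to the NB flow's own magnitude is an assumption made for illustration.

```python
import numpy as np

def sbe_magnitude(sb_flow, nb_flow):
    """Quantify spontaneous breathing effort magnitude (SBEMag) as the
    summed absolute difference between the SB flow and the predicted
    normal (NB) flow, expressed as a percentage of the NB flow magnitude.
    The percentage normalisation is an assumption, not the authors' formula."""
    sb = np.asarray(sb_flow, dtype=float)
    nb = np.asarray(nb_flow, dtype=float)
    return float(100.0 * np.sum(np.abs(sb - nb)) / np.sum(np.abs(nb)))
```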
RESULTS: The CAE model was able to produce NB flows from the clinical SB flows, with a median SBEMag across the 9 datasets of 25.39% [IQR: 21.87-25.57%]. The absolute error in SBEMag using MSE validation yields a median of 4.77% [IQR: 3.77-8.56%] across the cohort. This shows the ability of the GEM to capture the intrinsic details present in SB flow waveforms. The analysis also shows both intra-patient and inter-patient variability in SBEMag.
CONCLUSION: A Convolutional Autoencoder model was developed with simulated SB and NB flow data and is capable of quantifying the magnitude of patient spontaneous breathing effort. This provides potential application for real-time monitoring of patient respiratory drive for better management of patient-ventilator interaction.
METHODS: A Computational Fluid Dynamics (CFD) approach is used to simulate the airflow in a neonate, an infant, and an adult under sedentary breathing conditions. Healthy CT scans were segmented using MIMICS 21.0 (Materialise, Ann Arbor, MI). The patient-specific 3D airway models were analyzed for low-Reynolds-number flow using ANSYS FLUENT 2020 R2. The applicability of the Grid Convergence Index (GCI) to the polyhedral mesh adopted in this work is also verified.
RESULTS: This study shows that the inferior meatus of the neonate accounted for only 15% of the total airflow, in contrast to the infant and adult, in whom 49% and 31% of the airflow passed through the inferior meatus region. The superior meatus experienced 25% of the total flow in the neonate, which is more than normal. The highest velocities of 1.8, 2.6, and 3.7 m/s were observed at the nasal valve region for the neonate, infant, and adult, respectively. The anterior portion of the nasal cavity experienced the maximum wall shear stress, with average values of 0.48, 0.25, and 0.58 Pa for the neonate, infant, and adult.
CONCLUSIONS: Neonates have an underdeveloped nasal cavity, which significantly affects their airflow distribution. The underdevelopment of the inferior meatus in neonates limited the flow through the inferior regions and resulted in an uneven flow distribution.