METHODS: The pterygium screening system was tested on two normal eye databases (UBIRIS and MILES) and two pterygium databases (Australia Pterygium and Brazil Pterygium). This system comprises four modules: (i) a preprocessing module to enhance the pterygium tissue using HSV-Sigmoid; (ii) a segmentation module to differentiate the corneal region and the pterygium tissue; (iii) a feature extraction module to extract corneal features using circularity ratio, Haralick's circularity, eccentricity, and solidity; and (iv) a classification module to identify the presence or absence of pterygium. System performance was evaluated using support vector machine (SVM) and artificial neural network classifiers.
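The shape descriptors named in the feature extraction module have standard definitions; a minimal Python sketch under that assumption (function names are ours, and Haralick's circularity, which requires boundary-pixel distances, is omitted):

```python
import math

def circularity_ratio(area, perimeter):
    # 4*pi*A / P^2 -> 1.0 for a perfect circle, smaller for irregular shapes
    return 4 * math.pi * area / perimeter ** 2

def eccentricity(major_axis, minor_axis):
    # sqrt(1 - (b/a)^2) of the fitted ellipse -> 0 for a circle
    return math.sqrt(1 - (minor_axis / major_axis) ** 2)

def solidity(area, convex_area):
    # fraction of the convex hull filled by the region
    return area / convex_area
```

In practice these quantities would be computed from a segmented corneal mask, e.g. via region properties of the binary image.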
RESULTS: A three-step frame differencing technique was introduced in the corneal segmentation module. The output image successfully covered the region of interest with an average accuracy of 0.9127. The performance of the proposed system using SVM provided the most promising results of 88.7%, 88.3%, and 95.6% for sensitivity, specificity, and area under the curve, respectively.
CONCLUSION: A basic platform for computer-aided pterygium screening was successfully developed using the proposed modules. The proposed system can classify pterygium and non-pterygium cases reasonably well. In future work, a standard grading system will be developed to identify the severity of pterygium cases. This system is expected to raise awareness of pterygium among communities in rural areas.
METHODS: This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF), which computes the fuzzy membership strength of nucleotides in each sliding window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions exhibit 3-base periodicity arising from an unbalanced nucleotide distribution that produces a relatively high bias in nucleotide usage, this fundamental characteristic of nucleotides has been exploited in FAWMF to suppress the signal noise.
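The s-shaped and z-shaped filters mentioned here are commonly defined as smooth quadratic-spline membership functions (as in standard fuzzy toolboxes); a sketch under that assumption, which may differ from the exact FAWMF formulation:

```python
def s_membership(x, a, b):
    # s-shaped membership: 0 below a, 1 above b, smooth spline in between
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    mid = (a + b) / 2
    if x <= mid:
        return 2 * ((x - a) / (b - a)) ** 2
    return 1 - 2 * ((x - b) / (b - a)) ** 2

def z_membership(x, a, b):
    # z-shaped membership: mirror image of the s-shaped function
    return 1.0 - s_membership(x, a, b)
```

A fuzzy window filter would evaluate such memberships for each nucleotide indicator value in the window before applying the median operation.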
RESULTS: Along with the adaptive response of FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions compared with fixed-length conventional window filters. The proposed FAWMF attains a significant enhancement in coding-region identification, i.e., 40% to 125%, as compared to other conventional window filters tested over more than 250 benchmarked and randomly selected DNA datasets of different organisms.
CONCLUSION: This study shows that conventional fixed-length window filters applied to DNA signals do not achieve significant results, since the nucleotides carry genetic-code context. The proposed FAWMF algorithm is adaptive and significantly outperforms them in processing DNA signal content. Applied to a variety of DNA datasets, the algorithm produced noteworthy discrimination between coding and non-coding regions, in contrast to fixed-length conventional window filters.
METHODS: An iterative airway pressure reconstruction (IPR) method is used to reconstruct asynchronous airway pressure waveforms to better match passive breathing airway waveforms using a single-compartment model. The reconstructed pressure enables estimation of respiratory mechanics from an airway pressure waveform that is essentially free of asynchrony. Reconstruction enables real-time breath-to-breath monitoring and quantification of the magnitude of the asynchrony (MAsyn).
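The single-compartment model underlying such reconstruction is conventionally written P(t) = E·V(t) + R·Q(t) + PEEP; a minimal identification sketch on a synthetic passive breath (our illustration, not the IPR implementation):

```python
import numpy as np

def fit_single_compartment(pressure, volume, flow, peep):
    # P(t) = E*V(t) + R*Q(t) + PEEP  ->  least squares over [V Q] [E R]^T = P - PEEP
    A = np.column_stack([volume, flow])
    coef, *_ = np.linalg.lstsq(A, pressure - peep, rcond=None)
    return coef  # [elastance E, resistance R]

# Synthetic passive breath with known mechanics (E=25 cmH2O/L, R=10 cmH2O.s/L, PEEP=5)
t = np.linspace(0.0, 2.0, 100)
V = 0.5 * np.sin(np.pi * t / 2) ** 2   # volume (L)
Q = np.gradient(V, t)                  # flow (L/s)
P = 25.0 * V + 10.0 * Q + 5.0          # airway pressure (cmH2O)
E, R = fit_single_compartment(P, V, Q, 5.0)
```

Because the synthetic data are exactly linear in V and Q, the fit recovers the known mechanics; asynchronous breaths break this fit, which is what reconstruction aims to repair.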
RESULTS AND DISCUSSION: Over 100,000 breathing cycles from MV patients with known asynchronous breathing were analyzed. The IPR was able to reconstruct different types of asynchronous breathing. The respiratory mechanics estimated using pressure reconstruction were more consistent, with a smaller interquartile range (IQR), than those estimated using asynchronous pressure. Comparing reconstructed with asynchronous pressure waveforms quantifies the magnitude of asynchronous breathing, yielding a median MAsyn of 3.8% over the entire dataset.
CONCLUSION: The iterative pressure reconstruction method is capable of identifying asynchronous breaths and improving the consistency of respiratory mechanics estimation compared to conventional model-based methods. It provides an opportunity to automate real-time quantification of asynchronous breathing frequency and magnitude, which was previously possible only with invasive methods.
OBJECTIVES: This study aimed to segment the breath cycles from pulmonary acoustic signals using the newly developed adaptive neuro-fuzzy inference system (ANFIS) based on breath phase detection and to subsequently evaluate the performance of the system.
METHODS: The normalised averaged power spectral density for each segment was fuzzified, and a set of fuzzy rules was formulated. The ANFIS was developed to detect the breath phases and subsequently perform breath cycle segmentation. To evaluate the performance of the proposed method, the root mean square error (RMSE) and correlation coefficient values were calculated and analysed, and the proposed method was then validated using data collected at KIMS Hospital and the RALE standard dataset.
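The two evaluation metrics named here, RMSE and the correlation coefficient, are standard; for reference, a plain-Python version:

```python
import math

def rmse(pred, target):
    # root mean square error between predicted and reference breath-phase values
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred))

def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```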
RESULTS: The analysis of the correlation coefficient of the neuro-fuzzy model, which was performed to evaluate its performance, revealed a correlation strength of r = 0.9925, and the RMSE for the neuro-fuzzy model was found to equal 0.0069.
CONCLUSION: The proposed neuro-fuzzy model performs better than the fuzzy inference system (FIS) in detecting the breath phases and segmenting the breath cycles, and requires fewer rules than the FIS.
METHODS: Cry signals from two different databases were utilized. The first database contains 507 cry samples of normal (N), 340 cry samples of asphyxia (A), 879 cry samples of deaf (D), 350 cry samples of hungry (H) and 192 cry samples of pain (P). The second database contains 513 cry samples of jaundice (J), 531 samples of premature (Prem) and 45 samples of normal (N). Wavelet packet transform-based energy and non-linear entropies (496 features), Linear Predictive Coding (LPC)-based cepstral features (56 features), and Mel-frequency Cepstral Coefficients (MFCCs; 16 features) were extracted. The combined feature set consists of 568 features. To overcome the curse of dimensionality, an improved binary dragonfly optimization algorithm (IBDFO) was proposed to select the most salient features. Finally, an Extreme Learning Machine (ELM) kernel classifier was used to classify the different types of infant cry signals using both the full feature set and the highly informative features.
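As one illustration, the per-subband energy and a Shannon-type non-linear entropy, two of the wavelet-packet features mentioned, might be computed as follows (the wavelet-packet decomposition itself, e.g. via PyWavelets, is omitted):

```python
import math

def subband_features(coeffs):
    # Energy and Shannon-style entropy of one wavelet-packet subband.
    # Entropy uses the normalised squared coefficients as a distribution.
    energy = sum(c * c for c in coeffs)
    probs = [c * c / energy for c in coeffs if c != 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return energy, entropy
```

Repeating this over every terminal node of the wavelet-packet tree yields the kind of high-dimensional feature vector that motivates a selection step such as IBDFO.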
RESULTS: Several experiments on two-class and multi-class classification of cry signals were conducted. In the binary (two-class) experiments, maximum accuracies of 90.18% for H vs. P, 100% for A vs. N, 100% for D vs. N, and 97.61% for J vs. Prem were achieved using the features selected by IBDFO (only 204 of the 568 features). For the classification of multiple cry signals (multi-class problem), the selected features could differentiate between three classes (N, A & D) with an accuracy of 100% and seven classes with an accuracy of 97.62%.
CONCLUSION: The experimental results indicated that the proposed combination of feature extraction and selection method offers suitable classification accuracy and may be employed to detect the subtle changes in the cry signals.
BACKGROUND AND OBJECTIVE: Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator of the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses due to the uncertainties in human judgement.
METHODS: An automated quantification system for accurately measuring the amount of interstitial fibrosis in renal biopsy images is presented as a consistent basis of comparison among pathologists. The system identifies the renal tissue structures through knowledge-based rules employing colour space transformations and structural features extraction from the images. In particular, the renal glomerulus identification is based on a multiscale textural feature analysis and a support vector machine. The regions in the biopsy representing interstitial fibrosis are deduced through the elimination of non-interstitial fibrosis structures from the biopsy area. The experiments conducted evaluate the system in terms of quantification accuracy, intra- and inter-observer variability in visual quantification by pathologists, and the effect introduced by the automated quantification system on the pathologists' diagnosis.
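A knowledge-based colour rule of the kind described might look like the following sketch; the HSV thresholds here are hypothetical illustrations, not the system's actual values:

```python
import colorsys

def is_fibrosis_candidate(r, g, b):
    # Hypothetical rule: flag blue-ish pixels (as in trichrome-stained
    # collagen) as interstitial-fibrosis candidates after colour-space
    # transformation from RGB to HSV.
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return 0.5 < h < 0.75 and s > 0.2 and v > 0.3
```

In the actual pipeline such pixel-level rules would be combined with structural feature extraction and, for glomeruli, the multiscale texture/SVM step, before non-fibrosis structures are eliminated.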
RESULTS: A 40-image ground truth dataset has been manually prepared by consulting an experienced pathologist for the validation of the segmentation algorithms. The results from experiments involving experienced pathologists have demonstrated an average error of 9 percentage points in quantification result between the automated system and the pathologists' visual evaluation. Experiments investigating the variability in pathologists involving samples from 70 kidney patients also proved the automated quantification error rate to be on par with the average intra-observer variability in pathologists' quantification.
CONCLUSIONS: The accuracy of the proposed quantification system has been validated with the ground truth dataset and compared against the pathologists' quantification results. It has been shown that the correlation between different pathologists' estimation of interstitial fibrosis area has significantly improved, demonstrating the effectiveness of the quantification system as a diagnostic aide.
OBJECTIVE: This study aimed to review and analyse literature related to the detection and classification of acute leukaemia. To improve understanding of the field's various contextual aspects, the published studies were characterised in terms of their motivation, the open challenges that confronted researchers, and the recommendations presented to researchers to enhance this vital research area.
METHODS: We systematically searched all articles about the classification and detection of acute leukaemia, as well as their evaluation and benchmarking, in three main databases: ScienceDirect, Web of Science and IEEE Xplore from 2007 to 2017. These indices were considered to be sufficiently extensive to encompass our field of literature.
RESULTS: Based on our inclusion and exclusion criteria, 89 articles were selected. Most studies (58/89) focused on the methods or algorithms of acute leukaemia classification, a number of papers (22/89) covered the developed systems for the detection or diagnosis of acute leukaemia, and a few papers (5/89) presented evaluation and comparative studies. The smallest portion (4/89) of articles comprised reviews and surveys.
DISCUSSION: Acute leukaemia diagnosis, which is a field requiring automated solutions, tools and methods, entails the ability to facilitate early detection or even prediction. Many studies have been performed on the automatic detection and classification of acute leukaemia and their subtypes to promote accurate diagnosis.
CONCLUSIONS: Research areas on medical-image classification vary, but they are all equally vital. We expect this systematic review to help emphasise current research opportunities and thus extend and create additional research fields.
METHODS: The aforesaid computational TCA framework for sequential injection was applied and adapted to simulate TCA with simultaneous injection of acid and base at equimolar concentrations and equal volumes. The developed framework, which describes the flow of acid and base, their neutralisation, the rise in tissue temperature and the formation of thermal damage, was solved numerically using the finite element method. The framework was then used to investigate the effects of injection rate, reagent concentration, volume and type (weak/strong acid-base combination) on temperature rise and thermal coagulation formation.
RESULTS: A higher injection rate resulted in a higher temperature rise and larger thermal coagulation. A reagent concentration of 7500 mol/m³ was found to be optimal for producing considerable thermal coagulation without the risk of tissue overheating. The thermal coagulation volume was found to be consistently larger than the total volume of acid and base injected into the tissue, which is beneficial as it reduces the risk of chemical burn injury. Three multivariate second-order polynomials that express the targeted coagulation volume as functions of injection rate and reagent volume, for the weak-weak, weak-strong and strong-strong acid-base combinations, were also derived based on the simulated data.
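A multivariate second-order polynomial of the kind derived here can be fitted to simulated data by ordinary least squares; a generic sketch (the coefficients and sample points below are illustrative, not the paper's fitted values):

```python
import numpy as np

def fit_quadratic_surface(x, y, z):
    # z ~ c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    # e.g. x = injection rate, y = reagent volume, z = coagulation volume
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef

# Synthetic demonstration with known coefficients
xs, ys = np.meshgrid(np.linspace(0, 3, 5), np.linspace(0, 3, 5))
x, y = xs.ravel(), ys.ravel()
z = 1 + 2 * x + 3 * y + 0.5 * x**2 - x * y + 0.25 * y**2
coef = fit_quadratic_surface(x, y, z)
```

One such surface would be fitted per acid-base combination (weak-weak, weak-strong, strong-strong).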
CONCLUSIONS: A guideline for a safe and effective implementation of TCA with simultaneous injection of acid and base was recommended based on the numerical results of the computational model developed. The guideline correlates the coagulation volume with the reagent volume and injection rate, and may be used by clinicians in determining the safe dosage of reagents and optimum injection rate to achieve a desired thermal coagulation volume during TCA.
METHODS: In this study, the drawbacks of DTF and PDC are addressed by proposing a novel technique, termed Efficient Effective Connectivity (EEC), for the estimation of EC between multivariate sources using AR spectral estimation and the Granger causality principle. In EEC, a linear predictive filter with AR coefficients obtained via multivariate EEG is used for signal prediction. This leads to the estimation of full-length signals, which are then transformed into the frequency domain using the Burg spectral estimation method. Furthermore, the newly proposed normalization method addresses the effect on each source in EEC using the sum of maximum connectivity values over the entire frequency range. Lastly, the proposed dynamic thresholding works by subtracting the first moment of the causal effects of all sources on one source from the individual connections present for that source.
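Our reading of the normalization and dynamic-thresholding steps can be sketched as follows; the array conventions are assumptions for illustration, not the authors' code:

```python
import numpy as np

def normalise_connectivity(C):
    # C[f, i, j]: causal effect of source j on target i at frequency f.
    # Scale each target by the sum of its maximum connectivity values
    # taken over the entire frequency range.
    scale = C.max(axis=0).sum(axis=1, keepdims=True)
    return C / scale

def dynamic_threshold(conn):
    # conn[i, j]: aggregated causal effect of j on i.  Subtract the mean
    # (first moment) of incoming effects per target, keep positive residuals.
    residual = conn - conn.mean(axis=1, keepdims=True)
    return np.where(residual > 0, residual, 0.0)
```

The threshold is "dynamic" in the sense that each target's cutoff is derived from its own incoming-connection statistics rather than a fixed global value.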
RESULTS: The proposed method is evaluated using synthetic and real resting-state EEG of 46 healthy controls. A 3D-Convolutional Neural Network is trained and tested using the PDC and EEC samples. The result indicates that compared to PDC, EEC improves the EEG eye-state classification accuracy, sensitivity and specificity by 5.57%, 3.15% and 8.74%, respectively.
CONCLUSION: Correct identification of all connections in synthetic data and improved resting-state classification performance using EEC proved that EEC gives better estimation of directed causality and indicates that it can be used for reliable understanding of brain mechanisms. Conclusively, the proposed technique may open up new research dimensions for clinical diagnosis of mental disorders.
METHODOLOGY: The present study investigates the effects of heterogeneous GNR distribution in a typical setup of GNR-PTT. Three cases were considered. Case 1 considered the GNRs at the tumour centre, while Case 2 represents a hypothetical scenario where GNRs are distributed at the tumour periphery; these two cases represent intratumoural accumulation with different degrees of GNR spread inside the tumour. Case 3 is achieved when GNRs target the exposed tumoural surface invading the bladder wall, as occurs when they are delivered by intravesical instillation.
RESULTS: Results indicate that for a laser power of 0.6 W and a GNR volume fraction of 0.01%, Cases 2 and 3 were successful in achieving complete tumour eradication after 330 and 470 s of laser irradiation, respectively. Case 1 failed to produce complete tumour damage when the GNRs were concentrated at the tumour centre, but succeeded when the spread of GNRs was wider. Results from Case 2 also demonstrated a different heating profile from Case 1, suggesting that thermal ablation during GNR-PTT is dependent on the GNR distribution inside the tumour. Case 3 shows similar results to Case 2, whereby gradual but uniform heating is observed. Cases 2 and 3 show that uniformly heating the tumour can reduce damage to the surrounding tissues.
CONCLUSIONS: The different GNR distributions associated with the different methods of introducing GNRs to the bladder during GNR-PTT affect the treatment outcome of bladder cancer in mice. Insufficient spreading during intratumoural injection of GNRs can render the treatment ineffective. The GNR distribution achieved through intravesical instillation presents some advantages over intratumoural injection and is worthy of further exploration.
METHODS: Thermomechanical damage indicators (maximum bone temperature, osteonecrosis diameter, osteonecrosis depth, maximum thrust force, and torque) were calculated using the finite element method under various margin heights (0.05-0.25 mm) and widths (0.02-0.26 mm). The simulation results were validated with experimental tests and previous research data.
RESULTS: The effect of margin height in increasing the maximum bone temperature, osteonecrosis diameter, and depth was at least 19.1%, 41.9%, and 59.6%, respectively. The thrust force and torque are highly sensitive to margin height. A higher margin height (0.21-0.25 mm) reduced the thrust force by 54.0% but increased drilling torque by 142.2%. The bone temperature, osteonecrosis diameter, and depth were 16.5%, 56.5%, and 81.4% lower, respectively, with increasing margin width. The minimum thrust force (11.1 N) and torque (41.9 Nmm) were produced with the highest margin width (0.26 mm). A margin height of 0.05-0.13 mm and a margin width of 0.22-0.26 mm produced the highest sum of weightage.
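A "sum of weightage" ranking of candidate drill-bit geometries is typically a weighted sum of min-max-normalised criteria; a generic sketch with hypothetical weights and values (the paper's actual weighting scheme is not given in the abstract):

```python
def minmax_scores(values, lower_is_better=True):
    # Normalise one criterion to [0, 1]; invert when smaller raw values
    # (e.g. temperature, necrosis depth) are preferable.
    lo, hi = min(values), max(values)
    s = [(v - lo) / (hi - lo) for v in values]
    return [1.0 - x for x in s] if lower_is_better else s

def sum_of_weightage(criteria, weights):
    # criteria: one normalised score list per criterion; weights: one per criterion.
    n = len(criteria[0])
    return [sum(w * c[i] for w, c in zip(weights, criteria)) for i in range(n)]

# Hypothetical example: three designs scored on temperature and thrust force
temps = minmax_scores([40.0, 50.0, 60.0])    # degC, lower is better
forces = minmax_scores([10.0, 20.0, 30.0])   # N, lower is better
totals = sum_of_weightage([temps, forces], [0.5, 0.5])
```

The design with the highest total would be selected, mirroring how the 0.05-0.13 mm height and 0.22-0.26 mm width ranges were identified.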
CONCLUSIONS: A surgical drill bit with a margin height of 0.05-0.13 mm and a margin width of 0.22-0.26 mm can produce minimum thermomechanical damage in cortical bone drilling. The insights regarding the suitable ranges for margin height and width from this study could be adopted in future research devoted to optimizing the margin of the existing surgical drill bit.
METHODS: The methodology is built on deep data analysis for normalization. In comparison to previous research, the system does not necessitate a feature extraction process, which optimizes and reduces system complexity. The data classification is performed by a designed 8-layer deep convolutional neural network.
RESULTS: Depending on the dataset used, we achieved accuracy, specificity, and sensitivity of 98%, 98%, and 98.5% on the short-term Bonn EEG dataset, and 96.99%, 96.89%, and 97.06% on the long-term CHB-MIT EEG dataset.
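The three reported metrics follow directly from the confusion matrix; for reference:

```python
def binary_metrics(tp, tn, fp, fn):
    # Standard definitions: accuracy over all cases, sensitivity (recall on
    # seizure class), specificity (recall on non-seizure class).
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```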
CONCLUSIONS: Through this detection approach, the system offers an optimized solution for seizure diagnosis. The proposed solution can be implemented in clinical or home environments for decision support.
METHODS: We proposed a new feature extraction method by replacing fully-connected layer with global average pooling (GAP) layer. A comparative analysis was conducted to compare the efficacy of 16 different convolutional neural network (CNN) feature extractors and three machine learning classifiers.
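Global average pooling reduces each channel of the final convolutional activation map to its spatial mean, so a C-channel map yields a C-dimensional feature vector in place of the flattened fully-connected input; a minimal sketch:

```python
import numpy as np

def global_average_pool(feature_maps):
    # feature_maps: (H, W, C) activation tensor from the last conv block.
    # Returns a C-dimensional vector: the spatial mean of each channel.
    return feature_maps.mean(axis=(0, 1))
```

For VGG16's final 7x7x512 block, this produces a 512-dimensional feature vector that can be fed directly to a classical classifier such as KNN.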
RESULTS: Experimental results revealed the potential of CNN feature extractors in conducting multitask diagnosis. The optimal model consisted of the VGG16-GAP feature extractor and a KNN classifier. This model not only outperformed the other tested models but also outperformed state-of-the-art methods, with higher balanced accuracy, higher Cohen's kappa, higher F1, and lower mean squared error (MSE), in predicting seven OA features.
CONCLUSIONS: The proposed model demonstrates prediction of pain, as well as of eight OA-related bony features, from plain radiographs. Future work should focus on exploring additional potential radiological manifestations of OA and their relation to therapeutic interventions.
METHODS: The investigated dataset was obtained via long-term measurements in retirement homes and intensive care units (ICU). Data were measured unobtrusively using a measuring pad equipped with piezoceramic sensors. The proposed approach focused on processing the measured ballistocardiographic signals with two methods: Cartan curvature (CC) and Euclidean arc length (EAL).
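The Euclidean arc length of a sampled signal has a simple closed form, the sum of segment lengths of the curve (t, y(t)); a sketch assuming uniform sampling (the Cartan-curvature computation is omitted):

```python
import math

def euclidean_arc_length(samples, dt):
    # Arc length of the sampled curve (t, y(t)); oscillatory (aberrant)
    # ballistocardiographic segments yield longer curves than flat ones.
    return sum(math.hypot(dt, samples[i + 1] - samples[i])
               for i in range(len(samples) - 1))
```

Such per-segment descriptors can then be fed, alongside the raw 2-second windows, into a classifier.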
RESULTS: For analysis, 218,979 normal and 216,259 aberrant 2-second samples were collected and classified using a convolutional neural network. Experiments using cross-validation with expert threshold and data length revealed the accuracy, sensitivity, and specificity of the proposed method to be 86.51%.
CONCLUSIONS: The proposed method provides a unique approach for the early detection of health concerns in an unobtrusive manner. In addition, the suitability of EAL over the CC was determined.
METHODS: Eighteen volunteer participants were recruited from the Canterbury and Otago regions of New Zealand to take part in a Dynamic Insulin Sensitivity and Secretion Test (DISST) clinical trial. A total of 46 DISST datasets were collected; however, 4 were removed due to ambiguity and inconsistency. Analysis was done using MATLAB 2020a.
RESULTS AND DISCUSSION: Results show that, with the 42 gathered datasets, the ANN generates higher gains, ∅P = 20.73 [12.21, 28.57] mU·L·mmol⁻¹·min⁻¹ and ∅D = 60.42 [26.85, 131.38] mU·L·mmol⁻¹, than the linear least squares method, ∅P = 19.67 [11.81, 28.02] mU·L·mmol⁻¹·min⁻¹ and ∅D = 46.21 [7.25, 116.71] mU·L·mmol⁻¹. The average insulin sensitivity (SI) of the ANN, SI = 16 × 10⁻⁴ L·mU⁻¹·min⁻¹, is lower than that of the linear least squares method, SI = 17 × 10⁻⁴ L·mU⁻¹·min⁻¹.
CONCLUSION: Although the ANN analysis provided a lower SI value, the results were more dependable than those of the linear least squares model because the ANN approach yielded better model-fitting accuracy, with a residual error of less than 5%. This ANN architecture shows that the ANN is able to produce minimal error during the optimization process, particularly when dealing with outlying data. The findings may provide extra information to clinicians, allowing them to gain a better understanding of the heterogeneous aetiology of diabetes and of therapeutic intervention options.
METHODS: In total, 154 patients (wild-type EGFR, 72 patients; Del19 mutation, 45 patients; and L858R mutation, 37 patients) were retrospectively enrolled and randomly divided into 92 training and 62 test cases. Two support vector machine (SVM) models to distinguish between wild-type and mutant EGFR (mutation [M] classification) as well as between the Del19 and L858R subtypes (subtype [S] classification) were trained using 3DBN features. These features were computed from 3DBN maps by using histogram and texture analyses. The 3DBN maps were generated using computed tomography (CT) images based on the Čech complex constructed on sets of points in the images. These points were defined by the coordinates of voxels with CT values higher than several threshold values. The M classification model was built using image features and the demographic parameters of sex and smoking status. The SVM models were evaluated by determining their classification accuracies. The feasibility of the 3DBN model was compared with those of conventional radiomic models based on pseudo-3D BN (p3DBN), two-dimensional BN (2DBN), and CT and wavelet-decomposition (WD) images. Model validation was repeated with 100 rounds of random sampling.
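The point sets on which the Čech complex is constructed are simply the coordinates of voxels above each CT threshold; a sketch of that extraction step (threshold values are illustrative, and the complex construction itself is omitted):

```python
import numpy as np

def superlevel_points(ct_volume, threshold):
    # Coordinates of voxels whose CT value exceeds the threshold; these
    # point sets seed the Cech complex at each filtration level.
    return np.argwhere(ct_volume > threshold)

# Hypothetical example: sweep several thresholds over a small volume
vol = np.zeros((2, 2, 2))
vol[1, 0, 1] = 100.0
point_sets = {thr: superlevel_points(vol, thr) for thr in (-500.0, 0.0, 50.0)}
```

Histogram and texture analyses would then be applied to the resulting 3DBN maps rather than to the raw point sets.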
RESULTS: The mean test accuracies for M classification with 3DBN, p3DBN, 2DBN, CT, and WD images were 0.810, 0.733, 0.838, 0.782, and 0.799, respectively. The mean test accuracies for S classification with 3DBN, p3DBN, 2DBN, CT, and WD images were 0.773, 0.694, 0.657, 0.581, and 0.696, respectively.
CONCLUSION: 3DBN features, which showed a radiogenomic association with the characteristics of the EGFR Del19/L858R mutation subtypes, yielded higher accuracy for subtype classifications in comparison with conventional features.
METHODS: A stochastic model was developed using respiratory elastance (Ers) data from two clinical cohorts and averaged over 30-minute time intervals. The stochastic model was used to generate future Ers data based on current Ers values with added normally distributed random noise. Self-validation of the VPs was performed via Monte Carlo simulation and retrospective Ers profile fitting. A stochastic VP cohort of temporal Ers evolution was synthesised and then compared to an independent retrospective patient cohort data in a virtual trial across several measured patient responses, where similarity of profiles validates the realism of stochastic model generated VP profiles.
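The generation step, drawing the next Ers from a transition model of the current value plus normally distributed noise, can be sketched as follows; the transition map here is a hypothetical stand-in for the fitted stochastic model:

```python
import random

def generate_virtual_ers(e0, transition, steps, noise_pct=0.05, seed=0):
    # Random-walk sketch of virtual-patient Ers evolution: each 30-minute
    # step maps the current Ers through a transition model, then perturbs
    # it with Gaussian noise of relative magnitude noise_pct.
    rng = random.Random(seed)
    ers = [e0]
    for _ in range(steps):
        nxt = transition(ers[-1])
        ers.append(nxt * (1 + rng.gauss(0, noise_pct)))
    return ers
```

Repeating this for many seeds yields a cohort of temporal Ers profiles of the kind validated in the virtual trial.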
RESULTS: A total of 120,000 3-hour VPs for pressure control (PC) and volume control (VC) ventilation modes were generated using stochastic simulation. Optimisation of the stochastic simulation process yields an ideal noise percentage of 5-10% and 200,000 simulation iterations, allowing the simulation of a realistic and diverse set of Ers profiles. Results of self-validation show the retrospective Ers profiles were able to be recreated accurately, with a mean squared error of only 0.099 [0.009-0.790]% for the PC cohort and 0.051 [0.030-0.126]% for the VC cohort. A virtual trial demonstrates the ability of the stochastic VP cohort to capture Ers trends within and beyond the retrospective patient cohort, providing cohort-level validation.
CONCLUSION: VPs capable of temporal evolution demonstrate feasibility for use in designing, developing, and optimising bedside MV guidance protocols through in-silico simulation and validation. Overall, the temporal VPs developed using stochastic simulation alleviate the need for lengthy, resource intensive, high cost clinical trials, while facilitating statistically robust virtual trials, ultimately leading to improved patient care and outcomes in mechanical ventilation.
METHODS: We investigated the existing body of evidence and applied the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to search records in the IEEE, Google Scholar, and PubMed databases. We identified 65 papers published from 2013 to 2022, covering 67 different studies. The review process was structured according to the medical data used for disease detection. We identified six main categories, namely air flow, genetics, imaging, signals, medical records, and miscellaneous. For each of these categories, we report both the disease detection methods and their performance.
RESULTS: We found that medical imaging was used in 14 of the reviewed studies as data for automated obstructive airway disease detection. Genetics and physiological signals were used in 13 studies. Medical records and air flow were used in 9 and 7 studies, respectively. Most papers were published in 2020 and we found three times more work on Machine Learning (ML) when compared to Deep Learning (DL). Statistical analysis shows that DL techniques achieve higher Accuracy (ACC) when compared to ML. Convolutional Neural Network (CNN) is the most common DL classifier and Support Vector Machine (SVM) is the most widely used ML classifier. During our review, we discovered only two publicly available asthma and COPD datasets. Most studies used private clinical datasets, so data size and data composition are inconsistent.
CONCLUSIONS: Our review results indicate that Artificial Intelligence (AI) can improve both decision quality and the efficiency of health professionals during COPD and asthma diagnosis. However, we found several limitations in the reviewed work, such as a lack of dataset consistency, limited dataset sizes, and insufficient exploration of remote monitoring. We appeal to society to accept and trust computer-aided diagnosis of airflow obstruction diseases, and we encourage health professionals to work closely with AI scientists to promote automated detection in clinical practice and hospital settings.