  1. Boon KH, Khalil-Hani M, Malarvili MB
    Comput Methods Programs Biomed, 2018 Jan;153:171-184.
    PMID: 29157449 DOI: 10.1016/j.cmpb.2017.10.012
    This paper presents a method that is able to predict paroxysmal atrial fibrillation (PAF). The method uses shorter heart rate variability (HRV) signals than existing methods while achieving good prediction accuracy. PAF is a common cardiac arrhythmia that increases a patient's health risk, and an accurate predictor of PAF onset is clinically important because it increases the possibility of electrically stabilizing the heart and preventing the onset of atrial arrhythmias with different pacing techniques. We propose a multi-objective optimization algorithm based on the non-dominated sorting genetic algorithm III for optimizing the baseline PAF prediction system, which consists of pre-processing, HRV feature extraction, and support vector machine (SVM) model stages. The pre-processing stage comprises heart rate correction, interpolation, and signal detrending. Time-domain, frequency-domain, and non-linear HRV features are then extracted from the pre-processed data in the feature extraction stage and used as input to the SVM for predicting the PAF event. The proposed optimization algorithm simultaneously optimizes the parameters and settings of the various HRV feature extraction algorithms, selects the best feature subsets, and tunes the SVM parameters for maximum prediction performance. The proposed method achieves an accuracy rate of 87.7%, which significantly outperforms most previous works, even with the HRV signal length reduced from the typical 30 min to just 5 min (a reduction of 83%). Furthermore, the sensitivity rate, which is considered more important than the other performance metrics in this paper, can be improved at the trade-off of lower specificity.
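
    As a rough illustration of this kind of pipeline, the sketch below computes a few standard time-domain HRV features from RR intervals and feeds them to an SVM classifier; the feature set, the synthetic segments, and the labels are illustrative assumptions, not the paper's implementation.

      # Hypothetical HRV-feature/SVM pipeline (synthetic data, not the paper's code)
      import numpy as np
      from sklearn.svm import SVC

      def hrv_time_features(rr_ms):
          """Basic time-domain HRV features from RR intervals in milliseconds."""
          diff = np.diff(rr_ms)
          return np.array([
              np.mean(rr_ms),               # mean RR
              np.std(rr_ms, ddof=1),        # SDNN
              np.sqrt(np.mean(diff ** 2)),  # RMSSD
              np.mean(np.abs(diff) > 50),   # pNN50 (fraction)
          ])

      rng = np.random.default_rng(0)
      # one 5-min HRV segment per row; y = 1 if PAF onset follows, else 0
      X = np.array([hrv_time_features(800 + 50 * rng.standard_normal(300))
                    for _ in range(40)])
      y = rng.integers(0, 2, size=40)
      clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
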
  2. Wan Zaki WMD, Mat Daud M, Abdani SR, Hussain A, Mutalib HA
    Comput Methods Programs Biomed, 2018 Feb;154:71-78.
    PMID: 29249348 DOI: 10.1016/j.cmpb.2017.10.026
    BACKGROUND AND OBJECTIVE: Pterygium is an ocular disease caused by fibrovascular tissue encroachment onto the corneal region. The tissue may cause blurred vision if it grows into the pupil region. In this study, we propose an automatic detection method to differentiate pterygium from non-pterygium (normal) cases on the basis of frontal eye photographs, also known as anterior segment photographed images.

    METHODS: The pterygium screening system was tested on two normal eye databases (UBIRIS and MILES) and two pterygium databases (Australia Pterygium and Brazil Pterygium). This system comprises four modules: (i) a preprocessing module to enhance the pterygium tissue using HSV-Sigmoid; (ii) a segmentation module to differentiate the corneal region and the pterygium tissue; (iii) a feature extraction module to extract corneal features using circularity ratio, Haralick's circularity, eccentricity, and solidity; and (iv) a classification module to identify the presence or absence of pterygium. System performance was evaluated using a support vector machine (SVM) and an artificial neural network.
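
    The corneal shape descriptors named in the feature extraction module are standard region properties; the sketch below computes them for a hypothetical binary corneal mask (the toy mask and the radial definition used here for Haralick's circularity are assumptions, not the authors' code).

      # Shape features for a toy binary "cornea" mask (illustrative only)
      import numpy as np
      from skimage.measure import label, regionprops

      def shape_features(mask):
          props = regionprops(label(mask.astype(int)))[0]
          area, perim = props.area, props.perimeter
          circularity = 4 * np.pi * area / perim ** 2      # circularity ratio
          coords = np.argwhere(mask)
          r = np.linalg.norm(coords - coords.mean(axis=0), axis=1)
          haralick_circ = r.mean() / r.std()               # Haralick's circularity
          return circularity, haralick_circ, props.eccentricity, props.solidity

      yy, xx = np.ogrid[:64, :64]
      mask = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2    # synthetic disc
      print(shape_features(mask))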

    RESULTS: The three-step frame differencing technique was introduced in the corneal segmentation module. The output image successfully covered the region of interest with an average accuracy of 0.9127. The performance of the proposed system using SVM provided the most promising results of 88.7%, 88.3%, and 95.6% for sensitivity, specificity, and area under the curve, respectively.

    CONCLUSION: A basic platform for computer-aided pterygium screening was successfully developed using the proposed modules. The proposed system can classify pterygium and non-pterygium cases reasonably well. In future work, a standard grading system will be developed to identify the severity of pterygium cases. This system is also expected to increase awareness of pterygium among communities in rural areas.

  3. Ahmad M, Jung LT, Bhuiyan AA
    Comput Methods Programs Biomed, 2017 Oct;149:11-17.
    PMID: 28802326 DOI: 10.1016/j.cmpb.2017.06.021
    BACKGROUND AND OBJECTIVE: Digital signal processing techniques commonly employ fixed-length window filters to process signal contents. DNA signals differ in characteristics from common digital signals because they carry nucleotides as contents. The nucleotides carry genetic code context and exhibit fuzzy behaviour owing to their special structure and order in the DNA strand. Employing conventional fixed-length window filters for DNA signal processing produces spectral leakage and hence signal noise. A biological-context-aware adaptive window filter is therefore required to process DNA signals.

    METHODS: This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF) that computes the fuzzy membership strength of the nucleotides in each sliding window and filters them using median filtering with a combination of s-shaped and z-shaped membership functions. Since coding regions exhibit 3-base periodicity arising from an unbalanced nucleotide distribution, which produces a relatively high bias in nucleotide usage, this fundamental characteristic is exploited in the FAWMF to suppress signal noise.
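
    For orientation, a minimal sketch of s-shaped and z-shaped fuzzy membership functions follows; their product gives a Π-shaped profile of the kind mentioned in the results. The numeric nucleotide encoding and the parameters a and b are illustrative assumptions, not the FAWMF itself.

      # Standard s- and z-shaped fuzzy membership functions (illustrative)
      import numpy as np

      def s_mf(x, a, b):
          m = (a + b) / 2.0
          return np.where(x <= a, 0.0,
                 np.where(x <= m, 2 * ((x - a) / (b - a)) ** 2,
                 np.where(x <= b, 1 - 2 * ((x - b) / (b - a)) ** 2, 1.0)))

      def z_mf(x, a, b):
          return 1.0 - s_mf(x, a, b)

      # toy numeric encoding of one nucleotide window, e.g. A=0, C=1, G=2, T=3
      window = np.array([0, 2, 2, 1, 3, 2, 0, 3, 2], dtype=float)
      strength = s_mf(window, 0, 3) * z_mf(window, 0, 3)  # Π-shaped membership
      filtered = np.median(window[strength >= np.median(strength)])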

    RESULTS: Along with the adaptive response of the FAWMF, a strong correlation between the median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions compared with fixed-length conventional window filters. The proposed FAWMF attains a significant enhancement in coding-region identification, i.e., 40% to 125% compared with other conventional window filters, tested over more than 250 benchmarked and randomly selected DNA datasets from different organisms.

    CONCLUSION: This study shows that conventional fixed-length window filters applied to DNA signals do not achieve significant results because the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and processes DNA signal contents significantly better. Applied to a variety of DNA datasets, it produced noteworthy discrimination between coding and non-coding regions in contrast to fixed-length conventional window filters.

  4. Chiew YS, Tan CP, Chase JG, Chiew YW, Desaive T, Ralib AM, et al.
    Comput Methods Programs Biomed, 2018 Apr;157:217-224.
    PMID: 29477430 DOI: 10.1016/j.cmpb.2018.02.007
    BACKGROUND AND OBJECTIVE: Respiratory mechanics estimation can be used to guide mechanical ventilation (MV) but is severely compromised when asynchronous breathing occurs. In addition, asynchrony during MV is often not monitored, and little is known about the impact or magnitude of asynchronous breathing on recovery. It is therefore important to monitor and quantify asynchronous breathing over every breath in an automated fashion, overcoming the limitations of model-based respiratory mechanics estimation during asynchronous breathing.

    METHODS: An iterative airway pressure reconstruction (IPR) method is used to reconstruct asynchronous airway pressure waveforms to better match passive breathing waveforms using a single-compartment model. The reconstructed pressure enables estimation of respiratory mechanics from an airway pressure waveform essentially free of asynchrony. Reconstruction also enables real-time, breath-to-breath monitoring and quantification of the magnitude of asynchrony (MAsyn).
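
    The single-compartment model underlying such reconstruction is commonly written as Paw(t) = E·V(t) + R·Q(t) + P0; the sketch below fits E and R per breath by least squares on synthetic waveforms (this is the generic model fit, not the authors' IPR code).

      # Least-squares fit of the single-compartment lung model (synthetic breath)
      import numpy as np

      t = np.linspace(0, 1.0, 100)              # 1 s inspiration
      Q = np.where(t < 0.5, 0.5, 0.0)           # flow (L/s)
      V = np.cumsum(Q) * (t[1] - t[0])          # volume (L)
      E_true, R_true, P0_true = 25.0, 10.0, 5.0
      Paw = E_true * V + R_true * Q + P0_true   # airway pressure (cmH2O)

      A = np.column_stack([V, Q, np.ones_like(t)])
      E_hat, R_hat, P0_hat = np.linalg.lstsq(A, Paw, rcond=None)[0]
      # Iterating this fit while replacing asynchronous pressure samples with
      # model predictions is the essence of reconstruction methods such as IPR.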

    RESULTS AND DISCUSSION: Over 100,000 breathing cycles from MV patients with known asynchronous breathing were analyzed. The IPR was able to reconstruct different types of asynchronous breathing. The respiratory mechanics estimated using the reconstructed pressure were more consistent, with a smaller interquartile range (IQR), than those estimated using asynchronous pressure. Comparing the reconstructed pressure with the asynchronous pressure waveforms quantifies the magnitude of asynchronous breathing, with a median MAsyn of 3.8% over the entire dataset.

    CONCLUSION: The iterative pressure reconstruction method is capable of identifying asynchronous breaths and improving the consistency of respiratory mechanics estimation compared with conventional model-based methods. It provides an opportunity to automate real-time quantification of asynchronous breathing frequency and magnitude, which was previously possible only with invasive methods.

  5. Palaniappan R, Sundaraj K, Sundaraj S
    Comput Methods Programs Biomed, 2017 Jul;145:67-72.
    PMID: 28552127 DOI: 10.1016/j.cmpb.2017.04.013
    BACKGROUND: Monitoring of the respiratory rate is vital in several medical conditions, including sleep apnea, because patients with sleep apnea exhibit an irregular respiratory rate compared with controls. Therefore, monitoring the respiratory rate by detecting the different breath phases is crucial.

    OBJECTIVES: This study aimed to segment the breath cycles from pulmonary acoustic signals using the newly developed adaptive neuro-fuzzy inference system (ANFIS) based on breath phase detection and to subsequently evaluate the performance of the system.

    METHODS: The normalised averaged power spectral density for each segment was fuzzified, and a set of fuzzy rules was formulated. The ANFIS was developed to detect the breath phases and subsequently perform breath cycle segmentation. To evaluate the performance of the proposed method, the root mean square error (RMSE) and correlation coefficient values were calculated and analysed, and the proposed method was then validated using data collected at KIMS Hospital and the RALE standard dataset.
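
    A minimal sketch of a normalised averaged power spectral density per segment, the quantity fuzzified above, is shown below; the sampling rate, segment length, and Welch settings are assumptions.

      # Normalised averaged PSD per 0.5 s segment (synthetic stand-in signal)
      import numpy as np
      from scipy.signal import welch

      fs = 8000                                    # sampling rate (Hz), assumed
      x = np.random.randn(fs * 4)                  # 4 s of lung-sound-like noise
      seg_len = fs // 2
      feats = []
      for start in range(0, len(x) - seg_len + 1, seg_len):
          f, pxx = welch(x[start:start + seg_len], fs=fs, nperseg=1024)
          feats.append(pxx.mean() / pxx.max())     # normalised averaged PSD
      print(len(feats), "segment features")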

    RESULTS: The analysis of the correlation coefficient of the neuro-fuzzy model, which was performed to evaluate its performance, revealed a correlation strength of r = 0.9925, and the RMSE for the neuro-fuzzy model was found to equal 0.0069.

    CONCLUSION: The proposed neuro-fuzzy model performs better than the fuzzy inference system (FIS) in detecting the breath phases and segmenting the breath cycles, and requires fewer rules than the FIS.

  6. Hariharan M, Sindhu R, Vijean V, Yazid H, Nadarajaw T, Yaacob S, et al.
    Comput Methods Programs Biomed, 2018 Mar;155:39-51.
    PMID: 29512503 DOI: 10.1016/j.cmpb.2017.11.021
    BACKGROUND AND OBJECTIVE: An infant cry signal carries several levels of information about the reason for crying (hunger, pain, sleepiness, discomfort) or the pathological status of the infant (asphyxia, deafness, jaundice, prematurity, autism, etc.) and is therefore suited to early diagnosis. In this work, a combination of wavelet packet based features and an Improved Binary Dragonfly Optimization based feature selection method was proposed to classify the different types of infant cry signals.

    METHODS: Cry signals from two different databases were utilized. The first database contains 507 cry samples of normal (N), 340 of asphyxia (A), 879 of deaf (D), 350 of hungry (H), and 192 of pain (P). The second database contains 513 cry samples of jaundice (J), 531 of premature (Prem), and 45 of normal (N). Wavelet packet transform based energy and non-linear entropies (496 features), Linear Predictive Coding (LPC) based cepstral features (56 features), and Mel-frequency Cepstral Coefficients (MFCCs, 16 features) were extracted, giving a combined feature set of 568 features. To overcome the curse of dimensionality, the improved binary dragonfly optimization algorithm (IBDFO) was proposed to select the most salient features. Finally, an Extreme Learning Machine (ELM) kernel classifier was used to classify the different types of infant cry signals using both the full feature set and the highly informative selected features.
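
    The wavelet-packet energy and entropy features can be sketched as follows; the wavelet family, decomposition depth, and the stand-in cry signal are assumptions, not the study's exact configuration.

      # Wavelet-packet energy and Shannon-entropy features (illustrative)
      import numpy as np
      import pywt

      def wp_features(x, wavelet="db4", level=4):
          wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
          feats = []
          for node in wp.get_level(level, order="natural"):
              c = np.asarray(node.data)
              energy = np.sum(c ** 2)
              p = c ** 2 / (energy + 1e-12)
              feats.extend([energy, -np.sum(p * np.log2(p + 1e-12))])
          return np.array(feats)

      cry = np.random.randn(8000)      # stand-in for a 1 s cry segment at 8 kHz
      print(wp_features(cry).shape)    # energy + entropy per terminal node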

    RESULTS: Several two-class and multi-class classification experiments were conducted. In the binary (two-class) experiments, maximum accuracies of 90.18% for H vs P, 100% for A vs N, 100% for D vs N, and 97.61% for J vs Prem were achieved using the features selected by IBDFO (only 204 of the 568 features). In the multi-class experiments, the selected features could differentiate between three classes (N, A and D) with an accuracy of 100% and between seven classes with an accuracy of 97.62%.

    CONCLUSION: The experimental results indicated that the proposed combination of feature extraction and selection method offers suitable classification accuracy and may be employed to detect the subtle changes in the cry signals.

  7. Tey WK, Kuang YC, Ooi MP, Khoo JJ
    Comput Methods Programs Biomed, 2018 Mar;155:109-120.
    PMID: 29512490 DOI: 10.1016/j.cmpb.2017.12.004
    BACKGROUND AND OBJECTIVE: Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator to the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses due to the uncertainties in human judgement.

    METHODS: An automated quantification system for accurately measuring the amount of interstitial fibrosis in renal biopsy images is presented as a consistent basis of comparison among pathologists. The system identifies the renal tissue structures through knowledge-based rules employing colour space transformations and structural feature extraction from the images. In particular, glomerulus identification is based on multiscale textural feature analysis and a support vector machine. The regions in the biopsy representing interstitial fibrosis are deduced through the elimination of non-interstitial-fibrosis structures from the biopsy area. The experiments conducted evaluate the system in terms of quantification accuracy, intra- and inter-observer variability in visual quantification by pathologists, and the effect introduced by the automated quantification system on the pathologists' diagnoses.
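
    The elimination-based quantification step can be illustrated with a toy mask computation: the fibrosis fraction is whatever biopsy area remains after the identified non-fibrosis structures are removed (all masks below are synthetic placeholders).

      # Fibrosis percentage by elimination of non-fibrosis structures (toy masks)
      import numpy as np

      rng = np.random.default_rng(1)
      biopsy = rng.random((128, 128)) < 0.9        # biopsy tissue mask
      glomeruli = rng.random((128, 128)) < 0.15    # identified structures
      tubules = rng.random((128, 128)) < 0.30
      fibrosis = biopsy & ~(glomeruli | tubules)
      pct = 100.0 * fibrosis.sum() / biopsy.sum()
      print(f"interstitial fibrosis: {pct:.1f}% of biopsy area")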

    RESULTS: A 40-image ground truth dataset was manually prepared in consultation with an experienced pathologist for the validation of the segmentation algorithms. Experiments involving experienced pathologists demonstrated an average difference of 9 percentage points between the automated system's quantification and the pathologists' visual evaluation. Experiments investigating inter-pathologist variability, involving samples from 70 kidney patients, also showed the automated quantification error rate to be on par with the average intra-observer variability in the pathologists' quantification.

    CONCLUSIONS: The accuracy of the proposed quantification system has been validated with the ground truth dataset and compared against the pathologists' quantification results. The correlation between different pathologists' estimates of the interstitial fibrosis area improved significantly, demonstrating the effectiveness of the quantification system as a diagnostic aid.

  8. Alsalem MA, Zaidan AA, Zaidan BB, Hashim M, Madhloom HT, Azeez ND, et al.
    Comput Methods Programs Biomed, 2018 May;158:93-112.
    PMID: 29544792 DOI: 10.1016/j.cmpb.2018.02.005
    CONTEXT: Acute leukaemia diagnosis is a field requiring automated solutions, tools and methods and the ability to facilitate early detection and even prediction. Many studies have focused on the automatic detection and classification of acute leukaemia and its subtypes to enable highly accurate diagnosis.

    OBJECTIVE: This study aimed to review and analyse the literature related to the detection and classification of acute leukaemia. The factors considered, to improve understanding of the field's various contextual aspects, were the motivations and characteristics of published studies, the open challenges confronting researchers, and the recommendations presented to researchers to enhance this vital research area.

    METHODS: We systematically searched all articles about the classification and detection of acute leukaemia, as well as their evaluation and benchmarking, in three main databases: ScienceDirect, Web of Science and IEEE Xplore from 2007 to 2017. These indices were considered to be sufficiently extensive to encompass our field of literature.

    RESULTS: Based on our inclusion and exclusion criteria, 89 articles were selected. Most studies (58/89) focused on the methods or algorithms of acute leukaemia classification, a number of papers (22/89) covered the developed systems for the detection or diagnosis of acute leukaemia and few papers (5/89) presented evaluation and comparative studies. The smallest portion (4/89) of articles comprised reviews and surveys.

    DISCUSSION: Acute leukaemia diagnosis, which is a field requiring automated solutions, tools and methods, entails the ability to facilitate early detection or even prediction. Many studies have been performed on the automatic detection and classification of acute leukaemia and their subtypes to promote accurate diagnosis.

    CONCLUSIONS: Research areas on medical-image classification vary, but they are all equally vital. We expect this systematic review to help emphasise current research opportunities and thus extend and create additional research fields.

  9. Mak NL, Ooi EH, Lau EV, Ooi ET, Pamidi N, Foo JJ, et al.
    Comput Methods Programs Biomed, 2022 Dec;227:107195.
    PMID: 36323179 DOI: 10.1016/j.cmpb.2022.107195
    BACKGROUND AND OBJECTIVES: Thermochemical ablation (TCA) is a thermal ablation technique involving the injection of acid and base, either sequentially or simultaneously, into the target tissue. TCA remains at the conceptual stage, with existing studies unable to provide recommendations on the optimum injection rate, reagent concentration and volume. Limitations in current experimental methodology have prevented proper elucidation of the thermochemical processes inside the tissue during TCA. Nevertheless, the computational TCA framework developed recently by Mak et al. [Computers in Biology and Medicine, 2022, 145:105494] has opened new avenues in the development of TCA. In particular, a recommended safe dosage is imperative to drive TCA research beyond the conceptual stage.

    METHODS: The aforesaid computational TCA framework for sequential injection was adapted to simulate TCA with simultaneous injection of acid and base at equimolar concentrations and equal volumes. The framework, which describes the flow of acid and base, their neutralisation, the rise in tissue temperature and the formation of thermal damage, was solved numerically using the finite element method. It was used to investigate the effects of injection rate, reagent concentration, volume and type (weak/strong acid-base combination) on temperature rise and thermal coagulation formation.

    RESULTS: A higher injection rate resulted in a higher temperature rise and larger thermal coagulation. A reagent concentration of 7500 mol/m3 was found to be optimum in producing considerable thermal coagulation without the risk of tissue overheating. The thermal coagulation volume was found to be consistently larger than the total volume of acid and base injected into the tissue, which is beneficial as it reduces the risk of chemical burn injury. Three multivariate second-order polynomials expressing the targeted coagulation volume as functions of injection rate and reagent volume, for the weak-weak, weak-strong and strong-strong acid-base combinations, were also derived from the simulated data.
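
    The form of such a multivariate second-order polynomial fit can be sketched as follows; the data points below are placeholders, not the paper's simulated results.

      # Fit V = c0 + c1*q + c2*v + c3*q^2 + c4*q*v + c5*v^2 by least squares
      import numpy as np

      q = np.array([0.5, 0.5, 1.0, 1.0, 2.0, 2.0])  # injection rate (placeholder)
      v = np.array([2.0, 4.0, 2.0, 4.0, 2.0, 4.0])  # reagent volume (placeholder)
      V = np.array([1.1, 1.9, 1.6, 2.8, 2.2, 3.9])  # coagulation volume (placeholder)

      A = np.column_stack([np.ones_like(q), q, v, q ** 2, q * v, v ** 2])
      coef, *_ = np.linalg.lstsq(A, V, rcond=None)

      def predict(q_, v_):
          return np.array([1.0, q_, v_, q_ ** 2, q_ * v_, v_ ** 2]) @ coef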

    CONCLUSIONS: A guideline for a safe and effective implementation of TCA with simultaneous injection of acid and base was recommended based on the numerical results of the computational model developed. The guideline correlates the coagulation volume with the reagent volume and injection rate, and may be used by clinicians in determining the safe dosage of reagents and optimum injection rate to achieve a desired thermal coagulation volume during TCA.

  10. Khan DM, Yahya N, Kamel N, Faye I
    Comput Methods Programs Biomed, 2023 Jan;228:107242.
    PMID: 36423484 DOI: 10.1016/j.cmpb.2022.107242
    BACKGROUND AND OBJECTIVE: Brain connectivity plays a pivotal role in understanding the brain's information processing functions by providing various details including the magnitude, direction, and temporal dynamics of inter-neuron connections. While connectivity may be classified as structural, functional and causal, a complete in-vivo directional analysis is guaranteed only by the latter, referred to as Effective Connectivity (EC). The two most widely used EC techniques are the Directed Transfer Function (DTF) and Partial Directed Coherence (PDC), both based on multivariate autoregressive models. The drawbacks of these techniques include poor frequency resolution and the need for experimental approaches to determine signal normalization and thresholding when identifying significant connectivities between multivariate sources.

    METHODS: In this study, the drawbacks of DTF and PDC are addressed by proposing a novel technique, termed Efficient Effective Connectivity (EEC), for the estimation of EC between multivariate sources using AR spectral estimation and the Granger causality principle. In EEC, a linear predictive filter with AR coefficients obtained from the multivariate EEG is used for signal prediction. This leads to the estimation of full-length signals, which are then transformed into the frequency domain using the Burg spectral estimation method. Furthermore, a newly proposed normalization method addresses the effect of each source in EEC using the sum of maximum connectivity values over the entire frequency range. Lastly, the proposed dynamic thresholding works by subtracting the first moment of the causal effects of all sources on one source from the individual connections present for that source.
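
    One plausible reading of the normalisation and dynamic-thresholding steps, on toy data, is sketched below; the array layout and the keep-above-mean rule are interpretive assumptions, not the authors' code.

      # Per-source normalisation and per-sink dynamic thresholding (toy data)
      import numpy as np

      rng = np.random.default_rng(0)
      C = rng.random((4, 4, 64))           # connectivity[sink, source, frequency]

      # normalisation: scale each source by its summed frequency-wise maxima
      scale = C.max(axis=2).sum(axis=0)    # one value per source
      C_norm = C / scale[None, :, None]

      # dynamic threshold: keep links exceeding the mean causal inflow (first
      # moment of all sources' effects on each sink)
      inflow = C_norm.max(axis=2)
      significant = inflow > inflow.mean(axis=1, keepdims=True)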

    RESULTS: The proposed method is evaluated using synthetic and real resting-state EEG of 46 healthy controls. A 3D-Convolutional Neural Network is trained and tested using the PDC and EEC samples. The result indicates that compared to PDC, EEC improves the EEG eye-state classification accuracy, sensitivity and specificity by 5.57%, 3.15% and 8.74%, respectively.

    CONCLUSION: The correct identification of all connections in synthetic data and the improved resting-state classification performance using EEC show that EEC gives a better estimation of directed causality and can be used for a reliable understanding of brain mechanisms. In conclusion, the proposed technique may open up new research dimensions for the clinical diagnosis of mental disorders.

  11. Cheong JK, Ooi EH, Chiew YS, Menichetti L, Armanetti P, Franchini MC, et al.
    Comput Methods Programs Biomed, 2023 Mar;230:107363.
    PMID: 36720181 DOI: 10.1016/j.cmpb.2023.107363
    BACKGROUND AND OBJECTIVES: Gold nanorod-assisted photothermal therapy (GNR-PTT) is a cancer treatment whereby GNRs incorporated into the tumour act as photo-absorbers to elevate the thermal destruction effect. In the case of the bladder, there are a few possible routes to target the tumour with GNRs, namely peri/intra-tumoural injection and intravesical instillation. These two approaches lead to different GNR distributions inside the tumour and can affect the treatment outcome.

    METHODOLOGY: The present study investigates the effects of heterogeneous GNR distribution in a typical GNR-PTT setup. Three cases were considered. Case 1 placed the GNRs at the tumour centre, while Case 2 represents a hypothetical scenario where GNRs are distributed at the tumour periphery; these two cases represent intratumoural accumulation with different degrees of GNR spread inside the tumour. Case 3 arises when GNRs target the exposed tumoural surface invading the bladder wall, as occurs when they are delivered by intravesical instillation.

    RESULTS: Results indicate that for a laser power of 0.6 W and a GNR volume fraction of 0.01%, Cases 2 and 3 achieved complete tumour eradication after 330 and 470 s of laser irradiation, respectively. Case 1 failed to produce complete tumour damage when the GNRs were concentrated at the tumour centre but succeeded when the spread of GNRs was wider. Case 2 also demonstrated a different heating profile from Case 1, suggesting that thermal ablation during GNR-PTT depends on the GNR distribution inside the tumour. Case 3 shows similar results to Case 2, with gradual but uniform heating. Cases 2 and 3 show that uniformly heating the tumour can reduce damage to the surrounding tissues.

    CONCLUSIONS: The different GNR distributions associated with the different methods of introducing GNRs to the bladder during GNR-PTT affect the treatment outcome of bladder cancer in mice. Insufficient spreading during intratumoural injection of GNRs can render the treatment ineffective. GNR distribution achieved through intravesical instillation presents some advantages over intratumoural injection and is worthy of further exploration.

  12. Akhbar MFA
    Comput Methods Programs Biomed, 2023 Apr;231:107361.
    PMID: 36736133 DOI: 10.1016/j.cmpb.2023.107361
    BACKGROUND AND OBJECTIVE: Conventional surgical drill bits suffer from several drawbacks, including extreme heat generation, breakage, jamming, and undesired breakthrough. Understanding the impact of the drill margin on bone damage can provide insights that lay the foundation for improving existing surgical drill bits. However, research on drill margins in bone drilling is lacking. This work assesses the influence of margin height and width on thermomechanical damage in bone drilling.

    METHODS: Thermomechanical damage measures (maximum bone temperature, osteonecrosis diameter, osteonecrosis depth, maximum thrust force, and torque) were calculated using the finite element method under various margin heights (0.05-0.25 mm) and widths (0.02-0.26 mm). The simulation results were validated with experimental tests and previous research data.

    RESULTS: Increasing margin height raised the maximum bone temperature, osteonecrosis diameter, and osteonecrosis depth by at least 19.1%, 41.9%, and 59.6%, respectively. The thrust force and torque are highly sensitive to margin height: a higher margin height (0.21-0.25 mm) reduced the thrust force by 54.0% but increased drilling torque by 142.2%. With increasing margin width, the bone temperature, osteonecrosis diameter, and depth were 16.5%, 56.5%, and 81.4% lower, respectively. The minimum thrust force (11.1 N) and torque (41.9 Nmm) were produced at the highest margin width (0.26 mm). A margin height of 0.05-0.13 mm and a margin width of 0.22-0.26 mm produced the highest sum of weightage.

    CONCLUSIONS: A surgical drill bit with a margin height of 0.05-0.13 mm and a margin width of 0.22-0.26 mm can produce minimum thermomechanical damage in cortical bone drilling. The insights regarding the suitable ranges for margin height and width from this study could be adopted in future research devoted to optimizing the margin of the existing surgical drill bit.

  13. Cimr D, Fujita H, Tomaskova H, Cimler R, Selamat A
    Comput Methods Programs Biomed, 2023 Feb;229:107277.
    PMID: 36463672 DOI: 10.1016/j.cmpb.2022.107277
    BACKGROUND AND OBJECTIVES: Nowadays, automated computer-aided diagnosis (CAD) plays an important role in the detection of health issues. Its main advantages lie in early diagnosis, with high accuracy and low computational complexity and without loss of model performance. One such class of systems is concerned with electroencephalogram (EEG) signals and seizure detection. We designed a CAD system approach for seizure detection that optimizes the complexity of the required solution while remaining reusable across different problems.

    METHODS: The methodology is built on deep data analysis for normalization. In contrast to previous research, the system does not require a feature extraction process, which optimizes and reduces system complexity. Data classification is provided by a designed 8-layer deep convolutional neural network.
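
    A hedged sketch of an 8-layer 1-D convolutional network for EEG segment classification follows; the layer sizes, strides, and input length are assumptions, not the authors' published architecture.

      # Hypothetical 8-layer 1-D CNN for 2-class EEG segments (PyTorch)
      import torch.nn as nn

      model = nn.Sequential(
          nn.Conv1d(1, 16, 5, stride=2), nn.ReLU(),
          nn.Conv1d(16, 32, 5, stride=2), nn.ReLU(),
          nn.Conv1d(32, 64, 3, stride=2), nn.ReLU(),
          nn.Conv1d(64, 64, 3, stride=2), nn.ReLU(),
          nn.Conv1d(64, 128, 3, stride=2), nn.ReLU(),
          nn.Conv1d(128, 128, 3, stride=2), nn.ReLU(),
          nn.AdaptiveAvgPool1d(1), nn.Flatten(),
          nn.Linear(128, 64), nn.ReLU(),
          nn.Linear(64, 2),                # seizure vs. non-seizure
      )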

    RESULTS: Depending on the data used, we achieved accuracy, specificity, and sensitivity of 98%, 98%, and 98.5% on the short-term Bonn EEG dataset, and 96.99%, 96.89%, and 97.06% on the long-term CHB-MIT EEG dataset.

    CONCLUSIONS: Through its approach to detection, the system offers an optimized solution for diagnosing seizure-related health problems. The proposed solution could be implemented in clinical or home environments for decision support.

  14. Teoh YX, Othmani A, Lai KW, Goh SL, Usman J
    Comput Methods Programs Biomed, 2023 Dec;242:107807.
    PMID: 37778138 DOI: 10.1016/j.cmpb.2023.107807
    BACKGROUND AND OBJECTIVE: Knee osteoarthritis (OA) is a debilitating musculoskeletal disorder that causes functional disability. Automatic knee OA diagnosis has great potential to enable timely and early intervention that can potentially reverse the degenerative process of knee OA. Yet it is a tedious task, given the heterogeneity of the disorder. Most proposed techniques demonstrate a single OA diagnostic task, widely based on the Kellgren Lawrence (KL) standard, a composite score of only a few imaging features (i.e., osteophytes, joint space narrowing and subchondral bone changes), so only one key disease pattern is tackled. The KL standard fails to represent the disease pattern of individual OA features, particularly osteophytes, joint-space narrowing, and pain intensity, which play a fundamental role in OA manifestation. In this study, we aim to develop a multitask model using convolutional neural network (CNN) feature extractors and machine learning classifiers to detect nine important OA features from plain radiography: KL grade, knee osteophytes (both knees; medial femoral: OSFM, medial tibial: OSTM, lateral femoral: OSFL, and lateral tibial: OSTL), joint-space narrowing (medial: JSM, and lateral: JSL), and patient-reported pain intensity.

    METHODS: We proposed a new feature extraction method by replacing the fully-connected layer with a global average pooling (GAP) layer. A comparative analysis was conducted to compare the efficacy of 16 different convolutional neural network (CNN) feature extractors and three machine learning classifiers.
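
    The GAP-based extraction idea can be sketched with a pretrained VGG16 whose fully-connected head is replaced by global average pooling, feeding a KNN classifier; this is one plausible reading, and the weights, input size, and variable names are assumptions.

      # VGG16 features + GAP + KNN (illustrative reading of the method)
      import torch
      import torch.nn as nn
      from torchvision.models import vgg16
      from sklearn.neighbors import KNeighborsClassifier

      backbone = vgg16(weights="IMAGENET1K_V1").features.eval()
      gap = nn.AdaptiveAvgPool2d(1)        # GAP replaces the fully-connected head

      def extract(images):                 # images: float tensor (N, 3, 224, 224)
          with torch.no_grad():
              return gap(backbone(images)).flatten(1).numpy()  # (N, 512)

      # knn = KNeighborsClassifier(n_neighbors=5).fit(extract(X_train), y_train)
      # X_train / y_train would be radiograph tensors and per-feature labels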

    RESULTS: Experimental results revealed the potential of CNN feature extractors for multitask diagnosis. The optimal model consisted of a VGG16-GAP feature extractor and a KNN classifier. This model not only outperformed the other tested models, it also outperformed state-of-the-art methods with higher balanced accuracy, higher Cohen's kappa, higher F1, and lower mean squared error (MSE) in the prediction of seven OA features.

    CONCLUSIONS: The proposed model demonstrates pain prediction on plain radiographs, as well as eight OA-related bony features. Future work should focus on exploring additional potential radiological manifestations of OA and their relation to therapeutic interventions.

  15. Cimr D, Busovsky D, Fujita H, Studnicka F, Cimler R, Hayashi T
    Comput Methods Programs Biomed, 2023 Sep;239:107623.
    PMID: 37276760 DOI: 10.1016/j.cmpb.2023.107623
    BACKGROUND AND OBJECTIVES: Prediction of patient deterioration is essential in medical care, and its automation may reduce the risk of patient death. The precise monitoring of a patient's medical state requires devices placed on the body, which may cause discomfort. Our approach is based on the processing of long-term ballistocardiography data, which were measured using a sensory pad placed under the patient's mattress.

    METHODS: The investigated dataset was obtained via long-term measurements in retirement homes and intensive care units (ICU). Data were measured unobtrusively using a measuring pad equipped with piezoceramic sensors. The proposed approach focused on the processing methods of the measured ballistocardiographic signals, Cartan curvature (CC), and Euclidean arc length (EAL).
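
    Of the two descriptors, the Euclidean arc length of a sampled signal is straightforward to sketch (the stand-in 2-second sample and its sampling rate are assumptions):

      # Euclidean arc length (EAL) of a sampled 2-second signal
      import numpy as np

      fs = 50                                  # sampling rate (Hz), assumed
      t = np.arange(0, 2, 1 / fs)
      x = np.sin(2 * np.pi * 1.2 * t)          # stand-in BCG-like sample

      def euclidean_arc_length(x, dt):
          return np.sum(np.sqrt(dt ** 2 + np.diff(x) ** 2))

      print(euclidean_arc_length(x, 1 / fs))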

    RESULTS: For analysis, 218,979 normal and 216,259 aberrant 2-second samples were collected and classified using a convolutional neural network. Experiments using cross-validation with the expert threshold and data length revealed the accuracy of the proposed method to be 86.51%.

    CONCLUSIONS: The proposed method provides a unique approach for the early detection of health concerns in an unobtrusive manner. In addition, the suitability of the EAL over the CC was determined.

  16. Othman NA, Azhar MAAS, Damanhuri NS, Mahadi IA, Abbas MH, Shamsuddin SA, et al.
    Comput Methods Programs Biomed, 2023 Jun;236:107566.
    PMID: 37186981 DOI: 10.1016/j.cmpb.2023.107566
    BACKGROUND AND OBJECTIVE: The identification of insulinaemic pharmacokinetic parameters using the least-squares criterion is easily influenced by outlying data due to its sensitivity. Furthermore, the least-squares criterion has a tendency to overfit and produce incorrect results. Hence, this research proposes an alternative approach using an artificial neural network (ANN) with two hidden layers to optimize the identification of insulinaemic pharmacokinetic parameters. The ANN is selected for its ability to avoid overfitting and its faster data processing.
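
    A minimal sketch of such a two-hidden-layer ANN regressor, with hypothetical layer sizes and input/output names, might look like this:

      # Two-hidden-layer ANN for pharmacokinetic parameter identification
      from sklearn.neural_network import MLPRegressor

      ann = MLPRegressor(hidden_layer_sizes=(16, 8),  # two hidden layers, assumed sizes
                         max_iter=5000, random_state=0)
      # ann.fit(X_disst, y_params)  # X: sampled glucose/insulin measurements,
      #                             # y: e.g. [phi_P, phi_D] per participant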

    METHODS: 18 volunteer participants were recruited from the Canterbury and Otago regions of New Zealand to take part in a Dynamic Insulin Sensitivity and Secretion Test (DISST) clinical trial. A total of 46 DISST datasets were collected; however, 4 had to be removed due to ambiguity and inconsistency. Analysis was done using MATLAB 2020a.

    RESULTS AND DISCUSSION: Results show that, with the 42 gathered datasets, the ANN generates higher gains, ∅P = 20.73 [12.21, 28.57] mU·L·mmol⁻¹·min⁻¹ and ∅D = 60.42 [26.85, 131.38] mU·L·mmol⁻¹, than the linear least-squares method, ∅P = 19.67 [11.81, 28.02] mU·L·mmol⁻¹·min⁻¹ and ∅D = 46.21 [7.25, 116.71] mU·L·mmol⁻¹. The average insulin sensitivity (SI) of the ANN is lower, at SI = 16 × 10⁻⁴ L·mU⁻¹·min⁻¹, than that of the linear least squares, SI = 17 × 10⁻⁴ L·mU⁻¹·min⁻¹.

    CONCLUSION: Although the ANN analysis provided a lower SI value, the results were more dependable than those of the linear least-squares model because the ANN approach yielded better model-fitting accuracy, with a residual error below 5%. With this ANN architecture, it is shown that the ANN is able to produce minimal error during the optimization process, particularly when dealing with outlying data. The findings may provide extra information to clinicians, allowing them to gain a better understanding of the heterogeneous aetiology of diabetes and of therapeutic intervention options.

  17. Ninomiya K, Arimura H, Tanaka K, Chan WY, Kabata Y, Mizuno S, et al.
    Comput Methods Programs Biomed, 2023 Jun;236:107544.
    PMID: 37148668 DOI: 10.1016/j.cmpb.2023.107544
    OBJECTIVES: To elucidate a novel radiogenomics approach using three-dimensional (3D) topologically invariant Betti numbers (BNs) for topological characterization of epidermal growth factor receptor (EGFR) Del19 and L858R mutation subtypes.

    METHODS: In total, 154 patients (wild-type EGFR, 72 patients; Del19 mutation, 45 patients; and L858R mutation, 37 patients) were retrospectively enrolled and randomly divided into 92 training and 62 test cases. Two support vector machine (SVM) models to distinguish between wild-type and mutant EGFR (mutation [M] classification) as well as between the Del19 and L858R subtypes (subtype [S] classification) were trained using 3DBN features. These features were computed from 3DBN maps by using histogram and texture analyses. The 3DBN maps were generated from computed tomography (CT) images based on the Čech complex constructed on sets of points in the images, defined by the coordinates of voxels with CT values above several threshold values. The M classification model was built using image features and the demographic parameters of sex and smoking status. The SVM models were evaluated by determining their classification accuracies. The feasibility of the 3DBN model was compared with those of conventional radiomic models based on pseudo-3D BN (p3DBN), two-dimensional BN (2DBN), and CT and wavelet-decomposition (WD) images. Model validation was repeated with 100 rounds of random sampling.
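
    The thresholding step behind the BN maps can be illustrated for the 0th Betti number (the number of connected components); higher Betti numbers need a topology library, and the random volume and thresholds below are placeholders for CT data.

      # b0 (connected components) of a volume binarised at several thresholds
      import numpy as np
      from scipy.ndimage import label

      rng = np.random.default_rng(0)
      ct = rng.normal(0, 300, size=(32, 32, 32))   # fake CT values (HU)

      for thr in (-100, 0, 100, 200):
          labeled, b0 = label(ct > thr)
          print(f"threshold {thr:5d} HU -> b0 = {b0}")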

    RESULTS: The mean test accuracies for M classification with 3DBN, p3DBN, 2DBN, CT, and WD images were 0.810, 0.733, 0.838, 0.782, and 0.799, respectively. The mean test accuracies for S classification with 3DBN, p3DBN, 2DBN, CT, and WD images were 0.773, 0.694, 0.657, 0.581, and 0.696, respectively.

    CONCLUSION: 3DBN features, which showed a radiogenomic association with the characteristics of the EGFR Del19/L858R mutation subtypes, yielded higher accuracy for subtype classifications in comparison with conventional features.

  18. Ang CYS, Chiew YS, Wang X, Ooi EH, Nor MBM, Cove ME, et al.
    Comput Methods Programs Biomed, 2023 Oct;240:107728.
    PMID: 37531693 DOI: 10.1016/j.cmpb.2023.107728
    BACKGROUND AND OBJECTIVE: Healthcare datasets are plagued by issues of data scarcity and class imbalance. Clinically validated virtual patient (VP) models can provide accurate in-silico representations of real patients and thus a means for synthetic data generation in hospital critical care settings. This research presents a realistic, time-varying mechanically ventilated respiratory failure VP profile synthesised using a stochastic model.

    METHODS: A stochastic model was developed using respiratory elastance (Ers) data from two clinical cohorts, averaged over 30-minute time intervals. The stochastic model was used to generate future Ers data from current Ers values with added normally distributed random noise. Self-validation of the VPs was performed via Monte Carlo simulation and retrospective Ers profile fitting. A stochastic VP cohort of temporal Ers evolution was synthesised and compared to independent retrospective patient cohort data in a virtual trial across several measured patient responses, where similarity of profiles validates the realism of the stochastic-model-generated VP profiles.
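
    A hedged sketch of the stochastic generation step, evolving Ers with normally distributed noise over 30-minute intervals, follows; the noise level, clipping bounds, and initial value are assumptions.

      # One synthetic 3-hour Ers trajectory in 30-minute steps (illustrative)
      import numpy as np

      rng = np.random.default_rng(42)
      n_steps, noise_pct = 6, 0.07        # 3 h of 30-min steps; ~5-10% noise
      ers = [25.0]                        # initial Ers (cmH2O/L), assumed
      for _ in range(n_steps):
          nxt = ers[-1] * (1 + noise_pct * rng.standard_normal())
          ers.append(float(np.clip(nxt, 5.0, 85.0)))
      print(ers)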

    RESULTS: A total of 120,000 3-hour VPs for pressure control (PC) and volume control (VC) ventilation modes were generated using stochastic simulation. Optimisation of the stochastic simulation process yielded an ideal noise percentage of 5-10% and 200,000 simulation iterations, allowing the simulation of a realistic and diverse set of Ers profiles. Self-validation showed the retrospective Ers profiles were recreated accurately, with a mean squared error of only 0.099 [0.009-0.790]% for the PC cohort and 0.051 [0.030-0.126]% for the VC cohort. A virtual trial demonstrated the ability of the stochastic VP cohort to capture Ers trends within and beyond the retrospective patient cohort, providing cohort-level validation.

    CONCLUSION: VPs capable of temporal evolution demonstrate feasibility for use in designing, developing, and optimising bedside MV guidance protocols through in-silico simulation and validation. Overall, the temporal VPs developed using stochastic simulation alleviate the need for lengthy, resource-intensive, high-cost clinical trials while facilitating statistically robust virtual trials, ultimately leading to improved patient care and outcomes in mechanical ventilation.

  19. Benyó B, Paláncz B, Szlávecz Á, Szabó B, Kovács K, Chase JG
    Comput Methods Programs Biomed, 2023 Oct;240:107633.
    PMID: 37343375 DOI: 10.1016/j.cmpb.2023.107633
    Model-based glycemic control (GC) protocols are used to treat stress-induced hyperglycaemia in intensive care units (ICUs). The STAR (Stochastic-TARgeted) glycemic control protocol, used in clinical practice in several ICUs in New Zealand, Hungary, Belgium, and Malaysia, is a model-based GC protocol using a patient-specific, model-based insulin sensitivity to describe the patient's actual state. Two neural network based methods are defined in this study to predict the patient's insulin sensitivity parameter: a classification deep neural network and a Mixture Density Network based method. Treatment data from three different patient cohorts are used to train the network models. The accuracy of the neural network predictions is compared with that of the current model-based predictions used to guide care. The prediction accuracy was found to be the same or better than the reference. The authors suggest that these methods may be a promising alternative in model-based clinical treatment for patient state prediction. Still, more research is needed to validate these findings, including in-silico simulations and clinical validation trials.

  20. Xu S, Deo RC, Soar J, Barua PD, Faust O, Homaira N, et al.
    Comput Methods Programs Biomed, 2023 Nov;241:107746.
    PMID: 37660550 DOI: 10.1016/j.cmpb.2023.107746
    BACKGROUND AND OBJECTIVE: Obstructive airway diseases, including asthma and Chronic Obstructive Pulmonary Disease (COPD), are two of the most common chronic respiratory health problems. Both conditions require health professional expertise for diagnosis, making the process time-intensive for healthcare providers and subjecting diagnostic quality to intra- and inter-operator variability. In this study we investigate the role of automated detection of obstructive airway diseases in reducing cost and improving diagnostic quality.

    METHODS: We investigated the existing body of evidence and applied the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to search records in the IEEE, Google Scholar, and PubMed databases. We identified 65 papers published from 2013 to 2022, covering 67 different studies. The review process was structured according to the medical data used for disease detection. We identified six main categories, namely air flow, genetic, imaging, signals, medical records, and miscellaneous. For each of these categories, we report both the disease detection methods and their performance.

    RESULTS: We found that medical imaging was used in 14 of the reviewed studies as data for automated obstructive airway disease detection. Genetics and physiological signals were used in 13 studies. Medical records and air flow were used in 9 and 7 studies, respectively. Most papers were published in 2020 and we found three times more work on Machine Learning (ML) when compared to Deep Learning (DL). Statistical analysis shows that DL techniques achieve higher Accuracy (ACC) when compared to ML. Convolutional Neural Network (CNN) is the most common DL classifier and Support Vector Machine (SVM) is the most widely used ML classifier. During our review, we discovered only two publicly available asthma and COPD datasets. Most studies used private clinical datasets, so data size and data composition are inconsistent.

    CONCLUSIONS: Our review indicates that Artificial Intelligence (AI) can improve both the decision quality and the efficiency of health professionals during COPD and asthma diagnosis. However, we found several limitations in the reviewed work, such as a lack of dataset consistency, limited dataset sizes, and insufficient exploration of remote monitoring. We appeal to society to accept and trust computer-aided airflow-obstructive disease diagnosis, and we encourage health professionals to work closely with AI scientists to promote automated detection in clinical practice and hospital settings.
