Displaying publications 81 - 100 of 113 in total

  1. Ibrahim RW, Hasan AM, Jalab HA
    Comput Methods Programs Biomed, 2018 Sep;163:21-28.
    PMID: 30119853 DOI: 10.1016/j.cmpb.2018.05.031
    BACKGROUND AND OBJECTIVES: The MRI brain tumors segmentation is challenging due to variations in terms of size, shape, location and features' intensity of the tumor. Active contour has been applied in MRI scan image segmentation due to its ability to produce regions with boundaries. The main difficulty that encounters the active contour segmentation is the boundary tracking which is controlled by minimization of energy function for segmentation. Hence, this study proposes a novel fractional Wright function (FWF) as a minimization of energy technique to improve the performance of active contour without edge method.

    METHOD: In this study, we implement FWF as an energy minimization function to replace the standard gradient-descent method as minimization function in Chan-Vese segmentation technique. The proposed FWF is used to find the boundaries of an object by controlling the inside and outside values of the contour. In this study, the objective evaluation is used to distinguish the differences between the processed segmented images and ground truth using a set of statistical parameters; true positive, true negative, false positive, and false negative.

    RESULTS: The FWF as a minimization of energy was successfully implemented on BRATS 2013 image dataset. The achieved overall average sensitivity score of the brain tumors segmentation was 94.8 ± 4.7%.

    CONCLUSIONS: The results demonstrate that the proposed FWF method minimized the energy function more than the gradient-descent method that was used in the original three-dimensional active contour without edge (3DACWE) method.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
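The region term of the Chan-Vese energy that the FWF minimizer targets can be sketched in a few lines. This is a minimal illustration of the energy alone (not of the fractional Wright function itself); the image and masks below are synthetic.

```python
import numpy as np

def chan_vese_energy(img, mask, lam1=1.0, lam2=1.0):
    """Region term of the Chan-Vese energy for a binary contour mask.

    c1 and c2 are the mean intensities inside and outside the contour;
    the energy penalises deviation of each pixel from its region mean.
    """
    inside, outside = img[mask], img[~mask]
    c1 = inside.mean() if inside.size else 0.0
    c2 = outside.mean() if outside.size else 0.0
    return lam1 * ((inside - c1) ** 2).sum() + lam2 * ((outside - c2) ** 2).sum()

# Synthetic image: bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
good = np.zeros((32, 32), dtype=bool)
good[8:24, 8:24] = True               # contour matching the object
bad = np.zeros((32, 32), dtype=bool)
bad[0:16, :] = True                   # contour cutting across the object
e_good, e_bad = chan_vese_energy(img, good), chan_vese_energy(img, bad)
```

A minimizer (gradient descent in the original Chan-Vese formulation, the FWF here) evolves the contour toward the lower-energy configuration.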
  2. Iqbal U, Wah TY, Habib Ur Rehman M, Mujtaba G, Imran M, Shoaib M
    J Med Syst, 2018 Nov 05;42(12):252.
    PMID: 30397730 DOI: 10.1007/s10916-018-1107-2
    Electrocardiography (ECG) sensors play a vital role in the Internet of Medical Things, helping to monitor the electrical activity of the heart. ECG signal analysis can improve human life in many ways, from diagnosing diseases among cardiac patients to managing the lifestyles of diabetic patients. Abnormalities in heart activity lead to different cardiac diseases and arrhythmia. However, some cardiac diseases, such as myocardial infarction (MI) and atrial fibrillation (Af), require special attention due to their direct impact on human life. The classification of flattened T wave cases of MI in ECG signals, and how similar these cases are to ST-T changes in MI, remains an open issue for researchers. This article presents a novel contribution to classify MI and Af. To this end, we propose a new approach called deep deterministic learning (DDL), which works by combining predefined heart activities with fused datasets. In this research, we used two datasets. The first dataset, Massachusetts Institute of Technology-Beth Israel Hospital, is publicly available, and we exclusively obtained the second dataset from the University of Malaya Medical Center, Kuala Lumpur, Malaysia. We first initiated predefined activities on each individual dataset to recognize patterns between the ST-T change and flattened T wave cases and then used the data fusion approach to merge both datasets in a manner that delivers the most accurate pattern recognition results. The proposed DDL approach is a systematic stage-wise methodology that relies on accurate detection of R peaks in ECG signals, time domain features of ECG signals, and fine-tuning of artificial neural networks. The empirical evaluation shows high accuracy (up to 99.97%) in pattern matching of ST-T changes and flattened T waves using the proposed DDL approach. The proposed pattern recognition approach is a significant contribution to the diagnosis of special cases of MI.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
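The DDL pipeline's first stage, R-peak detection followed by time-domain features, can be sketched on a toy signal. The sampling rate, peak widths, and thresholds below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 250                                  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Toy ECG: narrow Gaussian "R peaks" once per second over baseline noise.
ecg = sum(np.exp(-((t - k) ** 2) / (2 * 0.01 ** 2)) for k in range(1, 10))
ecg = ecg + 0.02 * np.random.default_rng(0).standard_normal(t.size)

# R peaks: at least 0.4 s apart and above half the signal maximum.
peaks, _ = find_peaks(ecg, height=0.5 * ecg.max(), distance=int(0.4 * fs))
rr = np.diff(peaks) / fs       # RR intervals (s), a basic time-domain feature
features = [rr.mean(), rr.std()]   # the kind of inputs a classifier consumes
```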
  3. Tiong KH, Chang JK, Pathmanathan D, Hidayatullah Fadlullah MZ, Yee PS, Liew CS, et al.
    Biotechniques, 2018 Dec;65(6):322-330.
    PMID: 30477327 DOI: 10.2144/btn-2018-0072
    We describe a novel automated cell detection and counting software, QuickCount® (QC), designed for rapid quantification of cells. The Bland-Altman plot and intraclass correlation coefficient (ICC) analyses demonstrated strong agreement between cell counts from QC to manual counts (mean and SD: -3.3 ± 4.5; ICC = 0.95). QC has higher recall in comparison to ImageJauto, CellProfiler and CellC and the precision of QC, ImageJauto, CellProfiler and CellC are high and comparable. QC can precisely delineate and count single cells from images of different cell densities with precision and recall above 0.9. QC is unique as it is equipped with real-time preview while optimizing the parameters for accurate cell count and needs minimum hands-on time where hundreds of images can be analyzed automatically in a matter of milliseconds. In conclusion, QC offers a rapid, accurate and versatile solution for large-scale cell quantification and addresses the challenges often faced in cell biology research.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
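At its core, automated counting of well-separated cells reduces to thresholding plus connected-component labelling; tools like QuickCount add real-time parameter preview and shape filters on top. A minimal sketch on synthetic blobs:

```python
import numpy as np
from scipy import ndimage

def count_cells(img, thresh):
    """Threshold the image, then count connected components.

    A crude stand-in for automated cell counting; real tools add
    size/shape filters and handle touching cells.
    """
    labels, n = ndimage.label(img > thresh)
    return n

# Synthetic image with three separated "cells".
img = np.zeros((20, 20))
img[2:5, 2:5] = 1.0
img[10:13, 10:13] = 1.0
img[15:18, 3:6] = 1.0
n_cells = count_cells(img, 0.5)
```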
  4. Acharya UR, Faust O, Ciaccio EJ, Koh JEW, Oh SL, Tan RS, et al.
    Comput Methods Programs Biomed, 2019 Jul;175:163-178.
    PMID: 31104705 DOI: 10.1016/j.cmpb.2019.04.018
    BACKGROUND AND OBJECTIVE: Complex fractionated atrial electrograms (CFAE) may contain information concerning the electrophysiological substrate of atrial fibrillation (AF); therefore they are of interest to guide catheter ablation treatment of AF. Electrogram signals are shaped by activation events, which are dynamical in nature. This makes it difficult to establish those signal properties that can provide insight into the ablation site location. Nonlinear measures may improve information. To test this hypothesis, we used nonlinear measures to analyze CFAE.

    METHODS: CFAE from several atrial sites, recorded for a duration of 16 s, were acquired from 10 patients with persistent and 9 patients with paroxysmal AF. These signals were appraised using non-overlapping windows of 1-, 2- and 4-s durations. The resulting data sets were analyzed with Recurrence Plots (RP) and Recurrence Quantification Analysis (RQA). The data was also quantified via entropy measures.

    RESULTS: RQA exhibited unique plots for persistent versus paroxysmal AF. Similar patterns were observed to be repeated throughout the RPs. Trends were consistent for signal segments of 1 and 2 s as well as 4 s in duration. This was suggestive that the underlying signal generation process is also repetitive, and that repetitiveness can be detected even in 1-s sequences. The results also showed that most entropy metrics exhibited higher measurement values (closer to equilibrium) for persistent AF data. It was also found that Determinism (DET), Trapping Time (TT), and Modified Multiscale Entropy (MMSE), extracted from signals that were acquired from locations at the posterior atrial free wall, are highly discriminative of persistent versus paroxysmal AF data.

    CONCLUSIONS: Short data sequences are sufficient to provide information to discern persistent versus paroxysmal AF data with a significant difference, and can be useful to detect repeating patterns of atrial activation.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
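A recurrence plot and the recurrence rate used in RQA can be computed directly from a signal; the threshold eps and the test signal below are illustrative. Measures such as Determinism and Trapping Time are then derived from the diagonal and vertical line structures of this matrix.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot: R[i, j] = 1 when |x_i - x_j| < eps."""
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

x = np.sin(np.linspace(0, 8 * np.pi, 200))   # stand-in for a CFAE segment
R = recurrence_matrix(x, 0.1)
rr = R.mean()                                # recurrence rate, RQA's simplest measure
```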
  5. Bilal M, Anis H, Khan N, Qureshi I, Shah J, Kadir KA
    Biomed Res Int, 2019;2019:6139785.
    PMID: 31119178 DOI: 10.1155/2019/6139785
    Background: Motion is a major source of blurring and ghosting in recovered MR images. It is more challenging in Dynamic Contrast Enhancement (DCE) MRI because motion effects and rapid intensity changes in contrast agent are difficult to distinguish from each other.

    Material and Methods: In this study, we have introduced a new technique to reduce motion artifacts, based on data binning and the low rank plus sparse (L+S) reconstruction method for DCE MRI. For data binning, radial k-space data are acquired continuously using the golden-angle radial sampling pattern and grouped into various motion states or bins. The respiratory signal for binning is extracted directly from the radially acquired k-space data. A compressed sensing- (CS-) based L+S matrix decomposition model is then used to reconstruct motion-sorted DCE MR images. Undersampled free-breathing 3D liver and abdominal DCE MR data sets are used to validate the proposed technique.

    Results: The performance of the technique is compared with conventional L+S decomposition qualitatively along with the image sharpness and structural similarity index. Recovered images are visually sharper and have better similarity with reference images.

    Conclusion: L+S decomposition provides improved MR images with data binning as a preprocessing step in the free-breathing scenario. Data binning resolves respiratory motion by dividing different respiratory positions into multiple bins. It also differentiates between respiratory motion and contrast agent (CA) variations. MR images recovered for each bin are better than those recovered without data binning.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
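One alternating step of a simplified L+S split illustrates the decomposition idea: singular-value thresholding recovers the low-rank background L, while soft thresholding captures the sparse dynamics S. The thresholds tau and lam are illustrative, and the paper's CS formulation additionally includes the undersampled acquisition operator, which is omitted here.

```python
import numpy as np

def shrink(x, tau):
    """Soft thresholding, which promotes sparsity."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def lps_step(M, L, S, lam=0.1, tau=1.0):
    """One alternating L+S step: L by singular-value thresholding of
    (M - S), then S by entrywise soft thresholding of (M - L)."""
    U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
    L = U @ np.diag(shrink(s, tau)) @ Vt
    S = shrink(M - L, lam)
    return L, S

rng = np.random.default_rng(0)
M = np.outer(rng.normal(size=20), rng.normal(size=20))  # low-rank "background"
M[3, 7] += 5.0                                          # sparse "dynamics"
L = np.zeros_like(M)
S = np.zeros_like(M)
for _ in range(10):
    L, S = lps_step(M, L, S)
```

After each step the residual M - L - S is bounded entrywise by lam, so the pair (L, S) explains the data up to the sparsity threshold.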
  6. Rajagopal H, Mokhtar N, Tengku Mohmed Noor Izam TF, Wan Ahmad WK
    PLoS One, 2020;15(5):e0233320.
    PMID: 32428043 DOI: 10.1371/journal.pone.0233320
    Image Quality Assessment (IQA) is essential for the accuracy of systems for automatic recognition of tree species for wood samples. In this study, a No-Reference IQA (NR-IQA), wood NR-IQA (WNR-IQA) metric was proposed to assess the quality of wood images. Support Vector Regression (SVR) was trained using Generalized Gaussian Distribution (GGD) and Asymmetric Generalized Gaussian Distribution (AGGD) features, which were measured for wood images. Meanwhile, the Mean Opinion Score (MOS) was obtained from the subjective evaluation. This was followed by a comparison between the proposed IQA metric, WNR-IQA, and three established NR-IQA metrics, namely Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), deepIQA, Deep Bilinear Convolutional Neural Networks (DB-CNN), and five Full Reference-IQA (FR-IQA) metrics known as MSSIM, SSIM, FSIM, IWSSIM, and GMSD. The proposed WNR-IQA metric, BRISQUE, deepIQA, DB-CNN, and FR-IQAs were then compared with MOS values to evaluate the performance of the automatic IQA metrics. As a result, the WNR-IQA metric exhibited a higher performance compared to BRISQUE, deepIQA, DB-CNN, and FR-IQA metrics. Highest quality images may not be routinely available due to logistic factors, such as dust, poor illumination, and hot environment present in the timber industry. Moreover, motion blur could occur due to the relative motion between the camera and the wood slice. Therefore, the advantage of WNR-IQA could be seen from its independency from a "perfect" reference image for the image quality evaluation.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
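BRISQUE-style NR-IQA metrics such as the proposed WNR-IQA start from mean-subtracted contrast-normalised (MSCN) coefficients, whose GGD/AGGD parameters become the features fed to the SVR. A minimal MSCN computation, with window size and stabiliser chosen for illustration:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mscn(img, k=7, c=1e-6):
    """Mean-subtracted contrast-normalised coefficients: each pixel is
    centred by its local mean and scaled by its local deviation."""
    mu = uniform_filter(img, k)
    sigma = np.sqrt(np.maximum(uniform_filter(img ** 2, k) - mu ** 2, 0.0))
    return (img - mu) / (sigma + c)

img = np.random.default_rng(1).random((64, 64))   # stand-in for a wood image
z = mscn(img)
# GGD-style summary statistics of the MSCN map, usable as SVR inputs.
feats = [float(z.var()), float(((z - z.mean()) ** 4).mean() / z.var() ** 2)]
```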
  7. Ab Hamid F, Che Azemin MZ, Salam A, Aminuddin A, Mohd Daud N, Zahari I
    Curr Eye Res, 2016 Jun;41(6):823-31.
    PMID: 26268475 DOI: 10.3109/02713683.2015.1056375
    PURPOSE: The goal of this study was to provide the empirical evidence of fractal dimension as an indirect measure of retinal vasculature density.

    MATERIALS AND METHODS: Two hundred retinal samples of right eye [57.0% females (n = 114) and 43.0% males (n = 86)] were selected from baseline visit. A custom-written software was used for vessel segmentation. Vessel segmentation is the process of transforming two-dimensional color images into binary images (i.e. black and white pixels). The circular area of approximately 2.6 optic disc radii surrounding the center of optic disc was cropped. The non-vessels fragments were removed. FracLac was used to measure the fractal dimension and vessel density of retinal vessels.

    RESULTS: This study suggested that 14.1% of the region of interest (i.e. approximately 2.6 optic disk radii) comprised retinal vessel structure. Using correlation analysis, vessel density measurement and fractal dimension estimation are linearly and strongly correlated (R = 0.942, R² = 0.89, p 

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
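The fractal dimension used as a vessel-density proxy is typically estimated by box counting: cover the binary vessel mask with boxes of decreasing size and fit the slope of log(count) against log(1/size). A sketch, checked on a filled square whose dimension should be exactly 2:

```python
import numpy as np

def box_count_dimension(mask, sizes=(2, 4, 8, 16)):
    """Box-counting estimate of the fractal dimension of a binary mask."""
    h, w = mask.shape
    counts = []
    for s in sizes:
        # Count boxes of side s containing at least one foreground pixel.
        n = sum(mask[i:i + s, j:j + s].any()
                for i in range(0, h, s) for j in range(0, w, s))
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

mask = np.ones((64, 64), dtype=bool)   # a filled square: dimension 2
d = box_count_dimension(mask)
```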
  8. Than JCM, Saba L, Noor NM, Rijal OM, Kassim RM, Yunus A, et al.
    Comput Biol Med, 2017 Oct 01;89:197-211.
    PMID: 28825994 DOI: 10.1016/j.compbiomed.2017.08.014
    Lung disease risk stratification is important for both diagnosis and treatment planning, particularly in biopsies and radiation therapy. Manual lung disease risk stratification is challenging because of: (a) large lung data sizes, (b) inter- and intra-observer variability of the lung delineation and (c) lack of feature amalgamation during machine learning paradigm. This paper presents a two stage CADx cascaded system consisting of: (a) semi-automated lung delineation subsystem (LDS) for lung region extraction in CT slices followed by (b) morphology-based lung tissue characterization, thereby addressing the above shortcomings. LDS primarily uses entropy-based region extraction while ML-based lung characterization is mainly based on an amalgamation of directional transforms such as Riesz and Gabor along with texture-based features comprising of 100 greyscale features using the K-fold cross-validation protocol (K = 2, 3, 5 and 10). The lung database consisted of 96 patients: 15 normal and 81 diseased. We use five high resolution Computed Tomography (HRCT) levels representing different anatomy landmarks where disease is commonly seen. We demonstrate the amalgamated ML stratification accuracy of 99.53%, an increase of 2% against the conventional non-amalgamation ML system that uses alone Riesz-based feature embedded with feature selection based on feature strength. The robustness of the system was determined based on the reliability and stability that showed a reliability index of 0.99 and the deviation in risk stratification accuracies less than 5%. Our CADx system shows 10% better performance when compared against the mean of five other prominent studies available in the current literature covering over one decade.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
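The K-fold protocol used for the ML stratification (K = 2, 3, 5 and 10) amounts to plain index bookkeeping. The 96 below mirrors the paper's patient count; the seed is arbitrary.

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffled train/test index splits for K-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    return [(np.concatenate(folds[:i] + folds[i + 1:]), folds[i])
            for i in range(k)]

splits = kfold_indices(96, 5)   # 96 patients, K = 5
```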
  9. Raghavendra U, Gudigar A, Maithri M, Gertych A, Meiburger KM, Yeong CH, et al.
    Comput Biol Med, 2018 Apr 01;95:55-62.
    PMID: 29455080 DOI: 10.1016/j.compbiomed.2018.02.002
    Ultrasound imaging is one of the most common visualizing tools used by radiologists to identify the location of thyroid nodules. However, visual assessment of nodules is difficult and often affected by inter- and intra-observer variabilities. Thus, a computer-aided diagnosis (CAD) system can be helpful to cross-verify the severity of nodules. This paper proposes a new CAD system to characterize thyroid nodules using optimized multi-level elongated quinary patterns. In this study, higher order spectral (HOS) entropy features extracted from these patterns appropriately distinguished benign and malignant nodules under particle swarm optimization (PSO) and support vector machine (SVM) frameworks. Our CAD algorithm achieved a maximum accuracy of 97.71% and 97.01% in private and public datasets respectively. The evaluation of this CAD system on both private and public datasets confirmed its effectiveness as a secondary tool in assisting radiological findings.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
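The PSO component of the CAD framework follows the standard global-best update rule. Below is a bare-bones PSO minimising a toy sphere function; the inertia and acceleration constants are common defaults, not the paper's settings.

```python
import numpy as np

def pso(f, dim, n=20, iters=100, seed=0):
    """Bare-bones global-best particle swarm optimisation."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))            # particle positions
    v = np.zeros((n, dim))                      # particle velocities
    pbest = x.copy()                            # personal best positions
    pval = np.apply_along_axis(f, 1, x)         # personal best values
    g = pbest[pval.argmin()].copy()             # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

g, best = pso(lambda z: (z ** 2).sum(), dim=2)
```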
  10. Abdulhay E, Mohammed MA, Ibrahim DA, Arunkumar N, Venkatraman V
    J Med Syst, 2018 Feb 17;42(4):58.
    PMID: 29455440 DOI: 10.1007/s10916-018-0912-y
    Segmentation of blood leucocytes in medical images is regarded as a difficult process owing to the variability of blood cells in shape and size and the difficulty of determining leucocyte locations. Manual analysis of blood tests to recognize leukocytes is tedious, time-consuming and error-prone because of the varied morphological components of the cells. Segmentation of medical imagery is challenging because of image complexity and the lack of leucocyte models that fully capture the probable shapes of each structure, account for cell overlapping and the wide variety of blood cell shapes and sizes, the many factors influencing the outer appearance of leucocytes, and the low contrast of static microscope images, with further complications arising from noise. We propose a strategy for segmenting blood leucocytes in static microscope images that combines three established computer vision techniques: image enhancement, support vector machine-based segmentation, and filtering out of non-ROI (region of interest) areas on the basis of local binary patterns and texture features. Each of these techniques is adapted to the blood leucocyte segmentation problem, making the resulting method considerably more robust than its individual components. Finally, we assess the framework by comparing its output against manual segmentation. The findings of this study demonstrate a new approach that automatically segments and identifies blood leucocytes in static microscope images. The method first applies a trainable segmentation procedure and a trained support vector machine classifier to accurately locate the ROI; it then filters out non-ROI regions based on histogram analysis to select the correct object, and finally identifies the blood leucocyte type using the texture features. The performance of the proposed approach was tested against manual examination by a gynaecologist using diverse scales. A total of 100 microscope images were used for the comparison, and the results showed that the proposed solution is a viable alternative to the manual segmentation method for accurately determining the ROI. Blood leucocyte identification using the ROI texture (LBP feature) achieved an accuracy of about 95.3%, with 100% sensitivity and 91.66% specificity.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
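The local binary patterns used to filter non-ROI regions and to build the texture feature can be computed with array shifts alone. A minimal 8-neighbour LBP (no rotation invariance), with its 256-bin histogram as the feature vector:

```python
import numpy as np

def lbp(img):
    """8-neighbour local binary pattern codes for the interior pixels."""
    h, w = img.shape
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (di, dj) in enumerate(shifts):
        # Neighbour plane shifted by (di, dj); set the bit where nb >= centre.
        nb = img[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
        code |= (nb >= c).astype(int) << bit
    return code

img = np.random.default_rng(0).random((16, 16))
hist = np.bincount(lbp(img).ravel(), minlength=256)   # texture feature vector
```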
  11. Oung QW, Muthusamy H, Basah SN, Lee H, Vijean V
    J Med Syst, 2017 Dec 29;42(2):29.
    PMID: 29288342 DOI: 10.1007/s10916-017-0877-2
    Parkinson's disease (PD) is a type of progressive neurodegenerative disorder that affects a large part of the population. Symptoms of PD include tremor, rigidity, slowness of movements and vocal impairments. In order to develop an effective diagnostic system, a number of algorithms have been proposed, mainly to distinguish healthy individuals from those with PD. However, most previous works were based on binary classification, with the early PD stage and the advanced ones being treated equally. Therefore, in this work, we propose a multiclass classification with three classes of PD severity level (mild, moderate, severe) and healthy control. The focus is to detect and classify PD using signals from wearable motion and audio sensors based on the empirical wavelet transform (EWT) and empirical wavelet packet transform (EWPT) respectively. The EWT/EWPT was applied to decompose both speech and motion data signals up to five levels. Next, several features are extracted after obtaining the instantaneous amplitudes and frequencies from the coefficients of the decomposed signals by applying the Hilbert transform. The performance of the algorithm was analysed using three classifiers - K-nearest neighbour (KNN), probabilistic neural network (PNN) and extreme learning machine (ELM). Experimental results demonstrated that our proposed approach had the ability to differentiate PD from non-PD subjects, including their severity level - with classification accuracies of more than 90% using EWT/EWPT-ELM based on signals from motion and audio sensors respectively. Additionally, classification accuracy of more than 95% was achieved when EWT/EWPT-ELM was applied to the combined information from both types of signals.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
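After EWT/EWPT decomposition, the instantaneous amplitude and frequency of each sub-band are obtained from the Hilbert transform's analytic signal. A sketch on a single synthetic 50 Hz sub-band; the sampling rate and frequency are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000                                  # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)             # stand-in for one EWT sub-band
analytic = hilbert(x)
inst_amp = np.abs(analytic)                # instantaneous amplitude (envelope)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)
features = [float(inst_amp.mean()), float(inst_freq.mean())]
```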
  12. Kamarudin ND, Ooi CY, Kawanabe T, Odaguchi H, Kobayashi F
    J Healthc Eng, 2017;2017:7460168.
    PMID: 29065640 DOI: 10.1155/2017/7460168
    In tongue diagnosis, the colour information of the tongue body carries valuable information regarding the state of disease and its correlation with the internal organs. Qualitatively, practitioners may have difficulty in their judgement due to unstable lighting conditions and the limited ability of the naked eye to capture the exact colour distribution on the tongue, especially a tongue with multicolour substance. To overcome this ambiguity, this paper presents a two-stage tongue multicolour classification based on a support vector machine (SVM) whose support vectors are reduced by our proposed k-means clustering identifiers and red colour range for precise tongue colour diagnosis. In the first stage, k-means clustering is used to cluster a tongue image into four clusters of image background (black), deep red region, red/light red region, and transitional region. In the second-stage classification, red/light red tongue images are further classified into red tongue or light red tongue based on the red colour range derived in our work. Overall, the true rate classification accuracy of the proposed two-stage classification to diagnose red, light red, and deep red tongue colours is 94%. The number of support vectors in the SVM is reduced by 41.2%, and the execution time for one image is recorded as 48 seconds.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
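The first-stage clustering is plain Lloyd's k-means on colour values. A minimal sketch on toy two-colour "pixels" (the real system uses four clusters on tongue images; the data are synthetic, and one initial centre is taken from each blob for determinism):

```python
import numpy as np

def kmeans(X, centers, iters=10):
    """Plain Lloyd's k-means with explicit initial centres."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        # Assign each sample to its nearest centre, then recompute means.
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (50, 3)),    # one colour blob
               rng.normal(5.0, 0.1, (50, 3))])   # a second, distant blob
labels, centers = kmeans(X, X[[0, 50]])
```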
  13. Mehdy MM, Ng PY, Shair EF, Saleh NIM, Gomes C
    Comput Math Methods Med, 2017;2017:2610628.
    PMID: 28473865 DOI: 10.1155/2017/2610628
    Medical imaging techniques have widely been in use in the diagnosis and detection of breast cancer. The drawback of applying these techniques is the large time consumption in the manual diagnosis of each image pattern by a professional radiologist. Automated classifiers could substantially upgrade the diagnosis process, in terms of both accuracy and time requirement by distinguishing benign and malignant patterns automatically. Neural network (NN) plays an important role in this respect, especially in the application of breast cancer detection. Despite the large number of publications that describe the utilization of NN in various medical techniques, only a few reviews are available that guide the development of these algorithms to enhance the detection techniques with respect to specificity and sensitivity. The purpose of this review is to analyze the contents of recently published literature with special attention to techniques and states of the art of NN in medical imaging. We discuss the usage of NN in four different medical imaging applications to show that NN is not restricted to few areas of medicine. Types of NN used, along with the various types of feeding data, have been reviewed. We also address hybrid NN adaptation in breast cancer detection.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  14. Liew TS, Schilthuizen M
    PLoS One, 2016;11(6):e0157069.
    PMID: 27280463 DOI: 10.1371/journal.pone.0157069
    Quantitative analysis of organismal form is an important component for almost every branch of biology. Although generally considered an easily-measurable structure, the quantification of gastropod shell form is still a challenge because many shells lack homologous structures and have a spiral form that is difficult to capture with linear measurements. In view of this, we adopt the idea of theoretical modelling of shell form, in which the shell form is the product of aperture ontogeny profiles in terms of aperture growth trajectory that is quantified as curvature and torsion, and of aperture form that is represented by size and shape. We develop a workflow for the analysis of shell forms based on the aperture ontogeny profile, starting from the procedure of data preparation (retopologising the shell model), via data acquisition (calculation of aperture growth trajectory, aperture form and ontogeny axis), and data presentation (qualitative comparison between shell forms) and ending with data analysis (quantitative comparison between shell forms). We evaluate our methods on representative shells of the genera Opisthostoma and Plectostoma, which exhibit great variability in shell form. The outcome suggests that our method is a robust, reproducible, and versatile approach for the analysis of shell form. Finally, we propose several potential applications of our methods in functional morphology, theoretical modelling, taxonomy, and evolutionary biology.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
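The aperture growth trajectory is quantified by the curvature and torsion of a space curve, which follow from its first three derivatives. A discrete sketch, checked against a circular helix whose curvature a/(a²+b²) and torsion b/(a²+b²) are known in closed form:

```python
import numpy as np

def curvature_torsion(r, t):
    """Discrete curvature and torsion of a space curve r(t) (n x 3)."""
    d1 = np.gradient(r, t, axis=0)
    d2 = np.gradient(d1, t, axis=0)
    d3 = np.gradient(d2, t, axis=0)
    cross = np.cross(d1, d2)
    norm_cross = np.linalg.norm(cross, axis=1)
    kappa = norm_cross / np.linalg.norm(d1, axis=1) ** 3
    tau = np.einsum('ij,ij->i', cross, d3) / norm_cross ** 2
    return kappa, tau

a, b = 2.0, 1.0                         # helix radius and pitch
t = np.linspace(0, 4 * np.pi, 2000)
r = np.stack([a * np.cos(t), a * np.sin(t), b * t], axis=1)
kappa, tau = curvature_torsion(r, t)    # expect ~0.4 and ~0.2 for this helix
```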
  15. Usman OL, Muniyandi RC, Omar K, Mohamad M
    PLoS One, 2021;16(2):e0245579.
    PMID: 33630876 DOI: 10.1371/journal.pone.0245579
    Achieving biologically interpretable neural-biomarkers and features from neuroimaging datasets is a challenging task in an MRI-based dyslexia study. This challenge becomes more pronounced when the needed MRI datasets are collected from multiple heterogeneous sources with inconsistent scanner settings. This study presents a method of improving the biological interpretation of dyslexia's neural-biomarkers from MRI datasets sourced from publicly available open databases. The proposed system utilized a modified histogram normalization (MHN) method to improve dyslexia neural-biomarker interpretations by mapping the pixels' intensities of low-quality input neuroimages to range between the low-intensity region of interest (ROIlow) and high-intensity region of interest (ROIhigh) of the high-quality image. This was achieved after initial image smoothing using the Gaussian filter method with an isotropic kernel of size 4 mm. The performance of the proposed smoothing and normalization methods was evaluated based on three image post-processing experiments: ROI segmentation, gray matter (GM) tissue volume estimation, and deep learning (DL) classification using the Computational Anatomy Toolbox (CAT12) and pre-trained models in a MATLAB working environment. The three experiments were preceded by pre-processing tasks such as image resizing, labelling, patching, and non-rigid registration. Our results showed that the best smoothing was achieved at a scale value σ = 1.25, with a 0.9% increment in the peak signal-to-noise ratio (PSNR). Results from the three image post-processing experiments confirmed the efficacy of the proposed methods. Evidence emanating from our analysis showed that using the proposed MHN and Gaussian smoothing methods can improve the comparability of image features and neural-biomarkers of dyslexia, with a statistically significantly high Dice similarity coefficient (DSC) index, low mean square error (MSE), and improved tissue volume estimations. After 10 repetitions of 10-fold cross-validation, the highest accuracy achieved by the DL models was 94.7% at a 95% confidence interval (CI) level. Finally, our findings confirmed that the proposed MHN method significantly outperformed the state-of-the-art histogram matching normalization method.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
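The core of the normalisation step is an intensity mapping from a low-quality image's range onto the [ROIlow, ROIhigh] band of a reference image. A simplified linear version (the paper's MHN is histogram-based, and the ROI bounds below are arbitrary):

```python
import numpy as np

def map_intensity(img, roi_low, roi_high):
    """Linearly map an image's intensity range onto [roi_low, roi_high]."""
    lo, hi = img.min(), img.max()
    return roi_low + (img - lo) * (roi_high - roi_low) / (hi - lo)

img = np.random.default_rng(0).random((8, 8)) * 200.0   # toy neuroimage
out = map_intensity(img, 50.0, 150.0)                   # mapped into the ROI band
```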
  16. Hani AF, Kumar D, Malik AS, Razak R
    Magn Reson Imaging, 2013 Sep;31(7):1059-67.
    PMID: 23731535 DOI: 10.1016/j.mri.2013.01.007
    Osteoarthritis is a common joint disorder that is most prevalent in the knee joint. Knee osteoarthritis (OA) can be characterized by the gradual loss of articular cartilage (AC). Formation of lesions, fissures and cracks on the cartilage surface has been associated with degenerative AC and can be measured by morphological assessment. In addition, loss of proteoglycan from the extracellular matrix of the AC can be measured at an early stage of cartilage degradation by physiological assessment. In this case, a biochemical phenomenon of cartilage is used to assess the changes at early degeneration of AC. In this paper, a method to measure local sodium concentration in AC due to proteoglycan has been investigated. A clinical 1.5-T magnetic resonance imaging (MRI) system with a multinuclear spectroscopic facility is used to acquire sodium images and quantify the local sodium content of AC. An optimised 3D gradient-echo sequence with low echo time has been used for the MR scan. The estimated sodium concentration in the AC region from four different data sets is found to be ~225±19 mmol/l, which matches the values that have been reported for normal AC. This study shows that sodium images acquired with a clinical 1.5-T MRI system can generate adequate quantitative data to enable the estimation of sodium concentration in AC. We conclude that this method is potentially suitable for non-invasive physiological (sodium content) measurement of articular cartilage.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  17. Hamoud Al-Tamimi MS, Sulong G, Shuaib IL
    Magn Reson Imaging, 2015 Jul;33(6):787-803.
    PMID: 25865822 DOI: 10.1016/j.mri.2015.03.008
    Resection of brain tumors is a tricky task in surgery due to its direct influence on the patients' survival rate. Determining the tumor resection extent for its complete information vis-à-vis volume and dimensions in pre- and post-operative Magnetic Resonance Images (MRI) requires accurate estimation and comparison. The active contour segmentation technique is used to segment brain tumors on pre-operative MR images using self-developed software. Tumor volume is acquired from its contours via alpha shape theory. A graphical user interface is developed for rendering, visualizing and estimating the volume of a brain tumor. The Internet Brain Segmentation Repository (IBSR) dataset is employed to analyze and determine the repeatability and reproducibility of tumor volume. Accuracy of the method is validated by comparing the volume estimated using the proposed method with that of the gold standard. Segmentation by the active contour technique is found to be capable of detecting the brain tumor boundaries. Furthermore, the volume description and visualization enable an interactive examination of tumor tissue and its surroundings. Our results demonstrate that alpha shape theory, in comparison to other existing standard methods, is superior for precise volumetric measurement of tumors.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
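Alpha-shape volume estimation generalises the convex hull, which it reduces to as alpha grows; as a simplified stand-in, the hull volume of a cloud of segmented contour points can be computed directly. The point cloud below is synthetic:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
pts = rng.random((500, 3))       # stand-in for points sampled from tumor contours
hull = ConvexHull(pts)
volume = hull.volume             # volume enclosed by the hull
```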
  18. Khalid A, Lim E, Chan BT, Abdul Aziz YF, Chee KH, Yap HJ, et al.
    J Magn Reson Imaging, 2019 Apr;49(4):1006-1019.
    PMID: 30211445 DOI: 10.1002/jmri.26302
    BACKGROUND: Existing clinical diagnostic and assessment methods could be improved to facilitate early detection and treatment of cardiac dysfunction associated with acute myocardial infarction (AMI) to reduce morbidity and mortality.

    PURPOSE: To develop 3D personalized left ventricular (LV) models and thickening assessment framework for assessing regional wall thickening dysfunction and dyssynchrony in AMI patients.

    STUDY TYPE: Retrospective study, diagnostic accuracy.

    SUBJECTS: Forty-four subjects consisting of 15 healthy subjects and 29 AMI patients.

    FIELD STRENGTH/SEQUENCE: 1.5T/steady-state free precession cine MRI scans; LGE MRI scans.

    ASSESSMENT: Quantitative thickening measurements across all cardiac phases were correlated and validated against clinical evaluation of infarct transmurality by an experienced cardiac radiologist based on the American Heart Association (AHA) 17-segment model.

    STATISTICAL TEST: Nonparametric 2-k related sample-based Kruskal-Wallis test; Mann-Whitney U-test; Pearson's correlation coefficient.

    RESULTS: Healthy LV wall segments undergo significant wall thickening during contraction (P < 0.05), whereas infarcted segments (> 50% transmurality) underwent remarkable wall thinning during contraction (thickening index [TI] = 1.46 ± 0.26 mm) as opposed to healthy myocardium (TI = 4.01 ± 1.04 mm). For AMI patients, LV walls that showed signs of thinning were found to be associated with a significantly higher percentage of dyssynchrony as compared with healthy subjects (dyssynchrony index [DI] = 15.0 ± 5.0% vs. 7.5 ± 2.0%, P < 0.05).
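    The abstract reports TI and DI values but not their formulas, so the sketch below is purely illustrative: an assumed TI as the wall-thickness change from end-diastole to end-systole, and an assumed DI as the fraction of segments whose time-to-peak thickening deviates from the mean:

    ```python
    # Illustrative assumptions only -- the authors' exact TI/DI definitions
    # are not given in the abstract.

    def thickening_index(ed_thickness_mm, es_thickness_mm):
        """Assumed TI: wall-thickness change from end-diastole to
        end-systole, in mm (values near zero indicate thinning/akinesia)."""
        return es_thickness_mm - ed_thickness_mm

    def dyssynchrony_index(peak_phases, tolerance=0.1):
        """Assumed DI: percentage of segments whose normalised time-to-peak
        thickening deviates from the mean by more than `tolerance`."""
        mean_phase = sum(peak_phases) / len(peak_phases)
        outliers = sum(1 for p in peak_phases if abs(p - mean_phase) > tolerance)
        return 100.0 * outliers / len(peak_phases)
    ```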

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  19. Acharya UR, Raghavendra U, Koh JEW, Meiburger KM, Ciaccio EJ, Hagiwara Y, et al.
    Comput Methods Programs Biomed, 2018 Nov;166:91-98.
    PMID: 30415722 DOI: 10.1016/j.cmpb.2018.10.006
    BACKGROUND AND OBJECTIVE: Liver fibrosis is a type of chronic liver injury that is characterized by an excessive deposition of extracellular matrix protein. Early detection of liver fibrosis may prevent further growth toward liver cirrhosis and hepatocellular carcinoma. In the past, the only method to assess liver fibrosis was through biopsy, but this examination is invasive, expensive, prone to sampling errors, and may cause complications such as bleeding. Ultrasound-based elastography is a promising tool to measure tissue elasticity in real time; however, this technology requires an upgrade of the ultrasound system and software. In this study, a novel computer-aided diagnosis tool is proposed to automatically detect and classify the various stages of liver fibrosis based upon conventional B-mode ultrasound images.

    METHODS: The proposed method uses a 2D contourlet transform and a set of texture features that are efficiently extracted from the transformed image. Then, the combination of a kernel discriminant analysis (KDA)-based feature reduction technique and analysis of variance (ANOVA)-based feature ranking technique was used, and the images were then classified into various stages of liver fibrosis.
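    Of the pipeline above, the ANOVA-based feature-ranking step is simple enough to sketch in isolation (the contourlet transform and KDA reduction are omitted here). Feature values are grouped by fibrosis stage and ranked by the one-way ANOVA F-ratio:

    ```python
    # Sketch of ANOVA-based feature ranking: features whose values separate
    # the fibrosis stages well have high between-group variance relative to
    # within-group variance, i.e. a high F-ratio.

    def anova_f(groups):
        """One-way ANOVA F-ratio for one feature across k groups of samples."""
        k = len(groups)
        n = sum(len(g) for g in groups)
        grand_mean = sum(sum(g) for g in groups) / n
        ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                         for g in groups)
        ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                        for g in groups)
        return (ss_between / (k - 1)) / (ss_within / (n - k))

    def rank_features(feature_groups):
        """feature_groups: {name: [stage1_values, stage2_values, ...]}.
        Returns feature names sorted by descending F-ratio."""
        return sorted(feature_groups,
                      key=lambda f: anova_f(feature_groups[f]),
                      reverse=True)
    ```

    A feature that cleanly separates stages ranks above one whose group means coincide; the top-ranked features are then fed to the classifier.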

    RESULTS: Our 2D contourlet transform and texture feature analysis approach achieved a 91.46% accuracy using only four features input to the probabilistic neural network classifier, to classify the five stages of liver fibrosis. It also achieved a 92.16% sensitivity and 88.92% specificity for the same model. The evaluation was done on a database of 762 ultrasound images belonging to five different stages of liver fibrosis.

    CONCLUSIONS: The findings suggest that the proposed method can be useful to automatically detect and classify liver fibrosis, which would greatly assist clinicians in making an accurate diagnosis.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  20. Al-Faris AQ, Ngah UK, Isa NA, Shuaib IL
    J Digit Imaging, 2014 Feb;27(1):133-44.
    PMID: 24100762 DOI: 10.1007/s10278-013-9640-5
    In this paper, an automatic computer-aided detection system for breast magnetic resonance imaging (MRI) tumour segmentation is presented. The study focuses on tumour segmentation using a modified automatic seeded region growing algorithm with a variation of the automated initial seed and threshold selection methodologies. Prior to that, some pre-processing steps are involved: breast skin is detected and removed using the integration of two algorithms, namely the level set active contour and morphological thinning. The system is applied and tested on 40 test images from the RIDER breast MRI dataset, and the results are evaluated against the ground truths of the dataset. The analysis of variance (ANOVA) test shows a statistically significant improvement in performance over previous segmentation approaches tested on the same dataset, with ANOVA p values for the evaluation measures below 0.05: relative overlap (p = 0.0002), misclassification rate (p = 0.045), true negative fraction (p = 0.0001) and sum of true volume fraction (p = 0.0001).
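    The core of seeded region growing can be sketched as a breadth-first flood from the seed pixel; this is a generic textbook version, with the paper's automated seed and threshold selection replaced by explicit arguments:

    ```python
    from collections import deque

    # Generic seeded region growing on a 2D intensity grid: a 4-connected
    # neighbour joins the region if its intensity lies within `threshold`
    # of the running region mean.

    def region_grow(image, seed, threshold):
        rows, cols = len(image), len(image[0])
        region = {seed}
        total = float(image[seed[0]][seed[1]])
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                    mean = total / len(region)
                    if abs(image[nr][nc] - mean) <= threshold:
                        region.add((nr, nc))
                        total += image[nr][nc]
                        queue.append((nr, nc))
        return region
    ```

    On a toy image with a bright 2 x 2 block in a dark background, a seed inside the block and a small threshold recover exactly the four bright pixels.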
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*