Displaying publications 21 - 40 of 113 in total

  1. Safdar A, Khan MA, Shah JH, Sharif M, Saba T, Rehman A, et al.
    Microsc Res Tech, 2019 Sep;82(9):1542-1556.
    PMID: 31209970 DOI: 10.1002/jemt.23320
    Plant diseases are responsible for substantial economic losses in agricultural countries. Manual diagnosis of plant diseases has been a key challenge over the last decade; therefore, researchers in this area have introduced automated systems. In this research work, an automated system is proposed for citrus fruit disease recognition using computer vision techniques. The proposed method incorporates five fundamental steps: preprocessing, disease segmentation, feature extraction and reduction, fusion, and classification. In the first phase, noise is removed, followed by a contrast-stretching procedure. The watershed method is then applied to extract the infected regions, and shape, texture, and color features are subsequently computed from these regions. In the fourth step, the reduced features are fused using a serial-based approach, followed by a final classification step using a multiclass support vector machine. For dimensionality reduction, principal component analysis is utilized, a statistical procedure that applies an orthogonal transformation to a set of observations. Three different image datasets (Citrus Image Gallery, Plant Village, and a self-collected set) are combined in this research, achieving a classification accuracy of 95.5%. The results show that the proposed method outperforms several existing methods in both precision and accuracy.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
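A minimal sketch of the final two steps described above, PCA-based feature reduction followed by multiclass SVM classification, using scikit-learn; the feature matrix, labels, and component count are invented placeholders, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder data: each row stands in for a fused shape/texture/color
# feature vector of one citrus image; labels are hypothetical disease classes.
rng = np.random.default_rng(0)
X = rng.random((300, 128))
y = rng.integers(0, 5, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# PCA applies an orthogonal transformation and keeps the leading components.
pca = PCA(n_components=20).fit(X_train)

# Multiclass SVM (scikit-learn handles the multiclass decomposition internally).
clf = SVC(kernel="linear").fit(pca.transform(X_train), y_train)
print("test accuracy:", clf.score(pca.transform(X_test), y_test))
```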
  2. Al-Shabi M, Lan BL, Chan WY, Ng KH, Tan M
    Int J Comput Assist Radiol Surg, 2019 Oct;14(10):1815-1819.
    PMID: 31020576 DOI: 10.1007/s11548-019-01981-7
    PURPOSE: Lung nodules have very diverse shapes and sizes, which makes classifying them as benign/malignant a challenging problem. In this paper, we propose a novel method for predicting nodule malignancy that can analyze the shape and size of a nodule using a global feature extractor, as well as the density and structure of the nodule using a local feature extractor.

    METHODS: We propose to use Residual Blocks with a 3 × 3 kernel size for local feature extraction and Non-Local Blocks to extract the global features. The Non-Local Block has the ability to extract global features without using a huge number of parameters. The key idea behind the Non-Local Block is to apply matrix multiplications between features on the same feature maps.

    RESULTS: We trained and validated the proposed method on the LIDC-IDRI dataset which contains 1018 computed tomography scans. We followed a rigorous procedure for experimental setup, namely tenfold cross-validation, and ignored the nodules that had been annotated by

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
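The key idea of the Non-Local Block, matrix multiplications between features of the same feature map, amounts to self-attention over flattened spatial positions. A NumPy sketch follows; the random theta/phi/g projections are stand-ins for the block's learned 1×1 convolutions.

```python
import numpy as np

def non_local_block(x, inter_channels=8, seed=0):
    """Minimal non-local operation on a feature map x of shape (C, H, W)."""
    C, H, W = x.shape
    flat = x.reshape(C, H * W)                    # (C, N) with N = H*W

    rng = np.random.default_rng(seed)
    theta = rng.standard_normal((inter_channels, C)) @ flat   # queries
    phi = rng.standard_normal((inter_channels, C)) @ flat     # keys
    g = rng.standard_normal((inter_channels, C)) @ flat       # values

    # Pairwise similarities between all N spatial positions: an (N, N) product.
    attn = theta.T @ phi
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over positions

    out = g @ attn.T                              # aggregate global context
    return out.reshape(inter_channels, H, W)

print(non_local_block(np.random.rand(16, 8, 8)).shape)  # (8, 8, 8)
```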
  3. Husham A, Hazim Alkawaz M, Saba T, Rehman A, Saleh Alghamdi J
    Microsc Res Tech, 2016 Oct;79(10):993-997.
    PMID: 27476682 DOI: 10.1002/jemt.22733
    Segmentation of objects from a noisy and complex image is still a challenging task that needs to be addressed. This article proposes a new method to detect and segment nuclei and determine whether they are malignant (determination of the region of interest, noise removal, image enhancement, candidate detection employing the centroid transform to evaluate the centroid of each object, and application of the level set [LS] method to segment the nuclei). The proposed method consists of three main stages: preprocessing, seed detection, and segmentation. The preprocessing stage prepares the image conditions to ensure that they meet the segmentation requirements. Seed detection finds the seed point to be used in the segmentation stage, which segments the nuclei using the LS method. In this research work, 58 H&E-stained breast cancer images from the UCSB Bio-Segmentation Benchmark dataset are evaluated. The proposed method shows high performance and accuracy in comparison to the techniques reported in the literature, and the experimental results are consistent with the ground truth images.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
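The level set stage could be prototyped with the morphological Chan-Vese implementation available in scikit-image. This is a generic stand-in for the authors' LS formulation, applied here to a synthetic nucleus-like blob.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Synthetic "nucleus": a bright disk on a noisy background (demo only).
yy, xx = np.mgrid[:128, :128]
img = ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2).astype(float)
img += 0.3 * np.random.default_rng(0).standard_normal(img.shape)

# Evolve the level set for a fixed number of iterations from a standard init.
mask = morphological_chan_vese(img, 60, init_level_set="checkerboard")
print(mask.sum(), "pixels segmented")
```

In a full pipeline the seed point found in the detection stage would initialize the level set instead of the checkerboard used in this sketch.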
  4. Tai MW, Chong ZF, Asif MK, Rahmat RA, Nambiar P
    Leg Med (Tokyo), 2016 Sep;22:42-8.
    PMID: 27591538 DOI: 10.1016/j.legalmed.2016.07.009
    This study compared the suitability and precision of xerographic and computer-assisted methods for bite mark investigations. Eleven subjects were asked to bite on their forearm, and the bite marks were photographically recorded. Alginate impressions of the subjects' dentition were taken and casts were made using dental stone. Overlays were generated xerographically by photocopying the subjects' casts and transferring the incisal edge outlines onto a transparent sheet. The bite mark images were imported into Adobe Photoshop® software and printed to life size. Bite mark analyses using the xerographically generated overlays were done by manually comparing an overlay to the corresponding printed bite mark image. In the computer-assisted method, the subjects' casts were scanned into Adobe Photoshop®, and the analyses were done by digitally matching an overlay to the corresponding bite mark image. Another comparison method superimposed the cast images on the corresponding bite mark images using Adobe Photoshop® CS6 and GIF-Animator©. During analysis, each precision-determining criterion was given a score in the range 0-3, with higher scores indicating better matching. The Kruskal-Wallis H test showed a significant difference between the three sets of data (H=18.761, p<0.05). In conclusion, bite mark analysis using the computer-assisted animated-superimposition method was the most accurate, followed by computer-assisted overlay generation and lastly the xerographic method. The superior precision of the digital methods is discernible despite human skin being a poor recording medium for bite marks.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
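The Kruskal-Wallis H test reported above is available in SciPy; the three score lists below are hypothetical 0-3 precision scores, not the study's data.

```python
from scipy.stats import kruskal

# Hypothetical 0-3 precision scores for the three bite mark analysis methods.
xerographic = [1, 1, 2, 1, 0, 2, 1, 1, 2, 1, 0]
overlay_generation = [2, 2, 1, 2, 3, 2, 2, 1, 2, 3, 2]
animated_superimposition = [3, 2, 3, 3, 2, 3, 3, 3, 2, 3, 3]

H, p = kruskal(xerographic, overlay_generation, animated_superimposition)
print(f"H = {H:.3f}, p = {p:.4f}")  # a difference is significant if p < 0.05
```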
  5. Niazi MKK, Abas FS, Senaras C, Pennell M, Sahiner B, Chen W, et al.
    PLoS One, 2018;13(5):e0196547.
    PMID: 29746503 DOI: 10.1371/journal.pone.0196547
    Automatic and accurate detection of positive and negative nuclei from images of immunostained tissue biopsies is critical to the success of digital pathology. The evaluation of most nuclei detection algorithms relies on manually generated ground truth prepared by pathologists, which is unfortunately time-consuming and suffers from inter-pathologist variability. In this work, we developed a digital immunohistochemistry (IHC) phantom that can be used for evaluating computer algorithms for enumeration of IHC-positive cells. Our phantom development consists of two main steps: 1) extraction of individual nuclei as well as nuclei clumps, both positive and negative, from real whole-slide images (WSI), and 2) systematic placement of the extracted nuclei and clumps on an image canvas. The resulting images are visually similar to the original tissue images. We created a set of 42 images with different concentrations of positive and negative nuclei. These images were evaluated by four board-certified pathologists in the task of estimating the ratio of positive to total number of nuclei. The resulting concordance correlation coefficients (CCC) between the pathologists and the true ratio range from 0.86 to 0.95 (point estimates). The same ratio was also computed by an automated computer algorithm, which yielded a CCC value of 0.99. Reading the phantom data with known ground truth, the human readers show substantial variability and lower average performance than the computer algorithm in terms of CCC. This shows the limitation of using a human reader panel to establish a reference standard for the evaluation of computer algorithms, thereby highlighting the usefulness of the phantom developed in this work. Using our phantom images, we further developed a function that approximates the true ratio from the areas of the positive and negative nuclei, avoiding the need to detect individual nuclei. The predicted ratios of 10 held-out images using the function (trained on 32 images) are within ±2.68% of the true ratio. Moreover, we also report the evaluation of a computerized image analysis method on the synthetic tissue dataset.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
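The concordance correlation coefficient used to compare each reader against the known phantom ratio can be computed directly; below is a small NumPy implementation of Lin's CCC, with invented ratio values for illustration.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement sets."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical positive-nuclei ratios: true phantom values vs. one reader.
true_ratio = [0.10, 0.25, 0.40, 0.55, 0.70, 0.85]
reader_ratio = [0.12, 0.22, 0.43, 0.50, 0.74, 0.80]
print(f"CCC = {lins_ccc(true_ratio, reader_ratio):.3f}")
```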
  6. Ibrahim MF, Ahmad Sa'ad FS, Zakaria A, Md Shakaff AY
    Sensors (Basel), 2016 Oct 27;16(11).
    PMID: 27801799
    The conventional method of grading Harumanis mango is time-consuming, costly and affected by human bias. In this research, an in-line system was developed to classify Harumanis mango using computer vision. The system was able to identify the irregularity of mango shape and estimate its mass. A group of images of mangoes of different sizes and shapes was used as the database set. Important features such as length, height, centroid and perimeter were extracted from each image. Fourier descriptors and size-shape parameters were used to describe the mango shape, while the disk method was used to estimate the mass of the mango. Four features were selected by stepwise discriminant analysis, which was effective in sorting regular and misshapen mangoes. The volume obtained from the water displacement method was compared with the volume estimated by image processing using a paired t-test and the Bland-Altman method. The two measurements were not significantly different (P > 0.05). The average correct classification for shape was 98% for a training set composed of 180 mangoes. The result was validated with another testing set consisting of 140 mangoes, yielding a success rate of 92%. The same set was used for evaluating the performance of mass estimation, where the average success rate of classification for grading based on mass was 94%. The results indicate that the in-line sorting system using machine vision has great potential for automatic fruit sorting according to shape and mass.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
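Fourier descriptors of a closed contour, as used above for shape description, can be obtained by treating the boundary points as complex numbers and taking the FFT; normalizing the magnitudes gives tolerance to translation, scale, and rotation. The elliptical contour below is synthetic.

```python
import numpy as np

def fourier_descriptors(contour, n_desc=8):
    """Shape descriptors from a closed contour given as (x, y) points."""
    z = contour[:, 0] + 1j * contour[:, 1]     # boundary as a complex sequence
    coeffs = np.fft.fft(z)
    # Drop the DC term (translation) and normalize by the first harmonic
    # (scale); keeping magnitudes only discards rotation and start point.
    mags = np.abs(coeffs[1:n_desc + 1])
    return mags / mags[0]

# Synthetic mango-like elliptical contour (demo only).
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.stack([50 + 30 * np.cos(t), 50 + 18 * np.sin(t)], axis=1)
print(fourier_descriptors(contour).round(4))
```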
  7. Chen Z, Rajamanickam L, Cao J, Zhao A, Hu X
    PLoS One, 2021;16(12):e0260758.
    PMID: 34879097 DOI: 10.1371/journal.pone.0260758
    This study aims to solve the overfitting problem caused by insufficient labeled images in the automatic image annotation field. We propose a transfer learning model called CNN-2L that incorporates the label localization strategy described in this study. The model consists of an InceptionV3 network pretrained on the ImageNet dataset and a label localization algorithm. First, the pretrained InceptionV3 network extracts features from the target dataset that are used to train a specific classifier and fine-tune the entire network to obtain an optimal model. Then, the obtained model is used to derive the probabilities of the predicted labels. For this purpose, we introduce a squeeze and excitation (SE) module into the network architecture that augments the useful feature information, inhibits useless feature information, and conducts feature reweighting. Next, we perform label localization to obtain the label probabilities and determine the final label set for each image. During this process, the number of labels must be determined. The optimal K value is obtained experimentally and used to determine the number of predicted labels, thereby solving the empty label set problem that occurs when the predicted label values of images are below a fixed threshold. Experiments on the Corel5k multilabel image dataset verify that CNN-2L improves the labeling precision by 18% and 15% compared with the traditional multiple-Bernoulli relevance model (MBRM) and joint equal contribution (JEC) algorithms, respectively, and it improves the recall by 6% compared with JEC. Additionally, it improves the precision by 20% and 11% compared with the deep learning methods Weight-KNN and adaptive hypergraph learning (AHL), respectively. Although CNN-2L fails to improve the recall compared with the semantic extension model (SEM), it improves the comprehensive index of the F1 value by 1%. The experimental results reveal that the proposed transfer learning model based on a label localization strategy is effective for automatic image annotation and substantially boosts the multilabel image annotation performance.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
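The label-localization step above fixes the number of predicted labels at an experimentally chosen K, which avoids the empty label sets produced by a fixed probability threshold. That final selection reduces to a top-K pick over the per-label probabilities:

```python
import numpy as np

def top_k_labels(probs, labels, k=3):
    """Keep the K most probable labels, so no image gets an empty label set."""
    idx = np.argsort(probs)[::-1][:k]
    return [labels[i] for i in idx]

# Hypothetical per-label probabilities from the fine-tuned network.
labels = ["sky", "water", "tree", "people", "boat"]
probs = np.array([0.61, 0.08, 0.47, 0.02, 0.33])
print(top_k_labels(probs, labels, k=3))  # ['sky', 'tree', 'boat']
```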
  8. Siddiqui MF, Reza AW, Shafique A, Omer H, Kanesan J
    Magn Reson Imaging, 2017 12;44:82-91.
    PMID: 28855113 DOI: 10.1016/j.mri.2017.08.005
    Sensitivity Encoding (SENSE) is a widely used technique in Parallel Magnetic Resonance Imaging (MRI) to reduce scan time. A reconfigurable hardware-based architecture for SENSE can potentially provide image reconstruction with much less computation time. An application-specific hardware platform for SENSE may dramatically increase the power efficiency of the system and decrease the execution time to obtain MR images. A new implementation of SENSE on a Field Programmable Gate Array (FPGA) is presented in this study, which provides real-time SENSE reconstruction right on the receiver coil data acquisition system with no need to transfer the raw data to the MRI server, thereby minimizing the transmission noise and memory usage. The proposed SENSE architecture can reconstruct MR images using receiver coil sensitivity maps obtained using pre-scan and eigenvector (E-maps) methods. The results show that the proposed system consumes remarkably little computation time for SENSE reconstruction, i.e., 0.164 ms at 200 MHz, while maintaining the quality of the reconstructed images with good mean SNR (29+ dB), lower RMSE (<5×10⁻²) and artefact power (<9×10⁻⁴) comparable to conventional SENSE reconstruction. A comparison of the center line profiles of the reconstructed and reference images also indicates a good quality of the reconstructed images. Furthermore, the results indicate that the proposed architectural design can prove to be a significant tool for SENSE reconstruction in modern MRI scanners, and its low power consumption can be remarkable for portable MRI scanners.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
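At its core, SENSE unfolding solves, for each aliased pixel, a small least-squares system built from the coil sensitivities. The NumPy sketch below shows this for a 1-D profile at acceleration factor R=2, with random stand-ins for the sensitivity maps; the FPGA pipeline in the paper implements essentially this arithmetic in hardware.

```python
import numpy as np

def sense_unfold(aliased, sens, R=2):
    """Unfold SENSE data: aliased (coils, N/R), sens (coils, N) -> image (N,)."""
    n_coils, n_alias = aliased.shape
    recon = np.zeros(sens.shape[1], dtype=complex)
    for i in range(n_alias):
        pos = [i + r * n_alias for r in range(R)]   # pixels folded onto i
        S = sens[:, pos]                            # (coils, R) system matrix
        recon[pos], *_ = np.linalg.lstsq(S, aliased[:, i], rcond=None)
    return recon

rng = np.random.default_rng(0)
N, coils, R = 16, 4, 2
truth = rng.standard_normal(N) + 1j * rng.standard_normal(N)
sens = rng.standard_normal((coils, N)) + 1j * rng.standard_normal((coils, N))
aliased = np.stack([(sens[c] * truth).reshape(R, N // R).sum(0) for c in range(coils)])
print(np.allclose(sense_unfold(aliased, sens, R), truth))  # True
```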
  9. Faisal A, Ng SC, Goh SL, Lai KW
    Med Biol Eng Comput, 2018 Apr;56(4):657-669.
    PMID: 28849317 DOI: 10.1007/s11517-017-1710-2
    Quantitative thickness computation of knee cartilage in ultrasound images requires segmentation of a monotonous hypoechoic band between the soft tissue-cartilage interface and the cartilage-bone interface. Speckle noise and the intensity bias captured in ultrasound images often complicate the segmentation task. This paper presents knee cartilage segmentation using the locally statistical level set method (LSLSM) and thickness computation using normal distance. Comparison of several level set methods in the attempt to segment the knee cartilage shows that LSLSM yields a more satisfactory result. When LSLSM was applied to 80 datasets, the qualitative segmentation assessment indicated substantial agreement, with a Cohen's κ coefficient of 0.73. The quantitative validation metrics of Dice similarity coefficient and Hausdorff distance had average values of 0.91 ± 0.01 and 6.21 ± 0.59 pixels, respectively. These satisfactory segmentation results make it possible to compute the true thickness between the two cartilage interfaces from the segmented images. The measured cartilage thickness ranged from 1.35 to 2.42 mm with an average value of 1.97 ± 0.11 mm, reflecting the robustness of the segmentation algorithm to varying cartilage thicknesses. These results indicate a potential application of the methods described for the assessment of cartilage degeneration, where changes in cartilage thickness can be quantified over time by comparing the true thickness at certain time intervals.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
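The two validation metrics quoted above are standard for binary masks; a small sketch with toy arrays, taking the Hausdorff distance as the larger of the two directed distances between foreground point sets:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between foreground pixel coordinates."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

seg = np.zeros((64, 64), int); seg[20:40, 20:40] = 1   # toy segmentation
gt = np.zeros((64, 64), int); gt[22:40, 19:41] = 1     # toy ground truth
print(f"Dice = {dice(seg, gt):.3f}, Hausdorff = {hausdorff(seg, gt):.2f} px")
```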
  10. Acharya UR, Koh JEW, Hagiwara Y, Tan JH, Gertych A, Vijayananthan A, et al.
    Comput Biol Med, 2018 03 01;94:11-18.
    PMID: 29353161 DOI: 10.1016/j.compbiomed.2017.12.024
    Liver is the heaviest internal organ of the human body and performs many vital functions. Prolonged cirrhosis and fatty liver disease may lead to the formation of benign or malignant lesions in this organ, and an early and reliable evaluation of these conditions can improve treatment outcomes. Ultrasound imaging is a safe, non-invasive, and cost-effective way of diagnosing liver lesions. However, this technique has limited performance in determining the nature of the lesions. This study introduces a computer-aided diagnosis (CAD) system to aid radiologists in an objective and more reliable interpretation of ultrasound images of liver lesions. In this work, we employed the radon transform and bi-directional empirical mode decomposition (BEMD) to extract features from the focal liver lesions. The extracted features were then subjected to the particle swarm optimization (PSO) technique to select a set of optimized features for classification. Our automated CAD system can differentiate between normal, malignant, and benign liver lesions using machine learning algorithms. It was trained using 78 normal, 26 benign and 36 malignant focal lesions of the liver. The accuracy, sensitivity, and specificity of lesion classification were 92.95%, 90.80%, and 97.44%, respectively. The proposed CAD system is fully automatic, as no segmentation of the region of interest (ROI) is required.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
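The radon transform used for feature extraction is available in scikit-image. The sketch below computes a sinogram of a synthetic patch and derives simple summary features per projection angle; it illustrates only the first stage, not the paper's BEMD decomposition or PSO feature selection.

```python
import numpy as np
from skimage.transform import radon

# Synthetic "lesion" patch: a bright disk (demo only).
yy, xx = np.mgrid[:64, :64]
img = ((yy - 32) ** 2 + (xx - 32) ** 2 < 12 ** 2).astype(float)

theta = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(img, theta=theta)   # one projection per angle

# Illustrative features: mean and spread of each angular projection.
features = np.concatenate([sinogram.mean(axis=0), sinogram.std(axis=0)])
print(features.shape)  # (120,)
```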
  11. Pathan RK, Biswas M, Yasmin S, Khandaker MU, Salman M, Youssef AAF
    Sci Rep, 2023 Oct 09;13(1):16975.
    PMID: 37813932 DOI: 10.1038/s41598-023-43852-x
    Sign language recognition is a breakthrough for communication with the deaf-mute community and has been a critical research topic for years. Although some previous studies have successfully recognized sign language, they required many costly instruments, including sensors, devices, and high-end processing power. However, such drawbacks can easily be overcome by employing artificial intelligence-based techniques. Since, in this modern era of advanced mobile technology, using a camera to take video or images is much easier, this study demonstrates a cost-effective technique to detect American Sign Language (ASL) using an image dataset. Here, the "Finger Spelling, A" dataset has been used, covering 24 letters (excluding j and z, as they involve motion). The main reason for using this dataset is that its images have complex backgrounds with different environments and scene colors. Two layers of image processing have been used: in the first layer, images are processed as a whole for training, and in the second layer, the hand landmarks are extracted. A multi-headed convolutional neural network (CNN) model has been proposed and tested with 30% of the dataset to train these two layers. To avoid overfitting, data augmentation and dynamic learning rate reduction have been used. With the proposed model, a test accuracy of 98.981% has been achieved. It is expected that this study may help to develop an efficient human-machine communication system for the deaf-mute community.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
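Dynamic learning rate reduction, one of the two anti-overfitting measures mentioned above, is commonly implemented as reduce-on-plateau logic. A framework-agnostic sketch, with illustrative patience and factor values:

```python
class ReduceLROnPlateau:
    """Shrink the learning rate when the validation loss stops improving."""

    def __init__(self, lr=1e-3, factor=0.5, patience=3, min_lr=1e-6):
        self.lr, self.factor, self.patience, self.min_lr = lr, factor, patience, min_lr
        self.best, self.wait = float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.wait = val_loss, 0   # improvement: reset counter
        else:
            self.wait += 1
            if self.wait >= self.patience:       # plateau: reduce the rate
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr

sched = ReduceLROnPlateau()
for loss in [0.9, 0.7, 0.71, 0.72, 0.73, 0.69]:  # hypothetical validation losses
    print(sched.step(loss))
```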
  12. Shoaib MA, Chuah JH, Ali R, Hasikin K, Khalil A, Hum YC, et al.
    Comput Intell Neurosci, 2023;2023:4208231.
    PMID: 36756163 DOI: 10.1155/2023/4208231
    Cardiac diseases are one of the leading causes of death around the globe, and the number of heart patients has increased considerably during the pandemic. Therefore, it is crucial to assess and analyze medical and cardiac images. Deep learning architectures, specifically convolutional neural networks, have become the primary choice for the assessment of cardiac medical images. The left ventricle is a vital part of the cardiovascular system, where the boundary and size play a significant role in the evaluation of cardiac function. Owing to automatic segmentation and promising results, left ventricle segmentation using deep learning has attracted a lot of attention. This article presents a critical review of deep learning methods used for left ventricle segmentation from frequently used imaging modalities, including magnetic resonance imaging, ultrasound, and computed tomography. This study also details the network architectures, software, and hardware used for training, along with the publicly available cardiac image datasets and self-prepared datasets involved. A summary of the evaluation metrics and results reported by different researchers is also presented. Finally, all this information is summarized and synthesized to help readers understand the motivation and methodology of various deep learning models, as well as to explore potential solutions to future challenges in LV segmentation.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  13. Fallahpoor M, Chakraborty S, Pradhan B, Faust O, Barua PD, Chegeni H, et al.
    Comput Methods Programs Biomed, 2024 Jan;243:107880.
    PMID: 37924769 DOI: 10.1016/j.cmpb.2023.107880
    Positron emission tomography/computed tomography (PET/CT) is increasingly used in oncology, neurology, cardiology, and emerging medical fields. Its success stems from the cohesive information that hybrid PET/CT imaging offers, surpassing the capabilities of the individual modalities used in isolation for different malignancies. However, manual image interpretation requires extensive disease-specific knowledge and is a time-consuming aspect of physicians' daily routines. Deep learning algorithms, akin to a practitioner during training, extract knowledge from images to support the diagnosis process by detecting symptoms and enhancing images. The available review papers on PET/CT imaging have a drawback: they either include additional modalities or examine various types of AI applications, and a comprehensive investigation focused specifically on the use of deep learning on PET/CT images has been lacking. This review aims to fill that gap by investigating the characteristics of the approaches used in papers that applied deep learning to PET/CT imaging. Within the review, we identified 99 studies published between 2017 and 2022 that applied deep learning to PET/CT images. We also identified the best pre-processing algorithms and the most effective deep learning models reported for PET/CT while highlighting the current limitations. Our review underscores the potential of deep learning (DL) in PET/CT imaging, with successful applications in lesion detection, tumor segmentation, and disease classification in both sinogram and image spaces. Common and specific pre-processing techniques are also discussed. DL algorithms excel at extracting meaningful features, enhancing accuracy and efficiency in diagnosis. However, limitations arise from the scarcity of annotated datasets and challenges in explainability and uncertainty. Recent DL models, such as attention-based models, generative models, multi-modal models, graph convolutional networks, and transformers, are promising for improving PET/CT studies. Additionally, radiomics has garnered attention for tumor classification and predicting patient outcomes. Ongoing research is crucial to explore new applications and improve the accuracy of DL models in this rapidly evolving field.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  14. Peng P, Wu D, Huang LJ, Wang J, Zhang L, Wu Y, et al.
    Interdiscip Sci, 2024 Mar;16(1):39-57.
    PMID: 37486420 DOI: 10.1007/s12539-023-00580-0
    Breast cancer is commonly diagnosed with mammography. Using image segmentation algorithms to separate lesion areas in mammography can facilitate diagnosis and reduce doctors' workload, which has important clinical significance. Because large, accurately labeled medical image datasets are difficult to obtain, traditional clustering algorithms are widely used in medical image segmentation as unsupervised models. Traditional unsupervised clustering algorithms have limited learning capability. Moreover, some semi-supervised fuzzy clustering algorithms cannot fully mine the information of labeled samples, which results in insufficient supervision. When faced with complex mammography images, the above algorithms cannot accurately segment lesion areas. To address this, a semi-supervised fuzzy clustering based on knowledge weighting and cluster center learning (WSFCM_V) is presented. Based on prior knowledge, three learning modes are proposed: a knowledge weighting method for cluster centers, Euclidean distance weights for unlabeled samples, and learning from the cluster centers of labeled sample sets. These strategies improve the clustering performance. On real breast molybdenum target (mammography) images, the WSFCM_V algorithm is compared with currently popular semi-supervised and unsupervised clustering algorithms, and WSFCM_V achieves the best evaluation index values. Experimental results demonstrate that, compared with existing clustering algorithms, WSFCM_V has higher segmentation accuracy, both for larger lesion regions such as tumor areas and for smaller ones such as calcification point areas.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
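WSFCM_V builds on fuzzy c-means. As background only, the standard unweighted FCM updates that such semi-supervised variants extend look like this in NumPy; the paper's knowledge weighting and cluster-center learning strategies are not reproduced here.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Standard fuzzy c-means on X (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2 / (m - 1))             # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
U, centers = fcm(X)
print(centers.round(2))   # two centers near (0, 0) and (5, 5)
```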
  15. Alsaih K, Lemaitre G, Rastgoo M, Massich J, Sidibé D, Meriaudeau F
    Biomed Eng Online, 2017 Jun 07;16(1):68.
    PMID: 28592309 DOI: 10.1186/s12938-017-0352-9
    BACKGROUND: Spectral-domain optical coherence tomography (SD-OCT) is the most widely used imaging modality in ophthalmology for detecting diabetic macular edema (DME). Indeed, it offers an accurate visualization of the morphology of the retina as well as the retinal layers.

    METHODS: The dataset used in this study was acquired by the Singapore Eye Research Institute (SERI) using a CIRRUS™ (Carl Zeiss Meditec, Inc., Dublin, CA, USA) SD-OCT device. The dataset consists of 32 OCT volumes (16 DME and 16 normal cases). Each volume contains 128 B-scans with a resolution of 1024 px × 512 px, resulting in more than 3800 images being processed. All SD-OCT volumes were read and assessed by trained graders and identified as normal or DME cases based on evaluation of retinal thickening, hard exudates, intraretinal cystoid space formation, and subretinal fluid. Within the DME subset, a large number of lesions was selected to create a rather complete and diverse DME dataset. This paper presents an automatic classification framework for SD-OCT volumes that identifies DME versus normal volumes. In this regard, a generic pipeline including pre-processing, feature detection, feature representation, and classification was investigated. More precisely, extraction of histogram of oriented gradients (HOG) and local binary pattern (LBP) features within a multiresolution approach is used, as well as principal component analysis (PCA) and bag of words (BoW) representations.

    RESULTS AND CONCLUSION: Besides comparing individual and combined features, different representation approaches and different classifiers were evaluated. The best results were obtained for LBP[Formula: see text] vectors represented and classified using PCA and a linear support vector machine (SVM), leading to a sensitivity (SE) and specificity (SP) of 87.5% and 87.5%, respectively.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
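The best-performing combination reported above (LBP features represented with PCA and classified with a linear SVM) maps onto a short scikit-image/scikit-learn pipeline. The random stand-in images below replace the SERI B-scans, and the parameter choices are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def lbp_histogram(image, P=8, R=1):
    """Uniform LBP histogram of a grayscale image."""
    lbp = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Random stand-in "B-scans" with hypothetical DME/normal labels (demo only).
rng = np.random.default_rng(0)
X = np.array([lbp_histogram((rng.random((64, 64)) * 255).astype(np.uint8))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)

clf = make_pipeline(PCA(n_components=5), LinearSVC()).fit(X, y)
print("training accuracy:", clf.score(X, y))
```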
  16. Rahman H, Khan AR, Sadiq T, Farooqi AH, Khan IU, Lim WH
    Tomography, 2023 Dec 05;9(6):2158-2189.
    PMID: 38133073 DOI: 10.3390/tomography9060169
    Computed tomography (CT) is used in a wide range of medical imaging diagnoses. However, the reconstruction of CT images from raw projection data is inherently complex and subject to artifacts and noise, which compromise image quality and accuracy. To address these challenges, developments in deep learning have the potential to improve the reconstruction of computed tomography images. In this regard, our research aim is to determine the techniques used for 3D deep learning in CT reconstruction and to identify the accessible training and validation datasets. This research was performed on five databases. After a careful assessment of each record based on the objective and scope of the study, we selected 60 research articles for this review. This systematic literature review revealed that convolutional neural networks (CNNs), 3D convolutional neural networks (3D CNNs), and deep learning reconstruction (DLR) were the most suitable deep learning algorithms for CT reconstruction. Additionally, two major datasets appropriate for training and developing deep learning systems were identified: 2016 NIH-AAPM-Mayo and MSCT. These datasets are important resources for the creation and assessment of CT reconstruction models. According to the results, 3D deep learning may increase the effectiveness of CT image reconstruction, boost image quality, and lower radiation exposure. By using these deep learning approaches, CT image reconstruction may be made more precise and effective, improving patient outcomes, diagnostic accuracy, and healthcare system productivity.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  17. Abdullah KA, McEntee MF, Reed W, Kench PL
    J Med Imaging Radiat Oncol, 2016 Aug;60(4):459-68.
    PMID: 27241506 DOI: 10.1111/1754-9485.12473
    The aim of this systematic review is to evaluate the radiation dose reduction achieved using iterative reconstruction (IR) compared to filtered back projection (FBP) in coronary CT angiography (CCTA) and to assess the impact on diagnostic image quality. A systematic search of seven electronic databases was performed to identify all studies using a developed keyword strategy. A total of 14 studies met the criteria and were included in the review analysis. The results showed that there was a significant reduction in radiation dose when using IR compared to FBP (P < 0.05). The mean ± SD differences in image noise, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were 1.05 ± 1.29 HU, 0.88 ± 0.56 and 0.63 ± 1.83, respectively. The mean ± SD percentages of overall image quality scores were 71.79 ± 12.29% (FBP) and 67.31 ± 22.96% (IR). The mean ± SD percentages of coronary segment analysis were 95.43 ± 2.57% (FBP) and 97.19 ± 2.62% (IR). In conclusion, this review analysis shows that CCTA with the use of IR leads to a significant reduction in radiation dose compared to the use of FBP. The diagnostic image quality of IR at reduced dose (30-41%) is comparable to FBP at standard dose in the diagnosis of CAD.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  18. Ahmad Fauzi MF, Khansa I, Catignani K, Gordillo G, Sen CK, Gurcan MN
    Comput Biol Med, 2015 May;60:74-85.
    PMID: 25756704 DOI: 10.1016/j.compbiomed.2015.02.015
    An estimated 6.5 million patients in the United States are affected by chronic wounds, with more than US$25 billion and countless hours spent annually on all aspects of chronic wound care. There is a need for an intelligent software tool to analyze wound images, characterize wound tissue composition, measure wound size, and monitor changes in the wound between visits. Performed manually, this process is very time-consuming and subject to intra- and inter-reader variability. In this work, our objective is to develop methods to segment, measure and characterize clinically presented chronic wounds from photographic images. The first step of our method is to generate a Red-Yellow-Black-White (RYKW) probability map, which then guides the segmentation process using either optimal thresholding or region growing. The red, yellow and black probability maps are designed to handle the granulation, slough and eschar tissues, respectively, while the white probability map detects the white label card for measurement calibration purposes. The innovative aspects of this work include defining a four-dimensional probability map specific to wound characteristics, a computationally efficient method to segment wound images utilizing the probability map, and auto-calibration of wound measurements using the content of the image. These methods were applied to 80 wound images, captured in a clinical setting at the Ohio State University Comprehensive Wound Center, with the ground truth independently generated by the consensus of at least two clinicians. While the mean inter-reader agreement between the readers varied between 67.4% and 84.3%, the computer achieved an average accuracy of 75.1%.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
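The idea of a per-class color probability map guiding segmentation can be illustrated with Gaussian color models and an argmax over class likelihoods. This is a generic sketch, not the authors' RYKW formulation; the class statistics are invented.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Invented RGB color models for granulation (red), slough (yellow),
# eschar (black) and label-card (white) classes (demo values only).
classes = {
    "red": ([180, 60, 60], np.diag([400, 300, 300])),
    "yellow": ([200, 190, 80], np.diag([300, 300, 300])),
    "black": ([40, 35, 35], np.diag([200, 200, 200])),
    "white": ([240, 240, 240], np.diag([150, 150, 150])),
}

img = np.random.default_rng(0).integers(0, 256, (32, 32, 3)).astype(float)
pixels = img.reshape(-1, 3)

# Four-channel probability map: one likelihood per class per pixel.
prob_map = np.stack(
    [multivariate_normal(mean, cov).pdf(pixels) for mean, cov in classes.values()],
    axis=1,
)
labels = np.array(list(classes))[prob_map.argmax(axis=1)].reshape(32, 32)
print(np.unique(labels, return_counts=True))
```

In the paper the map guides optimal thresholding or region growing rather than a plain argmax, and white-card detection calibrates the physical measurements.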
  19. Mustapha A, Hussain A, Samad SA, Zulkifley MA, Diyana Wan Zaki WM, Hamid HA
    Biomed Eng Online, 2015;14:6.
    PMID: 25595511 DOI: 10.1186/1475-925X-14-6
    Content-based medical image retrieval (CBMIR) systems enable medical practitioners to perform fast diagnosis through quantitative assessment of the visual information across various modalities.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  20. Gong P, Chin L, Es'haghian S, Liew YM, Wood FM, Sampson DD, et al.
    J Biomed Opt, 2014 Dec;19(12):126014.
    PMID: 25539060 DOI: 10.1117/1.JBO.19.12.126014
    We demonstrate the in vivo assessment of human scars by parametric imaging of birefringence using polarization-sensitive optical coherence tomography (PS-OCT). Such in vivo assessment is subject to artifacts in the detected birefringence caused by scattering from blood vessels. To reduce these artifacts, we preprocessed the PS-OCT data using a vascular masking technique. The birefringence of the remaining tissue regions was then automatically quantified. Results from the scars and contralateral or adjacent normal skin of 13 patients show a correspondence of birefringence with scar type: the ratio of birefringence of hypertrophic scars to corresponding normal skin is 2.2 ± 0.2 (mean ± standard deviation), while the ratio of birefringence of normotrophic scars to normal skin is 1.1 ± 0.4. This method represents a new clinically applicable means for objective, quantitative human scar assessment.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
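The final quantification step, averaging birefringence over non-vessel pixels and taking the scar-to-normal-skin ratio, is a simple masked computation. A NumPy sketch with synthetic values chosen to mimic the hypertrophic-scar case:

```python
import numpy as np

def masked_mean(biref, vessel_mask):
    """Average birefringence over pixels not flagged as blood vessels."""
    return biref[~vessel_mask].mean()

rng = np.random.default_rng(0)
scar = rng.normal(4.4e-4, 5e-5, (64, 64))   # synthetic birefringence map
skin = rng.normal(2.0e-4, 5e-5, (64, 64))   # synthetic normal-skin map
vessels = rng.random((64, 64)) < 0.1        # synthetic vascular mask

ratio = masked_mean(scar, vessels) / masked_mean(skin, vessels)
print(f"scar/normal birefringence ratio: {ratio:.2f}")  # ~2.2 here
```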