Displaying publications 81 - 100 of 113 in total

  1. Chen Z, Rajamanickam L, Cao J, Zhao A, Hu X
    PLoS One, 2021;16(12):e0260758.
    PMID: 34879097 DOI: 10.1371/journal.pone.0260758
    This study aims to solve the overfitting problem caused by insufficient labeled images in the automatic image annotation field. We propose a transfer learning model called CNN-2L that incorporates the label localization strategy described in this study. The model consists of an InceptionV3 network pretrained on the ImageNet dataset and a label localization algorithm. First, the pretrained InceptionV3 network extracts features from the target dataset that are used to train a specific classifier and fine-tune the entire network to obtain an optimal model. Then, the obtained model is used to derive the probabilities of the predicted labels. For this purpose, we introduce a squeeze and excitation (SE) module into the network architecture that augments the useful feature information, inhibits useless feature information, and conducts feature reweighting. Next, we perform label localization to obtain the label probabilities and determine the final label set for each image. During this process, the number of labels must be determined. The optimal K value is obtained experimentally and used to determine the number of predicted labels, thereby solving the empty label set problem that occurs when the predicted label values of images are below a fixed threshold. Experiments on the Corel5k multilabel image dataset verify that CNN-2L improves the labeling precision by 18% and 15% compared with the traditional multiple-Bernoulli relevance model (MBRM) and joint equal contribution (JEC) algorithms, respectively, and it improves the recall by 6% compared with JEC. Additionally, it improves the precision by 20% and 11% compared with the deep learning methods Weight-KNN and adaptive hypergraph learning (AHL), respectively. Although CNN-2L fails to improve the recall compared with the semantic extension model (SEM), it improves the comprehensive index of the F1 value by 1%. 
The experimental results reveal that the proposed transfer learning model based on a label localization strategy is effective for automatic image annotation and substantially boosts the multilabel image annotation performance.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
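The empty-label-set fallback described in this abstract, thresholding first and backing off to the top-K probabilities, can be sketched as follows (a minimal illustration in plain NumPy, not the authors' CNN-2L code; `select_labels` and its defaults are hypothetical):

```python
import numpy as np

def select_labels(probs, threshold=0.5, k=3):
    """Pick labels above a fixed threshold; if none qualify, fall back
    to the top-k most probable labels so the label set is never empty
    (the empty-label-set problem the abstract describes)."""
    probs = np.asarray(probs)
    chosen = np.flatnonzero(probs >= threshold)
    if chosen.size == 0:
        chosen = np.argsort(probs)[::-1][:k]
    return sorted(chosen.tolist())

# Labels 0 and 1 pass the threshold:
print(select_labels([0.9, 0.6, 0.1, 0.05]))       # [0, 1]
# No label passes the threshold, so the top-2 are used instead:
print(select_labels([0.2, 0.1, 0.3, 0.05], k=2))  # [0, 2]
```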
  2. Reza AW, Eswaran C, Hati S
    J Med Syst, 2008 Apr;32(2):147-55.
    PMID: 18461818
    Blood vessel detection in retinal images is a fundamental step for feature extraction and interpretation of image content. This paper proposes a novel computational paradigm for detection of blood vessels in fundus images based on RGB components and quadtree decomposition. The proposed algorithm employs median filtering, quadtree decomposition, post filtration of detected edges, and morphological reconstruction on retinal images. The preprocessing algorithm enhances the image, making it a better fit for the subsequent analysis, and is a vital phase before decomposing the image. Quadtree decomposition provides information on the different types of blocks and the intensities of the pixels within the blocks. The post filtration and morphological reconstruction assist in filling the edges of the blood vessels and removing false alarms and unwanted objects from the background, while restoring the original shape of the connected vessels. The proposed method, which makes use of the three color components (RGB), is tested on various images from a publicly available database. The results are compared with those obtained by other known methods as well as with the results obtained by using the proposed method with the green color component only. It is shown that the proposed method can yield true positive fraction values as high as 0.77, which are comparable to or somewhat higher than the results obtained by other known methods. It is also shown that the effect of noise can be reduced if the proposed method is implemented using only the green color component.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
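Quadtree decomposition, the core of the method above, recursively splits an image into quadrants until each block is near-uniform. A minimal sketch, assuming a square image whose side is a power of two (`quadtree` and its intensity-range homogeneity test are illustrative, not the paper's implementation):

```python
import numpy as np

def quadtree(img, x=0, y=0, size=None, tol=0):
    """Recursively split a square image into quadrants until each
    block's intensity range is within `tol`; returns (x, y, size)
    tuples describing the leaf blocks."""
    if size is None:
        size = img.shape[0]
    block = img[y:y+size, x:x+size]
    if size == 1 or int(block.max()) - int(block.min()) <= tol:
        return [(x, y, size)]
    h = size // 2
    blocks = []
    for dx, dy in [(0, 0), (h, 0), (0, h), (h, h)]:
        blocks += quadtree(img, x + dx, y + dy, h, tol)
    return blocks

# A 4x4 image with one bright quadrant: each uniform quadrant stays
# whole; a mixed region would be split further.
img = np.zeros((4, 4), dtype=np.uint8)
img[:2, 2:] = 255
print(quadtree(img))  # [(0, 0, 2), (2, 0, 2), (0, 2, 2), (2, 2, 2)]
```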
  3. Mustapha A, Hussain A, Samad SA, Zulkifley MA, Diyana Wan Zaki WM, Hamid HA
    Biomed Eng Online, 2015;14:6.
    PMID: 25595511 DOI: 10.1186/1475-925X-14-6
    Content-based medical image retrieval (CBMIR) system enables medical practitioners to perform fast diagnosis through quantitative assessment of the visual information of various modalities.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  4. Zreaqat M, Hassan R, Halim AS
    Int J Oral Maxillofac Surg, 2012 Jun;41(6):783-8.
    PMID: 22424709 DOI: 10.1016/j.ijom.2012.02.003
    This comparative cross-sectional study assessed the facial surface dimensions of a group of Malay children with unilateral cleft lip and palate (UCLP) and compared them with a control group. 30 Malay children with UCLP aged 8-10 years and 30 unaffected age-matched children were voluntarily recruited from the Orthodontic Specialist Clinic in Hospital Universiti Sains Malaysia (HUSM). For the cleft group, lip and palate were repaired and assessment was performed prior to alveolar bone grafting and orthodontic treatment. The investigation was carried out using 3D digital stereophotogrammetry. 23 variables and two ratios were compared three-dimensionally between both groups. Statistically significant dimensional differences (P<0.05) were found between the UCLP Malay group and the control group mainly in the nasolabial region. These include increased alar base and alar base root width, shorter upper lip length, and increased nose base/mouth width ratio in the UCLP group. There were significant differences between the facial surface morphology of UCLP Malay children and control subjects. Particular surgical procedures performed during primary surgeries may contribute to these differences and negatively affect the surgical outcome.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  5. Ahmad Fauzi MF, Khansa I, Catignani K, Gordillo G, Sen CK, Gurcan MN
    Comput Biol Med, 2015 May;60:74-85.
    PMID: 25756704 DOI: 10.1016/j.compbiomed.2015.02.015
    An estimated 6.5 million patients in the United States are affected by chronic wounds, with more than US$25 billion and countless hours spent annually for all aspects of chronic wound care. There is a need for an intelligent software tool to analyze wound images, characterize wound tissue composition, measure wound size, and monitor changes in the wound between visits. Performed manually, this process is very time-consuming and subject to intra- and inter-reader variability. In this work, our objective is to develop methods to segment, measure and characterize clinically presented chronic wounds from photographic images. The first step of our method is to generate a Red-Yellow-Black-White (RYKW) probability map, which then guides the segmentation process using either optimal thresholding or region growing. The red, yellow and black probability maps are designed to handle the granulation, slough and eschar tissues, respectively, while the white probability map is designed to detect the white label card for measurement calibration purposes. The innovative aspects of this work include defining a four-dimensional probability map specific to wound characteristics, a computationally efficient method to segment wound images utilizing the probability map, and auto-calibration of wound measurements using the content of the image. These methods were applied to 80 wound images, captured in a clinical setting at the Ohio State University Comprehensive Wound Center, with the ground truth independently generated by the consensus of at least two clinicians. While the mean inter-reader agreement between the readers varied between 67.4% and 84.3%, the computer achieved an average accuracy of 75.1%.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
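The RYKW map assigns each pixel to the red (granulation), yellow (slough), black (eschar) or white (label card) class. A crude stand-in that assigns each pixel to its nearest reference color (the reference RGB values below are assumptions for illustration, not the paper's fitted probability distributions):

```python
import numpy as np

# Rough reference colors (assumed, not the paper's values):
# red = granulation, yellow = slough, black = eschar, white = label card.
REFS = {"red": (180, 40, 40), "yellow": (200, 180, 60),
        "black": (30, 30, 30), "white": (240, 240, 240)}

def rykw_map(pixels):
    """Assign each RGB pixel to the nearest of the four reference
    colors, a crude stand-in for the paper's probability maps."""
    names = list(REFS)
    refs = np.array([REFS[n] for n in names], dtype=float)
    px = np.asarray(pixels, dtype=float)
    # Euclidean distance from every pixel to every reference color.
    d = np.linalg.norm(px[:, None, :] - refs[None, :, :], axis=2)
    return [names[i] for i in d.argmin(axis=1)]

print(rykw_map([(200, 50, 50), (250, 250, 250), (10, 10, 10)]))
# ['red', 'white', 'black']
```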
  6. Niazi MKK, Abas FS, Senaras C, Pennell M, Sahiner B, Chen W, et al.
    PLoS One, 2018;13(5):e0196547.
    PMID: 29746503 DOI: 10.1371/journal.pone.0196547
    Automatic and accurate detection of positive and negative nuclei from images of immunostained tissue biopsies is critical to the success of digital pathology. The evaluation of most nuclei detection algorithms relies on manually generated ground truth prepared by pathologists, which is unfortunately time-consuming and suffers from inter-pathologist variability. In this work, we developed a digital immunohistochemistry (IHC) phantom that can be used for evaluating computer algorithms for enumeration of IHC positive cells. Our phantom development consists of two main steps: 1) extraction of individual nuclei as well as nuclei clumps of both positive and negative nuclei from real whole-slide images (WSI), and 2) systematic placement of the extracted nuclei clumps on an image canvas. The resulting images are visually similar to the original tissue images. We created a set of 42 images with different concentrations of positive and negative nuclei. These images were evaluated by four board-certified pathologists in the task of estimating the ratio of positive to total number of nuclei. The resulting concordance correlation coefficients (CCC) between the pathologists and the true ratio range from 0.86 to 0.95 (point estimates). The same ratio was also computed by an automated computer algorithm, which yielded a CCC value of 0.99. Reading the phantom data with known ground truth, the human readers show substantial variability and lower average performance than the computer algorithm in terms of CCC. This shows the limitation of using a human reader panel to establish a reference standard for the evaluation of computer algorithms, thereby highlighting the usefulness of the phantom developed in this work. Using our phantom images, we further developed a function that can approximate the true ratio from the area of the positive and negative nuclei, hence avoiding the need to detect individual nuclei. 
The predicted ratios of 10 held-out images using the function (trained on 32 images) are within ±2.68% of the true ratio. Moreover, we also report the evaluation of a computerized image analysis method on the synthetic tissue dataset.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  7. Abas FS, Shana'ah A, Christian B, Hasserjian R, Louissaint A, Pennell M, et al.
    Cytometry A, 2017 Jun;91(6):609-621.
    PMID: 28110507 DOI: 10.1002/cyto.a.23049
    The advent of high-resolution digital scans of pathology slides has allowed the development of computer-based image analysis algorithms that may help pathologists in IHC stain quantification. While very promising, these methods require further refinement before they are implemented in a routine clinical setting. It is particularly critical to evaluate algorithm performance in a setting similar to current clinical practice. In this article, we present a pilot study that evaluates the use of a computerized cell quantification method in the clinical estimation of CD3 positive (CD3+) T cells in follicular lymphoma (FL). Our goal is to demonstrate the degree to which computerized quantification is comparable to the practice of estimation by a panel of expert pathologists. The computerized quantification method uses entropy based histogram thresholding to separate brown (CD3+) and blue (CD3-) regions after a color space transformation. A panel of four board-certified hematopathologists evaluated a database of 20 FL images using two different reading methods: visual estimation and manual marking of each CD3+ cell in the images. These image data and the readings provided a reference standard and the range of variability among readers. Sensitivity and specificity measures of the computer's segmentation of CD3+ and CD3- T cells were recorded. For all four pathologists, mean sensitivity and specificity measures are 90.97% and 88.38%, respectively. The computerized quantification method agrees more closely with the manual cell marking than with the visual estimations. Statistical comparison between the computerized quantification method and the pathologist readings demonstrated good agreement, with correlation coefficient values of 0.81 and 0.96 in terms of Lin's concordance correlation and Spearman's correlation coefficient, respectively. These values are higher than most of those calculated among the pathologists. 
In the future, the computerized quantification method may be used to investigate the relationship between the overall architectural pattern (i.e., interfollicular vs. follicular) and outcome measures (e.g., overall survival, and time to treatment). © 2017 International Society for Advancement of Cytometry.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  8. Mehdy MM, Ng PY, Shair EF, Saleh NIM, Gomes C
    Comput Math Methods Med, 2017;2017:2610628.
    PMID: 28473865 DOI: 10.1155/2017/2610628
    Medical imaging techniques have been widely used in the diagnosis and detection of breast cancer. The drawback of applying these techniques is the large amount of time consumed in the manual diagnosis of each image pattern by a professional radiologist. Automated classifiers could substantially upgrade the diagnosis process, in terms of both accuracy and time requirement, by distinguishing benign and malignant patterns automatically. The neural network (NN) plays an important role in this respect, especially in the application of breast cancer detection. Despite the large number of publications that describe the utilization of NN in various medical techniques, only a few reviews are available that guide the development of these algorithms to enhance the detection techniques with respect to specificity and sensitivity. The purpose of this review is to analyze the contents of recently published literature with special attention to techniques and the state of the art of NN in medical imaging. We discuss the usage of NN in four different medical imaging applications to show that NN is not restricted to a few areas of medicine. The types of NN used, along with the various types of input data, have been reviewed. We also address hybrid NN adaptation in breast cancer detection.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  9. Shamim S, Awan MJ, Mohd Zain A, Naseem U, Mohammed MA, Garcia-Zapirain B
    J Healthc Eng, 2022;2022:6566982.
    PMID: 35422980 DOI: 10.1155/2022/6566982
    The coronavirus (COVID-19) pandemic has had a terrible impact on human lives globally, with far-reaching consequences for the health and well-being of many people around the world. Statistically, 305.9 million people worldwide tested positive for COVID-19, and 5.48 million people died due to COVID-19 up to 10 January 2022. CT scans can be used as an alternative to time-consuming RT-PCR testing for COVID-19. This research work proposes a segmentation approach for identifying ground glass opacity (GGO), the region of interest, in CT images of lungs affected by coronavirus, with a modified structure of the Unet model used to classify the region of interest at the pixel level. The problem with segmentation is that the GGO often appears indistinguishable from a healthy lung in the initial stages of COVID-19; to cope with this, the set of weights in the contracting and expanding Unet paths is increased and an improved convolutional module is added in order to establish the connection between the encoder and decoder pipeline. This has a major capacity to segment the GGO in the case of COVID-19, with the proposed model being referred to as "convUnet." The experiment was performed on the Medseg1 dataset, and the addition of a set of weights at each layer of the model and modification of the connected module in Unet led to an improvement in overall segmentation results. The quantitative results obtained using accuracy, recall, precision, dice-coefficient, F1 score, and IOU were 93.29%, 93.01%, 93.67%, 92.46%, 93.34%, and 86.96%, respectively, which is better than that obtained using Unet and other state-of-the-art models. Therefore, this segmentation approach proved to be more accurate, fast, and reliable in helping doctors to diagnose COVID-19 quickly and efficiently.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
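The pixel-level scores reported above (dice coefficient and IOU) are standard set-overlap measures on binary masks; a minimal sketch:

```python
import numpy as np

def seg_metrics(pred, truth):
    """Dice coefficient and IoU for binary segmentation masks, the
    pixel-level overlap measures reported in the abstract."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.logical_and(pred, truth).sum()          # overlap pixels
    dice = 2 * tp / (pred.sum() + truth.sum())      # 2|A∩B| / (|A|+|B|)
    iou = tp / np.logical_or(pred, truth).sum()     # |A∩B| / |A∪B|
    return dice, iou

# Predicted mask covers 4 pixels, ground truth 6, overlap 4.
pred  = [[1, 1, 0, 0],
         [1, 1, 0, 0]]
truth = [[1, 1, 1, 0],
         [1, 1, 1, 0]]
dice, iou = seg_metrics(pred, truth)
print(round(dice, 3), round(iou, 3))  # 0.8 0.667
```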
  10. Acharya UR, Faust O, Ciaccio EJ, Koh JEW, Oh SL, Tan RS, et al.
    Comput Methods Programs Biomed, 2019 Jul;175:163-178.
    PMID: 31104705 DOI: 10.1016/j.cmpb.2019.04.018
    BACKGROUND AND OBJECTIVE: Complex fractionated atrial electrograms (CFAE) may contain information concerning the electrophysiological substrate of atrial fibrillation (AF); therefore they are of interest to guide catheter ablation treatment of AF. Electrogram signals are shaped by activation events, which are dynamical in nature. This makes it difficult to establish those signal properties that can provide insight into the ablation site location. Nonlinear measures may improve information. To test this hypothesis, we used nonlinear measures to analyze CFAE.

    METHODS: CFAE from several atrial sites, recorded for a duration of 16 s, were acquired from 10 patients with persistent and 9 patients with paroxysmal AF. These signals were appraised using non-overlapping windows of 1-, 2- and 4-s durations. The resulting data sets were analyzed with Recurrence Plots (RP) and Recurrence Quantification Analysis (RQA). The data was also quantified via entropy measures.

    RESULTS: RQA exhibited unique plots for persistent versus paroxysmal AF. Similar patterns were observed to be repeated throughout the RPs. Trends were consistent for signal segments of 1 and 2 s as well as 4 s in duration. This was suggestive that the underlying signal generation process is also repetitive, and that repetitiveness can be detected even in 1-s sequences. The results also showed that most entropy metrics exhibited higher measurement values (closer to equilibrium) for persistent AF data. It was also found that Determinism (DET), Trapping Time (TT), and Modified Multiscale Entropy (MMSE), extracted from signals that were acquired from locations at the posterior atrial free wall, are highly discriminative of persistent versus paroxysmal AF data.

    CONCLUSIONS: Short data sequences are sufficient to provide information to discern persistent versus paroxysmal AF data with a significant difference, and can be useful to detect repeating patterns of atrial activation.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
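Recurrence Quantification Analysis builds a binary recurrence matrix from a signal and measures how much of it falls on diagonal lines (Determinism, DET). A simplified sketch, not the authors' RQA pipeline: real RQA uses delay embedding and a minimum diagonal line length, whereas this version only checks for a recurrent diagonal neighbour.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot: R[i, j] = 1 when |x[i] - x[j]| <= eps."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)

def determinism(R):
    """Crude DET: fraction of off-diagonal recurrence points that have
    a recurrent diagonal neighbour (a simplified RQA measure)."""
    n = len(R)
    rec = on_line = 0
    for i in range(n):
        for j in range(n):
            if i != j and R[i, j]:
                rec += 1
                if (i > 0 and j > 0 and R[i-1, j-1]) or \
                   (i < n - 1 and j < n - 1 and R[i+1, j+1]):
                    on_line += 1
    return on_line / rec if rec else 0.0

# A periodic signal's recurrences form long diagonals, so its DET is
# well above that of white noise; this kind of contrast underlies the
# discrimination of persistent vs. paroxysmal AF described above.
rng = np.random.default_rng(1)
t = np.arange(200)
det_periodic = determinism(recurrence_matrix(np.sin(2 * np.pi * t / 25), 0.1))
det_noise = determinism(recurrence_matrix(rng.uniform(-1, 1, 200), 0.1))
print(det_periodic > det_noise)  # True
```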
  11. Logeswaran R, Eswaran C
    J Med Syst, 2006 Apr;30(2):133-8.
    PMID: 16705998
    Many medical examinations involve acquisition of a large series of slice images for 3D reconstruction of the organ of interest. With the paperless hospital concept and telemedicine, there is very heavy utilization of limited electronic storage and transmission bandwidth. This paper proposes model-based compression to reduce the load on such resources, as well as aid diagnosis through the 3D reconstruction of the structures of interest, for images acquired by various modalities, such as MRI, ultrasound, CT, PET, etc., and stored in the DICOM file format. An example implementation for the biliary tract in MRCP images is illustrated in the paper. Significant compression gains may be derived from the proposed method, and a suitable mixture of the models and raw images would enhance patient medical history archives, as the models may be stored in the DICOM file format used in most medical archiving systems.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  12. Chew HP, Zakian CM, Pretty IA, Ellwood RP
    Caries Res, 2014;48(3):254-62.
    PMID: 24481141 DOI: 10.1159/000354411
    BACKGROUND: Measurement of initial enamel erosion is currently limited to in vitro methods. Optical coherence tomography (OCT) and quantitative light-induced fluorescence (QLF) have been used clinically to study advanced erosion. Little is known about their potential on initial enamel erosion.

    OBJECTIVES: To evaluate the sensitivity of QLF and OCT in detecting initial dental erosion in vitro.

    METHODS: 12 human incisors were embedded in resin except for a window on the buccal surface. Bonding agent was applied to half of the window, creating an exposed and non-exposed area. Baseline measurements were taken with QLF, OCT and surface microhardness. Samples were immersed in orange juice for 60 min and measurements taken stepwise every 10 min. QLF was used to compare the loss of fluorescence between the two areas. The OCT system, OCS1300SS (Thorlabs Ltd.), was used to record the intensity of backscattered light of both areas. Multiple linear regression and paired t test were used to compare the change of the outcome measures.

    RESULTS: All 3 instruments demonstrated significant dose responses with the erosive challenge interval (p < 0.05) and a detection threshold of 10 min from baseline. Thereafter, surface microhardness demonstrated significant changes after every 10 min of erosion, QLF at 4 erosive intervals (20, 40, 50 and 60 min) while OCT at only 2 (50 and 60 min).

    CONCLUSION: It can be concluded that OCT and QLF were able to detect demineralization after 10 min of erosive challenge and could be used to monitor the progression of demineralization of initial enamel erosion in vitro.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  13. Abu A, Susan LL, Sidhu AS, Dhillon SK
    BMC Bioinformatics, 2013;14:48.
    PMID: 23398696 DOI: 10.1186/1471-2105-14-48
    Digitised monogenean images are usually stored in file system directories in an unstructured manner. In this paper we propose a semantic representation of these images in the form of a Monogenean Haptoral Bar Image (MHBI) ontology, in which images are annotated with taxonomic classification, diagnostic hard parts and image properties. The data we used are basically of monogenean species found in fish, thus we built a simple Fish ontology to demonstrate how the host (fish) ontology can be linked to the MHBI ontology. This enables linking of information from the monogenean ontology to the host species found in the fish ontology without changing the underlying schema of either ontology.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  14. Yousef Kalafi E, Tan WB, Town C, Dhillon SK
    BMC Bioinformatics, 2016 Dec 22;17(Suppl 19):511.
    PMID: 28155722 DOI: 10.1186/s12859-016-1376-z
    BACKGROUND: Monogeneans are flatworms (Platyhelminthes) that are primarily found on the gills and skin of fishes. Monogenean parasites have attachment appendages at their haptoral regions that help them to move about the body surface and feed on skin and gill debris. Haptoral attachment organs consist of sclerotized hard parts such as hooks, anchors and marginal hooks. Monogenean species are differentiated based on the morphological characters of their haptoral bars, anchors, marginal hooks, reproductive parts (male and female copulatory organs) and soft anatomical parts. The complex structure of these diagnostic organs, and their overlap in microscopic digital images, are impediments to developing a fully automated identification system for monogeneans (LNCS 7666:256-263, 2012; ISDA: 457-462, 2011; J Zoolog Syst Evol Res 52(2):95-99, 2013). In this study, images of hard parts of the haptoral organs, such as bars and anchors, are used to develop a fully automated technique for monogenean species identification by implementing image processing techniques and machine learning methods.

    RESULT: Images of four monogenean species namely Sinodiplectanotrema malayanus, Trianchoratus pahangensis, Metahaliotrema mizellei and Metahaliotrema sp. (undescribed) were used to develop an automated technique for identification. K-nearest neighbour (KNN) was applied to classify the monogenean specimens based on the extracted features. 50% of the dataset was used for training and the other 50% was used as testing for system evaluation. Our approach demonstrated overall classification accuracy of 90%. In this study Leave One Out (LOO) cross validation is used for validation of our system and the accuracy is 91.25%.

    CONCLUSIONS: The methods presented in this study facilitate fast and accurate fully automated classification of monogeneans at the species level. In future studies more classes will be included in the model, the time to capture the monogenean images will be reduced and improvements in extraction and selection of features will be implemented.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
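The classification step, KNN with leave-one-out cross-validation, can be sketched on toy two-class feature data (the features and classes below are synthetic stand-ins for the extracted haptoral measurements, not the study's data):

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[counts.argmax()]

def loo_accuracy(X, y, k=3):
    """Leave-one-out cross-validation: hold out each specimen in turn
    and classify it against all the others."""
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        hits += knn_predict(X[mask], y[mask], X[i], k) == y[i]
    return hits / len(X)

# Two well-separated synthetic "species" clusters in feature space
# classify perfectly under leave-one-out.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(loo_accuracy(X, y))  # 1.0
```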
  15. Oyelade ON, Ezugwu AE, Almutairi MS, Saha AK, Abualigah L, Chiroma H
    Sci Rep, 2022 Apr 13;12(1):6166.
    PMID: 35418566 DOI: 10.1038/s41598-022-09929-9
    Deep learning (DL) models are becoming pervasive and applicable to computer vision, image processing, and synthesis problems. The performance of these models is often improved through architectural configuration, tweaks, the use of enormous training data, and skillful selection of hyperparameters. The application of deep learning models to medical image processing has yielded interesting performance, capable of correctly detecting abnormalities in medical digital images, making them surpass human physicians. However, advancing research in this domain largely relies on the availability of training datasets. These datasets are sometimes not publicly accessible, insufficient for training, and may also be characterized by a class imbalance among samples. As a result, inadequate training samples and difficulty in accessing new datasets for training deep learning models limit performance and research into new domains. Hence, generative adversarial networks (GANs) have been proposed to mediate this gap by synthesizing data similar to real sample images. However, we observed that benchmark datasets with regions of interest (ROIs) for characterizing abnormalities in breast cancer using digital mammography do not contain sufficient data with a fair distribution of all cases of abnormalities. For instance, the architectural distortion and breast asymmetry in digital mammograms are sparsely distributed across most publicly available datasets. This paper proposes a GAN model, named ROImammoGAN, which synthesizes ROI-based digital mammograms. Our approach involves the design of a GAN model consisting of both a generator and a discriminator to learn a hierarchy of representations for abnormalities in digital mammograms. Attention is given to architectural distortion, asymmetry, mass, and microcalcification abnormalities so that training distinctively learns the features of each abnormality and generates sufficient images for each category. 
The proposed GAN model was applied to the MIAS dataset, and the performance evaluation yielded a competitive accuracy for the synthesized samples. In addition, the quality of the images generated was also evaluated using PSNR, SSIM, FSIM, BRISQUE, PIQE, NIQE, FID, and geometry scores. The results showed that ROImammoGAN performed competitively with state-of-the-art GANs. The outcome of this study is a model for augmenting CNN models with ROI-centric image samples for the characterization of abnormalities in breast images.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  16. Tiong KH, Chang JK, Pathmanathan D, Hidayatullah Fadlullah MZ, Yee PS, Liew CS, et al.
    Biotechniques, 2018 Dec;65(6):322-330.
    PMID: 30477327 DOI: 10.2144/btn-2018-0072
    We describe a novel automated cell detection and counting software, QuickCount® (QC), designed for rapid quantification of cells. The Bland-Altman plot and intraclass correlation coefficient (ICC) analyses demonstrated strong agreement between cell counts from QC and manual counts (mean and SD: -3.3 ± 4.5; ICC = 0.95). QC has higher recall than ImageJauto, CellProfiler and CellC, while the precision of QC, ImageJauto, CellProfiler and CellC is high and comparable. QC can delineate and count single cells from images of different cell densities with precision and recall above 0.9. QC is unique in that it is equipped with a real-time preview while the parameters are optimized for accurate cell counts, and it needs minimal hands-on time: hundreds of images can be analyzed automatically in a matter of milliseconds. In conclusion, QC offers a rapid, accurate and versatile solution for large-scale cell quantification and addresses the challenges often faced in cell biology research.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  17. Chai HY, Wee LK, Swee TT, Salleh ShH, Chea LY
    Biomed Eng Online, 2011;10:87.
    PMID: 21952080 DOI: 10.1186/1475-925X-10-87
    Segmentation is the most crucial part of computer-aided bone age assessment. A well-known type of segmentation performed in such systems is adaptive segmentation. While providing better results than the global thresholding method, adaptive segmentation produces a lot of unwanted noise that can affect the subsequent process of epiphysis extraction.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  18. Tang JR, Mat Isa NA, Ch'ng ES
    PLoS One, 2015;10(11):e0142830.
    PMID: 26560331 DOI: 10.1371/journal.pone.0142830
    Despite the effectiveness of the Pap-smear test in reducing the mortality rate due to cervical cancer, the criteria of the reporting standard of the Pap-smear test are mostly qualitative in nature. This study addresses the issue of how to define the criteria in more quantitative and definite terms. A negative Pap-smear test result, i.e. negative for intraepithelial lesion or malignancy (NILM), is qualitatively defined to have evenly distributed, finely granular chromatin in the nuclei of cervical squamous cells. To quantify this chromatin pattern, this study employed Fuzzy C-Means clustering as the segmentation technique, enabling different degrees of chromatin segmentation to be performed on sample images of non-neoplastic squamous cells. From the simulation results, a model representing the chromatin distribution of non-neoplastic cervical squamous cells is constructed with the following quantitative characteristics: at the best representative sensitivity level 4, based on statistical analysis and human experts' feedback, a nucleus of a non-neoplastic squamous cell has an average of 67 chromatins with a total area of 10.827 μm2; the average distance between the nearest chromatin pair is 0.508 μm and the average eccentricity of the chromatin is 0.47.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
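Fuzzy C-Means, the segmentation technique used in this study, alternates membership and centroid updates until the cluster centers settle. A minimal 1-D sketch on synthetic grey levels (not the authors' image pipeline; `fuzzy_cmeans` and its parameters are illustrative):

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means on 1-D data: alternate the centroid update
    (fuzzily weighted mean) and the membership update (inverse-distance
    weights raised to 2/(m-1)); returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(iters):
        centers = (u**m @ x) / (u**m).sum(axis=1)
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / d ** (2 / (m - 1))
        u /= u.sum(axis=0)
    return centers, u

# Grey levels from two populations ("chromatin" vs. background):
# the two centroids land near 50 and 200.
vals = np.array([48, 50, 52, 49, 198, 200, 202, 201], dtype=float)
centers, _ = fuzzy_cmeans(vals)
centers = np.sort(centers)
print(np.allclose(centers, [50, 200], atol=3))  # True
```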
  19. Bilgen M
    Australas Phys Eng Sci Med, 2010 Dec;33(4):357-66.
    PMID: 21110236 DOI: 10.1007/s13246-010-0039-z
    Homogenous strain analysis (HSA) was developed to evaluate regional cardiac function using tagged cine magnetic resonance images of the heart. Current cardiac applications of HSA are, however, limited in accurately detecting tag intersections within the myocardial wall, in producing consistent triangulation of tag cells throughout the image series, and in achieving optimal spatial resolution due to the large size of the triangles. To address these issues, this article introduces a harmonic phase (HARP) interference method. In principle, as in the standard HARP analysis, the method uses the harmonic phases associated with two of the four fundamental peaks in the spectrum of a tagged image. However, the phase associated with each peak is wrapped when estimated digitally. This article shows that a special combination of wrapped phases results in an image with a unique intensity pattern that can be exploited to automatically detect tag intersections and to produce reliable triangulation with regularly organized partitioning of the mesh for HSA. In addition, the method offers new opportunities and freedom for evaluating myocardial function when the power and angle of the complex filtered spectra are mathematically modified prior to computing the phase. For example, the triangular elements can be shifted spatially by changing the angle and/or their sizes can be reduced by changing the power. Interference patterns obtained under a variety of power and angle conditions are presented and specific features observed in the results are explained. Together, the advanced processing capabilities increase the power of HSA by making the analysis less prone to errors from human interaction. The method also allows strain measurements at higher spatial resolution and multiple scales, thereby improving the display methods for better interpretation of the analysis results.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  20. Rahmatullah B, Besar R
    J Med Eng Technol, 2009;33(6):417-25.
    PMID: 19637083 DOI: 10.1080/03091900802451232
    The motivation of this paper is to analyse the efficiency and reliability of our proposed algorithm for femur length (FL) measurement for the estimation of gestational age. The automated method is divided into the following components: thresholding, segmentation and extraction. Each component is examined, and improvements are made with the objective of finding the optimal result for FL measurement. The methods are tested on a total of 200 different digitized ultrasound images from our database collection. Overall, the study shows that the watershed-based segmentation method, combined with an enhanced femur extraction algorithm and a 12 x 12 block-averaging seed-point threshold method, performs as well as the expert measurements for every image tested and is superior to a previous method.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
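The block-averaging seed-point threshold mentioned above can be illustrated roughly: threshold the image at the mean intensity of the block containing a chosen seed point. This is a sketch of the idea under assumed conventions, not the authors' exact algorithm:

```python
import numpy as np

def block_average_threshold(img, seed, block=12):
    """Threshold an image at the mean intensity of the block x block
    tile containing the seed point (a sketch of the seed-point
    block-averaging idea, not the paper's exact method)."""
    r, c = seed
    r0, c0 = (r // block) * block, (c // block) * block
    t = img[r0:r0 + block, c0:c0 + block].mean()
    return (img > t).astype(np.uint8)

# Synthetic "femur": a bright 4x20 bar on a dark background. Seeding
# inside the bar keeps the bar and suppresses the background.
img = np.full((24, 24), 40, dtype=float)
img[10:14, 2:22] = 220.0
mask = block_average_threshold(img, seed=(12, 6))
print(int(mask.sum()))  # 80  (the 4x20 bright bar survives)
```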