  1. Tiong KH, Chang JK, Pathmanathan D, Hidayatullah Fadlullah MZ, Yee PS, Liew CS, et al.
    Biotechniques, 2018 12;65(6):322-330.
    PMID: 30477327 DOI: 10.2144/btn-2018-0072
    We describe a novel automated cell detection and counting software, QuickCount® (QC), designed for rapid quantification of cells. The Bland-Altman plot and intraclass correlation coefficient (ICC) analyses demonstrated strong agreement between cell counts from QC and manual counts (mean and SD: -3.3 ± 4.5; ICC = 0.95). QC has higher recall than ImageJauto, CellProfiler and CellC, while the precision of QC, ImageJauto, CellProfiler and CellC is high and comparable. QC can delineate and count single cells from images of different cell densities with precision and recall above 0.9. QC is unique in that it provides a real-time preview while the parameters are optimized for accurate cell counts, and it requires minimal hands-on time, as hundreds of images can be analyzed automatically in a matter of milliseconds. In conclusion, QC offers a rapid, accurate and versatile solution for large-scale cell quantification and addresses the challenges often faced in cell biology research.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
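    As an illustration of the agreement analysis quoted above, the short sketch below computes a Bland-Altman bias, its SD and the 95% limits of agreement in Python; the count arrays are invented placeholders, not data from the paper, and the ICC reported in the abstract would need a separate mixed-model computation.

```python
# Hypothetical sketch: Bland-Altman agreement between automated and manual cell counts.
# The count values below are made-up illustrations, not data from the paper.
import numpy as np

auto_counts = np.array([102, 87, 150, 64, 98, 121, 73, 140], dtype=float)
manual_counts = np.array([105, 90, 152, 68, 101, 126, 77, 143], dtype=float)

diff = auto_counts - manual_counts            # per-image disagreement
bias = diff.mean()                            # systematic offset (the paper reports -3.3)
sd = diff.std(ddof=1)                         # spread of disagreement (the paper reports 4.5)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)    # 95% limits of agreement

print(f"bias = {bias:.2f} ± {sd:.2f}, 95% LoA = [{loa[0]:.2f}, {loa[1]:.2f}]")
```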
  2. Abdullah A, Mahmud MR, Maimunah A, Zulfiqar MA, Saim L, Mazlan R
    Ann Acad Med Singap, 2003 Jul;32(4):442-5.
    PMID: 12968546
    INTRODUCTION: Accurate preoperative imaging of the temporal bone in patients receiving cochlear implants is important. High resolution computed tomography (HRCT) and magnetic resonance (MR) imaging are the 2 preoperative imaging modalities that provide critical information on abnormalities of the otic capsule, pneumatisation of the mastoid, middle ear abnormalities, cochlear duct patency and the presence of the cochlear nerve.

    MATERIALS AND METHODS: The HRCT and MR imaging in 46 cochlear implant patients in our department were reviewed.

    RESULTS: The majority of our patients [34 patients (73.9%)] showed normal HRCT of the temporal bone; 5 (10.9%) patients had labyrinthitis ossificans, 2 (4.3%) had Mondini's abnormality and 2 (4.3%) had middle ear effusion. One patient each had a high jugular bulb, hypoplasia of the internal auditory canal and a single cochlear cavity.

    CONCLUSION: The above findings contribute significantly to our surgical decisions regarding candidacy for surgery, side selection and surgical technique in cochlear implantation.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  3. Hani AF, Kumar D, Malik AS, Razak R
    Magn Reson Imaging, 2013 Sep;31(7):1059-67.
    PMID: 23731535 DOI: 10.1016/j.mri.2013.01.007
    Osteoarthritis is a common joint disorder that is most prevalent in the knee joint. Knee osteoarthritis (OA) can be characterized by the gradual loss of articular cartilage (AC). Formation of lesions, fissures and cracks on the cartilage surface has been associated with degenerative AC and can be measured by morphological assessment. In addition, loss of proteoglycan from the extracellular matrix of the AC can be measured at an early stage of cartilage degradation by physiological assessment; in this case, a biochemical phenomenon of cartilage is used to assess the changes in early degeneration of AC. In this paper, a method to measure the local sodium concentration in AC due to proteoglycan has been investigated. A clinical 1.5-T magnetic resonance imaging (MRI) system with a multinuclear spectroscopic facility is used to acquire sodium images and quantify the local sodium content of AC. An optimised 3D gradient-echo sequence with low echo time has been used for the MR scan. The estimated sodium concentration in the AC region from four different data sets is found to be ~225±19 mmol/l, which matches the values that have been reported for normal AC. This study shows that sodium images acquired on a clinical 1.5-T MRI system can generate adequate quantitative data to enable the estimation of sodium concentration in AC. We conclude that this method is potentially suitable for non-invasive physiological (sodium content) measurement of articular cartilage.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  4. Raghavendra U, Gudigar A, Maithri M, Gertych A, Meiburger KM, Yeong CH, et al.
    Comput Biol Med, 2018 04 01;95:55-62.
    PMID: 29455080 DOI: 10.1016/j.compbiomed.2018.02.002
    Ultrasound imaging is one of the most common visualizing tools used by radiologists to identify the location of thyroid nodules. However, visual assessment of nodules is difficult and often affected by inter- and intra-observer variabilities. Thus, a computer-aided diagnosis (CAD) system can be helpful to cross-verify the severity of nodules. This paper proposes a new CAD system to characterize thyroid nodules using optimized multi-level elongated quinary patterns. In this study, higher order spectral (HOS) entropy features extracted from these patterns appropriately distinguished benign and malignant nodules under particle swarm optimization (PSO) and support vector machine (SVM) frameworks. Our CAD algorithm achieved a maximum accuracy of 97.71% and 97.01% in private and public datasets respectively. The evaluation of this CAD system on both private and public datasets confirmed its effectiveness as a secondary tool in assisting radiological findings.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  5. Niazi MKK, Abas FS, Senaras C, Pennell M, Sahiner B, Chen W, et al.
    PLoS One, 2018;13(5):e0196547.
    PMID: 29746503 DOI: 10.1371/journal.pone.0196547
    Automatic and accurate detection of positive and negative nuclei from images of immunostained tissue biopsies is critical to the success of digital pathology. The evaluation of most nuclei detection algorithms relies on manually generated ground truth prepared by pathologists, which is unfortunately time-consuming and suffers from inter-pathologist variability. In this work, we developed a digital immunohistochemistry (IHC) phantom that can be used for evaluating computer algorithms for enumeration of IHC positive cells. Our phantom development consists of two main steps: 1) extraction of individual nuclei as well as nuclei clumps of both positive and negative nuclei from real whole slide images (WSI), and 2) systematic placement of the extracted nuclei and clumps on an image canvas. The resulting images are visually similar to the original tissue images. We created a set of 42 images with different concentrations of positive and negative nuclei. These images were evaluated by four board-certified pathologists in the task of estimating the ratio of positive to total number of nuclei. The resulting concordance correlation coefficients (CCC) between the pathologists and the true ratio range from 0.86 to 0.95 (point estimates). The same ratio was also computed by an automated computer algorithm, which yielded a CCC value of 0.99. Reading the phantom data with known ground truth, the human readers show substantial variability and lower average performance than the computer algorithm in terms of CCC. This shows the limitation of using a human reader panel to establish a reference standard for the evaluation of computer algorithms, thereby highlighting the usefulness of the phantom developed in this work. Using our phantom images, we further developed a function that can approximate the true ratio from the areas of the positive and negative nuclei, hence avoiding the need to detect individual nuclei. The predicted ratios of 10 held-out images using the function (trained on 32 images) are within ±2.68% of the true ratio. Moreover, we also report the evaluation of a computerized image analysis method on the synthetic tissue dataset.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
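    The concordance correlation coefficient (CCC) used above to compare readers against the phantom's known ground truth has a closed form; the sketch below is a minimal Python version of Lin's CCC with made-up ratio values, not the study's data.

```python
# Minimal sketch of Lin's concordance correlation coefficient (CCC); the ratio values
# are invented for illustration, not taken from the paper.
import numpy as np

def concordance_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances (ddof=0)
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

true_ratio      = [0.10, 0.25, 0.40, 0.55, 0.70, 0.85]   # phantom ground truth (synthetic)
reader_estimate = [0.12, 0.22, 0.43, 0.52, 0.74, 0.80]   # e.g. one reader's estimates
print(f"CCC = {concordance_ccc(true_ratio, reader_estimate):.3f}")
```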
  6. Rajagopal H, Mokhtar N, Tengku Mohmed Noor Izam TF, Wan Ahmad WK
    PLoS One, 2020;15(5):e0233320.
    PMID: 32428043 DOI: 10.1371/journal.pone.0233320
    Image Quality Assessment (IQA) is essential for the accuracy of systems for automatic recognition of tree species from wood samples. In this study, a No-Reference IQA (NR-IQA) metric for wood images, WNR-IQA, was proposed to assess the quality of wood images. Support Vector Regression (SVR) was trained using Generalized Gaussian Distribution (GGD) and Asymmetric Generalized Gaussian Distribution (AGGD) features measured from the wood images, while the Mean Opinion Score (MOS) was obtained from subjective evaluation. This was followed by a comparison between the proposed WNR-IQA metric, three established NR-IQA metrics, namely the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), deepIQA and Deep Bilinear Convolutional Neural Networks (DB-CNN), and five Full Reference-IQA (FR-IQA) metrics known as MSSIM, SSIM, FSIM, IWSSIM, and GMSD. The proposed WNR-IQA metric, BRISQUE, deepIQA, DB-CNN, and the FR-IQAs were then compared with MOS values to evaluate the performance of the automatic IQA metrics. As a result, the WNR-IQA metric exhibited higher performance than BRISQUE, deepIQA, DB-CNN, and the FR-IQA metrics. The highest quality images may not be routinely available due to logistical factors, such as dust, poor illumination, and the hot environment present in the timber industry. Moreover, motion blur could occur due to the relative motion between the camera and the wood slice. Therefore, the advantage of WNR-IQA lies in its independence from a "perfect" reference image for image quality evaluation.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
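    The GGD features feeding the SVR above are of the BRISQUE family; the sketch below shows, under assumptions, how a GGD shape and scale might be estimated from mean-subtracted contrast-normalized (MSCN) coefficients of an image. The random array stands in for a wood image, and the moment-matching grid inversion is one common estimator, not necessarily the authors' exact procedure.

```python
# Hedged sketch: GGD shape/scale estimation from MSCN coefficients (BRISQUE-style feature).
# The random array is a stand-in for a wood-surface image.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma

def mscn(image, sigma=7 / 6):
    mu = gaussian_filter(image, sigma)
    var = gaussian_filter(image * image, sigma) - mu * mu
    return (image - mu) / (np.sqrt(np.clip(var, 0, None)) + 1.0)

def fit_ggd(coeffs):
    # Moment matching: invert rho(a) = Gamma(1/a) Gamma(3/a) / Gamma(2/a)^2 over a grid.
    alphas = np.arange(0.2, 10.0, 0.001)
    rho_grid = gamma(1 / alphas) * gamma(3 / alphas) / gamma(2 / alphas) ** 2
    rho_hat = np.mean(coeffs ** 2) / (np.mean(np.abs(coeffs)) ** 2 + 1e-12)
    alpha = alphas[np.argmin(np.abs(rho_grid - rho_hat))]
    scale = np.sqrt(np.mean(coeffs ** 2))
    return alpha, scale          # two of the features that would feed the SVR

wood = np.random.rand(256, 256)  # placeholder image
print(fit_ggd(mscn(wood).ravel()))
```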
  7. Sharif KM, Rahman MM, Azmir J, Khatib A, Sabina E, Shamsudin SH, et al.
    Biomed Chromatogr, 2015 Dec;29(12):1826-33.
    PMID: 26033701 DOI: 10.1002/bmc.3503
    Multivariate analysis of thin-layer chromatography (TLC) images was modeled to predict the antioxidant activity of Pereskia bleo leaves and to identify the compounds contributing to the activity. TLC was developed in an optimized mobile phase using the 'PRISMA' optimization method and the image was then converted to wavelet signals and imported for multivariate analysis. An orthogonal partial least squares (OPLS) model was developed with the wavelet-converted TLC image and the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical scavenging activity of 24 different preparations of P. bleo as the x- and y-variables, respectively. The quality of the constructed OPLS model (1 + 1 + 0), with one predictive and one orthogonal component, was evaluated by internal and external validity tests. The validated model was then used to identify the contributing spot on the TLC plate, which was then analyzed by GC-MS after trimethylsilyl derivatization. Glycerol and amine compounds were mainly found to contribute to the antioxidant activity of the sample. An alternative method to predict the antioxidant activity of a new sample of P. bleo leaves has been developed.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
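    scikit-learn offers no OPLS, so the hedged sketch below substitutes ordinary PLS regression to convey the idea of regressing activity on image-derived signals; the matrices are random placeholders rather than wavelet-converted TLC data.

```python
# Illustrative sketch only: plain PLS regression standing in for the OPLS model above.
# X would hold wavelet-converted TLC signals and y the DPPH scavenging activity.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(24, 512))         # 24 preparations x wavelet coefficients (synthetic)
y = rng.uniform(20, 90, size=24)       # % radical scavenging activity (made up)

pls = PLSRegression(n_components=2)
print(cross_val_score(pls, X, y, cv=4, scoring="r2"))   # crude internal validity check
pls.fit(X, y)
# Large-magnitude coefficients point to the signal regions driving the prediction,
# analogous to locating the contributing TLC spot for GC-MS follow-up.
print(np.argsort(np.abs(pls.coef_.ravel()))[-5:])
```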
  8. Logeswaran R, Eswaran C
    J Med Syst, 2006 Apr;30(2):133-8.
    PMID: 16705998
    Many medical examinations involve the acquisition of a large series of slice images for 3D reconstruction of the organ of interest. With the paperless hospital concept and telemedicine, there is very heavy utilization of limited electronic storage and transmission bandwidth. This paper proposes model-based compression to reduce the load on such resources, as well as to aid diagnosis through the 3D reconstruction of the structures of interest, for images acquired by various modalities, such as MRI, ultrasound, CT and PET, and stored in the DICOM file format. An example implementation for the biliary tract in MRCP images is illustrated in the paper. Significant compression gains may be derived from the proposed method, and a suitable mixture of the models and raw images would enhance patient medical history archives, as the models may be stored in the DICOM file format used in most medical archiving systems.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  9. Chong JWR, Khoo KS, Chew KW, Vo DN, Balakrishnan D, Banat F, et al.
    Bioresour Technol, 2023 Feb;369:128418.
    PMID: 36470491 DOI: 10.1016/j.biortech.2022.128418
    The identification of microalgae species is an important tool in scientific research and commercial applications, both to prevent harmful algal blooms (HABs) and to recognize potential microalgae strains for the bioaccumulation of valuable bioactive ingredients. The aim of this study is to incorporate rapid, high-accuracy, reliable, low-cost, simple, and state-of-the-art identification methods, thereby increasing the possibility of developing recognition applications that could identify toxin-producing and valuable microalgae strains. Recently, deep learning (DL) has brought the study of microalgae species identification to a much higher level of efficiency and accuracy. Accordingly, this review paper emphasizes the significance of microalgae identification and the various forms of machine learning algorithms for image classification, followed by image pre-processing techniques, feature extraction, and selection for further classification accuracy. Future prospects concerning the challenges and improvements of potential DL classification model development, applications in microalgae recognition, and image capturing technologies are discussed accordingly.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  10. Shyam Sunder R, Eswaran C, Sriraam N
    Comput Biol Med, 2006 Sep;36(9):958-73.
    PMID: 16026779
    In this paper, the 3-D discrete Hartley transform is applied to the compression of two medical image modalities, namely magnetic resonance images and X-ray angiograms, and the performance results are compared with those of the 3-D discrete cosine and Fourier transforms using parameters such as PSNR and bit rate. It is shown that the 3-D discrete Hartley transform is better than the other two transforms for magnetic resonance brain images, whereas for X-ray angiograms the 3-D discrete cosine transform is found to be superior.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
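    For readers unfamiliar with the Hartley transform, the sketch below computes a 3-D discrete Hartley transform from the FFT (real part minus imaginary part), discards small coefficients as a crude stand-in for compression, and reports PSNR; the random volume and the 90% truncation threshold are arbitrary illustrations, not the paper's codec.

```python
# Rough sketch: 3-D discrete Hartley transform (from the FFT) plus PSNR after
# coefficient truncation. The random volume stands in for an MR/angiogram stack.
import numpy as np

def dht3(volume):
    F = np.fft.fftn(volume)
    return F.real - F.imag           # cas-kernel (Hartley) transform of a real volume

def idht3(H):
    return dht3(H) / H.size          # the DHT is its own inverse up to a 1/N factor

def psnr(orig, recon):
    mse = np.mean((orig - recon) ** 2)
    return 10 * np.log10(orig.max() ** 2 / mse)

vol = np.random.rand(32, 32, 32)
H = dht3(vol)
H[np.abs(H) < np.percentile(np.abs(H), 90)] = 0.0   # keep only the largest 10% of coefficients
print(f"PSNR = {psnr(vol, idht3(H)):.2f} dB")
```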
  11. Chew HP, Zakian CM, Pretty IA, Ellwood RP
    Caries Res, 2014;48(3):254-62.
    PMID: 24481141 DOI: 10.1159/000354411
    BACKGROUND: Measurement of initial enamel erosion is currently limited to in vitro methods. Optical coherence tomography (OCT) and quantitative light-induced fluorescence (QLF) have been used clinically to study advanced erosion. Little is known about their potential on initial enamel erosion.

    OBJECTIVES: To evaluate the sensitivity of QLF and OCT in detecting initial dental erosion in vitro.

    METHODS: 12 human incisors were embedded in resin except for a window on the buccal surface. Bonding agent was applied to half of the window, creating an exposed and non-exposed area. Baseline measurements were taken with QLF, OCT and surface microhardness. Samples were immersed in orange juice for 60 min and measurements taken stepwise every 10 min. QLF was used to compare the loss of fluorescence between the two areas. The OCT system, OCS1300SS (Thorlabs Ltd.), was used to record the intensity of backscattered light of both areas. Multiple linear regression and paired t test were used to compare the change of the outcome measures.

    RESULTS: All 3 instruments demonstrated significant dose responses with the erosive challenge interval (p < 0.05) and a detection threshold of 10 min from baseline. Thereafter, surface microhardness demonstrated significant changes after every 10 min of erosion, QLF at 4 erosive intervals (20, 40, 50 and 60 min) while OCT at only 2 (50 and 60 min).

    CONCLUSION: It can be concluded that OCT and QLF were able to detect demineralization after 10 min of erosive challenge and could be used to monitor the progression of demineralization of initial enamel erosion in vitro.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  12. Alsaih K, Lemaitre G, Rastgoo M, Massich J, Sidibé D, Meriaudeau F
    Biomed Eng Online, 2017 Jun 07;16(1):68.
    PMID: 28592309 DOI: 10.1186/s12938-017-0352-9
    BACKGROUND: Spectral domain optical coherence tomography (SD-OCT) is the imaging equipment most widely used in ophthalmology to detect diabetic macular edema (DME). Indeed, it offers an accurate visualization of the morphology of the retina as well as of the retinal layers.

    METHODS: The dataset used in this study has been acquired by the Singapore Eye Research Institute (SERI), using CIRRUS TM (Carl Zeiss Meditec, Inc., Dublin, CA, USA) SD-OCT device. The dataset consists of 32 OCT volumes (16 DME and 16 normal cases). Each volume contains 128 B-scans with resolution of 1024 px × 512 px, resulting in more than 3800 images being processed. All SD-OCT volumes are read and assessed by trained graders and identified as normal or DME cases based on evaluation of retinal thickening, hard exudates, intraretinal cystoid space formation, and subretinal fluid. Within the DME sub-set, a large number of lesions has been selected to create a rather complete and diverse DME dataset. This paper presents an automatic classification framework for SD-OCT volumes in order to identify DME versus normal volumes. In this regard, a generic pipeline including pre-processing, feature detection, feature representation, and classification was investigated. More precisely, extraction of histogram of oriented gradients and local binary pattern (LBP) features within a multiresolution approach is used as well as principal component analysis (PCA) and bag of words (BoW) representations.

    RESULTS AND CONCLUSION: Besides comparing individual and combined features, different representation approaches and different classifiers are evaluated. The best results are obtained for LBP[Formula: see text] vectors while represented and classified using PCA and a linear-support vector machine (SVM), leading to a sensitivity(SE) and specificity (SP) of 87.5 and 87.5%, respectively.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
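    A toy version of the entry's generic pipeline (LBP histograms, PCA, linear SVM) is sketched below; the random arrays stand in for OCT B-scans, and the per-image evaluation ignores the volume-level grouping the study would require.

```python
# Hedged sketch of an LBP -> PCA -> linear SVM pipeline on placeholder images;
# this is not the authors' code, and real OCT B-scans would replace the random arrays.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def lbp_histogram(img, P=8, R=1):
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(1)
images = (rng.random((64, 128, 128)) * 255).astype(np.uint8)   # stand-ins for B-scans
labels = rng.integers(0, 2, size=64)                           # 0 = normal, 1 = DME (synthetic)

X = np.array([lbp_histogram(im) for im in images])
clf = make_pipeline(StandardScaler(), PCA(n_components=5), LinearSVC())
clf.fit(X, labels)
print(clf.score(X, labels))   # on real data, evaluate with volume-wise cross-validation
```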
  13. Wan Ahmad WS, Zaki WM, Ahmad Fauzi MF
    Biomed Eng Online, 2015;14:20.
    PMID: 25889188 DOI: 10.1186/s12938-015-0014-8
    An unsupervised lung segmentation method is one of the mandatory processes in developing a Content Based Medical Image Retrieval System (CBMIRS) for chest radiographs (CXR). The purpose of the study is to present a robust solution for lung segmentation of standard and mobile chest radiographs using a fully automated unsupervised method.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  14. Al-Shabi M, Lan BL, Chan WY, Ng KH, Tan M
    Int J Comput Assist Radiol Surg, 2019 Oct;14(10):1815-1819.
    PMID: 31020576 DOI: 10.1007/s11548-019-01981-7
    PURPOSE: Lung nodules have very diverse shapes and sizes, which makes classifying them as benign/malignant a challenging problem. In this paper, we propose a novel method to predict the malignancy of nodules that have the capability to analyze the shape and size of a nodule using a global feature extractor, as well as the density and structure of the nodule using a local feature extractor.

    METHODS: We propose to use Residual Blocks with a 3 × 3 kernel size for local feature extraction and Non-Local Blocks to extract the global features. The Non-Local Block has the ability to extract global features without using a huge number of parameters. The key idea behind the Non-Local Block is to apply matrix multiplications between features on the same feature maps.

    RESULTS: We trained and validated the proposed method on the LIDC-IDRI dataset which contains 1018 computed tomography scans. We followed a rigorous procedure for experimental setup, namely tenfold cross-validation, and ignored the nodules that had been annotated by

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
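    The Non-Local Block idea mentioned above, pairwise matrix multiplications between features on the same feature map, can be illustrated in a few lines of numpy; the shapes and random projection matrices below are assumptions for demonstration, not the trained network.

```python
# Loose numpy sketch of the core of a Non-Local Block: pairwise affinities between all
# positions of one feature map, computed with matrix products.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def non_local(x, w_theta, w_phi, w_g):
    # x: (N, C) feature map flattened over its N spatial positions, C channels.
    theta, phi, g = x @ w_theta, x @ w_phi, x @ w_g   # (N, C') embeddings
    attn = softmax(theta @ phi.T, axis=-1)            # (N, N) position-to-position weights
    return attn @ g                                   # each position aggregates all others

rng = np.random.default_rng(0)
feat = rng.normal(size=(16 * 16, 32))                 # 16x16 map with 32 channels (synthetic)
w = [rng.normal(size=(32, 16)) * 0.1 for _ in range(3)]
print(non_local(feat, *w).shape)                      # (256, 16)
```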
  15. Than JCM, Saba L, Noor NM, Rijal OM, Kassim RM, Yunus A, et al.
    Comput Biol Med, 2017 10 01;89:197-211.
    PMID: 28825994 DOI: 10.1016/j.compbiomed.2017.08.014
    Lung disease risk stratification is important for both diagnosis and treatment planning, particularly in biopsies and radiation therapy. Manual lung disease risk stratification is challenging because of: (a) large lung data sizes, (b) inter- and intra-observer variability of the lung delineation and (c) lack of feature amalgamation during the machine learning paradigm. This paper presents a two-stage CADx cascaded system consisting of: (a) a semi-automated lung delineation subsystem (LDS) for lung region extraction in CT slices followed by (b) morphology-based lung tissue characterization, thereby addressing the above shortcomings. LDS primarily uses entropy-based region extraction, while ML-based lung characterization is mainly based on an amalgamation of directional transforms such as Riesz and Gabor along with texture-based features comprising 100 greyscale features, using the K-fold cross-validation protocol (K = 2, 3, 5 and 10). The lung database consisted of 96 patients: 15 normal and 81 diseased. We use five high resolution computed tomography (HRCT) levels representing different anatomical landmarks where disease is commonly seen. We demonstrate an amalgamated ML stratification accuracy of 99.53%, an increase of 2% over the conventional non-amalgamated ML system that uses Riesz-based features alone embedded with feature selection based on feature strength. The robustness of the system was determined based on reliability and stability, which showed a reliability index of 0.99 and a deviation in risk stratification accuracies of less than 5%. Our CADx system shows 10% better performance when compared against the mean of five other prominent studies available in the current literature covering over one decade.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  16. Adeshina AM, Hashim R, Khalid NE, Abidin SZ
    Interdiscip Sci, 2012 Sep;4(3):161-72.
    PMID: 23292689 DOI: 10.1007/s12539-012-0132-y
    CT and MRI scans are widely used in medical diagnosis procedures, but they only produce 2-D images. However, the human anatomical structure, abnormalities, tumors, tissues and organs are in 3-D. The 2-D images from these devices are difficult to interpret because they only show cross-sectional views of the human structure. Consequently, such circumstances require doctors to use their expert experience to interpret the possible location, size or shape of abnormalities, even for large datasets with an enormous number of slices. Previously, the concept of reconstructing 2-D images into 3-D was introduced. However, such a reconstruction model requires high-performance computation and may be either time-consuming or costly. Furthermore, detecting the internal features of the human anatomical structure, such as the imaging of the blood vessels, is still an open topic in the computer-aided diagnosis of disorders and pathologies. This paper proposes a volume visualization framework using the Compute Unified Device Architecture (CUDA), building on the widely proven ray casting technique, which offers superior image quality but slow speed. Considering the rapid development of technology in the medical community, our framework is implemented in the Microsoft .NET environment for easy interoperability with other emerging revolutionary tools. The framework was evaluated with brain datasets from the Department of Surgery, University of North Carolina, United States, containing around 109 MRA datasets. Uniquely, at a reasonably low cost, our framework achieves immediate reconstruction and clear mapping of the internal features of the human brain, reliable enough for instantaneous location of possible blockages in the brain blood vessels.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  17. Mookiah MR, Acharya UR, Fujita H, Koh JE, Tan JH, Noronha K, et al.
    Comput Biol Med, 2015 Aug;63:208-18.
    PMID: 26093788 DOI: 10.1016/j.compbiomed.2015.05.019
    Age-related Macular Degeneration (AMD) is an irreversible and chronic medical condition characterized by drusen, Choroidal Neovascularization (CNV) and Geographic Atrophy (GA). AMD is one of the major causes of visual loss among elderly people. It is caused by the degeneration of cells in the macula, which is responsible for central vision. AMD can be of dry or wet type; dry AMD is the most common. It is classified into early, intermediate and late AMD. Early detection and treatment may help to stop the progression of the disease, and automated AMD diagnosis may reduce the screening time for clinicians. In this work, we have introduced LCP features to characterize normal and AMD classes using fundus images. Linear Configuration Coefficients (CC) and Pattern Occurrence (PO) features are extracted from fundus images. These extracted features are ranked using the p-value of the t-test and fed to various supervised classifiers, viz. Decision Tree (DT), k-Nearest Neighbour (k-NN), Naive Bayes (NB), Probabilistic Neural Network (PNN) and Support Vector Machine (SVM), to classify normal and AMD classes. The performance of the system is evaluated using both private (Kasturba Medical Hospital, Manipal, India) and public domain datasets, viz. Automated Retinal Image Analysis (ARIA) and STructured Analysis of the Retina (STARE), using ten-fold cross-validation. The proposed approach yielded its best performance, with a highest average accuracy of 97.78%, sensitivity of 98.00% and specificity of 97.50%, for the STARE dataset using 22 significant features. Hence, this system can be used as an aiding tool for clinicians during mass eye screening programs to diagnose AMD.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
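    The t-test-based feature ranking step described above can be sketched with synthetic features and labels as below; the feature matrix, labels and classifier settings are placeholders rather than the study's LCP features.

```python
# Sketch under assumptions: rank features by t-test p-value, then classify with an SVM.
# All data here are synthetic stand-ins.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 40))     # 40 candidate features per image (synthetic)
y = rng.integers(0, 2, size=100)   # 0 = normal, 1 = AMD (synthetic labels)
X[y == 1, :5] += 1.0               # make the first five features informative

pvals = np.array([ttest_ind(X[y == 0, j], X[y == 1, j]).pvalue for j in range(X.shape[1])])
top = np.argsort(pvals)[:5]        # keep the most significant features
print(cross_val_score(SVC(kernel="rbf"), X[:, top], y, cv=10).mean())
```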
  18. Sim KS, Thong LW, Ting HY, Tso CP
    J Microsc, 2010 Feb;237(2):111-8.
    PMID: 20096041 DOI: 10.1111/j.1365-2818.2009.03325.x
    Interpolation techniques used for image magnification to obtain more useful details of the surface, such as morphology and mechanical contrast, usually rely on the signal information distributed around edges and areas of sharp change; this signal information can also be used to predict missing details in the sample image. However, many of these interpolation methods tend to smooth or blur out image details around the edges. In the present study, a Lagrange time delay estimation interpolator method is proposed; this method only requires a small filter order and has no noticeable estimation bias. Comparing results with the original scanning electron microscope magnification and with the results of various other interpolation methods, the Lagrange time delay estimation interpolator is found to be more efficient, more robust and easier to execute.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
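    A Lagrange fractional-delay interpolator of the kind named above can be built from the standard Lagrange basis coefficients; the sketch below estimates one in-between pixel on a made-up image row and is only an assumption-laden illustration of the general technique, not the authors' implementation.

```python
# Sketch: small-order Lagrange fractional-delay interpolation of one image row.
import numpy as np

def lagrange_fd_coeffs(delay, order=3):
    # FIR coefficients h[n] = prod_{k != n} (delay - k) / (n - k), for n = 0..order.
    n = np.arange(order + 1)
    h = np.ones(order + 1)
    for k in range(order + 1):
        mask = n != k
        h[mask] *= (delay - k) / (n[mask] - k)
    return h

row = np.array([10.0, 12.0, 15.0, 14.0, 11.0, 9.0])   # one image row (made up)
h = lagrange_fd_coeffs(delay=1.5, order=3)             # point halfway between samples 1 and 2
print(np.dot(h, row[0:4]))                             # interpolated pixel intensity
```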
  19. Faisal A, Ng SC, Goh SL, Lai KW
    Med Biol Eng Comput, 2018 Apr;56(4):657-669.
    PMID: 28849317 DOI: 10.1007/s11517-017-1710-2
    Quantitative thickness computation of knee cartilage in ultrasound images requires segmentation of a monotonous hypoechoic band between the soft tissue-cartilage interface and the cartilage-bone interface. Speckle noise and intensity bias captured in the ultrasound images often complicate the segmentation task. This paper presents knee cartilage segmentation using a locally statistical level set method (LSLSM) and thickness computation using normal distance. Comparison of several level set methods in the attempt to segment the knee cartilage shows that LSLSM yields a more satisfactory result. When LSLSM was applied to 80 datasets, the qualitative segmentation assessment indicated a substantial agreement, with a Cohen's κ coefficient of 0.73. The quantitative validation metrics of Dice similarity coefficient and Hausdorff distance have average values of 0.91 ± 0.01 and 6.21 ± 0.59 pixels, respectively. These satisfactory segmentation results make it possible to compute the true thickness between the two interfaces of the cartilage from the segmented images. The measured cartilage thickness ranged from 1.35 to 2.42 mm with an average value of 1.97 ± 0.11 mm, reflecting the robustness of the segmentation algorithm to varying cartilage thicknesses. These results indicate a potential application of the described methods to the assessment of cartilage degeneration, where changes in cartilage thickness can be quantified over time by comparing the true thickness at certain time intervals.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
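    The two validation metrics quoted above are easy to reproduce on toy masks; the sketch below computes the Dice similarity coefficient and the symmetric Hausdorff distance for two hypothetical binary segmentations, not the study's cartilage data.

```python
# Minimal sketch: Dice coefficient and symmetric Hausdorff distance on toy binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

auto = np.zeros((64, 64), bool)
auto[20:40, 10:50] = True        # automatically segmented cartilage band (synthetic)
manual = np.zeros((64, 64), bool)
manual[21:41, 12:52] = True      # reference delineation (synthetic)

pts_a, pts_m = np.argwhere(auto), np.argwhere(manual)
hd = max(directed_hausdorff(pts_a, pts_m)[0], directed_hausdorff(pts_m, pts_a)[0])
print(f"Dice = {dice(auto, manual):.3f}, Hausdorff = {hd:.2f} px")
```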
  20. Ghanizadeh A, Abarghouei AA, Sinaie S, Saad P, Shamsuddin SM
    Appl Opt, 2011 Jul 1;50(19):3191-200.
    PMID: 21743518 DOI: 10.1364/AO.50.003191
    Iris-based biometric systems identify individuals based on the characteristics of their iris, since these characteristics are proven to remain unique for a long time. An iris recognition system includes four phases, the most important of which is preprocessing, in which the iris segmentation is performed. The accuracy of an iris biometric system depends critically on the segmentation system. In this paper, an iris segmentation system using edge detection techniques and Hough transforms is presented. The newly proposed edge detection system enhances the performance of the segmentation so that it performs much more efficiently than other conventional iris segmentation methods.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
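    The circular Hough transform step underlying the segmentation described above can be illustrated with scikit-image on a synthetic disc; the image, radius range and peak count below are assumptions for demonstration only, not the authors' pipeline.

```python
# Hedged illustration: edge detection followed by a circular Hough transform,
# run on a synthetic disc rather than a real eye image.
import numpy as np
from skimage.draw import disk
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

img = np.zeros((120, 120))
rr, cc = disk((60, 60), 30)
img[rr, cc] = 1.0                          # stand-in for a pupil/iris boundary

edges = canny(img, sigma=2)                # edge map feeding the Hough transform
radii = np.arange(20, 45)
accum = hough_circle(edges, radii)
_, cx, cy, r = hough_circle_peaks(accum, radii, total_num_peaks=1)
print(f"circle centre = ({cx[0]}, {cy[0]}), radius = {r[0]}")
```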