Displaying publications 21 - 40 of 113 in total

  1. Sudarshan VK, Mookiah MR, Acharya UR, Chandran V, Molinari F, Fujita H, et al.
    Comput Biol Med, 2016 Feb 1;69:97-111.
    PMID: 26761591 DOI: 10.1016/j.compbiomed.2015.12.006
    Ultrasound is an important and low-cost imaging modality used to study the internal organs of the human body and blood flow through blood vessels. It uses high-frequency sound waves to acquire images of internal organs. It is used to screen normal, benign and malignant tissues of various organs. Healthy and malignant tissues generate different echoes on ultrasound, so the modality provides useful information about potential tumor tissue that can be analyzed for diagnostic purposes before therapeutic procedures. Ultrasound images are affected by speckle noise due to an air gap between the transducer probe and the body. The challenge is to design and develop robust image preprocessing, segmentation and feature extraction algorithms to locate the tumor region and to extract subtle information from the isolated tumor region for diagnosis. This information can be revealed using a scale-space technique such as the Discrete Wavelet Transform (DWT). It decomposes an image into images at different scales using low-pass and high-pass filters. These filters help to identify detail or sudden changes in intensity in the image, which are reflected in the wavelet coefficients. Various texture, statistical and image-based features can be extracted from these coefficients. The extracted features are subjected to statistical analysis to identify the significant features for discriminating between normal and malignant ultrasound images using supervised classifiers. This paper presents a review of wavelet techniques used for preprocessing, segmentation and feature extraction of breast, thyroid, ovarian and prostate cancer using ultrasound images.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
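    A minimal sketch (not code from any reviewed paper) of the single-level 2-D discrete wavelet decomposition described above, using PyWavelets; the random image and the 'db4' wavelet are placeholder assumptions.

        import numpy as np
        import pywt

        image = np.random.rand(256, 256)  # stand-in for a grayscale ultrasound ROI

        # Single-level 2-D DWT: low-pass approximation plus horizontal,
        # vertical and diagonal detail sub-bands.
        cA, (cH, cV, cD) = pywt.dwt2(image, 'db4')

        # Simple statistical features per sub-band, as the review describes.
        features = {}
        for name, band in {'approx': cA, 'horiz': cH, 'vert': cV, 'diag': cD}.items():
            features[name + '_mean_abs'] = float(np.mean(np.abs(band)))
            features[name + '_energy'] = float(np.mean(band ** 2))
        print(features)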
  2. Mehdy MM, Ng PY, Shair EF, Saleh NIM, Gomes C
    Comput Math Methods Med, 2017;2017:2610628.
    PMID: 28473865 DOI: 10.1155/2017/2610628
    Medical imaging techniques have widely been in use in the diagnosis and detection of breast cancer. The drawback of applying these techniques is the large amount of time consumed in the manual diagnosis of each image pattern by a professional radiologist. Automated classifiers could substantially upgrade the diagnosis process, in terms of both accuracy and time requirement, by distinguishing benign and malignant patterns automatically. Neural networks (NNs) play an important role in this respect, especially in the application of breast cancer detection. Despite the large number of publications that describe the utilization of NNs in various medical techniques, only a few reviews are available that guide the development of these algorithms to enhance the detection techniques with respect to specificity and sensitivity. The purpose of this review is to analyze the contents of recently published literature with special attention to techniques and the state of the art of NNs in medical imaging. We discuss the usage of NNs in four different medical imaging applications to show that NNs are not restricted to a few areas of medicine. The types of NN used, along with the various types of input data, are reviewed. We also address hybrid NN adaptation in breast cancer detection.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  3. Khalid A, Lim E, Chan BT, Abdul Aziz YF, Chee KH, Yap HJ, et al.
    J Magn Reson Imaging, 2019 Apr;49(4):1006-1019.
    PMID: 30211445 DOI: 10.1002/jmri.26302
    BACKGROUND: Existing clinical diagnostic and assessment methods could be improved to facilitate early detection and treatment of cardiac dysfunction associated with acute myocardial infarction (AMI) to reduce morbidity and mortality.

    PURPOSE: To develop 3D personalized left ventricular (LV) models and thickening assessment framework for assessing regional wall thickening dysfunction and dyssynchrony in AMI patients.

    STUDY TYPE: Retrospective study, diagnostic accuracy.

    SUBJECTS: Forty-four subjects consisting of 15 healthy subjects and 29 AMI patients.

    FIELD STRENGTH/SEQUENCE: 1.5T/steady-state free precession cine MRI scans; LGE MRI scans.

    ASSESSMENT: Quantitative thickening measurements across all cardiac phases were correlated and validated against clinical evaluation of infarct transmurality by an experienced cardiac radiologist based on the American Heart Association (AHA) 17-segment model.

    STATISTICAL TEST: Nonparametric 2-k related sample-based Kruskal-Wallis test; Mann-Whitney U-test; Pearson's correlation coefficient.

    RESULTS: Healthy LV wall segments underwent significant wall thickening during contraction, whereas infarcted segments (>50% transmurality) showed remarkable wall thinning (thickening index [TI] = 1.46 ± 0.26 mm, vs. TI = 4.01 ± 1.04 mm for healthy myocardium). For AMI patients, LV segments showing signs of thinning were associated with a significantly higher percentage of dyssynchrony than in healthy subjects (dyssynchrony index [DI] = 15.0 ± 5.0% vs. 7.5 ± 2.0%).

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  4. Banabilh SM, Suzina AH, Mohamad H, Dinsuhaimi S, Samsudin AR, Singh GD
    Clin Oral Investig, 2010 Oct;14(5):491-8.
    PMID: 19806371 DOI: 10.1007/s00784-009-0342-9
    The aim of the present study is to investigate nasal airway morphology in Asian adults with and without obstructive sleep apnea (OSA) using acoustic rhinometry (AR), principal components analysis (PCA), and 3-D finite-element analysis (FEA). One hundred eight adult Malays aged 18-65 years (mean ± SD, 33.2 ± 13.31) underwent clinical examination and limited-channel polysomnography, providing 54 patients with OSA and 54 non-OSA controls. The mean minimal cross-sectional areas 1 and 2 (MCA1 and MCA2) were obtained from AR for all subjects and subjected to t tests. The OSA and control nasal airways were reconstructed in 3-D and subjected to PCA and FEA. The mean MCA1 and MCA2 from AR were found to be significantly smaller in the OSA group than in the control group (p < 0.001). Comparing the 3-D OSA and control nasal airways using PCA, the first two eigenvalues accounted for 94% of the total shape change, and statistical differences were found (p < 0.05). Similarly, comparing the nasal airways using FEA, the mean 3-D nasal airway was significantly narrower in the OSA group than in the control group. Specifically, decreases in size of approximately 10-22% were found in the nasal valve/head of inferior turbinate area. In conclusion, differences in nasal airway morphology are present when comparing patients with OSA to controls. These differences need to be recognized, as they can improve our understanding of the etiological basis of obstructive sleep apnea and facilitate its subsequent management.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
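    A minimal sketch, not the study's pipeline, of how the leading principal components of reconstructed airway shapes could be inspected with scikit-learn; the shape matrix (one flattened coordinate vector per subject) is a hypothetical stand-in.

        import numpy as np
        from sklearn.decomposition import PCA

        # Hypothetical 3-D nasal-airway shapes: 108 subjects x flattened coordinates.
        shapes = np.random.rand(108, 300)

        pca = PCA(n_components=5)
        scores = pca.fit_transform(shapes)

        # Analogue of "the first two eigenvalues accounted for 94% of the total
        # shape change": cumulative explained variance of the first two components.
        print(pca.explained_variance_ratio_[:2].sum())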
  5. Arif AS, Mansor S, Logeswaran R, Karim HA
    J Med Syst, 2015 Feb;39(2):5.
    PMID: 25628161 DOI: 10.1007/s10916-015-0200-z
    The massive number of medical images produced by fluoroscopic and other conventional diagnostic imaging devices demands a considerable amount of storage space. This paper proposes an effective method for lossless compression of fluoroscopic images. The main contribution of the paper is the extraction of the regions of interest (ROI) in fluoroscopic images using appropriate shapes. The extracted ROI is then compressed effectively using customized correlation and a combination of run-length and Huffman coding to increase the compression ratio. The experimental results show that the proposed method improves the compression ratio by 400% compared with traditional methods.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
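    A minimal sketch, assuming a single 8-bit image row inside the extracted ROI, of the run-length plus Huffman stage mentioned above (the paper's customized correlation step is omitted); the sample row and helper names are illustrative only.

        import heapq
        from collections import Counter
        from itertools import groupby

        def run_length_encode(pixels):
            # Collapse runs of identical values into (value, run_length) symbols.
            return [(value, len(list(group))) for value, group in groupby(pixels)]

        def huffman_code_lengths(symbols):
            # Standard Huffman construction; returns the code length per symbol.
            heap = [[count, [sym, 0]] for sym, count in Counter(symbols).items()]
            heapq.heapify(heap)
            if len(heap) == 1:                      # degenerate single-symbol case
                return {heap[0][1][0]: 1}
            while len(heap) > 1:
                lo, hi = heapq.heappop(heap), heapq.heappop(heap)
                for pair in lo[1:] + hi[1:]:
                    pair[1] += 1                    # one level deeper in the tree
                heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
            return {sym: depth for sym, depth in heap[0][1:]}

        row = [0, 0, 0, 12, 12, 255, 255, 255, 255, 0, 0]   # illustrative ROI row
        runs = run_length_encode(row)
        print(runs, huffman_code_lengths(runs))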
  6. Acharya UR, Raghavendra U, Koh JEW, Meiburger KM, Ciaccio EJ, Hagiwara Y, et al.
    Comput Methods Programs Biomed, 2018 Nov;166:91-98.
    PMID: 30415722 DOI: 10.1016/j.cmpb.2018.10.006
    BACKGROUND AND OBJECTIVE: Liver fibrosis is a type of chronic liver injury that is characterized by an excessive deposition of extracellular matrix protein. Early detection of liver fibrosis may prevent further growth toward liver cirrhosis and hepatocellular carcinoma. In the past, the only method to assess liver fibrosis was through biopsy, but this examination is invasive, expensive, prone to sampling errors, and may cause complications such as bleeding. Ultrasound-based elastography is a promising tool to measure tissue elasticity in real time; however, this technology requires an upgrade of the ultrasound system and software. In this study, a novel computer-aided diagnosis tool is proposed to automatically detect and classify the various stages of liver fibrosis based upon conventional B-mode ultrasound images.

    METHODS: The proposed method uses a 2D contourlet transform and a set of texture features that are efficiently extracted from the transformed image. A combination of kernel discriminant analysis (KDA)-based feature reduction and analysis of variance (ANOVA)-based feature ranking was then applied, and the images were classified into the various stages of liver fibrosis.

    RESULTS: Our 2D contourlet transform and texture feature analysis approach achieved a 91.46% accuracy using only four features input to the probabilistic neural network classifier, to classify the five stages of liver fibrosis. It also achieved a 92.16% sensitivity and 88.92% specificity for the same model. The evaluation was done on a database of 762 ultrasound images belonging to five different stages of liver fibrosis.

    CONCLUSIONS: The findings suggest that the proposed method can be useful to automatically detect and classify liver fibrosis, which would greatly assist clinicians in making an accurate diagnosis.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
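    A minimal sketch of ANOVA-based feature ranking like the step described in METHODS, using SciPy; the feature matrix, stage labels, and the choice of keeping four features are illustrative assumptions (the contourlet and KDA steps are not reproduced).

        import numpy as np
        from scipy.stats import f_oneway

        rng = np.random.default_rng(0)
        X = rng.normal(size=(762, 20))          # hypothetical texture features per image
        y = rng.integers(0, 5, size=762)        # hypothetical fibrosis stage (0-4)

        # Rank each feature by its one-way ANOVA F statistic across the five stages.
        scores = []
        for j in range(X.shape[1]):
            groups = [X[y == stage, j] for stage in range(5)]
            f_stat, _ = f_oneway(*groups)
            scores.append((f_stat, j))

        top_four = [j for _, j in sorted(scores, reverse=True)[:4]]
        print('highest-ranked feature indices:', top_four)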
  7. Vicnesh J, Wei JKE, Ciaccio EJ, Oh SL, Bhagat G, Lewis SK, et al.
    J Med Syst, 2019 Apr 26;43(6):157.
    PMID: 31028562 DOI: 10.1007/s10916-019-1285-6
    Celiac disease is a genetically determined disorder of the small intestine, occurring due to an immune response to ingested gluten-containing food. The resulting damage to the small intestinal mucosa hampers nutrient absorption, and is characterized by diarrhea, abdominal pain, and a variety of extra-intestinal manifestations. Invasive and costly methods such as endoscopic biopsy are currently used to diagnose celiac disease. Detection of the disease by histopathologic analysis of biopsies can be challenging due to suboptimal sampling. Video capsule images were obtained from celiac patients and controls for comparison and classification. This study exploits the use of DAISY descriptors to project two-dimensional images onto one-dimensional vectors. Shannon entropy is then used to extract features, after which a particle swarm optimization algorithm coupled with normalization is employed to select the 30 best features for classification. Statistical measures of this paradigm were tabulated. The accuracy, positive predictive value, sensitivity and specificity obtained in distinguishing celiac versus control video capsule images were 89.82%, 89.17%, 94.35% and 83.20% respectively, using the 10-fold cross-validation technique. When employing manual methods rather than the automated means described in this study, technical limitations and inconclusive results may hamper diagnosis. Our findings suggest that the computer-aided detection system presented herein can render diagnostic information, and thus may provide clinicians with an important tool to validate a diagnosis of celiac disease.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
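    A minimal sketch of the DAISY-plus-Shannon-entropy feature extraction described above, using scikit-image; the random frame, DAISY parameters, and histogram binning are assumptions, and the PSO feature selection step is not shown.

        import numpy as np
        from skimage.feature import daisy

        def entropy_features(image):
            # DAISY descriptors over a coarse grid, then the Shannon entropy of
            # each descriptor dimension as a compact feature vector.
            desc = daisy(image, step=32, radius=24, rings=2, histograms=6, orientations=8)
            vectors = desc.reshape(-1, desc.shape[-1])
            feats = []
            for column in vectors.T:
                hist, _ = np.histogram(column, bins=32)
                p = hist[hist > 0] / hist.sum()
                feats.append(-np.sum(p * np.log2(p)))      # entropy in bits
            return np.array(feats)

        frame = np.random.rand(256, 256)   # stand-in for a video capsule frame
        print(entropy_features(frame).shape)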
  8. Acharya UR, Koh JEW, Hagiwara Y, Tan JH, Gertych A, Vijayananthan A, et al.
    Comput Biol Med, 2018 Mar 1;94:11-18.
    PMID: 29353161 DOI: 10.1016/j.compbiomed.2017.12.024
    The liver is the heaviest internal organ of the human body and performs many vital functions. Prolonged cirrhosis and fatty liver disease may lead to the formation of benign or malignant lesions in this organ, and an early and reliable evaluation of these conditions can improve treatment outcomes. Ultrasound imaging is a safe, non-invasive, and cost-effective way of diagnosing liver lesions. However, this technique has limited performance in determining the nature of the lesions. This study introduces a computer-aided diagnosis (CAD) system to aid radiologists in an objective and more reliable interpretation of ultrasound images of liver lesions. In this work, we employed the radon transform and bi-directional empirical mode decomposition (BEMD) to extract features from the focal liver lesions. The extracted features were then subjected to the particle swarm optimization (PSO) technique to select an optimized feature set for classification. Our automated CAD system can differentiate normal, malignant, and benign liver lesions using machine learning algorithms. It was trained using 78 normal, 26 benign and 36 malignant focal lesions of the liver. The accuracy, sensitivity, and specificity of lesion classification were 92.95%, 90.80%, and 97.44%, respectively. The proposed CAD system is fully automatic, as no segmentation of the region of interest (ROI) is required.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
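    A minimal sketch of taking features from the radon transform of a lesion ROI, using scikit-image; the ROI, the angle set, and the mean/std summary are illustrative assumptions (BEMD and PSO are not reproduced).

        import numpy as np
        from skimage.transform import radon

        roi = np.random.rand(128, 128)     # stand-in for a cropped focal-lesion ROI

        # Each sinogram column is the projection of the ROI at one angle; simple
        # statistics over the projections serve as texture-like features.
        theta = np.linspace(0.0, 180.0, 36, endpoint=False)
        sinogram = radon(roi, theta=theta, circle=False)

        features = np.concatenate([sinogram.mean(axis=0), sinogram.std(axis=0)])
        print(features.shape)   # 72 values: mean and spread of each projection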
  9. Tan JH, Acharya UR, Chua KC, Cheng C, Laude A
    Med Phys, 2016 May;43(5):2311.
    PMID: 27147343 DOI: 10.1118/1.4945413
    The authors propose an algorithm that automatically extracts retinal vasculature and provides a simple measure to correct the extraction. The output of the method is a network of salient points, and blood vessels are drawn by connecting the salient points using a centripetal parameterized Catmull-Rom spline.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
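    A minimal sketch of drawing a vessel segment through salient points with a centripetal Catmull-Rom spline (Barry-Goldman form), as the entry above describes; the four example points are hypothetical.

        import numpy as np

        def catmull_rom_centripetal(p0, p1, p2, p3, n_points=20, alpha=0.5):
            # Interpolates between p1 and p2, guided by neighbours p0 and p3.
            p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))

            def next_t(t, pa, pb):
                return t + np.linalg.norm(pb - pa) ** alpha   # centripetal: alpha = 0.5

            t0 = 0.0
            t1 = next_t(t0, p0, p1)
            t2 = next_t(t1, p1, p2)
            t3 = next_t(t2, p2, p3)

            t = np.linspace(t1, t2, n_points)[:, None]
            a1 = (t1 - t) / (t1 - t0) * p0 + (t - t0) / (t1 - t0) * p1
            a2 = (t2 - t) / (t2 - t1) * p1 + (t - t1) / (t2 - t1) * p2
            a3 = (t3 - t) / (t3 - t2) * p2 + (t - t2) / (t3 - t2) * p3
            b1 = (t2 - t) / (t2 - t0) * a1 + (t - t0) / (t2 - t0) * a2
            b2 = (t3 - t) / (t3 - t1) * a2 + (t - t1) / (t3 - t1) * a3
            return (t2 - t) / (t2 - t1) * b1 + (t - t1) / (t2 - t1) * b2

        # Hypothetical salient points along a vessel centreline (row, column).
        curve = catmull_rom_centripetal((0, 0), (5, 4), (10, 3), (15, 8))
        print(curve.shape)   # (20, 2) points between the two middle salient points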
  10. Yousef Kalafi E, Tan WB, Town C, Dhillon SK
    BMC Bioinformatics, 2016 Dec 22;17(Suppl 19):511.
    PMID: 28155722 DOI: 10.1186/s12859-016-1376-z
    BACKGROUND: Monogeneans are flatworms (Platyhelminthes) that are primarily found on the gills and skin of fishes. Monogenean parasites have attachment appendages at their haptoral regions that help them move about the body surface and feed on skin and gill debris. Haptoral attachment organs consist of sclerotized hard parts such as hooks, anchors and marginal hooks. Monogenean species are differentiated based on the morphological characters of their haptoral bars, anchors, marginal hooks and reproductive parts (male and female copulatory organs), together with soft anatomical parts. The complex structure of these diagnostic organs, and their overlap in microscopic digital images, are impediments to developing a fully automated identification system for monogeneans (LNCS 7666:256-263, 2012), (ISDA: 457-462, 2011), (J Zoolog Syst Evol Res 52(2):95-99, 2013). In this study, images of hard parts of the haptoral organs, such as bars and anchors, are used to develop a fully automated technique for monogenean species identification by applying image processing techniques and machine learning methods.

    RESULT: Images of four monogenean species, namely Sinodiplectanotrema malayanus, Trianchoratus pahangensis, Metahaliotrema mizellei and Metahaliotrema sp. (undescribed), were used to develop the automated identification technique. K-nearest neighbour (KNN) was applied to classify the monogenean specimens based on the extracted features. Half of the dataset was used for training and the other half for testing in the system evaluation. Our approach achieved an overall classification accuracy of 90%. Leave-one-out (LOO) cross-validation was also used to validate the system, giving an accuracy of 91.25%.

    CONCLUSIONS: The methods presented in this study facilitate fast and accurate fully automated classification of monogeneans at the species level. In future studies more classes will be included in the model, the time to capture the monogenean images will be reduced and improvements in extraction and selection of features will be implemented.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
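    A minimal sketch of the evaluation protocol described above (a 50/50 split plus leave-one-out cross-validation of a KNN classifier), using scikit-learn; the feature matrix, labels, and k=3 are placeholder assumptions.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import LeaveOneOut, cross_val_score, train_test_split

        rng = np.random.default_rng(1)
        X = rng.normal(size=(80, 12))        # hypothetical shape features per specimen
        y = rng.integers(0, 4, size=80)      # hypothetical labels for four species

        # 50% training / 50% testing, as in the system evaluation.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
        knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
        print('hold-out accuracy:', knn.score(X_te, y_te))

        # Leave-one-out cross-validation over the whole dataset.
        loo = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=LeaveOneOut())
        print('LOO accuracy:', loo.mean())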
  11. Ali Shah SA, Laude A, Faye I, Tang TB
    J Biomed Opt, 2016 Oct;21(10):101404.
    PMID: 26868326 DOI: 10.1117/1.JBO.21.10.101404
    Microaneurysms (MAs) are known to be the early signs of diabetic retinopathy (DR). An automated MA detection system based on the curvelet transform is proposed for color fundus image analysis. MA candidates were extracted in two parallel steps. In step one, blood vessels were removed from the preprocessed green-band image and preliminary MA candidates were selected by a local thresholding technique. In step two, the image background was estimated from statistical features. The results from the two steps allowed us to identify preliminary MA candidates that were also present in the image foreground. A collection of features was fed to a rule-based classifier to divide the candidates into MAs and non-MAs. The proposed system was tested on the Retinopathy Online Challenge database. The automated system detected 162 MAs out of 336, achieving a sensitivity of 48.21% with 65 false positives per image. Counting MAs is a means of measuring the progression of DR, so the proposed system may be deployed to monitor the progression of DR at an early stage in population studies.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
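    A minimal sketch of selecting preliminary MA candidates by local thresholding of the green band, one of the steps named above, using scikit-image; the image, window size, and offset are assumptions (the curvelet and rule-based classification steps are not shown).

        import numpy as np
        from skimage.filters import threshold_local

        green = np.random.rand(512, 512)   # stand-in for a vessel-removed green-band image

        # A pixel is a preliminary MA candidate when it is darker than its local
        # neighbourhood mean by at least the offset.
        local_thresh = threshold_local(green, block_size=25, method='mean', offset=0.02)
        candidates = green < local_thresh
        print(int(candidates.sum()), 'candidate pixels')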
  12. Husham A, Hazim Alkawaz M, Saba T, Rehman A, Saleh Alghamdi J
    Microsc Res Tech, 2016 Oct;79(10):993-997.
    PMID: 27476682 DOI: 10.1002/jemt.22733
    Segmentation of objects from a noisy and complex image is still a challenging task that needs to be addressed. This article proposes a new method to detect and segment nuclei and determine whether they are malignant: the region of interest is determined, noise is removed and the image enhanced, candidate detection is applied to the centroid transform to evaluate the centroid of each object, and the level set (LS) method is applied to segment the nuclei. The proposed method thus consists of three main stages: preprocessing, seed detection, and segmentation. The preprocessing stage prepares the image so that it meets the segmentation requirements. Seed detection locates the seed point to be used in the segmentation stage, which segments the nuclei using the LS method. In this work, 58 H&E breast cancer images from the UCSB Bio-Segmentation Benchmark dataset are evaluated. The proposed method shows high performance and accuracy in comparison with the techniques reported in the literature, and the experimental results are consistent with the ground-truth images.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  13. Shamim S, Awan MJ, Mohd Zain A, Naseem U, Mohammed MA, Garcia-Zapirain B
    J Healthc Eng, 2022;2022:6566982.
    PMID: 35422980 DOI: 10.1155/2022/6566982
    The coronavirus (COVID-19) pandemic has had a terrible impact on human lives globally, with far-reaching consequences for the health and well-being of many people around the world. Statistically, 305.9 million people worldwide had tested positive for COVID-19 and 5.48 million people had died of it by 10 January 2022. CT scans can be used as an alternative to time-consuming RT-PCR testing for COVID-19. This research work proposes a segmentation approach for identifying ground glass opacity (GGO), the region of interest, in CT images of lungs affected by coronavirus, using a modified Unet structure to classify the region of interest at the pixel level. The difficulty is that GGO often appears indistinguishable from healthy lung in the initial stages of COVID-19; to cope with this, an increased set of weights in the contracting and expanding Unet paths and an improved convolutional module are added to establish the connection between the encoder and decoder pipeline. This markedly improves the capacity to segment the GGO in COVID-19 cases, and the proposed model is referred to as "convUnet." The experiment was performed on the Medseg1 dataset, and the addition of a set of weights at each layer of the model and the modification of the connecting module in Unet led to an improvement in the overall segmentation results. The quantitative results in terms of accuracy, recall, precision, Dice coefficient, F1-score, and IoU were 93.29%, 93.01%, 93.67%, 92.46%, 93.34%, and 86.96%, respectively, better than those obtained with Unet and other state-of-the-art models. This segmentation approach therefore proved to be more accurate, fast, and reliable in helping doctors to diagnose COVID-19 quickly and efficiently.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
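    A minimal sketch of the overlap metrics reported above (Dice coefficient and IoU) for comparing a predicted GGO mask against a reference mask; the toy masks are illustrative, and the convUnet model itself is not reproduced.

        import numpy as np

        def dice_coefficient(pred, target, eps=1e-7):
            pred, target = pred.astype(bool), target.astype(bool)
            inter = np.logical_and(pred, target).sum()
            return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

        def iou(pred, target, eps=1e-7):
            pred, target = pred.astype(bool), target.astype(bool)
            inter = np.logical_and(pred, target).sum()
            union = np.logical_or(pred, target).sum()
            return (inter + eps) / (union + eps)

        # Toy predicted and ground-truth segmentation masks.
        pred = np.zeros((10, 10), dtype=int); pred[2:7, 2:7] = 1
        gt = np.zeros((10, 10), dtype=int);   gt[3:8, 3:8] = 1
        print(dice_coefficient(pred, gt), iou(pred, gt))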
  14. Shah SAA, Tang TB, Faye I, Laude A
    Graefes Arch Clin Exp Ophthalmol, 2017 Aug;255(8):1525-1533.
    PMID: 28474130 DOI: 10.1007/s00417-017-3677-y
    PURPOSE: To propose a new algorithm of blood vessel segmentation based on regional and Hessian features for image analysis in retinal abnormality diagnosis.

    METHODS: Firstly, color fundus images from the publicly available database DRIVE were converted from RGB to grayscale. To enhance the contrast of the dark objects (blood vessels) against the background, the dot product of the grayscale image with itself was generated. To rectify the variation in contrast, we used a 5 × 5 window filter on each pixel. Based on 5 regional features, 1 intensity feature and 2 Hessian features per scale using 9 scales, we extracted a total of 24 features. A linear minimum squared error (LMSE) classifier was trained to classify each pixel into a vessel or non-vessel pixel.

    RESULTS: The DRIVE dataset provided 20 training and 20 test color fundus images. The proposed algorithm achieves a sensitivity of 72.05% with 94.79% accuracy.

    CONCLUSIONS: Our proposed algorithm achieved higher accuracy (0.9206) at the peripapillary region, where the ocular manifestations in the microvasculature due to glaucoma, central retinal vein occlusion, etc. are most obvious. This supports the proposed algorithm as a strong candidate for automated vessel segmentation.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  15. Abisha S, Mutawa AM, Murugappan M, Krishnan S
    PLoS One, 2023;18(4):e0284021.
    PMID: 37018344 DOI: 10.1371/journal.pone.0284021
    Different diseases are observed in vegetables, fruits, cereals, and commercial crops by farmers and agricultural experts. Nonetheless, this evaluation process is time-consuming, and initial symptoms are primarily visible only at microscopic levels, limiting the possibility of an accurate diagnosis. This paper proposes an innovative method for identifying and classifying infected brinjal leaves using Deep Convolutional Neural Networks (DCNN) and Radial Basis Feed Forward Neural Networks (RBFNN). We collected 1100 images of brinjal leaf disease caused by five different pathogens (Pseudomonas solanacearum, Cercospora solani, Alternaria melongenea, Pythium aphanidermatum, and Tobacco Mosaic Virus) and 400 images of healthy leaves from agricultural farms in India. First, the original leaf image is preprocessed with a Gaussian filter to reduce noise and improve image quality through enhancement. A segmentation method based on expectation-maximization (EM) is then used to segment the leaf's diseased regions. Next, the discrete Shearlet transform is used to extract the main features of the images, such as texture, color, and structure, which are merged to produce feature vectors. Lastly, DCNN and RBFNN are used to classify brinjal leaves according to their disease types. The DCNN achieved a mean accuracy of 93.30% (with fusion) and 76.70% (without fusion), compared with the RBFNN (87% with fusion, 82% without fusion) in classifying leaf diseases.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
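    A minimal sketch of EM-based segmentation in the spirit of the step described above, fitting a two-component Gaussian mixture to pixel intensities with scikit-learn; the leaf image and the two-class assumption are placeholders (the Shearlet and network stages are not shown).

        import numpy as np
        from sklearn.mixture import GaussianMixture

        leaf = np.random.rand(200, 200)    # stand-in for a Gaussian-filtered grayscale leaf

        # EM: model pixel intensities as a two-component mixture (healthy vs. lesion)
        # and assign each pixel to its most likely component.
        pixels = leaf.reshape(-1, 1)
        gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
        labels = gmm.predict(pixels).reshape(leaf.shape)

        # Treat the darker component as the "diseased" region (an assumption).
        diseased = labels == np.argmin(gmm.means_.ravel())
        print(int(diseased.sum()), 'pixels flagged as lesion')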
  16. Mosleh MA, Baba MS, Malek S, Almaktari RA
    BMC Bioinformatics, 2016 Dec 22;17(Suppl 19):499.
    PMID: 28155649 DOI: 10.1186/s12859-016-1370-5
    BACKGROUND: Cephalometric analysis and measurement of skull parameters using X-ray images plays an important role in predicting and monitoring orthodontic treatment. Manual cephalometric analysis and measurement are considered tedious, time-consuming, and subject to human error. Several cephalometric systems have been developed to automate the cephalometric procedure; however, no clear insights have been reported about the reliability, performance, and usability of those systems. This study evaluates the reliability, performance, and usability (via the SUS method) of the developed cephalometric system, aspects which have not been reported in previous studies.

    METHODS: In this study a novel system named Ceph-X was developed to computerize the manual tasks of orthodontists during cephalometric measurement. Ceph-X was developed using image processing techniques with three main models: an X-ray image enhancement model, a landmark locating model, and a computation model. Ceph-X was then evaluated using X-ray images of 30 subjects (male and female) obtained from the University of Malaya hospital. Three orthodontic specialists were involved in the evaluation of accuracy (to avoid intra-examiner error) and performance of Ceph-X, and 20 orthodontic specialists were involved in the evaluation of its usability and user satisfaction using the SUS approach.

    RESULTS: Statistical analysis comparing the manual and automatic cephalometric approaches showed that Ceph-X achieved an accuracy of approximately 96.6%, with an acceptable error variation of less than approximately 0.5 mm and 1°. The results showed that Ceph-X increased specialist performance and minimized the processing time needed to obtain cephalometric measurements of the human skull. Furthermore, the SUS analysis showed that Ceph-X received excellent usability feedback from users.

    CONCLUSIONS: Ceph-X has proved its reliability, performance, and usability for use by orthodontists in cephalometric analysis, diagnosis, and treatment.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  17. Abu A, Leow LK, Ramli R, Omar H
    BMC Bioinformatics, 2016 Dec 22;17(Suppl 19):505.
    PMID: 28155645 DOI: 10.1186/s12859-016-1362-5
    BACKGROUND: Taxonomists frequently identify specimens from various populations based on morphological characteristics and molecular data. This study looks into an alternative process for identifying the house shrew (Suncus murinus) using image analysis and machine learning approaches, and an automated identification system is developed to assist and simplify this task. Seven shape-based descriptors, namely area, convex area, major axis length, minor axis length, perimeter, equivalent diameter and extent, are used as features to represent the digital image of the skull, which consists of dorsal, lateral and jaw views for each specimen. An Artificial Neural Network (ANN) is used as the classifier to classify the skulls of S. murinus by region (northern and southern populations of Peninsular Malaysia) and sex (adult male and female). Specimen classification using the training dataset and identification using the testing dataset were performed through two stages of ANNs.

    RESULTS: The classifier achieved an accuracy of 100% in classifying skull views. Classification and identification by region and sex attained accuracies of 72.5%, 87.5% and 80.0% for the dorsal, lateral, and jaw views, respectively. These results show that the shape features used are informative, as they can differentiate the specimens by region and sex with accuracies of 80% and above. Finally, an application was developed that can be used by the scientific community.

    CONCLUSIONS: This automated system demonstrates the practicability of using computer-assisted systems to provide an interesting alternative approach for quick and easy identification of unknown species.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
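    A minimal sketch of computing the seven shape descriptors listed in the BACKGROUND (area, convex area, axis lengths, perimeter, equivalent diameter, extent) with scikit-image regionprops; the binary skull mask is a toy placeholder and the ANN stages are not shown.

        import numpy as np
        from skimage import measure

        # Toy binary mask standing in for a segmented skull view.
        mask = np.zeros((120, 120), dtype=int)
        mask[30:90, 20:100] = 1

        props = measure.regionprops(measure.label(mask))[0]

        # The seven shape descriptors named in the abstract, ready to feed an ANN.
        features = [props.area, props.convex_area, props.major_axis_length,
                    props.minor_axis_length, props.perimeter,
                    props.equivalent_diameter, props.extent]
        print(features)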
  18. Abdulhay E, Mohammed MA, Ibrahim DA, Arunkumar N, Venkatraman V
    J Med Syst, 2018 Feb 17;42(4):58.
    PMID: 29455440 DOI: 10.1007/s10916-018-0912-y
    Segmentation of blood leucocytes in medical images is viewed as a difficult process due to the variability of blood cells in shape and size and the difficulty of determining the location of the leucocytes. Manual analysis of blood tests to recognize leucocytes is tedious, time-consuming, and liable to error because of the varied morphological components of the cells. Segmentation of such imagery is considered difficult because of the complexity of the images, the lack of leucocyte models that fully capture the probable shapes of each structure and incorporate cell overlap, the wide variety of blood cell shapes and sizes, the various elements influencing the outer appearance of the leucocytes, and low static-microscope image contrast compounded by noise. We suggest a strategy for segmenting blood leucocytes in static microscope images that combines three prevailing computer vision techniques: image enhancement, support vector machine (SVM) based segmentation, and filtering out of non-ROI (region of interest) objects on the basis of local binary patterns (LBP) and texture features. Each of these strategies is adapted to the blood leucocyte segmentation problem, so the resulting technique is considerably more robust than its individual components. Finally, the framework is assessed by comparing its output with manual segmentation. The findings of this study demonstrate a new approach that automatically segments and identifies blood leucocytes from static microscope images. Initially, the method uses a trainable segmentation procedure and a trained support vector machine classifier to accurately identify the position of the ROI. Non-ROI objects are then filtered out using histogram analysis so that the correct objects are retained. Finally, the leucocyte type is identified using texture features. The performance of the proposed approach was evaluated against manual examination by a gynaecologist using diverse scales. A total of 100 microscope images were used for the comparison, and the results showed that the proposed solution is a viable alternative to manual segmentation for accurately determining the ROI. Leucocyte identification using the ROI texture (LBP features) achieved an accuracy of about 95.3%, with 100% sensitivity and 91.66% specificity.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
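    A minimal sketch of the LBP texture-feature step used above to filter non-ROI objects and type the cells, using scikit-image; the patch, neighbourhood (P=8, R=1), and uniform-pattern histogram are assumptions (the SVM segmentation stage is not shown).

        import numpy as np
        from skimage.feature import local_binary_pattern

        # Stand-in for a grayscale crop around a segmented leucocyte.
        roi = (np.random.rand(64, 64) * 255).astype(np.uint8)

        # Uniform LBP with 8 neighbours at radius 1; the normalised histogram of
        # pattern codes is the texture feature vector.
        P, R = 8, 1
        lbp = local_binary_pattern(roi, P, R, method='uniform')
        hist, _ = np.histogram(lbp, bins=np.arange(0, P + 3), density=True)
        print(hist)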
  19. Al-Faris AQ, Ngah UK, Isa NA, Shuaib IL
    J Digit Imaging, 2014 Feb;27(1):133-44.
    PMID: 24100762 DOI: 10.1007/s10278-013-9640-5
    In this paper, an automatic computer-aided detection system for breast magnetic resonance imaging (MRI) tumour segmentation is presented. The study focuses on tumour segmentation using a modified automatic seeded region growing algorithm with a variation of the automated initial seed and threshold selection methodologies. Prior to that, several pre-processing steps are involved: breast skin is detected and removed using the integration of two algorithms, namely level set active contours and morphological thinning. The system is applied and tested on 40 test images from the RIDER breast MRI dataset, and the results are evaluated and presented in comparison with the ground truths of the dataset. The analysis of variance (ANOVA) test shows a statistically significant difference in performance compared with the previous segmentation approaches tested on the same dataset, with ANOVA p values for the evaluation measures below 0.05: relative overlap (p = 0.0002), misclassification rate (p = 0.045), true negative fraction (p = 0.0001) and sum of true volume fraction (p = 0.0001).
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
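    A minimal sketch of seeded region growing in the spirit of the algorithm described above; the synthetic MRI slice, the manually placed seed, and the intensity tolerance are assumptions (the automated seed/threshold selection and skin-removal steps are not reproduced).

        import numpy as np
        from collections import deque

        def seeded_region_growing(image, seed, tol):
            # Grow from the seed, absorbing 4-connected neighbours whose intensity
            # stays within `tol` of the running region mean.
            region = np.zeros(image.shape, dtype=bool)
            region[seed] = True
            total, count = float(image[seed]), 1
            queue = deque([seed])
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                            and not region[nr, nc]
                            and abs(image[nr, nc] - total / count) <= tol):
                        region[nr, nc] = True
                        total += float(image[nr, nc])
                        count += 1
                        queue.append((nr, nc))
            return region

        mri = np.zeros((100, 100)); mri[40:60, 40:60] = 0.9   # toy bright "tumour"
        mask = seeded_region_growing(mri, (50, 50), tol=0.2)
        print(int(mask.sum()), 'pixels in the grown region')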
  20. Abas FS, Shana'ah A, Christian B, Hasserjian R, Louissaint A, Pennell M, et al.
    Cytometry A, 2017 Jun;91(6):609-621.
    PMID: 28110507 DOI: 10.1002/cyto.a.23049
    The advent of high-resolution digital scans of pathology slides has allowed the development of computer-based image analysis algorithms that may help pathologists in quantifying IHC stains. While very promising, these methods require further refinement before they are implemented in routine clinical settings. It is particularly critical to evaluate algorithm performance in a setting similar to current clinical practice. In this article, we present a pilot study that evaluates the use of a computerized cell quantification method in the clinical estimation of CD3-positive (CD3+) T cells in follicular lymphoma (FL). Our goal is to demonstrate the degree to which computerized quantification is comparable to the practice of estimation by a panel of expert pathologists. The computerized quantification method uses entropy-based histogram thresholding to separate brown (CD3+) and blue (CD3-) regions after a color space transformation. A panel of four board-certified hematopathologists evaluated a database of 20 FL images using two different reading methods: visual estimation and manual marking of each CD3+ cell in the images. These image data and the readings provided a reference standard and the range of variability among readers. Sensitivity and specificity measures of the computer's segmentation of CD3+ and CD3- T cells were recorded. Across the four pathologists, mean sensitivity and specificity were 90.97% and 88.38%, respectively. The computerized quantification method agrees more with the manual cell marking than with the visual estimations. Statistical comparison between the computerized quantification method and the pathologist readings demonstrated good agreement, with correlation coefficient values of 0.81 and 0.96 in terms of Lin's concordance correlation and Spearman's correlation coefficient, respectively. These values are higher than most of those calculated among the pathologists. In the future, the computerized quantification method may be used to investigate the relationship between the overall architectural pattern (i.e., interfollicular vs. follicular) and outcome measures (e.g., overall survival and time to treatment). © 2017 International Society for Advancement of Cytometry.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
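    A minimal sketch of the colour transformation and entropy-based thresholding described above, using scikit-image: the DAB (brown, CD3+) channel is unmixed and thresholded with Yen's entropy-related method, used here only as an illustrative stand-in for the paper's exact thresholding; the random tile is a placeholder.

        import numpy as np
        from skimage.color import rgb2hed
        from skimage.filters import threshold_yen

        rgb = np.random.rand(256, 256, 3)   # stand-in for a CD3-stained FL image tile

        # Colour-space transformation (Haematoxylin-Eosin-DAB unmixing), then an
        # entropy-based histogram threshold on the DAB channel to flag CD3+ regions.
        dab = rgb2hed(rgb)[:, :, 2]
        cd3_positive = dab > threshold_yen(dab)
        print('CD3+ area fraction:', cd3_positive.mean())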