Diabetic retinopathy has become an increasingly important cause of blindness. Nevertheless, vision loss can be prevented through early detection of diabetic retinopathy and monitoring with regular examinations. Automatic detection of retinal abnormalities commonly targets microaneurysms, hemorrhages, hard exudates, and cotton wool spots. However, a more severe retinal abnormality, neovascularization, in which new blood vessels grow due to an extensive lack of oxygen in the retinal capillaries, has received comparatively little research attention. This paper shows that various combinations of techniques, such as image normalization, a compactness classifier, morphology-based operators, Gaussian filtering, and thresholding, were used in developing neovascularization detection. A function matrix box was added in order to distinguish neovascularization from natural blood vessels. A region-based neovascularization classification was attempted to assess diagnostic accuracy. The developed method was tested on images from different database sources with varying quality and image resolution. The specificity and sensitivity were 89.4% and 63.9%, respectively. The proposed approach yields encouraging results for future development.
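The Gaussian filtering and thresholding stages mentioned above can be sketched in pure Python; the kernel size, sigma and threshold below are illustrative choices, not the parameters used in the paper:

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    # Build a normalized 2-D Gaussian kernel.
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

def convolve(img, kernel):
    # Same-size convolution with zero padding at the borders.
    h, w, c = len(img), len(img[0]), len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-c, c + 1):
                for dx in range(-c, c + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx] * kernel[dy + c][dx + c]
            out[y][x] = acc
    return out

def threshold(img, t):
    # Binary mask of candidate vessel pixels above intensity t.
    return [[1 if v > t else 0 for v in row] for row in img]
```

A bright pixel survives smoothing and thresholding, while the dark background is suppressed, which is the basic mechanism used to isolate vessel-like structures before further classification.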
This paper introduces an approach to perform segmentation of regions in computed tomography (CT) images that exhibit intra-region intensity variations and at the same time have intensity distributions similar to those of surrounding/adjacent regions. In this work, we adapt a feature computed from the wavelet transform, called wavelet energy, to represent the region information. The wavelet energy is embedded into a level set model to formulate the segmentation model, called the wavelet energy-guided level set-based active contour (WELSAC). The WELSAC model is evaluated using several synthetic and CT images focusing on tumour cases, which contain regions exhibiting intra-region intensity variations and high similarity in intensity distributions with the adjacent regions. The obtained results show that the proposed WELSAC model is able to segment the regions of interest in close correspondence with the manual delineation provided by medical experts, and provides a solution for tumour detection.
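The wavelet energy feature can be illustrated with a one-level 2-D Haar decomposition; the subband filters below are the standard Haar definitions, not necessarily the wavelet used in the paper:

```python
def haar_energy(img):
    # One-level 2-D Haar decomposition of an even-sized image;
    # returns the energy (sum of squared coefficients) per subband.
    h, w = len(img), len(img[0])
    bands = {"LL": 0.0, "LH": 0.0, "HL": 0.0, "HH": 0.0}
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            a, b = img[y][x], img[y][x + 1]
            c, d = img[y + 1][x], img[y + 1][x + 1]
            ll = (a + b + c + d) / 2.0   # approximation
            lh = (a + b - c - d) / 2.0   # horizontal detail
            hl = (a - b + c - d) / 2.0   # vertical detail
            hh = (a - b - c + d) / 2.0   # diagonal detail
            bands["LL"] += ll * ll
            bands["LH"] += lh * lh
            bands["HL"] += hl * hl
            bands["HH"] += hh * hh
    return bands
```

For a perfectly homogeneous region all the energy falls in the LL (approximation) band, while intra-region intensity variations show up as detail-band energy, which is the property such a feature exploits to guide the level set.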
Diabetic retinopathy (DR) is a sight-threatening complication of diabetes mellitus that affects the retina. In this article, a computerised DR grading system, which digitally analyses retinal fundus images, is used to measure the foveal avascular zone. A v-fold cross-validation method is applied to the FINDeRS database to evaluate the performance of the DR system. It is shown that the system achieved a sensitivity of >84%, a specificity of >97% and an accuracy of >95% for all DR stages. Given the high sensitivity (>95%), specificity (>97%) and accuracy (>98%) obtained for the No DR and severe NPDR/PDR stages, the computerised DR grading system is suitable for early detection of DR and for effective treatment of severe cases.
This paper focuses on the detection of retinal blood vessels, which plays a vital role in reducing proliferative diabetic retinopathy and in preventing the loss of visual capability. The proposed algorithm, which takes advantage of powerful preprocessing techniques such as contrast enhancement and thresholding, offers an automated segmentation procedure for retinal blood vessels. To evaluate the performance of the new algorithm, experiments are conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm performs better than other known algorithms in terms of accuracy. Furthermore, the proposed algorithm, being simple and easy to implement, is well suited for fast processing applications.
The increasing number of diabetic retinopathy (DR) cases worldwide demands the development of an automated decision support system for quick and cost-effective screening of DR. We present an automatic screening system for detecting the early stage of DR, known as non-proliferative diabetic retinopathy (NPDR). The proposed system involves processing of fundus images for extraction of abnormal signs, such as hard exudates, cotton wool spots, and large plaques of hard exudates. A rule-based classifier is used for classifying DR into two classes, namely, normal and abnormal. Abnormal NPDR is further classified into three levels, namely, mild, moderate, and severe. To evaluate the performance of the proposed decision support framework, the algorithms have been tested on images from the STARE database. The results obtained from this study show that the proposed system can detect bright lesions with an average accuracy of about 97%. The study further shows promising results in classifying the bright lesions correctly according to NPDR severity levels.
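A rule-based severity grading of the kind described can be sketched as follows; the lesion counts and thresholds here are hypothetical placeholders, not the rules used in the paper:

```python
def grade_npdr(n_hard_exudates, n_cotton_wool, has_large_plaque):
    # Hypothetical rule set mapping extracted bright-lesion signs
    # to the normal/mild/moderate/severe classes; real cut-offs
    # would be tuned clinically.
    if n_hard_exudates == 0 and n_cotton_wool == 0 and not has_large_plaque:
        return "normal"
    if has_large_plaque or n_hard_exudates > 20:
        return "severe"
    if n_hard_exudates > 5 or n_cotton_wool > 2:
        return "moderate"
    return "mild"
```

The point of such a classifier is transparency: each grade traces back to an explicit, auditable rule over the extracted signs rather than an opaque learned decision boundary.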
The motivation of this paper is to analyse the efficiency and reliability of our proposed algorithm for femur length (FL) measurement in the estimation of gestational age. The automated methods are divided into the following components: thresholding, segmentation and extraction. Each component is examined, and improvements are made with the objective of finding the optimal result for FL measurement. The methods are tested on a total of 200 digitized ultrasound images from our database collection. Overall, the study shows that the watershed-based segmentation method, combined with an enhanced femur extraction algorithm and a 12 x 12 block-averaging seed-point threshold method, performs as well as the expert measurements for every image tested and is superior to a previous method.
This paper presents an approach for breast cancer diagnosis in digital mammograms using the curvelet transform. After decomposing the mammogram images in the curvelet basis, a special set of the biggest coefficients is extracted as a feature vector. The Euclidean distance is then used to construct a supervised classifier. The experimental results gave a 98.59% classification accuracy rate, which indicates that the curvelet transform is a promising tool for the analysis and classification of digital mammograms.
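The feature extraction and minimum-distance classification steps can be sketched as follows; the coefficient list and the value of k are illustrative, and the curvelet decomposition itself is not reproduced here:

```python
import math

def feature_vector(coeffs, k=4):
    # Keep the k largest-magnitude transform coefficients as features.
    return sorted(coeffs, key=abs, reverse=True)[:k]

def classify(vec, prototypes):
    # Assign the label of the prototype feature vector with the
    # minimum Euclidean distance to vec.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda label: dist(vec, prototypes[label]))
```

Keeping only the largest-magnitude coefficients is a standard way to compress a sparse transform into a fixed-length feature vector before nearest-prototype matching.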
We present an efficient method for the fusion of medical images captured using different modalities that enhances the original images and combines the complementary information of the various modalities. The contourlet transform has mainly been employed as a fusion technique for images obtained from the same or different modalities. The limitation of the directional information of the dual-tree complex wavelet transform (DT-CWT) is rectified in the dual-tree complex contourlet transform (DT-CCT) by incorporating directional filter banks (DFB) into the DT-CWT. The DT-CCT produces images with improved contours and textures, while the property of shift invariance is retained. To improve the fused image quality, we propose a new fusion rule based on principal component analysis (PCA), which depends on the frequency components of the DT-CCT coefficients (contourlet domain). For the low-frequency components, the PCA method is adopted, and for the high-frequency components, the salient features are picked based on local energy. The final fused image is obtained by directly applying the inverse dual-tree complex contourlet transform (IDT-CCT) to the fused low- and high-frequency components. The experimental results showed that the proposed method produces a fused image with extensive features across modalities.
Skin colour is vital information in dermatological diagnosis, as it reflects the pathological condition beneath the skin. It is commonly used to indicate the extent of diseases such as psoriasis, which is indicated by the appearance of red plaques. Although there is no cure for psoriasis, there are many treatment modalities to help control the disease. To evaluate treatment efficacy, the current gold standard method, PASI (Psoriasis Area and Severity Index), is used to determine the severity of psoriasis lesions. Erythema (redness) is one parameter in PASI, and this condition is assessed visually, thus leading to subjective and inconsistent results. Current methods or instruments that assess erythema have limitations, such as being able to measure erythema well for low pigmented skin (fair skin) but not for highly pigmented skin (dark skin), or vice versa. In this work, we propose an objective assessment of psoriasis erythema for PASI scoring across different (low to highly pigmented) skin types. The colour of psoriasis lesions is initially obtained using a chromameter, giving the L*, a* and b* values of the CIELAB colour space. The L* value is used to classify skin into three categories: low, medium and highly pigmented skin. The lightness difference (ΔL*), hue difference (Δh_ab) and chroma difference (ΔC*_ab) between lesions and the surrounding normal skin are calculated and analysed. It is found that the erythema score of a lesion can be distinguished by its Δh_ab value within a particular skin type group. Reference lesions with different scores were selected by two dermatologists. Results based on 38 lesions from 22 patients with various levels of skin pigmentation show that the PASI erythema score for different skin types, i.e. low (fair skin) to highly pigmented (dark skin), can be determined objectively and consistently with dermatology scoring.
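The L*-based skin type classification and the hue difference Δh_ab can be sketched in CIELAB terms; the L* cut-offs below are hypothetical, not the study's actual boundaries:

```python
import math

def skin_type(L):
    # Hypothetical L* cut-offs for the three pigmentation groups;
    # the study's actual boundaries may differ.
    if L >= 60:
        return "low pigmentation"
    if L >= 45:
        return "medium pigmentation"
    return "high pigmentation"

def hue_angle(a, b):
    # CIELAB hue angle h_ab in degrees, from the a* and b* axes.
    return math.degrees(math.atan2(b, a)) % 360

def delta_hue(lesion, normal):
    # Hue difference between lesion and surrounding normal skin,
    # wrapped into [-180, 180] degrees; lesion/normal are (a*, b*) pairs.
    d = hue_angle(*lesion) - hue_angle(*normal)
    return (d + 180) % 360 - 180
```

Comparing the lesion's hue against the patient's own surrounding skin, within an L*-defined group, is what makes the score robust to the baseline pigmentation level.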
Chromatin morphologies in human breast cancer cells treated with an anti-cancer agent are analyzed at the early stage of programmed cell death, or apoptosis. The gray-level images of nuclear chromatin are modelled as random fields. We used a two-dimensional isotropic generalized Cauchy field to characterize the local self-similarity and global long-range dependence behaviors in the image spatial data. The generalized Cauchy field allows the fractal behavior, inferred from the fractal dimension, and the long-range dependence, inferred from the correlation exponent, to be described independently. We demonstrated the usefulness of locally self-similar random fields with long-range dependence for modelling chromatin condensation.
Automated computer analysis of magnetic resonance cholangiopancreatography (MRCP) images (a focused magnetic resonance imaging sequence for the pancreatobiliary region of the abdomen) for biliary diseases is a difficult problem because of the large inter- and intra-patient variations in the images, the varying acquisition settings, and the characteristics of the images, which have defeated most attempts to produce computer-aided diagnosis systems. This paper proposes a system capable of automated preliminary diagnosis of several diseases affecting the bile ducts in the liver, namely, dilation, stones, tumor, and cyst. The system first identifies the biliary ductal structure present in the MRCP images, and then proceeds to determine the presence or absence of the diseases. Tested on a database of 593 clinical images, the system, which uses visual-based features, has been shown to deliver good performance of 70-90% even in the presence of multiple diseases, and may be useful in aiding medical practitioners in routine MRCP examinations.
Image retrieval at the semantic level mostly depends on image annotation or image classification. Image annotation performance largely depends on three issues: (1) automatic image feature extraction; (2) semantic image concept modeling; (3) an algorithm for semantic image annotation. To address the first issue, multilevel features are extracted to construct the feature vector, which represents the contents of the image. To address the second issue, a domain-dependent concept hierarchy is constructed for the interpretation of image semantic concepts. To address the third issue, automatic multilevel code generation is proposed for image classification and multilevel image annotation. We make use of existing image annotations to address the second and third issues. Our experiments on a specific domain of X-ray images have given encouraging results.
Information about retinal vasculature morphology is used in grading the severity and progression of diabetic retinopathy. An image analysis system can help ophthalmologists make accurate and efficient diagnoses. This paper presents the development of an image processing algorithm for detecting and reconstructing the retinal vasculature. The detection of the vascular structure is achieved by image enhancement using contrast limited adaptive histogram equalization, followed by extraction of the vessels using the bottom-hat morphological transformation. For reconstruction of the complete retinal vasculature, a region growing technique based on the first-order Gaussian derivative is developed. The technique incorporates both gradient magnitude change and average intensity as the homogeneity criteria, which enables the process to adapt to intensity changes and to the intensity spread over the vasculature region. The reconstruction technique reduces the required number of seeds to near optimal for the region growing process. It also overcomes the poor performance of current seed-based methods, especially on images with low and inconsistent contrast, as normally seen in the vasculature regions of fundus images. Simulations of the algorithm on 20 test images from the DRIVE database show that it outperforms many other published methods, achieving an accuracy range (ability to detect both vessel and non-vessel pixels) of 0.91-0.95, a sensitivity range (ability to detect vessel pixels) of 0.91-0.95 and a specificity range (ability to detect non-vessel pixels) of 0.88-0.94.
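The region growing stage can be illustrated with a simplified homogeneity criterion; here a fixed tolerance around the running region mean stands in for the paper's Gaussian-derivative gradient criterion:

```python
from collections import deque

def region_grow(img, seed, max_diff=0.2):
    # Grow a region from a seed pixel, accepting 4-connected
    # neighbours whose intensity stays within max_diff of the
    # running region average (simplified homogeneity criterion).
    h, w = len(img), len(img[0])
    region = {seed}
    total = img[seed[0]][seed[1]]
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w and (yy, xx) not in region:
                mean = total / len(region)
                if abs(img[yy][xx] - mean) <= max_diff:
                    region.add((yy, xx))
                    total += img[yy][xx]
                    queue.append((yy, xx))
    return region
```

Using a running average rather than the seed intensity alone is what lets the grown region follow the gradual intensity drift along a vessel instead of stopping at the first brightness change.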
The data distribution system of this project is divided into two types: a Two-PC Image Reconstruction System and a Two-PC Velocity Measurement System. Each data distribution system is investigated to see whether the refresh rate of the corresponding measurement results can be greater than the rate obtained by using a single computer in the same measurement system for each application. Each system has its own flow control protocol for controlling how data is distributed within the system in order to speed up the data processing time; this can be achieved if the two PCs work in parallel. The challenge of this project is to define the data flow process and the critical timing during data packaging, transfer and extraction between the PCs. If a single computer is used as the data processing unit, a longer time is needed to produce a measurement result. Such a delayed, non-real-time result causes problems in a feedback control process when the system is applied in industrial plants. To increase the refresh rate of the measurement results, an investigation of a data distribution system is performed to replace the existing data processing unit.
This paper studies and evaluates the watermarking technique of Zain and Fauzi [1]. Recommendations are then made to enhance the technique, especially in the aspect of the recovery or reconstruction rate for medical images. A proposal is also made for a better distribution of the watermark to minimize the distortion of the Region of Interest (ROI). The final proposal enhances AW-TDR in three aspects. Firstly, the image quality in the ROI is improved, as the maximum change is only 2 bits in every 4 pixels, an embedding rate of 0.5 bits/pixel. Secondly, the recovery rate is better, since the recovery bits are located outside the region of interest; the disadvantage is that only manipulations done within the ROI will be detected. Thirdly, the quality of the reconstructed image is enhanced, since the average of 2 x 2 pixels is used to reconstruct the tampered image.
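The 2 x 2 block-average recovery described in the third aspect can be sketched as follows; integer grayscale values are assumed, and the embedding and extraction of the watermark bits themselves are omitted:

```python
def block_averages(img):
    # Average of each non-overlapping 2x2 block of an even-sized
    # image; these averages are the recovery information that would
    # be embedded outside the ROI.
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] +
              img[y + 1][x] + img[y + 1][x + 1]) // 4
             for x in range(0, w, 2)] for y in range(0, h, 2)]

def reconstruct(averages, h, w):
    # Rebuild a tampered area by replicating each stored block
    # average over its 2x2 block.
    return [[averages[y // 2][x // 2] for x in range(w)] for y in range(h)]
```

Storing one average per 2 x 2 block quarters the amount of recovery data at the cost of reconstructing the tampered area at half resolution.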
In fetal heart monitoring using Doppler ultrasound signals, the cardiac information is commonly extracted from non-directional signals. As a consequence, some of the cardiac events often cannot be observed clearly, which may lead to incorrect detection of the valve and wall motions. Here, directional signals were simulated to investigate their enhancement of cardiac events, and hence to provide clearer information regarding cardiac activities. First, fetal Doppler ultrasound signals were simulated, with signals encoding forward and reverse motion then obtained using a pilot frequency. The simulation results demonstrate that the model is able to produce realistic Doppler ultrasound signals, and that a pilot frequency can be used in the mixing process to produce directional signals that allow the simulated cardiac events to be distinguished clearly and correctly.
Hepatocellular carcinoma (HCC) ranks as the fifth most common cancer, with an increasing frequency worldwide. "Nuclear atypia", one of the critical features in the histological diagnosis of malignancy and grading of the tumour, is generally ascertained through eyeballing. A study was conducted at the Department of Pathology, University of Malaya Medical Centre to assess whether nuclear area (a surrogate measure for nuclear size) and its standard deviation (a surrogate measure for nuclear pleomorphism), when objectively measured via computer-linked image analysis, differ between (1) benign and malignant liver cells and (2) different grades of HCC. 4-microm thick H&E-stained sections of 52 histologically re-confirmed HCCs, 36 of which had benign, non-dysplastic surrounding liver, were analysed using the Leica Q550 CW system. Ten consecutive non-overlapping, non-mitotic and non-apoptotic nuclei of HCC and of the surrounding benign hepatocytes, respectively, were manually traced at 400x magnification on the computer monitor, and the nuclear area of each cell was computed in arbitrary units by the Leica QWIN software. A total of 360 benign hepatocytic nuclei, 240 low-grade HCC and 280 high-grade HCC nuclei were traced. The mean nuclear area of the benign hepatocytes (37.3) was significantly smaller (p < 0.05) than that of both low-grade (65.2) and high-grade HCC (80.0). In addition, the mean nuclear area of high-grade HCC was significantly larger (p < 0.05) than that of low-grade HCC. The SD of the nuclear areas was lowest in benign hepatocytes (9.3), intermediate in low-grade HCC (25.0) and highest in high-grade HCC (25.6). These findings indicate that computer-linked nuclear measurement may be a useful adjunct in differentiating benign from malignant hepatocytes, particularly in small biopsies of well-differentiated tumours, and in predicting survival after surgical resection and transplant.
Legendre moments are continuous moments; hence, when they are applied to discrete-space images, numerical approximation is involved and error occurs. This paper proposes a method to compute the exact values of the moments by mathematically integrating the Legendre polynomials over the corresponding intervals of the image pixels. Experimental results show that the values obtained match those calculated theoretically, and the image reconstructed from these moments has lower error than that of the conventional methods for the same order. Although the same set of exact Legendre moments can be obtained indirectly from the set of geometric moments, the computation time is much longer than with the proposed method.
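The exact integration of a Legendre polynomial over a pixel interval has a closed form, since (2n+1)P_n(x) is the derivative of P_{n+1}(x) - P_{n-1}(x); a sketch of the per-pixel integral (not the paper's full moment computation) is:

```python
def legendre(n, x):
    # Legendre polynomial P_n(x) via the three-term recurrence
    # k*P_k = (2k-1)*x*P_{k-1} - (k-1)*P_{k-2}.
    if n == 0:
        return 1.0
    if n == 1:
        return x
    p0, p1 = 1.0, x
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    return p1

def legendre_integral(n, a, b):
    # Exact integral of P_n over [a, b]: for n >= 1 the
    # antiderivative is (P_{n+1}(x) - P_{n-1}(x)) / (2n + 1).
    if n == 0:
        return b - a
    return ((legendre(n + 1, b) - legendre(n - 1, b))
            - (legendre(n + 1, a) - legendre(n - 1, a))) / (2 * n + 1)
```

Summing pixel intensities weighted by these exact interval integrals, instead of sampling the polynomial at pixel centres, is what removes the numerical approximation error the abstract refers to.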
Biometric analysis for identity verification is becoming a widespread reality. Such implementations necessitate large-scale capture and storage of biometric data, which raises serious issues in terms of data privacy and (if such data is compromised) identity theft. These problems stem from the essential permanence of biometric data, which (unlike secret passwords or physical tokens) cannot be refreshed or reissued if compromised. Our previously presented biometric-hash framework prescribes the integration of external (password or token-derived) randomness with user-specific biometrics, resulting in bitstring outputs with security characteristics (i.e., noninvertibility) comparable to cryptographic ciphers or hashes. The resultant BioHashes are hence cancellable, i.e., straightforwardly revoked and reissued (via refreshed password or reissued token) if compromised. BioHashing furthermore enhances recognition effectiveness, which is explained in this paper as arising from the Random Multispace Quantization (RMQ) of biometric and external random inputs.
Breast cancer is the most common form of cancer among women worldwide. Early detection of breast cancer can increase treatment options and patients' survivability. Mammography is the gold standard for breast imaging and cancer detection. However, due to some limitations of this modality, such as low sensitivity especially in dense breasts, other modalities like ultrasound and magnetic resonance imaging are often suggested to obtain additional information. Recently, computer-aided detection or diagnosis (CAD) systems have been developed to help radiologists increase diagnostic accuracy. Generally, a CAD system consists of four stages: (a) preprocessing, (b) segmentation of regions of interest, (c) feature extraction and selection, and finally (d) classification. This paper presents the approaches that have been applied to develop CAD systems for mammography and ultrasound images. The performance evaluation metrics of CAD systems are also reviewed.