Malaria is one of the most serious global health problems, causing widespread suffering and death in many parts of the world. Given the large number of cases diagnosed each year, early detection and accurate diagnosis, which facilitate prompt treatment, are essential to controlling malaria. Manual microscopic examination of blood slides has remained the gold standard for malaria diagnosis for centuries. However, the low contrast of malaria parasites and variable smear quality are factors that may affect the accuracy of interpretation by microbiologists. To address this problem, this paper investigates the performance of two proposed contrast enhancement techniques, namely modified global and modified linear contrast stretching, alongside conventional global and linear contrast stretching, applied to malaria images of the P. vivax species. The results show that the proposed modified global and modified linear contrast stretching techniques increase the contrast of the parasites and the infected red blood cells more effectively than conventional global and linear contrast stretching. The resultant images should therefore help microbiologists identify the various stages and species of malaria.
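The abstract does not reproduce the stretching formulas, but as context, conventional linear contrast stretching maps the observed intensity range of an image onto the full output range. A minimal NumPy sketch (function name and defaults are illustrative, not the authors' implementation; the modified techniques in the paper would differ):

```python
import numpy as np

def linear_contrast_stretch(img, out_min=0, out_max=255):
    """Conventional linear contrast stretching: map the input
    intensity range [min, max] linearly onto [out_min, out_max]."""
    img = img.astype(np.float64)
    in_min, in_max = img.min(), img.max()
    if in_max == in_min:  # flat image: nothing to stretch
        return np.full(img.shape, out_min, dtype=np.uint8)
    stretched = (img - in_min) / (in_max - in_min) * (out_max - out_min) + out_min
    return stretched.astype(np.uint8)
```

Applied to a low-contrast smear image, this spreads the narrow band of observed grey levels across the full 0–255 range, making parasites and infected cells easier to distinguish visually.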
This paper proposes an automated diagnostic system for cervical pre-cancerous stages. METHODS AND DATA SAMPLES: The proposed automated diagnostic system consists of two parts: automatic feature extraction and intelligent diagnosis. In the automatic feature extraction part, the system automatically extracts four cervical cell features (i.e. nucleus size, nucleus grey level, cytoplasm size and cytoplasm grey level). A new feature extraction algorithm called region-growing-based features extraction (RGBFE) is proposed to extract the cervical cell features. The extracted features are then fed as input data to the intelligent diagnosis part. A new artificial neural network (ANN) architecture called the hierarchical hybrid multilayered perceptron (H(2)MLP) network is proposed to classify the cervical pre-cancerous stage into three classes, namely normal, low-grade intra-epithelial squamous lesion (LSIL) and high-grade intra-epithelial squamous lesion (HSIL). We empirically assess the capability of the proposed diagnostic system using 550 reported cases (211 normal, 143 LSIL and 196 HSIL).
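The RGBFE algorithm itself is not detailed in the abstract. As a generic illustration only, the sketch below shows intensity-based region growing from a seed pixel and how two of the four listed features (region size and mean grey level) could be read off the grown mask; the seed, tolerance, connectivity, and all names here are assumptions, not the paper's method:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10):
    """Grow a 4-connected region from `seed`, accepting pixels whose
    grey level is within `tol` of the seed's. Returns a boolean mask."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = int(img[seed])
    q = deque([seed])
    mask[seed] = True
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(int(img[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

def region_features(img, mask):
    """Size (pixel count) and mean grey level of the grown region --
    analogous to two of the four cell features listed above."""
    return int(mask.sum()), float(img[mask].mean())
```

Growing once from a seed inside the nucleus and once from the surrounding cytoplasm would yield the four features (two sizes, two grey levels) that feed the classifier.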
Under the Nottingham Histopathology Grading (NHG) system, mitotic cell detection is one of the important criteria for determining the grade of breast carcinoma. Detecting mitotic cells is a challenging task due to the heterogeneous microenvironment of breast histopathology images. Recognition of complex and inconsistent objects in medical images can be improved by incorporating domain knowledge from the field of interest. In this study, the strategies of the histopathologist and a domain knowledge approach were used to guide the development of an image processing framework for automated mitotic cell detection in breast histopathology images. The detection framework starts with color normalization and hyperchromatic nucleus segmentation. A knowledge-assisted false positive reduction method is then proposed to eliminate false positives (i.e., non-mitotic cells); this stage aims to minimize the percentage of false positives and thus increase the F1-score. Next, feature extraction was performed, and the mitosis candidates were classified using a Support Vector Machine (SVM) classifier. For evaluation purposes, the knowledge-assisted detection framework was tested on two datasets: a custom dataset and a publicly available one (the MITOS dataset). The proposed knowledge-assisted false positive reduction method eliminated at least 87.1% of false positives in both datasets, yielding promising F1-scores. Experimental results demonstrate that the knowledge-assisted detection framework achieves promising F1-scores (custom dataset: 89.1%; MITOS dataset: 88.9%) and outperforms recent works.
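To see why removing false positives raises the F1-score, recall that F1 is the harmonic mean of precision and recall: cutting false positives raises precision while leaving recall untouched. A small sketch with illustrative counts (these numbers are invented for the example, not the paper's data):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative: removing 87.1% of 100 false positives while keeping
# all 90 true positives lifts the F1-score substantially.
before = f1_score(tp=90, fp=100, fn=10)  # ~0.62
after = f1_score(tp=90, fp=13, fn=10)    # ~0.89 (100 * (1 - 0.871) ≈ 13 FPs left)
```

This is exactly the effect the false positive reduction stage exploits: precision-driven gains in F1 without having to find additional true mitoses.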
Osteosarcoma is a common type of bone tumor, particularly prevalent in children and adolescents between the ages of 5 and 25 who are experiencing growth spurts during puberty. Manual delineation of tumor regions in MRI images is laborious and time-consuming, and the results may be subjective and difficult to replicate. Therefore, a convolutional neural network (CNN) was developed to automatically segment osteosarcoma cancerous cells in three types of MRI images. The study consisted of five main stages. First, 3692 DICOM-format MRI images were acquired from 46 patients, including T1-weighted (T1W), T2-weighted (T2W), and T1-weighted with injection of Gadolinium (T1W + Gd) images. Contrast stretching and a median filter were applied to enhance image intensity and remove noise, and the pre-processed images were reconstructed into NIfTI-format files for deep learning. The MRI images were then transformed to fit the CNN's input requirements. A 3D U-Net architecture with optimized parameters was proposed to build an automatic segmentation model capable of segmenting osteosarcoma from the MRI images. The 3D U-Net segmentation model achieved excellent results, with mean Dice similarity coefficients (DSC) of 83.75%, 85.45%, and 87.62% for the T1W, T2W, and T1W + Gd images, respectively. However, the study found that the proposed method had some limitations, including poorly defined borders, missing lesion portions, and other confounding factors. In summary, an automatic segmentation method based on a CNN has been developed to address the challenge of manually segmenting osteosarcoma cancerous cells in MRI images. While the proposed method showed promise, the study revealed limitations that must be addressed to improve its efficacy.
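The Dice similarity coefficient (DSC) used for evaluation measures the overlap between a predicted mask and the reference delineation: DSC = 2|A∩B| / (|A| + |B|). A minimal sketch for binary masks (the smoothing term `eps`, which guards against empty masks, is a common convention assumed here, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """DSC = 2|A ∩ B| / (|A| + |B|) between two binary masks.
    1.0 means perfect overlap, 0.0 means no overlap."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A mean DSC in the 84–88% range, as reported above, indicates that most of the segmented voxels coincide with the manual delineation, with the residual disagreement concentrated at lesion borders.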