Displaying publications 21 - 40 of 135 in total

  1. Oyelade ON, Ezugwu AE, Almutairi MS, Saha AK, Abualigah L, Chiroma H
    Sci Rep, 2022 Apr 13;12(1):6166.
    PMID: 35418566 DOI: 10.1038/s41598-022-09929-9
    Deep learning (DL) models are becoming pervasive and applicable to computer vision, image processing, and synthesis problems. The performance of these models is often improved through architectural configuration, tweaks, the use of enormous training data, and skillful selection of hyperparameters. The application of deep learning models to medical image processing has yielded strong performance, with models capable of correctly detecting abnormalities in medical digital images and, in some cases, surpassing human physicians. However, advancing research in this domain largely relies on the availability of training datasets. These datasets are sometimes not publicly accessible, insufficient for training, and may also be characterized by a class imbalance among samples. As a result, inadequate training samples and difficulty in accessing new datasets for training deep learning models limit performance and research into new domains. Hence, generative adversarial networks (GANs) have been proposed to bridge this gap by synthesizing data similar to real sample images. However, we observed that benchmark datasets with regions of interest (ROIs) for characterizing abnormalities in breast cancer using digital mammography do not contain sufficient data with a fair distribution of all cases of abnormalities. For instance, architectural distortion and breast asymmetry in digital mammograms are sparsely distributed across most publicly available datasets. This paper proposes a GAN model, named ROImammoGAN, which synthesizes ROI-based digital mammograms. Our approach involves the design of a GAN model consisting of both a generator and a discriminator to learn a hierarchy of representations for abnormalities in digital mammograms. Attention is given to architectural distortion, asymmetry, mass, and microcalcification abnormalities so that training distinctively learns the features of each abnormality and generates sufficient images for each category. The proposed GAN model was applied to the MIAS dataset, and the performance evaluation yielded a competitive accuracy for the synthesized samples. In addition, the quality of the generated images was evaluated using PSNR, SSIM, FSIM, BRISQUE, PQUE, NIQUE, FID, and geometry scores. The results showed that ROImammoGAN performed competitively with state-of-the-art GANs. The outcome of this study is a model for augmenting CNN models with ROI-centric image samples for the characterization of abnormalities in breast images.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
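As a rough illustration of the generator/discriminator pairing described in the abstract above, the following is a minimal DCGAN-style sketch in PyTorch for 64x64 grayscale ROI patches; the layer sizes, latent dimension, and absence of a training loop are illustrative assumptions, not the published ROImammoGAN architecture.

```python
# Minimal GAN sketch: a generator that maps latent vectors to 64x64 grayscale ROI
# patches and a discriminator that scores patches as real/fake. Illustrative only.
import torch
import torch.nn as nn

latent_dim = 100

generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),   # 4x4
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),          # 8x8
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),            # 16x16
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),             # 32x32
    nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),                                      # 64x64 patch
)

discriminator = nn.Sequential(
    nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),                         # 32x32
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),  # 16x16
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True), # 8x8
    nn.Conv2d(256, 1, 8, 1, 0), nn.Sigmoid(),                                           # real/fake score
)

z = torch.randn(16, latent_dim, 1, 1)        # batch of latent vectors
fake_rois = generator(z)                      # 16 synthetic 64x64 ROI patches
scores = discriminator(fake_rois).view(-1)    # discriminator's probabilities for each patch
```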
  2. Shamim S, Awan MJ, Mohd Zain A, Naseem U, Mohammed MA, Garcia-Zapirain B
    J Healthc Eng, 2022;2022:6566982.
    PMID: 35422980 DOI: 10.1155/2022/6566982
    The coronavirus (COVID-19) pandemic has had a terrible impact on human lives globally, with far-reaching consequences for the health and well-being of many people around the world. Statistically, 305.9 million people worldwide tested positive for COVID-19, and 5.48 million people died due to COVID-19 up to 10 January 2022. CT scans can be used as an alternative to time-consuming RT-PCR testing for COVID-19. This research work proposes a segmentation approach for identifying ground glass opacity (GGO), the region of interest caused by coronavirus, in CT images, with a modified structure of the Unet model used to classify the region of interest at the pixel level. The problem with segmentation is that GGO often appears indistinguishable from healthy lung tissue in the initial stages of COVID-19; to cope with this, an increased set of weights in the contracting and expanding Unet paths and an improved convolutional module are added to establish the connection between the encoder and decoder pipeline. This gives the model a strong capacity to segment GGO in COVID-19 cases, and the proposed model is referred to as "convUnet." The experiment was performed on the Medseg1 dataset, and the addition of a set of weights at each layer of the model and the modification of the connecting module in Unet led to an improvement in overall segmentation results. The quantitative results for accuracy, recall, precision, Dice coefficient, F1-score, and IoU were 93.29%, 93.01%, 93.67%, 92.46%, 93.34%, and 86.96%, respectively, which are better than those obtained using Unet and other state-of-the-art models. Therefore, this segmentation approach proved to be more accurate, fast, and reliable in helping doctors to diagnose COVID-19 quickly and efficiently.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
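The sketch below shows, in PyTorch, the generic encoder-bridge-decoder structure with a skip connection that the abstract above builds on; the channel counts and the single "bridge" block standing in for the improved connecting module are assumptions for illustration, not the paper's convUnet.

```python
# Tiny U-Net-style model: one encoder block, a connecting (bridge) block, and one
# decoder block with a skip connection, ending in a per-pixel logit for GGO masks.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = conv_block(1, 32)
        self.down = nn.MaxPool2d(2)
        self.bridge = conv_block(32, 64)            # the connecting module sits between encoder and decoder
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = conv_block(64, 32)               # 64 = 32 upsampled + 32 from the skip connection
        self.head = nn.Conv2d(32, 1, 1)             # per-pixel segmentation logit

    def forward(self, x):
        e = self.enc(x)
        b = self.bridge(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))
        return self.head(d)

mask_logits = TinyUNet()(torch.randn(1, 1, 128, 128))   # -> shape (1, 1, 128, 128)
```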
  3. Kalafi EY, Anuar MK, Sakharkar MK, Dhillon SK
    Folia Biol (Praha), 2018;64(4):137-143.
    PMID: 30724159
    The process of manual species identification is a daunting task, so much so that the number of taxonomists is seen to be declining. In order to assist taxonomists, many methods and algorithms have been proposed to develop semi-automated and fully automated systems for species identification. While semi-automated tools require manual intervention by a domain expert, fully automated tools are assumed to be less reliable than manual or semi-automated identification tools. Hence, in this study we investigate the accuracy of fully automated and semi-automated models for species identification. We built fully automated and semi-automated species classification models using a monogenean species image dataset. Monogeneans are differentiated based on the morphological characteristics of their haptoral bars, anchors, marginal hooks and reproductive organs (male and female copulatory organs). Landmarks (in the semi-automated model) and shape morphometric features (in the fully automated model) were extracted from images of four monogenean species, which were then classified using a k-nearest neighbour classifier and an artificial neural network. In the semi-automated models, a classification accuracy of 96.67% was obtained using the k-nearest neighbour classifier and 97.5% using the artificial neural network, whereas in the fully automated models, a classification accuracy of 90% was obtained using the k-nearest neighbour classifier and 98.8% using the artificial neural network. As for cross-validation, semi-automated models performed at 91.2%, whereas fully automated models performed slightly higher at 93.75%.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
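A minimal sketch of the classification step described above, assuming the landmark or shape morphometric features have already been extracted into a feature matrix; the feature dimensions and species labels below are synthetic placeholders.

```python
# k-nearest-neighbour classification of species from morphometric feature vectors,
# with 5-fold cross-validation, using scikit-learn. Data are synthetic stand-ins.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((120, 10))                  # placeholder morphometric feature vectors
y = rng.integers(0, 4, 120)                # four species (placeholder labels)

knn = KNeighborsClassifier(n_neighbors=3)
print("cross-validated accuracy:", cross_val_score(knn, X, y, cv=5).mean())
```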
  4. Kande GB, Nalluri MR, Manikandan R, Cho J, Veerappampalayam Easwaramoorthy S
    Sci Rep, 2025 Jan 27;15(1):3438.
    PMID: 39870673 DOI: 10.1038/s41598-024-84255-w
    Precise segmentation of retinal vasculature is crucial for the early detection, diagnosis, and treatment of vision-threatening ailments. However, this task is challenging due to limited contextual information, variations in vessel thicknesses, the complexity of vessel structures, and the potential for confusion with lesions. In this paper, we introduce a novel approach, the MSMA Net model, which overcomes these challenges by replacing traditional convolution blocks and skip connections with an improved multi-scale squeeze and excitation block (MSSE Block) and bottleneck residual paths (B-Res paths) with spatial attention blocks (SAB). Our experimental findings on publicly available datasets of fundus images, specifically DRIVE, STARE, CHASE_DB1, HRF and DR HAGIS, consistently demonstrate that our approach outperforms other segmentation techniques, achieving higher accuracy, sensitivity, Dice score, and area under the receiver operating characteristic curve (AUC) in the segmentation of blood vessels with different thicknesses, even in situations involving diverse contextual information, the presence of coexisting lesions, and intricate vessel morphologies.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
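For readers unfamiliar with squeeze-and-excitation, the following is a minimal standard SE (channel attention) block in PyTorch; it illustrates the mechanism the MSSE Block builds on but is not the authors' multi-scale variant.

```python
# Standard squeeze-and-excitation block: global-average "squeeze" per channel,
# a small bottleneck MLP "excitation", and channel-wise reweighting of the input.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                    # squeeze: one value per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )                                                       # excitation: per-channel gates in (0, 1)

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                            # reweight feature maps channel-wise

out = SEBlock(64)(torch.randn(2, 64, 48, 48))                   # same shape as the input
```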
  5. Chen C, Mat Isa NA, Liu X
    Comput Biol Med, 2025 Feb;185:109507.
    PMID: 39631108 DOI: 10.1016/j.compbiomed.2024.109507
    This study systematically reviews CNN-based medical image classification methods. We surveyed 149 of the latest and most important papers published to date and conducted an in-depth analysis of the methods used therein. Based on the selected literature, we organized this review systematically. First, the development and evolution of CNN in the field of medical image classification are analyzed. Subsequently, we provide an in-depth overview of the main techniques of CNN applied to medical image classification, which is also the current research focus in this field, including data preprocessing, transfer learning, CNN architectures, and explainability, and their role in improving classification accuracy and efficiency. In addition, this overview summarizes the main public datasets for various diseases. Although CNN has great potential in medical image classification tasks and has achieved good results, clinical application is still difficult. Therefore, we conclude by discussing the main challenges faced by CNNs in medical image analysis and pointing out future research directions to address these challenges. This review will help researchers with their future studies and can promote the successful integration of deep learning into clinical practice and smart medical systems.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  6. Nordin H, Abdul Razak B, Mokhtar N, Jamaludin MF, Mehmood A
    PLoS One, 2025;20(1):e0316996.
    PMID: 39854603 DOI: 10.1371/journal.pone.0316996
    Mold defects pose a significant risk to the preservation of valuable fine art paintings, typically arising from fungal growth in humid environments. This paper presents a novel approach for detecting and categorizing mold defects in fine art paintings. The technique leverages a feature extraction method called Derivative Level Thresholding to pinpoint suspicious regions within an image. Subsequently, these regions are classified as mold defects using either morphological filtering or machine learning models such as Classification and Regression Trees (CART) and Linear Discriminant Analysis (LDA). The efficacy of these methods was evaluated using the Mold Features Dataset (MFD) and a separate set of test images. Results indicate that both methods improve the accuracy and precision of mold defect detection compared with using no classifier. However, the CART algorithm exhibits superior performance, increasing precision by 32% to 53% while maintaining high accuracy (96%) even with an imbalanced dataset. This innovative method has the potential to transform the approach to managing mold defects in fine art paintings by offering a more precise and efficient means of identification. By enabling early detection of mold defects, this method can play a crucial role in safeguarding these invaluable artworks for future generations.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
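A minimal sketch of the region classification stage with a CART decision tree, assuming each suspicious region has already been reduced to a feature vector; the feature matrix and labels below are synthetic stand-ins for the Mold Features Dataset.

```python
# Classify candidate regions as mold / not-mold with a CART decision tree; the
# class_weight option is one common way to cope with an imbalanced dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score

rng = np.random.default_rng(0)
X = rng.random((400, 5))                      # e.g. area, mean intensity, eccentricity, ... (placeholders)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)     # placeholder labels: 1 = mold defect

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
cart = DecisionTreeClassifier(criterion="gini", max_depth=5, class_weight="balanced")
cart.fit(X_tr, y_tr)

pred = cart.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), "precision:", precision_score(y_te, pred))
```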
  7. Pitafi S, Anwar T, Widia IDM, Sharif Z, Yimwadsana B
    PLoS One, 2024;19(12):e0313890.
    PMID: 39700114 DOI: 10.1371/journal.pone.0313890
    Perimeter Intrusion Detection Systems (PIDS) are crucial for protecting physical locations by detecting and responding to intrusions around their perimeters. Despite the availability of several PIDS, challenges remain in detection accuracy and precise activity classification. To address these challenges, a new machine learning model is developed. This model utilizes the pre-trained InceptionV3 network for feature extraction on a PID intrusion image dataset, followed by t-SNE for dimensionality reduction and subsequent clustering. When handling high-dimensional data, the existing Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm faces efficiency issues due to its complexity and varying densities. To overcome these limitations, this research enhances the traditional DBSCAN algorithm. In the enhanced DBSCAN, distances between minimal points are determined using an estimate of the epsilon values with the Manhattan distance formula. The effectiveness of the proposed model is evaluated by comparing it to state-of-the-art techniques found in the literature. The analysis reveals that the proposed model achieved a silhouette score of 0.86, while comparative techniques failed to produce similar results. This research contributes to societal security by improving location perimeter protection, and future researchers can utilize the developed model for human activity recognition from image datasets.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
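A minimal sketch of the clustering stage described above: t-SNE reduction followed by DBSCAN with the Manhattan distance and a silhouette score. The InceptionV3 feature extraction is assumed to have happened upstream, and the random feature matrix and eps value below are placeholders, not the paper's enhanced epsilon estimation.

```python
# Reduce pre-extracted CNN features with t-SNE, cluster with DBSCAN using the
# Manhattan distance, and report the silhouette coefficient on non-noise points.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

features = np.random.default_rng(1).random((300, 2048))    # placeholder for InceptionV3 features

embedded = TSNE(n_components=2, perplexity=30, random_state=1).fit_transform(features)
labels = DBSCAN(eps=3.0, min_samples=5, metric="manhattan").fit_predict(embedded)

mask = labels != -1                                         # DBSCAN marks noise points as -1
if mask.sum() and len(set(labels[mask])) > 1:
    print("silhouette:", silhouette_score(embedded[mask], labels[mask]))
```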
  8. Gangwar S, Devi R, Mat Isa NA
    Sci Rep, 2025 Feb 25;15(1):6693.
    PMID: 40000697 DOI: 10.1038/s41598-025-90876-6
    In medical imaging, low-contrast chest X-ray (CXR) images may fail to provide adequate information for accurate visual interpretation and disease diagnosis. Conventional contrast enhancement techniques, such as histogram equalization, often introduce intensity shifts and loss of fine details. This study presents an advanced Exposure Region-Based Modified Adaptive Histogram Equalization (ERBMAHE) method, further optimized using Particle Swarm Optimization (PSO) to enhance contrast, preserve brightness, and strengthen fine details. The ERBMAHE method segments CXR images into underexposed, well-exposed, and overexposed regions using the 9IEC algorithm. The well-exposed region is further divided, generating five histograms. Each region undergoes adaptive contrast enhancement via a novel weighted probability density function (PDF) and power-law transformation to ensure balanced enhancement across different exposure levels. The PSO algorithm is then employed to optimize power-law parameters, further refining contrast enhancement and illumination uniformity while maintaining the natural appearance of medical images. The PSO-ERBMAHE method was tested on 600 Kaggle CXR images and compared against six state-of-the-art techniques. It achieved a superior peak signal-to-noise ratio (PSNR = 31.10 dB), entropy (7.48), feature similarity index (FSIM = 0.98), tenengrad function (TEN = 0.19), quality-aware relative contrast measure (QRCM = 0.10), and contrast ratio, while maintaining a low absolute mean brightness error (AMBE = 0.10). The method effectively enhanced image contrast while preserving brightness and visual quality, as confirmed by medical expert evaluations. The proposed PSO-ERBMAHE method delivers high-quality contrast enhancement in medical imaging, ensuring better visibility of critical anatomical features. By strengthening fine details, maintaining mean brightness, and improving computational efficiency, this technique enhances disease examination and diagnosis, reducing misinterpretation risks and improving clinical decision-making.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
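A minimal sketch of a per-region power-law (gamma) adjustment of the kind ERBMAHE applies to under-, well-, and over-exposed regions; the exposure thresholds and gamma values are illustrative, the three-way split is a crude stand-in for the 9IEC segmentation, and the PSO search over the power-law parameters is not shown.

```python
# Split a grayscale image into exposure regions and apply a different power-law
# (gamma) transform to each region. Values are illustrative only.
import numpy as np

# placeholder chest X-ray in [0, 1]; replace with a real image scaled to [0, 1]
img = np.clip(np.random.default_rng(0).normal(0.5, 0.25, (512, 512)), 0.0, 1.0)

under = img < 0.33            # crude exposure split; the paper derives regions with 9IEC
over = img > 0.66
well = ~(under | over)

out = np.empty_like(img)
out[under] = np.power(img[under], 0.6)   # gamma < 1 brightens under-exposed pixels
out[well] = np.power(img[well], 1.0)     # leave well-exposed pixels roughly unchanged
out[over] = np.power(img[over], 1.5)     # gamma > 1 darkens over-exposed pixels
```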
  9. Deebani W, Aziz L, Aziz A, Basri WS, Alawad WM, Althubiti SA
    Sci Rep, 2025 Mar 03;15(1):7461.
    PMID: 40032913 DOI: 10.1038/s41598-025-90288-6
    Current breast cancer diagnosis methods often face limitations such as high cost, time consumption, and inter-observer variability. To address these challenges, this research proposes a novel deep learning framework that leverages generative adversarial networks (GANs) for data augmentation and transfer learning to enhance breast cancer classification using convolutional neural networks (CNNs). The framework uses a two-stage augmentation approach. First, a conditional Wasserstein GAN (cWGAN) generates synthetic breast cancer images based on clinical data, enhancing training stability and enabling targeted feature incorporation. Second, traditional augmentation techniques (e.g., rotation, flipping, cropping) are applied to both original and synthetic images. A multi-scale transfer learning technique is also employed, integrating three pre-trained CNNs (DenseNet-201, NasNetMobile, ResNet-101) with a multi-scale feature enrichment scheme, allowing the model to capture features at various scales. The framework was evaluated on the BreakHis dataset, achieving an accuracy of 99.2% for binary classification and 98.5% for multi-class classification, significantly outperforming existing methods. This framework offers a more efficient, cost-effective, and accurate approach for breast cancer diagnosis. Future work will focus on generalizing the framework to clinical datasets and integrating it into diagnostic workflows.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
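A minimal transfer-learning sketch with one of the three backbones named above (ResNet-101 via torchvision), freezing the pretrained features and replacing the classifier head for a binary benign/malignant output; the multi-scale feature enrichment and cWGAN-based augmentation from the paper are not reproduced.

```python
# Freeze an ImageNet-pretrained ResNet-101 and train only a new 2-class head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet101(weights="DEFAULT")   # downloads ImageNet weights (torchvision >= 0.13)
for p in backbone.parameters():
    p.requires_grad = False                      # freeze the pretrained feature extractor

backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # new trainable head: benign vs malignant

logits = backbone(torch.randn(4, 3, 224, 224))   # -> shape (4, 2)
```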
  10. Lai YL, Ang TF, Bhatti UA, Ku CS, Han Q, Por LY
    PLoS One, 2025;20(3):e0317306.
    PMID: 40063649 DOI: 10.1371/journal.pone.0317306
    Underwater vision is essential in numerous applications, such as marine resource surveying, autonomous navigation, object detection, and target monitoring. However, raw underwater images often suffer from significant color deviations due to light attenuation, presenting challenges for practical use. This systematic literature review examines the latest advancements in color correction methods for underwater image enhancement. The core objectives of the review are to identify and critically analyze existing approaches, highlighting their strengths, limitations, and areas for future research. A comprehensive search across eight scholarly databases resulted in the identification of 67 relevant studies published between 2010 and 2024. These studies introduce 13 distinct methods for enhancing underwater images, which can be categorized into three groups: physical models, non-physical models, and deep learning-based methods. Physical model-based methods aim to reverse the effects of underwater image degradation by simulating the physical processes of light attenuation and scattering. In contrast, non-physical model-based methods focus on manipulating pixel values without modeling these underlying degradation processes. Deep learning-based methods, by leveraging data-driven approaches, aim to learn mappings between degraded and enhanced images through large datasets. However, challenges persist across all categories, including algorithmic limitations, data dependency, computational complexity, and performance variability across diverse underwater environments. This review consolidates the current knowledge, providing a taxonomy of methods while identifying critical research gaps. It emphasizes the need to improve adaptability across diverse underwater conditions and reduce computational complexity for real-time applications. The review findings serve as a guide for future research to overcome these challenges and advance the field of underwater image enhancement.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  11. Srinivasu PN, Kumari GLA, Narahari SC, Ahmed S, Alhumam A
    Sci Rep, 2025 Mar 21;15(1):9828.
    PMID: 40119100 DOI: 10.1038/s41598-025-93505-4
    Accurately identifying bone fractures from X-ray images is essential to prompt timely and appropriate medical treatment. This research explores the impact of hyperparameters and data augmentation techniques on the performance of the You Only Look Once (YOLO) V10 architecture for bone fracture detection. While YOLO architectures have been widely employed in object detection tasks, recognizing bone fractures, which can appear as subtle and complicated patterns in X-ray images, requires rigorous model tuning. Image augmentation was performed using the image unsharp masking approach and contrast-limited adaptive histogram equalization before training the model. The augmented images assist in feature identification and contribute to the overall performance of the model. The current study performed extensive experiments to analyze the influence of hyperparameters such as the number of epochs and the learning rate, along with an analysis of the data augmentation applied to the input data. The experimental outcomes show that particular hyperparameter combinations, when paired with targeted augmentation strategies, improve the accuracy and precision of fracture detection. The proposed model yielded an accuracy of 0.964 when evaluated on the augmented data. The classification precision on the augmented and raw images was 0.98 and 0.95, respectively. In comparison with other deep learning models, the empirical evaluation of the YOLO V10 model clearly demonstrates its superior performance over conventional approaches for bone fracture detection.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
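A minimal OpenCV sketch of the two preprocessing steps named above, CLAHE followed by unsharp masking; the clip limit, tile size, and sharpening weights are illustrative assumptions, not the paper's settings.

```python
# Contrast-limited adaptive histogram equalization followed by unsharp masking.
import numpy as np
import cv2

# placeholder X-ray; replace with cv2.imread("xray.png", cv2.IMREAD_GRAYSCALE)
img = (np.random.default_rng(0).random((512, 512)) * 255).astype(np.uint8)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalized = clahe.apply(img)                                   # local contrast enhancement

blurred = cv2.GaussianBlur(equalized, (0, 0), sigmaX=3)
sharpened = cv2.addWeighted(equalized, 1.5, blurred, -0.5, 0)  # unsharp mask: img + 0.5*(img - blur)
```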
  12. Khan MB, Lee XY, Nisar H, Ng CA, Yeap KH, Malik AS
    Adv Exp Med Biol, 2015;823:227-48.
    PMID: 25381111 DOI: 10.1007/978-3-319-10984-8_13
    The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). For these measurements, tests are conducted in the laboratory, which take many hours to give the final result. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge wastewater treatment. In the latter part, additional procedures such as z-stacking and image stitching, which have not previously been used in the context of activated sludge, are introduced for wastewater image preprocessing. Different preprocessing and segmentation techniques are proposed, along with a survey of the imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and their correlation with the monitoring and prediction of activated sludge are discussed. Hence, it is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
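A minimal scikit-image sketch of the segmentation-and-morphology idea discussed above: threshold a sludge image, label connected flocs, and read off simple shape descriptors; the z-stacking and stitching steps, and any specific parameters from the chapter, are not reproduced.

```python
# Otsu thresholding, connected-component labelling, and simple per-floc morphology.
import numpy as np
from skimage import filters, measure

img = np.random.default_rng(0).random((256, 256))         # placeholder microscope frame
binary = img > filters.threshold_otsu(img)                 # segment flocs from background

labels = measure.label(binary)
for region in measure.regionprops(labels):
    if region.area > 50:                                   # ignore tiny specks
        print(region.area, region.eccentricity, region.perimeter)
```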
  13. Jusman Y, Ng SC, Abu Osman NA
    ScientificWorldJournal, 2014;2014:810368.
    PMID: 24955419 DOI: 10.1155/2014/810368
    The advent of medical image digitization has led to image processing and computer-aided diagnosis systems in numerous clinical applications. These technologies could be used to automatically diagnose patients or serve as a second opinion to pathologists. This paper briefly reviews cervical screening techniques and their advantages and disadvantages. The digital data from the screening techniques are used as input to the computer screening system in place of expert analysis. The four stages of the computer system, namely enhancement, feature extraction, feature selection, and classification, are reviewed in detail. Computer systems based on cytology data and electromagnetic spectra data achieved better accuracy than those based on other data.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  14. Arigbabu OA, Ahmad SM, Adnan WA, Yussof S, Iranmanesh V, Malallah FL
    ScientificWorldJournal, 2014;2014:460973.
    PMID: 25121120 DOI: 10.1155/2014/460973
    Soft biometrics can be used as a prescreening filter, either by using a single trait or by combining several traits, to aid the performance of recognition systems in an unobtrusive way. In many practical visual surveillance scenarios, facial information is difficult to construct effectively due to several varying challenges. However, from a distance, the visual appearance of an object can be efficiently inferred, thereby providing the possibility of estimating body-related information. This paper presents an approach for estimating body-related soft biometrics; specifically, we propose a new approach based on body measurements and an artificial neural network for predicting the body weight of subjects, and we incorporate an existing single-view metrology technique for height estimation in videos with low frame rates. Our evaluation on 1120 frame sets of 80 subjects from a newly compiled dataset shows that the mentioned soft biometric information of human subjects can be adequately predicted from a set of frames.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
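A minimal sketch of predicting body weight from a few body measurements with a small feed-forward network (scikit-learn's MLPRegressor); the measurement set, network size, and synthetic data are illustrative assumptions, not the paper's configuration.

```python
# Regress body weight (kg) from a handful of body measurements with a small ANN.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 4)) * [2.0, 0.6, 0.5, 0.4]                 # e.g. height, shoulder, waist, hip (metres)
y = 40 + 30 * X[:, 0] + 25 * X[:, 2] + rng.normal(0, 2, 500)    # synthetic weights in kg

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print("R^2 on held-out samples:", ann.score(X_te, y_te))
```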
  15. Sim KS, Tan YY, Lai MA, Tso CP, Lim WK
    J Microsc, 2010 Apr 1;238(1):44-56.
    PMID: 20384837 DOI: 10.1111/j.1365-2818.2009.03328.x
    An exponential contrast stretching (ECS) technique is developed to reduce the charging effects on scanning electron microscope images. Compared to some of the conventional histogram equalization methods, such as bi-histogram equalization and recursive mean-separate histogram equalization, the proposed ECS method yields better image compensation. Diode sample chips with insulating and conductive surfaces are used as test samples to evaluate the efficiency of the developed algorithm. The algorithm is implemented in software with a frame grabber card, forming the front-end video capture element.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
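A minimal sketch of an exponential-style contrast stretch on a grayscale frame; the normalised exponential mapping below is a generic illustration and not necessarily the exact ECS transfer function defined in the paper.

```python
# Generic exponential contrast stretch mapping [0, 1] -> [0, 1].
import numpy as np

# placeholder SEM frame in [0, 1]; replace with a real grayscale image scaled to [0, 1]
img = np.clip(np.random.default_rng(0).normal(0.4, 0.2, (256, 256)), 0.0, 1.0)

alpha = 2.0                                                       # steepness of the stretch
stretched = (np.exp(alpha * img) - 1.0) / (np.exp(alpha) - 1.0)   # normalised exponential curve
out = (stretched * 255).astype(np.uint8)                          # back to 8-bit for display/saving
```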
  16. Noor NM, Rijal OM, Yunus A, Abu-Bakar SA
    Comput Med Imaging Graph, 2010 Mar;34(2):160-6.
    PMID: 19758785 DOI: 10.1016/j.compmedimag.2009.08.005
    This paper presents a statistical method for the detection of lobar pneumonia using digitized chest X-ray films. Each region of interest was represented by a vector of wavelet texture measures, which was then multiplied by the orthogonal matrix Q(2). The first two elements of the transformed vectors were shown to have a bivariate normal distribution. Misclassification probabilities were estimated using probability ellipsoids and discriminant functions. The results of this study recommend detecting pneumonia by constructing probability ellipsoids or discriminant functions using the maximum energy and maximum column sum energy texture measures, for which the misclassification probabilities were less than 0.15.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
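A minimal sketch of the wavelet-texture-plus-discriminant idea: compute wavelet sub-band energies for each region of interest and fit a linear discriminant. The specific maximum energy and maximum column sum energy measures and the Q(2) transform are not reproduced, and the data below are synthetic.

```python
# One-level 2-D wavelet decomposition, sub-band energy features, and LDA classification.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_energy_features(roi):
    cA, (cH, cV, cD) = pywt.dwt2(roi.astype(float), "db1")   # approximation + detail sub-bands
    return np.array([np.sum(band ** 2) for band in (cA, cH, cV, cD)])

rng = np.random.default_rng(0)
rois = rng.random((100, 64, 64))                 # placeholder regions of interest
labels = rng.integers(0, 2, 100)                 # placeholder labels: 1 = pneumonia, 0 = normal

X = np.array([wavelet_energy_features(r) for r in rois])
lda = LinearDiscriminantAnalysis().fit(X, labels)
print(lda.predict(X[:5]))
```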
  17. Reza AW, Eswaran C, Hati S
    J Med Syst, 2008 Apr;32(2):147-55.
    PMID: 18461818
    Blood vessel detection in retinal images is a fundamental step for feature extraction and interpretation of image content. This paper proposes a novel computational paradigm for the detection of blood vessels in fundus images based on RGB components and quadtree decomposition. The proposed algorithm employs median filtering, quadtree decomposition, post filtration of detected edges, and morphological reconstruction on retinal images. The preprocessing algorithm helps enhance the image, making it better suited to the subsequent analysis, and is a vital phase before decomposing the image. Quadtree decomposition provides information on the different types of blocks and the intensities of the pixels within the blocks. The post filtration and morphological reconstruction assist in filling the edges of the blood vessels and removing false alarms and unwanted objects from the background, while restoring the original shape of the connected vessels. The proposed method, which makes use of the three color components (RGB), is tested on various images from a publicly available database. The results are compared with those obtained by other known methods as well as with the results obtained by using the proposed method with the green color component only. It is shown that the proposed method can yield true positive fraction values as high as 0.77, which are comparable to or somewhat higher than the results obtained by other known methods. It is also shown that the effect of noise can be reduced if the proposed method is implemented using only the green color component.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
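A minimal sketch of intensity-driven quadtree decomposition, the data structure at the core of the method above: a block is recursively split into four quadrants until its intensity range falls below a threshold. The homogeneity criterion and threshold are illustrative, and the median filtering and morphological reconstruction stages are not shown.

```python
# Recursive quadtree decomposition of a grayscale image by intensity homogeneity.
import numpy as np

def quadtree(img, x, y, size, thresh, blocks):
    block = img[y:y + size, x:x + size]
    if size <= 2 or block.max() - block.min() <= thresh:
        blocks.append((x, y, size))            # homogeneous (or minimal) block: keep as a leaf
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quadtree(img, x + dx, y + dy, half, thresh, blocks)

img = (np.random.default_rng(0).random((128, 128)) * 255).astype(np.uint8)
leaves = []
quadtree(img, 0, 0, 128, thresh=40, blocks=leaves)
print(len(leaves), "leaf blocks")
```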
  18. Logeswaran R, Eswaran C
    J Med Syst, 2006 Apr;30(2):133-8.
    PMID: 16705998
    Many medical examinations involve the acquisition of a large series of slice images for 3D reconstruction of the organ of interest. With the paperless hospital concept and telemedicine, there is very heavy utilization of limited electronic storage and transmission bandwidth. This paper proposes model-based compression to reduce the load on such resources, as well as to aid diagnosis through the 3D reconstruction of the structures of interest, for images acquired by various modalities, such as MRI, ultrasound, CT and PET, and stored in the DICOM file format. An example implementation for the biliary tract in MRCP images is illustrated in the paper. Significant compression gains may be derived from the proposed method, and a suitable mixture of the models and raw images would enhance patient medical history archives, as the models may be stored in the DICOM file format used in most medical archiving systems.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  19. Abu A, Susan LL, Sidhu AS, Dhillon SK
    BMC Bioinformatics, 2013;14:48.
    PMID: 23398696 DOI: 10.1186/1471-2105-14-48
    Digitised monogenean images are usually stored in file system directories in an unstructured manner. In this paper we propose a semantic representation of these images in the form of a Monogenean Haptoral Bar Image (MHBI) ontology, which are annotated with taxonomic classification, diagnostic hard part and image properties. The data we used are basically of the monogenean species found in fish, thus we built a simple Fish ontology to demonstrate how the host (fish) ontology can be linked to the MHBI ontology. This will enable linking of information from the monogenean ontology to the host species found in the fish ontology without changing the underlying schema for either of the ontologies.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  20. Safdar A, Khan MA, Shah JH, Sharif M, Saba T, Rehman A, et al.
    Microsc Res Tech, 2019 Sep;82(9):1542-1556.
    PMID: 31209970 DOI: 10.1002/jemt.23320
    Plant diseases are responsible for economic losses in agricultural countries. The manual process of plant disease diagnosis has been a key challenge over the last decade; therefore, researchers in this area have introduced automated systems. In this research work, an automated system is proposed for citrus fruit disease recognition using computer vision techniques. The proposed method incorporates five fundamental steps: preprocessing, disease segmentation, feature extraction and reduction, fusion, and classification. Noise is removed, followed by a contrast stretching procedure, in the very first phase. Later, the watershed method is applied to extract the infected regions. Shape, texture, and color features are subsequently computed from these infected regions. In the fourth step, the reduced features are fused using a serial-based approach, followed by a final step of classification using a multiclass support vector machine. For dimensionality reduction, principal component analysis is utilized, which is a statistical procedure that applies an orthogonal transformation to a set of observations. Three different image data sets (Citrus Image Gallery, Plant Village, and self-collected) are combined in this research, achieving a classification accuracy of 95.5%. The results show that our proposed method outperforms several existing methods with greater precision and accuracy.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
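A minimal sketch of the final reduction-and-classification steps described above, PCA followed by a multiclass SVM; the fused shape/texture/color features are stood in by a synthetic matrix, and the PCA dimensionality and SVM parameters are illustrative.

```python
# Dimensionality reduction with PCA followed by a multiclass SVM classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((600, 120))                  # placeholder fused feature vectors
y = rng.integers(0, 4, size=600)            # placeholder disease labels (4 classes)

clf = make_pipeline(StandardScaler(), PCA(n_components=30), SVC(kernel="rbf", C=10.0))
clf.fit(X, y)                               # SVC handles the multiclass case internally
print("training accuracy:", clf.score(X, y))
```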