Displaying all 3 publications

  1. Zailan NA, Azizan MM, Hasikin K, Mohd Khairuddin AS, Khairuddin U
    Front Public Health, 2022;10:907280.
    PMID: 36033781 DOI: 10.3389/fpubh.2022.907280
    Due to urbanization, solid waste pollution is an increasing concern for rivers, possibly threatening human health, ecological integrity, and ecosystem services. Riverine management in urban landscapes requires best management practices, since the river is a vital component of urban ecological civilization, and it is imperative to synchronize urban development with river protection. Thus, implementing proper and innovative measures is vital to controlling garbage pollution in rivers. A robot that cleans waste autonomously can be a good solution for managing river pollution efficiently. Identifying and obtaining precise positions of garbage are the most crucial tasks of the visual system of a cleaning robot. Computer vision has paved the way for computers to understand and interpret surrounding objects. Developing an accurate computer vision system is a vital step toward a robotic platform, since it is the front-end observation system preceding the manipulation and grasping systems. The scope of this work is to acquire visual information about floating garbage on the river, which is vital for building a river-cleaning robotic platform. In this paper, an automated detection system based on an improved You Only Look Once (YOLO) model is developed to detect floating garbage under various conditions, such as fluctuating illumination, complex backgrounds, and occlusion. The proposed object detection model has been shown to converge rapidly, which shortens training time, and to improve detection accuracy by strengthening the non-linear feature extraction process. The results showed that the proposed model achieved a mean average precision (mAP) of 89%. Hence, the proposed model is considered feasible for identifying five classes of garbage: plastic bottles, aluminum cans, plastic bags, styrofoam, and plastic containers.
  2. Sukumarran D, Hasikin K, Mohd Khairuddin AS, Ngui R, Wan Sulaiman WY, Vythilingam I, et al.
    Trop Biomed, 2023 Jun 01;40(2):208-219.
    PMID: 37650409 DOI: 10.47665/tb.40.2.013
    Timely and rapid diagnosis is crucial for prompt and proper malaria treatment planning. Microscopic examination is the gold standard for malaria diagnosis, with hundreds of millions of blood films examined annually. However, this method's effectiveness depends on the skills of the trained microscopist. With the increasing interest in applying deep learning to malaria diagnosis, this study aims to determine the most suitable deep-learning object detection architecture and its applicability for detecting and distinguishing red blood cells as either malaria-infected or non-infected. The object detectors Yolov4, Faster R-CNN, and SSD 300 are trained on images infected by all five malaria parasite species across four stages of infection, with an 80/20 train-test partition. The performance of the object detectors is evaluated, and hyperparameters are optimized to select the best-performing model. The best-performing model was also assessed on an independent dataset to verify its ability to generalize to a different domain. The results show that, upon training, the Yolov4 model achieves a precision of 83%, a recall of 95%, an F1-score of 89%, and a mean average precision of 93.87% at a threshold of 0.5. In conclusion, Yolov4 can act as an alternative for detecting infected cells in whole thin blood smear images. Object detectors can complement a deep learning classification model in detecting infected cells, since they eliminate the need to train on single-cell images and have been demonstrated to be more feasible for a different target domain.
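    The reported F1-score above is consistent with the stated precision and recall; a quick sanity check (not code from the paper, just the standard harmonic-mean definition applied to the reported figures):

    ```python
    # Verify the reported F1-score from the stated precision (83%) and recall (95%).
    precision = 0.83
    recall = 0.95

    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    print(round(f1, 2))  # → 0.89, matching the reported F1-score
    ```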
  3. Liew YM, Ooi JH, Azman RR, Ganesan D, Zakaria MI, Mohd Khairuddin AS, et al.
    Phys Med, 2024 Jul 11;124:103400.
    PMID: 38996627 DOI: 10.1016/j.ejmp.2024.103400
    BACKGROUND/INTRODUCTION: Traumatic brain injury (TBI) remains a leading cause of disability and mortality, with skull fractures being a frequent and serious consequence. Accurate and rapid diagnosis of these fractures is crucial, yet current manual methods via cranial CT scans are time-consuming and prone to error.

    METHODS: This review paper focuses on the evolution of computer-aided diagnosis (CAD) systems for detecting skull fractures in TBI patients. It critically assesses advancements from feature-based algorithms to modern machine learning and deep learning techniques. We examine current approaches to data acquisition, the use of public datasets, algorithmic strategies, and performance metrics.

    RESULTS: The review highlights the potential of CAD systems to provide quick and reliable diagnostics, particularly outside regular clinical hours and in under-resourced settings. Our discussion encapsulates the challenges inherent in automated skull fracture assessment and suggests directions for future research to enhance diagnostic accuracy and patient care.

    CONCLUSION: With CAD systems, we stand on the cusp of significantly improving TBI management, underscoring the need for continued innovation in this field.
