Affiliations 

  • 1 Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
  • 2 School of Computer Science (SCS), Taylor's University, Subang Jaya 47500, Malaysia
Diagnostics (Basel), 2023 Jul 05;13(13).
PMID: 37443674 DOI: 10.3390/diagnostics13132280

Abstract

Cell counting in fluorescence microscopy is an essential task in biomedical research for analyzing cellular dynamics and studying disease progression. Traditional methods for cell counting involve manual counting or threshold-based segmentation, which are time-consuming and prone to human error. Recently, deep learning-based object detection methods have shown promising results in automating cell counting tasks. However, existing methods mainly focus on segmentation-based techniques that require a large amount of labeled data and extensive computational resources. In this paper, we propose a novel approach to detect and count cells of multiple sizes in a fluorescence image slide using You Only Look Once version 5 (YOLOv5) with a feature pyramid network (FPN). Our proposed method can efficiently detect multiple cells of different sizes in a single image, eliminating the need for pixel-level segmentation. We show that our method outperforms state-of-the-art segmentation-based approaches in terms of accuracy and computational efficiency. Experimental results on publicly available datasets demonstrate that our proposed approach achieves an average precision of 0.8 and a processing time of 43.9 ms per image. Our approach addresses a research gap in the literature by providing a more efficient and accurate method for cell counting in fluorescence microscopy that requires fewer computational resources and less labeled data.
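
The abstract describes detection-based counting with YOLOv5 rather than pixel-level segmentation. The sketch below is not the authors' code; it is a minimal illustration, assuming a YOLOv5 model with custom weights (the file name "cell_yolov5.pt", the image path, and the confidence threshold are hypothetical placeholders) loaded through the standard ultralytics/yolov5 torch.hub interface, with the cell count taken as the number of detected bounding boxes.

```python
# Minimal sketch (not the authors' implementation): counting cells in a
# fluorescence image with a YOLOv5 detector loaded via torch.hub.
# "cell_yolov5.pt" and "fluorescence_slide.png" are illustrative placeholders;
# in practice the model would be trained on annotated fluorescence cell images.
import torch

# Load a YOLOv5 model with custom weights (assumed trained for cell detection).
model = torch.hub.load("ultralytics/yolov5", "custom", path="cell_yolov5.pt")
model.conf = 0.25  # confidence threshold; illustrative value, not from the paper

# Run inference on a single fluorescence image slide.
results = model("fluorescence_slide.png")

# Each row of results.xyxy[0] is one detection: [x1, y1, x2, y2, confidence, class].
detections = results.xyxy[0]
cell_count = detections.shape[0]
print(f"Detected {cell_count} cells")

# Rough size breakdown via bounding-box area, since the method targets
# cells of multiple sizes within the same image.
if cell_count:
    areas = (detections[:, 2] - detections[:, 0]) * (detections[:, 3] - detections[:, 1])
    print("Median bounding-box area (px^2):", areas.median().item())
```

Because counting reduces to enumerating detections above a confidence threshold, no per-pixel masks are needed, which is the efficiency argument the abstract makes against segmentation-based pipelines.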

* Title and MeSH Headings from MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine.