Affiliations 

  • 1 Department of Computer Science and Software Engineering, Faculty of Science and Information Technology, Jadara University, P.O. Box 733, Irbid, Jordan
  • 2 Imaging and Computational Intelligence (ICI) Group, School of Electrical & Electronic Engineering, Universiti Sains Malaysia, Engineering Campus, 14300 Nibong Tebal, Penang, Malaysia
  • 3 Software Engineering Department, College of Computer Science & Engineering, Taibah University, P.O. Box 344, Madinah 30001, Saudi Arabia
  • 4 Department of Information Technology, Al-Huson University College, Al-Balqa Applied University, P.O. Box 50, Jordan; School of Computer Sciences, Universiti Sains Malaysia, 11800 Penang, Malaysia
Comput Math Methods Med, 2014;2014:181245.
PMID: 24707316 DOI: 10.1155/2014/181245

Abstract

To date, cancer of the uterine cervix remains a leading cause of cancer-related deaths in women worldwide. The current methods of screening for cervical cancer (i.e., the Pap smear and liquid-based cytology (LBC)) are time-consuming, depend on the skill of the cytopathologist, and are therefore rather subjective. This paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, to produce more accurate results. The developed system consists of two stages. In the first stage, an automatic feature extraction (AFE) algorithm is applied. In the second stage, a neuro-fuzzy model called the multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models arranged in parallel to produce a model with a multiple-input, multiple-output structure. The system is capable of classifying a cervical cell image into one of three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The experimental results show that the AFE algorithm is as effective as manual extraction by human experts, while the proposed MANFIS achieves good classification performance with 94.2% accuracy.
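The parallel arrangement described above can be illustrated with a minimal sketch: a bank of single-output first-order Sugeno ANFIS models, one per class, whose outputs are compared to pick a label. This is an assumption-laden toy (Gaussian membership functions, a grid of product-rule firings, fixed hand-set parameters, no training), not the authors' implementation; all class names, parameters, and shapes here are hypothetical.

```python
import itertools
import numpy as np

def gauss_mf(x, c, sigma):
    """Gaussian membership degree of x for a fuzzy set centered at c."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

class ANFIS:
    """Illustrative forward pass of a first-order Sugeno ANFIS.

    centers, sigmas : (n_inputs, n_mf) membership-function parameters
    consequents     : (n_mf**n_inputs, n_inputs + 1) linear rule consequents
    (Parameters are assumed fixed; real ANFIS learns them from data.)
    """
    def __init__(self, centers, sigmas, consequents):
        self.centers = centers
        self.sigmas = sigmas
        self.consequents = consequents

    def forward(self, x):
        n_inputs, n_mf = self.centers.shape
        # Layer 1: fuzzification of each input against each membership function
        mu = gauss_mf(x[:, None], self.centers, self.sigmas)   # (n_inputs, n_mf)
        # Layer 2: rule firing strengths via product T-norm over the MF grid
        w = np.array([
            np.prod([mu[i, j] for i, j in enumerate(combo)])
            for combo in itertools.product(range(n_mf), repeat=n_inputs)
        ])
        # Layer 3: normalize firing strengths
        wn = w / w.sum()
        # Layers 4-5: weighted sum of linear (first-order Sugeno) consequents
        f = self.consequents @ np.append(x, 1.0)               # (n_rules,)
        return float(wn @ f)

class MANFIS:
    """Bank of parallel ANFIS models: one output score per class (MIMO via MISO models)."""
    def __init__(self, models, labels):
        self.models = models
        self.labels = labels

    def classify(self, x):
        scores = [m.forward(x) for m in self.models]
        return self.labels[int(np.argmax(scores))]
```

The design point being illustrated: each class gets its own independently tuned ANFIS, and the parallel combination turns several single-output fuzzy models into one multi-output classifier.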

* Title and MeSH Headings from MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine.