Celiac disease is a genetically determined disorder of the small intestine, arising from an immune response to ingested gluten-containing food. The resulting damage to the small intestinal mucosa impairs nutrient absorption and is characterized by diarrhea, abdominal pain, and a variety of extra-intestinal manifestations. Invasive and costly methods such as endoscopic biopsy are currently used to diagnose celiac disease, and detection by histopathologic analysis of biopsies can be challenging due to suboptimal sampling. Video capsule images were obtained from celiac patients and from controls for comparison and classification. This study uses DAISY descriptors to project two-dimensional images onto one-dimensional vectors. Shannon entropy is then used to extract features, after which a particle swarm optimization algorithm coupled with normalization is employed to select the 30 best features for classification. Statistical measures of this paradigm were tabulated. Using the 10-fold cross-validation technique, the accuracy, positive predictive value, sensitivity, and specificity obtained in distinguishing celiac from control video capsule images were 89.82%, 89.17%, 94.35%, and 83.20%, respectively. When manual methods are employed rather than the automated means described in this study, technical limitations and inconclusive results may hamper diagnosis. Our findings suggest that the computer-aided detection system presented herein can provide diagnostic information, and thus may offer clinicians an important tool for validating a diagnosis of celiac disease.
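The original implementation is not provided, but the described pipeline can be illustrated with a short sketch. The Python block below is a minimal, hypothetical example: skimage's daisy routine projects each grayscale frame onto a flattened one-dimensional descriptor vector, block-wise Shannon entropies form the feature vector, and a SelectKBest step stands in for the paper's particle swarm optimization while an SVM stands in for the unspecified classifier; all parameter values (block counts, DAISY settings, kernel) are illustrative assumptions rather than the authors' configuration.

```python
# Hypothetical sketch of the described celiac/control classification pipeline.
# SelectKBest substitutes for the particle swarm feature selection, and an SVM
# substitutes for the unspecified classifier; parameters are placeholders.
import numpy as np
from scipy.stats import entropy
from skimage.feature import daisy
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def entropy_features(gray_image, n_blocks=64, n_bins=32):
    """Project a 2-D image onto a 1-D vector via DAISY descriptors,
    then compute block-wise Shannon entropies as features."""
    # Dense DAISY descriptors, flattened into a single 1-D vector.
    desc = daisy(gray_image, step=8, radius=15, rings=2,
                 histograms=6, orientations=8)
    vec = desc.ravel()
    # Split the vector into blocks and take the Shannon entropy of each
    # block's histogram, yielding one feature per block.
    feats = []
    for block in np.array_split(vec, n_blocks):
        hist, _ = np.histogram(block, bins=n_bins, density=True)
        feats.append(entropy(hist, base=2))
    return np.asarray(feats)


# X: feature matrix built by applying entropy_features to each frame,
# y: labels (1 = celiac, 0 = control). Normalize, keep the 30 best features,
# then classify with 10-fold cross-validation.
# X = np.vstack([entropy_features(img) for img in frames]); y = labels
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=30),
                    SVC(kernel="rbf"))
# scores = cross_val_score(clf, X, y, cv=10)
# print(scores.mean())
```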
Wireless capsule endoscopy (WCE) is an effective technology for diagnosing various lesions and abnormalities of the gastrointestinal (GI) tract. Because the capsule takes a long time to pass through the GI tract, the resulting WCE data stream contains a large number of frames, making it tedious for clinical experts to visually check every frame of a patient's complete video footage. In this paper, an automated technique for bleeding detection based on color and texture features is proposed. The approach combines color information, which is essential for the initial detection of frames with bleeding, with texture, which extracts additional information from the lesions captured in the frames and allows the system to distinguish borderline cases more finely. The detection algorithm uses machine-learning-based classification; it can efficiently distinguish between bleeding and non-bleeding frames and perform pixel-level segmentation of bleeding areas in WCE frames. Experimental studies demonstrate that the proposed method achieves detection accuracy at least as good as state-of-the-art approaches. In this research, we conducted a broad comparison of state-of-the-art features and classification methods, allowing the construction of an efficient and flexible WCE video processing system.
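The abstract does not specify the exact color and texture descriptors, so the sketch below is only an assumption-laden illustration of such a frame-level classifier: HSV channel statistics serve as the color cue, a uniform local-binary-pattern histogram serves as the texture cue, and an SVM with cross-validation stands in for the unspecified machine-learning classifier. Pixel-level segmentation of bleeding regions is not shown.

```python
# Hypothetical sketch of a color-plus-texture bleeding-frame classifier for WCE.
# HSV statistics and LBP histograms are illustrative stand-ins for the paper's
# unspecified color and texture features.
import numpy as np
from skimage.color import rgb2gray, rgb2hsv
from skimage.feature import local_binary_pattern
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC


def frame_features(rgb_frame, lbp_points=8, lbp_radius=1):
    """Concatenate per-channel color statistics with an LBP texture histogram."""
    # Color cue: channel-wise mean and standard deviation in HSV space.
    hsv = rgb2hsv(rgb_frame)
    color = np.concatenate([hsv.mean(axis=(0, 1)), hsv.std(axis=(0, 1))])
    # Texture cue: histogram of uniform local binary patterns on the gray image.
    gray = rgb2gray(rgb_frame)
    lbp = local_binary_pattern(gray, lbp_points, lbp_radius, method="uniform")
    n_bins = lbp_points + 2  # number of uniform LBP codes
    texture, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return np.concatenate([color, texture])


# X: features for every extracted WCE frame, y: 1 = bleeding, 0 = non-bleeding.
# X = np.vstack([frame_features(f) for f in frames]); y = labels
# scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=10)
```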