METHODS: In this paper, we propose a novel approach to distinguishing colonic polyps by integrating several techniques: a modified deep residual network, principal component analysis, and AdaBoost ensemble learning. The architecture of a powerful deep residual network, ResNet-50, was modified to reduce computational time. To minimise noise, median filtering, image thresholding, contrast enhancement, and normalisation were applied to the endoscopic images used to train the classification model. Three publicly available datasets, i.e., Kvasir, ETIS-LaribPolypDB, and CVC-ClinicDB, which include images with and without polyps, were merged to train the model.
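The preprocessing chain described above could be sketched as follows. This is a minimal illustration, not the paper's implementation: the kernel size, threshold fraction, and contrast-stretching scheme are assumed values chosen for demonstration.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(image, kernel=3, thresh=0.05):
    """Illustrative preprocessing chain: median filtering, thresholding of
    near-black background pixels, contrast stretching, and [0, 1]
    normalisation. Kernel size and threshold are assumed, not the paper's."""
    img = image.astype(np.float64)
    img = median_filter(img, size=kernel)       # suppress impulse noise
    img[img < thresh * img.max()] = 0.0         # zero out faint background
    lo, hi = img.min(), img.max()
    img = (img - lo) / (hi - lo + 1e-12)        # contrast-stretch to [0, 1]
    return img
```

A pipeline like this would be applied identically to every image before it reaches the classifier, so that the model sees inputs on a consistent scale.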
RESULTS: The proposed approach, trained on a combination of the three datasets, achieved a Matthews correlation coefficient (MCC) of 0.9819, with accuracy, sensitivity, precision, and specificity of 99.10%, 98.82%, 99.37%, and 99.38%, respectively.
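The MCC reported above is computed from confusion-matrix counts. A minimal implementation of the standard formula:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts.
    Ranges from -1 (total disagreement) to +1 (perfect prediction)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

Unlike accuracy, the MCC remains informative on imbalanced datasets because it accounts for all four confusion-matrix cells.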
CONCLUSIONS: These results show that our method can reliably and automatically classify endoscopic images and could be used to develop effective computer-aided diagnostic tools for early CRC detection.
METHODS: In this paper, we analyze four widespread deep learning models that output dense predictions for the segmentation of three retinal fluid types in the RETOUCH challenge data. We aim to demonstrate how a patch-based approach can improve the performance of each method. In addition, we evaluate the methods on the OPTIMA challenge dataset to assess how well the networks generalise. The analysis is divided into two parts: a comparison of the four approaches and an assessment of the significance of patching the images.
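The patch-based approach amounts to tiling each image into (possibly overlapping) patches before feeding them to the network. A sketch, with illustrative patch size and stride that are assumptions rather than the study's settings:

```python
import numpy as np

def extract_patches(image, patch=64, stride=32):
    """Tile a 2-D image into overlapping square patches.
    Patch size and stride are illustrative values only."""
    h, w = image.shape
    patches = [
        image[y:y + patch, x:x + patch]
        for y in range(0, h - patch + 1, stride)
        for x in range(0, w - patch + 1, stride)
    ]
    return np.stack(patches)   # shape: (n_patches, patch, patch)
```

With a stride smaller than the patch size, neighbouring patches overlap, which multiplies the number of training samples and exposes the network to fluid regions at varying positions within the input window.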
RESULTS: The networks trained on the RETOUCH dataset exceed human performance. The analysis further generalised the best network by fine-tuning it, achieving a mean Dice similarity coefficient (DSC) of 0.85. Of the three fluid types, intraretinal fluid (IRF) is recognised best, with the highest DSC of 0.922 achieved on the Spectralis dataset. The highest average DSC, 0.84, is achieved by the PaDeeplabv3+ model on the Cirrus dataset.
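The DSC used throughout these results measures the overlap between a predicted segmentation mask and the reference annotation. A minimal implementation for binary masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). The eps term avoids division by zero
    when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A DSC of 1.0 indicates perfect overlap; 0.0 indicates none. Per-fluid DSCs are typically averaged across volumes to obtain the mean scores reported above.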
CONCLUSIONS: The proposed method segments the three retinal fluids with high DSC values. Fine-tuning networks trained on the RETOUCH dataset makes them perform better, and converge faster, than training from scratch. Exposing the networks to a greater variety of shapes by extracting patches helped them segment the fluids better than using full images.
METHODS: Firstly, color fundus images from the publicly available DRIVE database were converted from RGB to grayscale. To enhance the contrast of the dark objects (blood vessels) against the background, the dot product of the grayscale image with itself was generated. To rectify the variation in contrast, we applied a 5 × 5 window filter to each pixel. Based on 5 regional features, 1 intensity feature, and 2 Hessian features per scale over 9 scales, we extracted a total of 24 features (5 + 1 + 2 × 9). A linear minimum squared error (LMSE) classifier was trained to classify each pixel as a vessel or non-vessel pixel.
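The enhancement steps above could be sketched as follows. This is an assumed reading of the description: the luminance weights, the element-wise square as the "dot product of the image with itself", and a 5 × 5 mean filter for the window operation are all illustrative choices, since the paper does not pin down the exact filter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance_vessels(rgb):
    """Sketch of the described preprocessing: RGB -> grayscale, element-wise
    square (darkens vessels relative to background), then a 5x5 mean filter
    to correct local contrast variation. Details are assumptions."""
    gray = rgb.astype(np.float64) @ np.array([0.299, 0.587, 0.114])
    gray /= gray.max() + 1e-12            # scale to [0, 1]
    squared = gray * gray                 # dark vessels become darker
    local_mean = uniform_filter(squared, size=5)
    return squared - local_mean           # subtract local background
```

Squaring a [0, 1]-scaled image pushes dark pixels towards zero faster than bright ones, which is why it increases vessel-to-background contrast.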
RESULTS: The DRIVE dataset provided 20 training and 20 test color fundus images. The proposed algorithm achieves a sensitivity of 72.05% with 94.79% accuracy.
CONCLUSIONS: Our proposed algorithm achieved higher accuracy (0.9206) in the peripapillary region, where microvascular manifestations of glaucoma, central retinal vein occlusion, and other ocular diseases are most apparent. This supports the proposed algorithm as a strong candidate for automated vessel segmentation.
METHODS: Twenty-seven participants (9 HCs, 9 patients with BD, and 9 patients with BPD) matched for age, gender, ethnicity, and education were recruited. Relative oxy-haemoglobin and deoxy-haemoglobin changes in the frontotemporal cortex were monitored with a 52-channel fNIRS system during a verbal fluency task (VFT). VFT performance, clinical history, and symptom severity were also recorded.
RESULTS: Compared to HCs, both patient groups had lower mean oxy-haemoglobin in the frontotemporal cortex during the VFT. Moreover, mean oxy-haemoglobin in the left inferior frontal region was markedly lower in patients with BPD than in patients with BD. Task performance, clinical history, and symptom severity were not associated with mean oxy-haemoglobin levels.
CONCLUSIONS: Prefrontal cortex activity is disrupted in patients with both BD and BPD, but the disruption is more extensive in BPD. These results provide further neurophysiological evidence for the separation of BPD from the bipolar spectrum. fNIRS could be a potential tool for assessing the frontal lobe function of patients who present with symptoms common to BD and BPD.
METHODS: This study was conducted on 19 healthy, non-smoking subjects (8 non-habitual; 11 habitual) aged between 21 and 30 years. Using laser speckle flowgraphy (LSFG), three areas of the optic nerve head were analyzed (vessel, tissue, and overall), each with ten pulse-waveform parameters: mean blur rate (MBR), fluctuation, skew, blowout score (BOS), blowout time (BOT), rising rate, falling rate, flow acceleration index (FAI), acceleration time index (ATI), and resistive index (RI). Two-way mixed ANOVA was used to determine the difference between every two groups where p
OBJECTIVE: Our study aimed to evaluate the discriminative and predictive ability of unimodal, bimodal, and multimodal approaches for MDD across a total of seven machine learning (ML) models: clinical, demographic, and functional near-infrared spectroscopy (fNIRS) unimodal models, the three pairwise combinations of these, and a combination of all three.
METHODS: We recruited 65 adults with MDD and 68 matched healthy controls, who provided sociodemographic and clinical information and completed the HAM-D questionnaire. They also underwent fNIRS measurement while performing a verbal fluency task. Using a nested cross-validation procedure, the classification performance of each ML model was evaluated based on the area under the receiver operating characteristic curve (AUC), balanced accuracy, sensitivity, and specificity.
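The evaluation metrics named above derive directly from confusion-matrix counts. A minimal sketch (not the study's evaluation code):

```python
def classification_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, and balanced accuracy from
    confusion-matrix counts. Balanced accuracy is the mean of
    sensitivity and specificity, so it is robust to class imbalance."""
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # true positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0   # true negative rate
    balanced_accuracy = (sensitivity + specificity) / 2.0
    return sensitivity, specificity, balanced_accuracy
```

In a nested cross-validation, these metrics are computed on each outer-fold test set (with hyperparameters tuned only on the inner folds) and then averaged, which is what yields the mean ± SD figures reported in the results.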
RESULTS: The multimodal ML model distinguished depressed patients from healthy controls with the highest balanced accuracy, 87.98 ± 8.84% (AUC = 0.92; 95% CI 0.84-0.99), compared with the uni- and bimodal models.
CONCLUSIONS: Our multimodal ML model demonstrated the highest diagnostic accuracy for MDD. This reinforces the biological and clinical heterogeneity of MDD and highlights the potential of this model to improve MDD diagnosis rates. Furthermore, this model is cost-effective and clinically applicable enough to be established as a robust diagnostic system for MDD based on patients' biosignatures.