METHODS: We take advantage of the improved contrast seen on magnetic resonance (MR) images of patients with acute and early subacute spontaneous intracerebral haemorrhage (SICH) and introduce an automated algorithm for haematoma and oedema segmentation from these images. To our knowledge, no previously proposed segmentation technique for SICH operates on MR images directly. The method combines shape and intensity analysis for haematoma segmentation with voxel-wise dynamic thresholding of hyper-intensities for oedema segmentation.
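The voxel-wise dynamic thresholding idea can be illustrated with a minimal sketch: each voxel is flagged as hyper-intense when it exceeds the mean of its local neighbourhood by some multiple of the local standard deviation. The neighbourhood `radius` and multiplier `k` below are assumed illustrative parameters, not the values used in the published method.

```python
import numpy as np

def dynamic_threshold(img, radius=1, k=1.5):
    """Flag a voxel as hyper-intense if it exceeds its local mean
    by k local standard deviations.

    Brute-force neighbourhood scan for clarity; `radius` and `k` are
    illustrative assumptions, not the published parameters.
    """
    img = np.asarray(img, dtype=float)
    mask = np.zeros(img.shape, dtype=bool)
    for idx in np.ndindex(img.shape):
        # Clip the neighbourhood window at the image border.
        window = tuple(slice(max(i - radius, 0), i + radius + 1) for i in idx)
        nb = img[window]
        mask[idx] = img[idx] > nb.mean() + k * nb.std()
    return mask
```

On a small synthetic image with a single bright voxel, only that voxel is flagged, since its neighbours pull the local mean and deviation well below its own intensity.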
RESULTS: Using Dice scores to measure segmentation overlap between the labellings produced by the proposed algorithm and those of five expert raters on 18 patients, we observe that our technique achieves overlap scores very similar to those obtained by pairwise comparison of the expert raters. A further comparison between the proposed method and a state-of-the-art deep-learning segmentation on a separate set of 32 manually annotated subjects confirms that the proposed method achieves comparable results with a modest computational burden, in a completely training-free and unsupervised way.
CONCLUSION: Our technique offers a computationally light and effective way to automatically delineate haematoma and oedema extent directly from MR images. With the increasing clinical use of MR imaging after intracerebral haemorrhage, the technique has the potential to inform clinical practice in the future.
METHODS: In this work, we present a novel approach to glaucoma diagnosis based on bit-plane slicing (BPS) and local binary patterns (LBP). First, our approach separates the red (R), green (G), and blue (B) channels of the input color fundus image and splits each channel into its bit planes. Second, we extract LBP-based statistical features from each bit plane of the individual channels. Third, the features from the individual channels are fed separately to three support vector machines (SVMs) for classification. Finally, the decisions of the individual SVMs are fused at the decision level to classify the input fundus image as normal or glaucoma.
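The first, second, and final steps of this pipeline can be sketched as follows. Bit-plane slicing extracts each binary bit of an 8-bit channel; a basic 8-neighbour LBP code compares each interior pixel to its neighbours; and decision-level fusion reduces to a majority vote over the per-channel classifier outputs. The neighbour ordering and the simple majority rule are illustrative assumptions; they stand in for whatever LBP variant and fusion rule the full method uses.

```python
import numpy as np

def bit_planes(channel):
    """Split an 8-bit channel into its 8 binary bit planes (LSB first)."""
    channel = np.asarray(channel, dtype=np.uint8)
    return [(channel >> b) & 1 for b in range(8)]

def lbp_image(img):
    """Basic 8-neighbour LBP code for each interior pixel.

    Each neighbour >= centre contributes one bit; the clockwise
    ordering of neighbours here is an illustrative convention.
    """
    img = np.asarray(img, dtype=int)
    centre = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(centre.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= centre).astype(int) << bit
    return code

def majority_vote(decisions):
    """Fuse binary decisions (e.g. from the three per-channel SVMs)."""
    return int(sum(decisions) >= 2)
```

Histograms of the LBP codes per bit plane would then form the statistical feature vectors fed to the per-channel SVMs.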
RESULTS: Our experimental results suggest that the proposed approach is effective in discriminating normal and glaucoma cases, achieving an accuracy of 99.30% under 10-fold cross-validation.
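For readers unfamiliar with the evaluation protocol, 10-fold cross-validation partitions the data into 10 folds, trains on 9 and tests on the held-out fold, and averages accuracy over the 10 rounds. A minimal fold generator, with an assumed fixed shuffling seed for reproducibility:

```python
import numpy as np

def kfold_splits(n_samples, k=10, seed=0):
    """Yield (train_idx, test_idx) index pairs for k-fold cross-validation.

    Shuffles once with a fixed seed (an illustrative choice), then
    holds out each fold in turn.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```

Each sample appears in exactly one test fold, so the reported accuracy reflects predictions on data unseen during that round's training.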
CONCLUSIONS: The developed system is ready to be tested on large and diverse databases and can assist ophthalmologists in their daily screening by confirming their diagnosis, thereby increasing diagnostic accuracy.