METHODS: In our paper, we propose G-MBRMD, a real-time, lightweight liver segmentation model. Specifically, we employ a Transformer-based complex model as the teacher and a convolution-based lightweight model as the student. By introducing the proposed multi-head mapping and boundary reconstruction strategies during knowledge distillation, our method guides the student model to progressively acquire the complex teacher model's global and boundary processing capabilities, significantly enhancing the student model's segmentation performance without adding any computational complexity.
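The abstract does not spell out the mapping or boundary-reconstruction modules, so the following is only a minimal PyTorch sketch of teacher-student feature distillation under assumed shapes; `MappingHead`, `distillation_loss`, and the weight `alpha` are illustrative names, and the boundary-reconstruction term of the paper is omitted.

```python
# Minimal feature-distillation sketch (illustrative, not the G-MBRMD implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MappingHead(nn.Module):
    """Hypothetical 1x1-conv head that projects student features into the
    teacher's channel space so the two can be compared directly."""
    def __init__(self, student_ch: int, teacher_ch: int):
        super().__init__()
        self.proj = nn.Conv2d(student_ch, teacher_ch, kernel_size=1)

    def forward(self, feat):
        return self.proj(feat)

def distillation_loss(student_feat, teacher_feat, student_logits, labels,
                      head: MappingHead, alpha: float = 0.5):
    """Supervised segmentation loss plus a feature-alignment distillation term."""
    # Supervised loss on the student's predicted segmentation.
    seg_loss = F.cross_entropy(student_logits, labels)
    # Align projected student features with the (detached) teacher features.
    mapped = head(student_feat)
    if mapped.shape[-2:] != teacher_feat.shape[-2:]:
        mapped = F.interpolate(mapped, size=teacher_feat.shape[-2:],
                               mode="bilinear", align_corners=False)
    distill = F.mse_loss(mapped, teacher_feat.detach())
    return seg_loss + alpha * distill
```

Because a mapping head of this kind is used only during training, the deployed student network runs unchanged, which is consistent with the claim of no added inference-time complexity.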
RESULTS: On the LITS dataset, we conducted rigorous comparative and ablation experiments. Four key metrics were used for evaluation: model size, inference speed, Dice coefficient, and HD95. Compared with other methods, our proposed model achieved an average Dice coefficient of 90.14±16.78%, with a memory footprint of only 0.6 MB and an inference time of 0.095 s per image on a standard CPU. Importantly, this approach improved the average Dice coefficient of the baseline student model by 1.64% without increasing computational complexity.
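For reference, the two segmentation metrics reported above can be computed roughly as follows. This is a generic sketch (surface-based HD95 via SciPy distance transforms), not the paper's evaluation code, and implementations of HD95 vary in detail.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum() + eps))

def _surface(mask: np.ndarray) -> np.ndarray:
    """Boundary pixels: the mask minus its morphological erosion."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def hd95(pred: np.ndarray, gt: np.ndarray) -> float:
    """95th percentile of the symmetric surface distances (one common HD95 recipe)."""
    s_pred, s_gt = _surface(pred), _surface(gt)
    # Distance of every pixel to the nearest surface pixel of the other mask.
    d_to_gt = distance_transform_edt(~s_gt)
    d_to_pred = distance_transform_edt(~s_pred)
    dists = np.hstack([d_to_gt[s_pred], d_to_pred[s_gt]])
    return float(np.percentile(dists, 95))
```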
CONCLUSION: The results demonstrate that our method successfully unifies segmentation precision with a lightweight design, greatly enhancing its potential for widespread application in practical settings.
OBJECTIVES: The main objective of this research is to develop robust, high-performance human action recognition techniques. To reach this objective, a combination of local and holistic feature extraction methods is used, chosen by analyzing which features are most effective to extract, followed by simple, high-performance machine learning algorithms.
METHODS: This paper presents three robust action recognition techniques based on a series of image analysis methods to detect activities in different scenes. The general scheme architecture consists of shot boundary detection, shot frame rate re-sampling, and compact feature vector extraction. This process is achieved by emphasizing variations and extracting strong patterns in feature vectors before classification.
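The abstract does not specify the shot-boundary criterion, so the sketch below assumes a simple color-histogram difference between consecutive frames followed by uniform re-sampling of each shot; the functions `shot_boundaries` and `resample` and the threshold value are illustrative, not the paper's exact pipeline.

```python
import numpy as np

def shot_boundaries(frames, bins=32, threshold=0.4):
    """Flag a shot boundary where the color-histogram difference between
    consecutive frames exceeds a threshold (a common, simple criterion)."""
    boundaries = []
    prev_hist = None
    for i, frame in enumerate(frames):            # frames: iterable of HxWx3 uint8 arrays
        hist, _ = np.histogramdd(frame.reshape(-1, 3),
                                 bins=(bins, bins, bins),
                                 range=((0, 256),) * 3)
        hist = hist.ravel() / hist.sum()          # normalize to a probability vector
        if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
            boundaries.append(i)                  # L1 histogram distance jumped: new shot
        prev_hist = hist
    return boundaries

def resample(frames, target_len):
    """Uniformly re-sample a shot to a fixed number of frames before feature extraction."""
    idx = np.linspace(0, len(frames) - 1, target_len).astype(int)
    return [frames[i] for i in idx]
```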
RESULTS: The proposed schemes are tested on datasets with cluttered backgrounds, low- or high-resolution videos, different viewpoints, and different camera motion conditions, namely, the Hollywood-2, KTH, UCF11 (YouTube actions), and Weizmann datasets. Compared with other works on these four widely used datasets, the proposed schemes yield highly accurate video analysis results. The First, Second, and Third Schemes provide recognition accuracies of 57.8%, 73.6%, and 52.0% on Hollywood-2; 94.5%, 97.0%, and 59.3% on KTH; 94.5%, 95.6%, and 94.2% on UCF11; and 98.9%, 97.8%, and 100% on Weizmann, respectively.
CONCLUSION: Each of the proposed schemes provides high recognition accuracy compared to other state-of-the-art methods; the Second Scheme in particular gives excellent results, comparable to those of the benchmarked approaches.
METHODS: To overcome these limitations, our research introduces the ALLDet classifier, an innovative tool employing deep transfer learning for the automated analysis and categorization of ALL from White Blood Cell (WBC) nuclei images. Our investigation encompassed the evaluation of nine state-of-the-art pre-trained convolutional neural network (CNN) models, namely VGG16, VGG19, ResNet50, ResNet101, DenseNet121, DenseNet201, Xception, MobileNet, and EfficientNetB3. We augmented this approach by incorporating a sophisticated contour-based segmentation technique, derived from the Chan-Vese model, aimed at the meticulous segmentation of blast cell nuclei in blood smear images, thereby enhancing the accuracy of our analysis.
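A minimal sketch of this kind of pipeline is shown below, assuming a Keras EfficientNetB3 backbone with a frozen base and a scikit-image Chan-Vese segmentation step for the nuclei; the class count, input size, dropout rate, and training recipe are assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of transfer learning on segmented WBC nucleus images.
import numpy as np
import tensorflow as tf
from skimage.color import rgb2gray
from skimage.segmentation import chan_vese

NUM_CLASSES = 2          # e.g. ALL vs. healthy; assumed here
IMG_SIZE = (300, 300)    # assumed EfficientNetB3 input resolution

def segment_nucleus(rgb_image: np.ndarray) -> np.ndarray:
    """Chan-Vese segmentation of a blood-smear crop (default skimage parameters)."""
    mask = chan_vese(rgb2gray(rgb_image))         # boolean foreground mask
    return rgb_image * mask[..., None]            # keep only the segmented region

def build_alldet_like_model() -> tf.keras.Model:
    """Pre-trained EfficientNetB3 backbone with a small classification head."""
    base = tf.keras.applications.EfficientNetB3(
        include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
    base.trainable = False                        # freeze backbone for transfer learning
    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = base(inputs, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Freezing the backbone and training only the head is the simplest transfer-learning setting; the same structure applies to any of the other eight evaluated backbones by swapping the `applications` constructor.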
RESULTS: The empirical assessment of these methodologies underscored the superior performance of the EfficientNetB3 model, which demonstrated exceptional metrics: a recall of 98.5%, precision of 95.86%, F1-score of 97.16%, and an overall accuracy of 97.13%. The Chan-Vese model's adaptability to the irregular shapes of blast cells and its noise-resistant segmentation were key to capturing the complex morphological changes essential for accurate segmentation.
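For completeness, the four classification metrics quoted above can be computed with scikit-learn as sketched below; the macro averaging choice is an assumption, since the abstract does not state how the metrics were aggregated.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def report_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 (macro-averaged; averaging is assumed)."""
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    return {"accuracy": accuracy_score(y_true, y_pred),
            "precision": precision, "recall": recall, "f1": f1}
```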
CONCLUSION: The combined application of the ALLDet classifier, powered by EfficientNetB3, with our advanced segmentation approach, emerges as a formidable advancement in the early detection and accurate diagnosis of ALL. This breakthrough not only signifies a pivotal leap in leukemia diagnostic methodologies but also holds the promise of significantly elevating the standards of patient care through the provision of timely and precise diagnoses. The implications of this study extend beyond immediate clinical utility, paving the way for future research to further refine and enhance the capabilities of artificial intelligence in medical diagnostics.