Intelligent transportation systems, and autonomous vehicles in particular, have attracted considerable research interest owing to rapid progress in modern artificial intelligence (AI) techniques, especially deep learning. With road accidents rising over the last few decades, major industry players are moving to design and develop autonomous vehicles. Understanding the surrounding environment, and especially the behavior of nearby vehicles, is essential for the safe navigation of autonomous vehicles in crowded traffic. Several datasets are available for autonomous driving, but they focus only on structured driving environments. To develop an intelligent vehicle that can drive in real-world traffic, which is unstructured by nature, a dataset focused on unstructured traffic environments is needed. The Indian Driving Lite dataset (IDD-Lite), which targets an unstructured driving environment, was released as an online competition in NCPPRIPG 2019. This study proposes an explainable inception-based U-Net model with Grad-CAM visualization for semantic segmentation: an inception-based module serves as the encoder for automatic feature extraction, and the extracted features are passed to a decoder that reconstructs the segmentation feature map. Because the black-box nature of deep neural networks hinders consumer trust, Grad-CAM is used to interpret the deep-learning-based inception U-Net model and increase that trust. The proposed inception U-Net with Grad-CAM achieves 0.622 intersection over union (IoU) on the Indian Driving Dataset (IDD-Lite), outperforming state-of-the-art (SOTA) deep-neural-network-based segmentation models.
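To make the encoder-decoder idea concrete, the following is a minimal PyTorch sketch of an inception-style block used as an encoder stage of a U-Net-like segmentation network with one skip connection. The layer widths, depth, and class count are illustrative assumptions, not the configuration reported in the paper; Grad-CAM would afterwards be applied to a chosen convolutional layer of such a model to produce class-discriminative heatmaps.

```python
# Minimal sketch (assumed layer sizes) of an inception-style encoder feeding
# a U-Net-like decoder with a skip connection.
import torch
import torch.nn as nn


class InceptionBlock(nn.Module):
    """Parallel 1x1, 3x3, 5x5 convolutions and a pooled branch, concatenated."""
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.bp = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat(
            [self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1))


class InceptionUNet(nn.Module):
    """Two-level encoder-decoder: inception blocks as the encoder,
    a transposed convolution plus skip connection as the decoder."""
    def __init__(self, in_ch=3, num_classes=8):   # num_classes is a placeholder
        super().__init__()
        self.enc1 = InceptionBlock(in_ch, 16)     # -> 64 channels
        self.pool = nn.MaxPool2d(2)
        self.enc2 = InceptionBlock(64, 32)        # -> 128 channels
        self.up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec = InceptionBlock(128, 16)        # skip-concat: 64 + 64 -> 64
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                         # high-resolution features
        e2 = self.enc2(self.pool(e1))             # downsampled features
        d = self.up(e2)                           # upsample back
        d = self.dec(torch.cat([d, e1], dim=1))   # fuse skip connection
        return self.head(d)                       # per-pixel class logits


logits = InceptionUNet()(torch.randn(1, 3, 128, 256))  # (1, num_classes, 128, 256)
```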
The diagnosis of leukemia involves the detection of abnormal characteristics of blood cells by a trained pathologist. Currently, this is done manually by observing the morphological characteristics of white blood cells in microscopic images. Although some equipment-based and chemical-based tests are available, the use and adoption of automated computer-vision-based systems remains limited. Certain software frameworks exist in the literature, but they have not yet been adopted commercially, so there is a need for an automated, software-based framework for the detection of leukemia. In software-based detection, segmentation is the first critical stage, producing the region of interest required for further accurate diagnosis. This paper therefore explores an efficient hybrid segmentation approach and proposes a more effective system for leukemia diagnosis. A popular, publicly available database, the acute lymphoblastic leukemia image database (ALL-IDB), is used in this research. First, the images are pre-processed, and segmentation is performed using multilevel thresholding with the Otsu and Kapur methods. To further optimize the segmentation performance, the learning-enthusiasm-based teaching-learning-based optimization (LebTLBO) algorithm is employed. Several metrics are used to measure system performance, and the proposed methodology is compared against existing benchmark methods. The proposed approach outperforms earlier techniques in terms of PSNR and similarity index, and the results show a significant improvement in the performance measures when the thresholding algorithms are optimized with the LebTLBO technique.
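The sketch below illustrates the two thresholding criteria named above: scikit-image's multi-level Otsu as a baseline, and a Kapur-entropy objective of the kind a metaheuristic such as LebTLBO could maximize over candidate threshold vectors. The input file name, the three-class split, and the use of Otsu's thresholds as the evaluation point are assumptions made for illustration only.

```python
# Illustrative multilevel thresholding sketch: multi-Otsu baseline plus a
# Kapur-entropy objective that a metaheuristic (e.g., LebTLBO) could maximize.
import numpy as np
from skimage import io, color
from skimage.filters import threshold_multiotsu


def kapur_entropy(hist, thresholds):
    """Sum of Shannon entropies of the histogram regions split by `thresholds`."""
    p = hist / hist.sum()
    bounds = [0, *sorted(int(t) for t in thresholds), len(p)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        region = p[lo:hi]
        w = region.sum()
        if w > 0:
            q = region[region > 0] / w
            total += -np.sum(q * np.log(q))
    return total


image = color.rgb2gray(io.imread("blood_smear.jpg"))   # hypothetical ALL-IDB image
gray = (image * 255).astype(np.uint8)

# Baseline: multi-level Otsu with 3 classes (e.g., background, cytoplasm, nucleus).
otsu_thresholds = threshold_multiotsu(gray, classes=3)
regions = np.digitize(gray, bins=otsu_thresholds)      # segmented label map

# Kapur objective evaluated at the Otsu thresholds; an optimizer such as
# LebTLBO would instead search the threshold space to maximize this score.
hist, _ = np.histogram(gray, bins=256, range=(0, 256))
print(otsu_thresholds, kapur_entropy(hist, otsu_thresholds))
```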
Iris biometric detection provides contactless authentication, which helps prevent the spread of contagious diseases such as COVID-19. However, these systems are prone to spoofing attacks carried out with contact lenses, replayed video, and printed images, making them vulnerable and unsafe. This paper proposes an iris liveness detection (ILD) method to mitigate spoofing attacks by combining global-level features from Thepade's sorted block truncation coding (TSBTC) with local-level features from the gray-level co-occurrence matrix (GLCM) of the iris image. Thepade's SBTC extracts global color-texture content, while the GLCM captures local fine-texture details; fusing the global and local content may help distinguish live from non-live iris samples. The fusion of Thepade's SBTC and GLCM features is therefore used in the experimental validation of the proposed method. The features are used to train nine assorted machine learning classifiers, including naïve Bayes (NB), decision tree (J48), support vector machine (SVM), random forest (RF), multilayer perceptron (MLP), and ensembles (SVM + RF + NB, SVM + RF + RT, RF + SVM + MLP, J48 + RF + MLP), for ILD. Accuracy, precision, recall, and F-measure are used to evaluate the performance of the proposed ILD variants. The experiments were carried out on four standard benchmark datasets, and the proposed model showed improved results with the feature fusion approach, reaching 99.68% accuracy with the RF + J48 + MLP ensemble of classifiers, followed by the RF algorithm at 95.57%. Better iris liveness detection strengthens person validation and thereby improves human-computer interaction and security in the cyber-physical space.
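As a rough sketch of the global-plus-local fusion described above, the code below computes a simplified TSBTC-style global descriptor (per-channel sorted-block means), fuses it with GLCM texture properties, and trains a scikit-learn classifier on the fused vectors. The TSBTC simplification, the random stand-in data, and the random-forest settings are assumptions for illustration, not the paper's exact pipeline.

```python
# Sketch of TSBTC-style global features fused with GLCM local features
# for a live/spoof classifier (assumed simplifications throughout).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier


def tsbtc_features(image, n=4):
    """Per-channel n-ary sorted block truncation coding: sort pixel values,
    split into n equal chunks, and keep each chunk's mean (global descriptor)."""
    feats = []
    for c in range(image.shape[-1]):
        values = np.sort(image[..., c].ravel())
        feats.extend(float(chunk.mean()) for chunk in np.array_split(values, n))
    return feats


def glcm_features(gray):
    """Local fine-texture descriptors from the gray-level co-occurrence matrix."""
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")
    return [float(graycoprops(glcm, p)[0, 0]) for p in props]


def fused_features(rgb_image, gray_image):
    return np.array(tsbtc_features(rgb_image) + glcm_features(gray_image))


# Hypothetical training loop over stand-in iris samples and live/spoof labels.
rng = np.random.default_rng(0)
rgb_batch = rng.integers(0, 256, size=(20, 64, 64, 3), dtype=np.uint8)
labels = rng.integers(0, 2, size=20)            # 1 = live, 0 = spoof
X = np.stack([fused_features(img, img[..., 0]) for img in rgb_batch])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
```

The same fused vectors could equally be fed to the other base classifiers and voting ensembles listed above; the random forest is shown only because it is one of the strongest single classifiers reported.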