  1. Kolekar S, Gite S, Pradhan B, Alamri A
    Sensors (Basel), 2022 Dec 10;22(24).
    PMID: 36560047 DOI: 10.3390/s22249677
    The intelligent transportation system, and autonomous vehicles in particular, have attracted considerable research interest owing to rapid advances in modern artificial intelligence (AI) techniques, especially deep learning. In response to rising road-accident numbers over the last few decades, major industry players are moving to design and develop autonomous vehicles. Understanding the surrounding environment, and in particular the behavior of nearby vehicles, is essential for the safe navigation of autonomous vehicles in crowded traffic. Several autonomous-driving datasets exist, but they focus only on structured driving environments. Developing an intelligent vehicle that can drive in real-world traffic, which is unstructured by nature, requires a dataset that targets unstructured traffic environments. The Indian Driving Lite dataset (IDD-Lite), focused on an unstructured driving environment, was released as an online competition in NCPPRIPG 2019. This study proposes an explainable inception-based U-Net model with Grad-CAM visualization for semantic segmentation: an inception-based module serves as the encoder for automatic feature extraction, and a decoder reconstructs the segmentation feature map. Because the black-box nature of deep neural networks undermines consumer trust, Grad-CAM is used to interpret the deep-learning-based inception U-Net model. The proposed inception U-Net with Grad-CAM achieves 0.622 intersection over union (IoU) on IDD-Lite, outperforming state-of-the-art (SOTA) deep-neural-network-based segmentation models.
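The intersection-over-union score reported in this abstract can be computed per class and averaged. The following is a minimal numpy sketch of that metric, not the authors' code; the toy label maps are invented for illustration:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes for integer label maps."""
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred = np.array([[0, 1], [1, 2]])
target = np.array([[0, 1], [2, 2]])
print(mean_iou(pred, target, 3))
```

Classes absent from both prediction and ground truth are skipped rather than counted as perfect, a common convention when averaging IoU over many classes.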
  2. Deshpande NM, Gite S, Pradhan B, Kotecha K, Alamri A
    Math Biosci Eng, 2022 Jan;19(2):1970-2001.
    PMID: 35135238 DOI: 10.3934/mbe.2022093
    The diagnosis of leukemia involves detecting abnormal characteristics of blood cells, which a trained pathologist currently does manually by observing the morphological characteristics of white blood cells in microscopic images. Although some equipment-based and chemical-based tests are available, automated computer-vision-based systems have yet to see widespread use and adoption. Several software frameworks exist in the literature, but none has been adopted commercially, so an automated, software-based framework for leukemia detection is still needed. In software-based detection, segmentation is the first critical stage: it outputs the region of interest for subsequent accurate diagnosis. This paper therefore proposes an efficient hybrid segmentation approach for a more effective leukemia-diagnosis system. A popular publicly available database, the acute lymphoblastic leukemia image database (ALL-IDB), is used in this research. The images are first pre-processed and then segmented using multilevel thresholding with the Otsu and Kapur methods. To further optimize segmentation performance, the learning-enthusiasm-based teaching-learning-based optimization (LebTLBO) algorithm is employed. Different metrics are used to measure system performance, and the proposed methodology is compared against existing benchmark methods. The proposed approach outperforms earlier techniques in terms of PSNR and similarity index, and the results show a significant improvement in the performance measures when the thresholding algorithms are optimized with the LebTLBO technique.
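The Otsu method mentioned in this abstract chooses a threshold that maximizes between-class variance of the intensity histogram. A single-level numpy sketch of the idea follows (the paper uses a multilevel variant with LebTLBO optimization; this simplified version and its toy image are illustrative only):

```python
import numpy as np

def otsu_threshold(gray):
    """Single-level Otsu threshold for an 8-bit grayscale image array."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class empty: threshold is degenerate
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Two well-separated intensity clusters: threshold should fall between them.
img = np.array([10] * 50 + [200] * 50, dtype=np.uint8)
print(otsu_threshold(img))
```

Multilevel thresholding generalizes this to several cut points, which makes exhaustive search expensive and motivates metaheuristics such as LebTLBO.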
  3. Khade S, Gite S, Thepade SD, Pradhan B, Alamri A
    Sensors (Basel), 2021 Nov 08;21(21).
    PMID: 34770715 DOI: 10.3390/s21217408
    Iris biometric detection provides contactless authentication, which helps prevent the spread of contagious diseases such as COVID-19. However, these systems are prone to spoofing attacks mounted with contact lenses, replayed videos, and printed images, making them vulnerable and unsafe. This paper proposes an iris liveness detection (ILD) method to mitigate spoofing attacks, combining global-level features from Thepade's sorted block truncation coding (TSBTC) with local-level features from the gray-level co-occurrence matrix (GLCM) of the iris image. Thepade's SBTC extracts global color-texture content as features, and GLCM extracts local fine-texture details; fusing global and local content may help distinguish live from non-live iris samples. The fusion of Thepade's SBTC and GLCM features is evaluated in the experimental validation of the proposed method. The features are used to train nine assorted machine learning classifiers, including naïve Bayes (NB), decision tree (J48), support vector machine (SVM), random forest (RF), multilayer perceptron (MLP), and ensembles (SVM + RF + NB, SVM + RF + RT, RF + SVM + MLP, J48 + RF + MLP) for ILD. Accuracy, precision, recall, and F-measure are used to evaluate the performance of the proposed ILD variants. Experiments were carried out on four standard benchmark datasets, and the proposed feature-fusion approach showed improved results: it reached 99.68% accuracy with the RF + J48 + MLP ensemble of classifiers, followed by the RF algorithm at 95.57%. Better iris liveness detection will improve human-computer interaction and security in the cyber-physical space by improving person validation.
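The GLCM features used here count how often pairs of gray levels co-occur at a fixed pixel offset, then derive texture statistics from the normalized matrix. A small numpy sketch of the idea (one offset, a few common Haralick-style statistics; the toy image and the choice of 8 gray levels are assumptions, not taken from the paper):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (symmetric, normalized)."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            a, b = img[y, x], img[y + dy, x + dx]
            m[a, b] += 1
            m[b, a] += 1  # symmetric counting: (a,b) and (b,a) both recorded
    return m / m.sum()

def glcm_features(m):
    """A few classic texture statistics derived from a normalized GLCM."""
    i, j = np.indices(m.shape)
    return {
        "contrast": float((m * (i - j) ** 2).sum()),
        "energy": float((m ** 2).sum()),
        "homogeneity": float((m / (1.0 + np.abs(i - j))).sum()),
    }

img = np.array([[0, 0, 1], [0, 1, 1], [1, 2, 2]])
print(glcm_features(glcm(img)))
```

In the paper such local GLCM statistics are concatenated with global TSBTC features before classifier training; the fused vector is what the nine classifiers see.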
  4. Joshi A, Pradhan B, Chakraborty S, Varatharajoo R, Alamri A, Gite S, et al.
    Front Plant Sci, 2024;15:1491493.
    PMID: 39898259 DOI: 10.3389/fpls.2024.1491493
    Accurate, reliable, and transparent crop yield prediction is crucial for informed decision-making by governments, farmers, and businesses regarding food security as well as agricultural business and management. Deep learning (DL) methods, particularly Long Short-Term Memory (LSTM) networks, have emerged as among the most widely used architectures in yield-prediction studies, providing promising results. Although other sequential DL methods such as 1D Convolutional Neural Networks (1D-CNN) and Bidirectional LSTM (Bi-LSTM) have shown high accuracy on various tasks, including crop yield prediction, their application to regional-scale crop yield prediction remains largely unexplored. Interpretability is another pressing challenge in DL-based crop yield prediction, as it underpins the reliability of a model. This study therefore aims to develop and implement an explainable DL model capable of accurately predicting crop yield and providing explanations for its predictions. To achieve this, we developed three state-of-the-art sequential DL models, LSTM, 1D-CNN, and Bi-LSTM, and employed three popular interpretability techniques, Local Interpretable Model-agnostic Explanations (LIME), Integrated Gradients (IG), and Shapley Additive Explanations (SHAP), to understand the models' decision-making processes. The Bi-LSTM model outperformed the others in predictive performance (R2 up to 0.88) and in generalizability across locations and yield ranges. The explainability analysis reveals that the enhanced vegetation index (EVI), temperature, and precipitation at later stages of crop growth are most important in determining winter wheat yield. We further demonstrate that XAI methods can be used to probe the models' decision-making, to examine instances such as high- and low-yield samples, to find possible explanations for erroneous predictions, and to identify regions impacted by particular stresses. By employing advanced DL techniques along with an innovative approach to explainability, this study achieves highly accurate yield prediction while providing intuitive insights into the model's decision-making process.
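The perturbation idea underlying model-agnostic explainers such as LIME can be shown with a crude occlusion probe: replace one input feature with a baseline and measure how much the prediction moves. The sketch below is illustrative only; the toy linear "model" and its weights (with the third feature standing in for late-season EVI) are invented, not taken from the study:

```python
import numpy as np

def occlusion_importance(predict, x, baseline=0.0):
    """Score each feature by |change in prediction| when it is replaced
    by a baseline value: a crude, model-agnostic importance probe."""
    base_pred = predict(x)
    scores = np.empty(len(x))
    for i in range(len(x)):
        x_perturbed = x.copy()
        x_perturbed[i] = baseline  # occlude one feature at a time
        scores[i] = abs(predict(x_perturbed) - base_pred)
    return scores

# Toy stand-in for a trained yield model: a fixed linear response in which
# the third feature (say, late-season EVI) dominates.
weights = np.array([0.1, 0.2, 1.5, 0.3])
predict = lambda v: float(weights @ v)

x = np.ones(4)
print(occlusion_importance(predict, x))
```

LIME, IG, and SHAP each refine this basic idea, fitting a local surrogate, integrating gradients along a path, or averaging over feature coalitions, but all attribute a prediction back to individual inputs.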