Displaying all 6 publications

  1. Zeki A, Abubakar A, Chiroma H
    Springerplus, 2016;5(1):868.
    PMID: 27386317 DOI: 10.1186/s40064-016-2371-6
    Prior research has shown that the peak signal-to-noise ratio (PSNR) is the watermarked image quality metric most frequently used to determine the strength and weakness of watermarking algorithms. Conversely, normalised cross correlation (NCC) is the most common metric used after attacks are applied to a watermarked image to verify the strength of the algorithm. Many researchers have used these approaches to evaluate their algorithms. These strategies have been used for a long time; however, without established threshold values, PSNR and NCC are of limited value in reflecting the strength and weakness of watermarking algorithms. This paper addresses this issue by determining the threshold values of these two parameters for reflecting the strength and weakness of watermarking algorithms. We used our novel watermarking technique to embed four watermarks, one by one, in the intermediate significant bits (ISB) of six image files by replacing image pixels with new pixels while keeping the new pixels very close to the originals. This approach improves robustness, as reflected in the PSNR and NCC values gathered. A neural network model was built that takes the image quality metric (PSNR and NCC) values obtained from ISB watermarking of the six grey-scale images as its desired output and is trained on each watermarked image's PSNR and NCC. The trained network predicts a watermarked image's PSNR together with its NCC after attacks, once a portion of the output of the same or different image quality metrics (PSNR and NCC) is obtained. The results indicate that the NCC metric fluctuates before the PSNR values deteriorate.
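    As a rough illustration of the two metrics this abstract relies on, the sketch below computes PSNR and NCC with NumPy. The 8-bit peak value of 255 and the un-centred form of NCC are assumptions of this illustration; the paper may use a different variant.

      import numpy as np

      def psnr(original: np.ndarray, watermarked: np.ndarray, peak: float = 255.0) -> float:
          # Peak signal-to-noise ratio between two same-shaped grey-scale images.
          mse = np.mean((original.astype(np.float64) - watermarked.astype(np.float64)) ** 2)
          if mse == 0:
              return float("inf")  # identical images
          return 10.0 * np.log10(peak ** 2 / mse)

      def ncc(embedded: np.ndarray, extracted: np.ndarray) -> float:
          # Normalised cross correlation between the embedded and extracted watermarks.
          a = embedded.astype(np.float64).ravel()
          b = extracted.astype(np.float64).ravel()
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))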
  2. Saadi Y, Yanto IT, Herawan T, Balakrishnan V, Chiroma H, Risnumawan A
    PLoS One, 2016;11(1):e0144371.
    PMID: 26790131 DOI: 10.1371/journal.pone.0144371
    The efficiency of a metaheuristic algorithm for global optimization rests on its ability to search for and find the global optimum. A good search, however, requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup: the algorithm mimics the pup's movement and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair constructed for this purpose. The seal pup's strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Because seals are sensitive to external noise emitted by predators, the pup's random walk takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled as a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled as a Lévy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed on fifteen benchmark test functions to compare the performance of RSS with baseline algorithms. The results show that RSS is more efficient than the Genetic Algorithm, Particle Swarm Optimization, and Cuckoo Search in terms of convergence rate to the global optimum, and that it improves the balance between exploration (extensive search) and exploitation (intensive search) of the search space. RSS can thus efficiently mimic seal pup behavior in finding the best lair and provides a new algorithm for global optimization problems.
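    The core mechanic described above, switching between short Brownian moves and heavy-tailed Lévy jumps based on a noise signal, can be sketched as follows. The noise threshold, step scales, and Lévy exponent here are illustrative assumptions, not values from the paper.

      import numpy as np
      from math import gamma, pi, sin

      rng = np.random.default_rng(0)

      def levy_step(dim: int, alpha: float = 1.5) -> np.ndarray:
          # Heavy-tailed step via Mantegna's algorithm, a common Levy-flight approximation.
          sigma = (gamma(1 + alpha) * sin(pi * alpha / 2)
                   / (gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
          u = rng.normal(0.0, sigma, dim)
          v = rng.normal(0.0, 1.0, dim)
          return u / np.abs(v) ** (1 / alpha)

      def next_position(x: np.ndarray, predator_noise: float, threshold: float = 0.5) -> np.ndarray:
          # Normal state: intensive local (Brownian) search between adjacent lairs.
          if predator_noise < threshold:
              return x + 0.01 * rng.normal(size=x.size)
          # Urgent state: extensive (Levy) search among sparse, distant lairs.
          return x + 0.1 * levy_step(x.size)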
  3. Rehman MZ, Zamli KZ, Almutairi M, Chiroma H, Aamir M, Kader MA, et al.
    PLoS One, 2021;16(12):e0259786.
    PMID: 34855771 DOI: 10.1371/journal.pone.0259786
    Team formation (TF) in social networks exploits graphs (vertices = experts, edges = skills) to represent possible collaborations between experts. These networks help build cost-effective research teams irrespective of the experts' geolocation and the size of the dataset. Previously, large datasets were not closely inspected for the large-scale distributions and relationships among researchers, so existing algorithms failed to scale well on the data. This paper therefore presents a novel TF algorithm for expert team formation, called SSR-TF, based on two metrics, communication cost and graph reduction, that can become a basis for future TF algorithms. In SSR-TF, the communication cost captures the possibility of collaboration between researchers, while graph reduction scales the large data down to only the appropriate skills and experts, enabling real-time extraction of experts for collaboration, as sketched below. The approach is tested on five organic and benchmark datasets: UMP, DBLP, ACM, IMDB, and Bibsonomy. The SSR-TF algorithm is able to build cost-effective teams with the most appropriate experts, resulting in more communicative teams with high expertise levels.
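    A minimal sketch of the two ideas named above, using networkx. The exact definitions SSR-TF uses are in the paper; the "skills" node attribute, edge weights, and the pairwise shortest-path form of communication cost are assumptions of this illustration.

      import networkx as nx

      def reduce_graph(g: nx.Graph, required_skills: set) -> nx.Graph:
          # Graph reduction: keep only experts holding at least one required skill.
          keep = [n for n, data in g.nodes(data=True)
                  if data.get("skills", set()) & required_skills]
          return g.subgraph(keep).copy()

      def communication_cost(g: nx.Graph, team: list) -> float:
          # Sum of pairwise shortest-path distances between team members; a
          # common proxy for how easily a candidate team can collaborate.
          cost = 0.0
          for i, u in enumerate(team):
              for v in team[i + 1:]:
                  cost += nx.shortest_path_length(g, u, v, weight="weight")
          return cost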
  4. Oyelade ON, Ezugwu AE, Almutairi MS, Saha AK, Abualigah L, Chiroma H
    Sci Rep, 2022 Apr 13;12(1):6166.
    PMID: 35418566 DOI: 10.1038/s41598-022-09929-9
    Deep learning (DL) models are becoming pervasive in computer vision, image processing, and synthesis problems. Their performance is often improved through architectural configuration, tweaks, enormous training datasets, and skillful selection of hyperparameters. The application of deep learning models to medical image processing has yielded models capable of correctly detecting abnormalities in medical digital images, at times surpassing human physicians. However, advancing research in this domain largely relies on the availability of training datasets. These datasets are sometimes not publicly accessible, insufficient for training, and may be characterized by class imbalance among samples. As a result, inadequate training samples and difficulty in accessing new datasets limit performance and research into new domains. Generative adversarial networks (GANs) have therefore been proposed to bridge this gap by synthesizing data similar to real sample images. However, we observed that benchmark datasets with regions of interest (ROIs) for characterizing abnormalities in breast cancer using digital mammography do not contain sufficient data with a fair distribution of all cases of abnormality. For instance, architectural distortion and breast asymmetry are sparsely distributed across most publicly available digital mammogram datasets. This paper proposes a GAN model, named ROImammoGAN, which synthesizes ROI-based digital mammograms. Our approach involves designing a GAN, consisting of a generator and a discriminator, to learn a hierarchy of representations for abnormalities in digital mammograms. Attention is given to architectural distortion, asymmetry, mass, and microcalcification abnormalities, so that training distinctly learns the features of each abnormality and generates sufficient images for each category. The proposed GAN model was applied to the MIAS dataset, and the performance evaluation yielded competitive accuracy for the synthesized samples. The quality of the generated images was also evaluated using PSNR, SSIM, FSIM, BRISQUE, PIQE, NIQE, FID, and geometry scores. The results show that ROImammoGAN performs competitively with state-of-the-art GANs. The outcome of this study is a model for augmenting CNN models with ROI-centric image samples for the characterization of abnormalities in breast images.
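    For orientation, the adversarial generator/discriminator training loop that any such GAN runs can be sketched in PyTorch as below. This is a generic fully connected GAN on flattened grey-scale patches, not ROImammoGAN itself; the latent size, layer widths, and 64x64 patch size are assumptions.

      import torch
      import torch.nn as nn

      latent_dim, img_dim = 100, 64 * 64  # assumed latent size and 64x64 patches

      generator = nn.Sequential(
          nn.Linear(latent_dim, 256), nn.ReLU(),
          nn.Linear(256, img_dim), nn.Tanh(),       # pixel outputs in [-1, 1]
      )
      discriminator = nn.Sequential(
          nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
          nn.Linear(256, 1), nn.Sigmoid(),          # probability the input is real
      )

      bce = nn.BCELoss()
      opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
      opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

      def train_step(real: torch.Tensor) -> None:
          # One adversarial update on a batch of real, flattened ROI patches.
          n = real.size(0)
          ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

          # Discriminator: push real samples towards 1 and generated ones towards 0.
          fake = generator(torch.randn(n, latent_dim)).detach()
          d_loss = bce(discriminator(real), ones) + bce(discriminator(fake), zeros)
          opt_d.zero_grad(); d_loss.backward(); opt_d.step()

          # Generator: try to make the discriminator output 1 for its samples.
          g_loss = bce(discriminator(generator(torch.randn(n, latent_dim))), ones)
          opt_g.zero_grad(); g_loss.backward(); opt_g.step()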
  5. Chiroma H, Ezugwu AE, Jauro F, Al-Garadi MA, Abdullahi IN, Shuib L
    PeerJ Comput Sci, 2020;6:e313.
    PMID: 33816964 DOI: 10.7717/peerj-cs.313
    Background and Objective: The COVID-19 pandemic has caused severe mortality across the globe, with the USA the current epicenter even though the initial outbreak was in Wuhan, China. Many studies have successfully applied machine learning to fight the COVID-19 pandemic from different perspectives. To the best of the authors' knowledge, however, no comprehensive survey with bibliometric analysis has yet been conducted on the adoption of machine learning to fight COVID-19. The main goal of this study is therefore to bridge this gap with an in-depth survey of the adoption of machine learning-based technologies to fight the COVID-19 pandemic, comprising an extensive systematic literature review and a bibliometric analysis.

    Methods: We applied a literature survey methodology to retrieve data from academic databases and subsequently employed a bibliometric technique to analyze the retrieved records. A concise summary, sources of COVID-19 datasets, a taxonomy, and a synthesis and analysis are also presented. We found that the Convolutional Neural Network (CNN) is the model mainly utilized in developing COVID-19 diagnosis and prognosis tools, mostly from chest X-ray and chest CT scan images. We also performed a bibliometric analysis of machine learning-based COVID-19-related publications in the Scopus and Web of Science citation indexes. Finally, we propose a new perspective for solving the identified challenges as a direction for future research. We believe the survey with bibliometric analysis can help researchers easily detect areas that require further development and identify potential collaborators.

    Results: The findings reveal that machine learning-based COVID-19 diagnosis tools have received the most considerable attention from researchers. Specifically, the analyses show that energy and resources are directed largely towards automated COVID-19 diagnosis tools, while COVID-19 drug and vaccine development remains grossly underexploited. The algorithm predominantly utilized by researchers in developing diagnostic tools is the CNN, applied mainly to X-ray and CT scan images.

    Conclusions: The challenges hindering practical application of machine learning-based technologies to fight COVID-19, and a new perspective for solving the identified problems, are presented in this article. We believe the presented survey with bibliometric analysis can make it easier for researchers to identify areas needing further development and to identify potential collaborators at the author, country, and institutional levels, with the overall aim of furthering research on the application of machine learning to disease control.
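    As a concrete anchor for the CNN-on-chest-images pipeline the survey found dominant, here is a minimal PyTorch classifier sketch. The layer sizes, 224x224 grey-scale input, and two-class (COVID/non-COVID) output are assumptions of this illustration, not details from any surveyed study.

      import torch
      import torch.nn as nn

      model = nn.Sequential(
          nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Flatten(),
          nn.Linear(32 * 56 * 56, 2),   # 224x224 input halved twice -> 56x56 maps
      )

      x = torch.randn(8, 1, 224, 224)   # a dummy batch of grey-scale chest images
      logits = model(x)                 # shape (8, 2): per-class scores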

  6. Chiroma H, Abdul-kareem S, Khan A, Nawi NM, Gital AY, Shuib L, et al.
    PLoS One, 2015;10(8):e0136140.
    PMID: 26305483 DOI: 10.1371/journal.pone.0136140
    Global warming is attracting attention from policy makers due to its impacts, such as floods, extreme weather, a temperature increase of 0.7°C, heat waves, and storms. These disasters result in the loss of human life and billions of dollars in property damage. Global warming is believed to be caused by emissions of greenhouse gases from human activities, including emissions of carbon dioxide (CO2) from petroleum consumption. The limitations of previous methods of predicting CO2 emissions, and the lack of work on predicting the CO2 emissions of the Organization of the Petroleum Exporting Countries (OPEC) from petroleum consumption, motivated this research.