Displaying publications 1 - 20 of 135 in total

  1. Assunta Malar Patrick Vincent, Hassilah Salleh
    MyJurnal
    A wide range of studies have been conducted on deep learning to forecast time series data. However, few have discussed the optimal number of hidden layers and of nodes in each hidden layer. It is crucial to study these choices, as they control the performance of the architecture. Moreover, depending on the activation function, different computations take place between the hidden layers and the output layer. Therefore, in this study, a multilayer perceptron (MLP) architecture is developed in Python to forecast time series data. The architecture is applied to the Apple Inc. stock price because of its volatile character. Using historical prices, forecast accuracy is measured across different activation functions, numbers of hidden layers and sizes of data. The Keras deep learning library for Python is used to build the MLP. The model is then applied to different cases, namely different sizes of data, different activation functions, different numbers of hidden layers of up to nine, and different numbers of nodes in each hidden layer. The mean squared error (MSE), mean absolute error (MAE) and root-mean-square error (RMSE) are employed to test the accuracy of the forecast. The architecture with the rectified linear unit (ReLU) activation achieved the highest accuracy in every case and for every number of hidden layers. To conclude, the optimal number of hidden layers differs from case to case, as other factors also exert an influence.
    Matched MeSH terms: Benchmarking
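
    The abstract names Keras and the MSE/MAE/RMSE metrics explicitly; a minimal sketch of the kind of ReLU-based MLP forecaster it describes might look as follows (the window size, layer widths and synthetic random-walk prices are illustrative assumptions, not the paper's configuration):

```python
import numpy as np
from tensorflow import keras

def make_windows(series, window=10):
    """Slice a 1-D series into (samples, window) inputs and next-value targets."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    return X, series[window:]

# Synthetic random-walk prices stand in for the historical Apple Inc. series.
prices = 100.0 + np.cumsum(np.random.randn(500))
X, y = make_windows(prices)

# An MLP with ReLU hidden layers, the activation the paper found best.
model = keras.Sequential([
    keras.layers.Input(shape=(10,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

pred = model.predict(X, verbose=0).ravel()
mse = np.mean((y - pred) ** 2)
print(f"MSE={mse:.3f}  MAE={np.mean(np.abs(y - pred)):.3f}  RMSE={np.sqrt(mse):.3f}")
```
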
  2. Abdul-Rahman H, Berawi MA
    Qual Assur, 2001;9(1):5-30.
    PMID: 12465710
    Knowledge Management (KM) addresses the critical issues of organizational adoption, survival and competence in the face of an increasingly changing environment. KM embodies organizational processes that seek a synergistic combination of the data and information processing capabilities of information and communication technologies (ICT) and the creative and innovative capacity of human beings to improve ICT. In that role, knowledge management will improve quality management, avoid or minimize the losses and weaknesses that usually stem from poor performance, and increase the competitive level of the company and its ability to survive in the global marketplace. To achieve quality, all parties, including clients, company consultants, contractors, entrepreneurs, suppliers and the governing bodies (i.e., all involved stakeholders), need to collaborate and commit to achieving quality. The design-based organizations in major business and construction companies have to be quality driven to support healthy growth in today's competitive market. In the march towards Vision 2020 and the globalization of many companies (i.e., the one-world community), their design-based organizations need superior quality management and knowledge management to anticipate changes. The implementation of a quality system such as the ISO 9000 Standards, Total Quality Management or Quality Function Deployment (QFD) focuses the company's resources towards achieving faster and better results in the global market at less cost. To anticipate the needs of the marketplace and clients as the world and technology change, a new system, which we call the Power Quality System (PQS), has been designed. PQS is a combination of information and communication technologies (ICT) and the creative and innovative capacity of human beings to meet the challenges of the new world of business and to develop high-quality products.
    Publication year: 2001 Jan - 2002 Mar
    Matched MeSH terms: Benchmarking
  3. Adam MS, Por LY, Hussain MR, Khan N, Ang TF, Anisi MH, et al.
    Sensors (Basel), 2019 Aug 29;19(17).
    PMID: 31470520 DOI: 10.3390/s19173732
    Many receiver-based Preamble Sampling Medium Access Control (PS-MAC) protocols have been proposed to provide better performance for variable traffic in a wireless sensor network (WSN). However, most of these protocols cannot prevent the occurrence of incorrect traffic convergence, which causes the receiver node to wake up more frequently than the transmitter node. In this research, a new protocol is proposed to prevent this problem. The proposed mechanism has four components: an initial control frame message, a traffic estimation function, a control frame message, and an adaptive function. The initial control frame message is used to initiate message transmission by the receiver node. The traffic estimation function reduces the wake-up frequency of the receiver node by using the proposed traffic status register (TSR), idle listening times (ILTn, ILTk), and "number of wake-ups without receiving a beacon message" (NWwbm). The control frame message supplies the essential information the receiver node needs to derive the next wake-up interval (WUI) of the transmitter node using the proposed adaptive function. The adaptive function is used by the receiver node to calculate the next WUI of each transmitter node. Several simulations are conducted against benchmark protocols. The outcomes indicate that the proposed mechanism prevents the incorrect traffic convergence that causes the receiver node to wake up more frequently than the transmitter node. Moreover, the simulation results indicate that the proposed mechanism reduces energy consumption, incurs lower latency, improves throughput, and delivers a higher packet delivery ratio compared with related works.
    Matched MeSH terms: Benchmarking
  4. Ahmed A, Adam M, Ghafar NA, Muhammad M, Ebrahim NA
    Iran J Public Health, 2016 Sep;45(9):1118-1125.
    PMID: 27957456
    BACKGROUND: Citation metrics and total publications in a field have become the gold standard for rating researchers and the viability of a field. The resulting demand for citations has led to a search for useful strategies to improve performance metrics. Meanwhile, the title, abstract and morphological qualities of an article are what attract researchers to scientific publications. Yet there is relatively little understanding of the citation trend in disability-related fields. We aimed to provide an insight into the factors associated with citation increase in this field. Additionally, we examined at what page count an article might best serve disability researchers' needs. Our focus is therefore placed on article page count and the number of authors per article.

    METHODS: To this end, we evaluated the quantitative characteristics of top-cited articles in the fields with a total citation count of at least 50 in the Web of Science (WoS) database. Data extracted for the period 1980-2015 were analyzed using one-way independent ANOVA, while non-parametric data were analyzed with the Kruskal-Wallis test.

    RESULTS: Articles with 11 to 20 pages attract more citations, followed by those with zero to 10 pages. Articles with 21 or more pages are the least cited. Surprisingly, articles with more than two authors are significantly (P<0.05) less cited, and citations decrease as the number of authors increases.

    CONCLUSION: Collaborative studies are generally expected to enjoy wider utilization and more citations, yet this study reveals the discounted merit of additional pages and the limited extent of collaborative research in the disability field.

    Matched MeSH terms: Benchmarking
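
    The one-way ANOVA and Kruskal-Wallis tests named in this abstract are standard; a small sketch of how citation counts grouped by page count could be compared (the counts below are made up purely for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical citation counts grouped by article page count (illustrative only).
pages_0_10  = [55, 62, 71, 80, 58, 66]
pages_11_20 = [90, 85, 120, 95, 105, 88]
pages_21_up = [52, 50, 57, 60, 54, 51]

# One-way independent ANOVA across the three page-count groups.
f_stat, p_anova = stats.f_oneway(pages_0_10, pages_11_20, pages_21_up)

# Kruskal-Wallis as the non-parametric counterpart.
h_stat, p_kw = stats.kruskal(pages_0_10, pages_11_20, pages_21_up)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")
print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_kw:.4f}")
```
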
  5. Al-Bashiri H, Abdulgabber MA, Romli A, Kahtan H
    PLoS One, 2018;13(10):e0204434.
    PMID: 30286123 DOI: 10.1371/journal.pone.0204434
    This paper describes an approach for improving the accuracy of memory-based collaborative filtering, based on the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). Recommender systems are used to filter the huge amount of data available online based on user-defined preferences. Collaborative filtering (CF) is a commonly used recommendation approach that generates recommendations based on correlations among user preferences. Although several enhancements have increased the accuracy of memory-based CF through the development of improved similarity measures for finding successful neighbors, there has been less investigation into prediction score methods, in which rating/preference scores are assigned to items that have not yet been selected by a user. A TOPSIS solution for evaluating multiple alternatives against more than one criterion is proposed as an alternative to prediction score methods for evaluating and ranking items based on the results from similar users. The recommendation accuracy of the proposed TOPSIS technique is evaluated by applying it to various common CF baseline methods, which are then used to analyze the MovieLens 100K and 1M benchmark datasets. The results show that CF based on the TOPSIS method is more accurate than baseline CF methods across a number of common evaluation metrics.
    Matched MeSH terms: Benchmarking
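
    TOPSIS itself follows a fixed recipe; a minimal sketch of ranking candidate items from similar users' rating scores might look like this (the weights and scores are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) with TOPSIS.

    benefit[j] is True when higher is better for criterion j."""
    m = matrix / np.linalg.norm(matrix, axis=0)        # vector normalisation
    v = m * weights                                    # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)          # distance to ideal solution
    d_neg = np.linalg.norm(v - anti, axis=1)           # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                     # closeness: higher = better

# Hypothetical items scored on ratings from three similar users (illustrative).
scores = np.array([[4.0, 3.5, 5.0],
                   [2.0, 4.0, 3.0],
                   [4.5, 4.5, 4.0]])
closeness = topsis(scores, weights=np.array([0.5, 0.3, 0.2]),
                   benefit=np.array([True, True, True]))
print(np.argsort(-closeness))  # items ranked best-first
```

    Repeating this ranking per user over that user's unrated items plays the role of the prediction score step that the paper replaces.
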
  6. Al-Dabbagh MM, Salim N, Himmat M, Ahmed A, Saeed F
    Molecules, 2015;20(10):18107-27.
    PMID: 26445039 DOI: 10.3390/molecules201018107
    One of the most widely used techniques for ligand-based virtual screening is similarity searching. This study adopts concepts from quantum mechanics to present a state-of-the-art similarity method for molecules inspired by quantum theory. The representation of molecular compounds in a mathematical quantum space plays a vital role in the development of the quantum-based similarity approach. One of the key concepts of quantum theory is the use of complex numbers; hence, this study proposes three techniques to embed and re-represent molecular compounds in complex-number format. The quantum-based similarity method developed in this study, which depends on the complex pure Hilbert space of molecules, is called Standard Quantum-Based (SQB). The recall of retrieved active molecules was measured at the top 1% and top 5%, and significance tests were used to evaluate the proposed methods. The MDL Drug Data Report (MDDR), Maximum Unbiased Validation (MUV) and Directory of Useful Decoys (DUD) data sets, represented by 2D fingerprints, were used for the experiments. Simulated virtual screening experiments show that the effectiveness of the SQB method increased significantly, owing to the representational power of molecular compounds in complex-number form, compared with the Tanimoto benchmark similarity measure.
    Matched MeSH terms: Benchmarking
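
    The Tanimoto benchmark this study compares against is the standard similarity measure for binary 2D fingerprints; a minimal sketch (random bit vectors stand in for the MDDR/MUV/DUD fingerprints):

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto coefficient between two binary fingerprint vectors."""
    both = np.sum(a & b)              # bits set in both fingerprints
    return both / (np.sum(a) + np.sum(b) - both)

rng = np.random.default_rng(0)
query = rng.integers(0, 2, 1024)      # stand-in 2D fingerprint of a query molecule
library = rng.integers(0, 2, (100, 1024))

sims = np.array([tanimoto(query, fp) for fp in library])
top = np.argsort(-sims)[:5]           # top-ranked molecules, as in a 1%/5% recall cut
print(top, sims[top])
```
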
  7. Al-Hadi IAA, Sharef NM, Sulaiman MN, Mustapha N, Nilashi M
    PeerJ Comput Sci, 2020;6:e331.
    PMID: 33816980 DOI: 10.7717/peerj-cs.331
    Recommendation systems suggest relevant products to customers based on their past ratings, preferences, and interests. These systems typically utilize collaborative filtering (CF) to analyze customers' ratings for products within the rating matrix. CF suffers from the sparsity problem because a large number of rating values are unknown. Various prediction approaches have been used to solve this problem by learning its latent and temporal factors. A few other challenges, such as latent feedback learning, customers' drifting interests, overfitting, and the popularity decay of products over time, have also been addressed. Existing works have typically deployed either short or long temporal representations for addressing recommendation system issues. Although each effort improves on the accuracy of its respective benchmark, an integrative solution that could address all the problems without trading off accuracy is needed. Thus, this paper presents a Latent-based Temporal Optimization (LTO) approach to improve the prediction accuracy of CF by learning the past attitudes of users and their interests over time. Experimental results show that the LTO approach efficiently improves the prediction accuracy of CF compared to the benchmark schemes.
    Matched MeSH terms: Benchmarking
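
    The abstract does not specify LTO itself, but the latent-factor CF baseline it builds on can be sketched; the following is a generic matrix-factorization trainer with illustrative synthetic ratings, not the paper's LTO method:

```python
import numpy as np

# Minimal latent-factor CF baseline trained with SGD on observed ratings.
rng = np.random.default_rng(1)
n_users, n_items, k = 50, 40, 8
P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors

# Sparse observed ratings: (user, item, rating) triples (illustrative data).
obs = [(rng.integers(n_users), rng.integers(n_items), rng.uniform(1, 5))
       for _ in range(600)]

lr, reg = 0.01, 0.05
for epoch in range(30):
    for u, i, r in obs:
        err = r - P[u] @ Q[i]                 # prediction error on one rating
        pu = P[u].copy()                      # update both factors from old values
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * pu - reg * Q[i])

rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in obs]))
print(f"train RMSE: {rmse:.3f}")
```
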
  8. Al-Khaleefa AS, Ahmad MR, Isa AAM, Esa MRM, Aljeroudi Y, Jubair MA, et al.
    Sensors (Basel), 2019 May 25;19(10).
    PMID: 31130657 DOI: 10.3390/s19102397
    Wi-Fi has shown enormous potential for indoor localization because of its wide utilization and availability. Enabling the use of Wi-Fi for indoor localization necessitates the construction of a fingerprint and the adoption of a learning algorithm, the goal being to use the fingerprint to train classifiers for predicting locations. Existing Wi-Fi-based localization models are borrowed from machine learning and modified to accommodate the practical aspects of indoor localization. The performance of these models varies depending on their effectiveness in handling and/or considering the specific characteristics of indoor localization behavior. One common pattern in the indoor movement of people is its cyclic dynamic nature. To the best of our knowledge, no existing machine learning model for Wi-Fi indoor localization exploits this cyclic dynamic behavior to improve localization prediction. This study modifies the widely popular online sequential extreme learning machine (OSELM) to exploit cyclic dynamic behavior and achieve improved localization results. Our new model is called knowledge-preserving OSELM (KP-OSELM). Experiments conducted on the two popular datasets TampereU and UJIndoorLoc show that KP-OSELM outperforms benchmark models in terms of accuracy and stability, achieving 92.74% accuracy on TampereU and 72.99% on UJIndoorLoc.
    Matched MeSH terms: Benchmarking
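
    OSELM extends the extreme learning machine (ELM), whose batch core is compact enough to sketch; the following assumes random data standing in for Wi-Fi fingerprints and omits the online-sequential and knowledge-preserving parts of KP-OSELM:

```python
import numpy as np

# Batch extreme learning machine: a fixed random hidden layer, with the
# output weights solved in closed form by least squares.
rng = np.random.default_rng(2)
n_samples, n_features, n_hidden, n_classes = 300, 50, 100, 4

X = rng.standard_normal((n_samples, n_features))      # stand-in RSSI fingerprints
y = rng.integers(0, n_classes, n_samples)             # stand-in location labels
T = np.eye(n_classes)[y]                              # one-hot targets

W = rng.standard_normal((n_features, n_hidden))       # fixed random input weights
b = rng.standard_normal(n_hidden)                     # fixed random biases
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                # sigmoid hidden activations

beta = np.linalg.pinv(H) @ T                          # Moore-Penrose least squares
pred = np.argmax(H @ beta, axis=1)
print(f"train accuracy: {(pred == y).mean():.2f}")
```
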
  9. Al-Qazzaz NK, Hamid Bin Mohd Ali S, Ahmad SA, Islam MS, Escudero J
    Sensors (Basel), 2017 Jun 08;17(6).
    PMID: 28594352 DOI: 10.3390/s17061326
    Characterizing dementia is a global challenge in supporting personalized health care. The electroencephalogram (EEG) is a promising tool to support the diagnosis and evaluation of abnormalities in the human brain, as EEG sensors record brain activity directly with excellent time resolution. In this study, EEG sensors with 19 electrodes were used to record the background activity of the brains of five vascular dementia (VaD) patients, 15 stroke-related patients with mild cognitive impairment (MCI), and 15 healthy subjects during a working memory (WM) task. The objective of this study is twofold. First, it aims to enhance the recorded EEG signals using a novel technique that combines automatic independent component analysis (AICA) and the wavelet transform (WT), namely the AICA-WT technique; second, it aims to extract and investigate the spectral features that characterize post-stroke dementia patients compared with control subjects. The proposed AICA-WT technique is a four-stage approach. In the first stage, the independent components (ICs) are estimated. In the second stage, three-step artifact identification metrics are applied to detect the artifactual components. The components identified as artifacts are marked as critical and denoised through the discrete wavelet transform (DWT) in the third stage. In the fourth stage, the corrected ICs are reconstructed to obtain artifact-free EEG signals. The performance of the proposed AICA-WT technique was compared with that of two other techniques based on AICA and WT denoising methods using cross-correlation (XCorr) and peak signal-to-noise ratio (PSNR) (ANOVA, p < 0.05). The AICA-WT technique exhibited the best artifact removal performance. The assumption that there would be a deceleration of EEG dominant frequencies in VaD and MCI patients compared with control subjects was assessed with AICA-WT (ANOVA, p < 0.05). This study may therefore provide information on post-stroke dementia, particularly for VaD and stroke-related MCI patients, through spectral analysis of EEG background activities that can help to provide useful diagnostic indexes via EEG signal processing.
    Matched MeSH terms: Benchmarking
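
    The XCorr and PSNR criteria used to compare the denoising techniques are standard signal metrics; a minimal sketch of both, in which a synthetic 10 Hz sinusoid stands in for an EEG component and its denoised version:

```python
import numpy as np

def psnr(clean, denoised):
    """Peak signal-to-noise ratio between a reference and a denoised signal."""
    mse = np.mean((clean - denoised) ** 2)
    peak = np.max(np.abs(clean))
    return 10.0 * np.log10(peak ** 2 / mse)

def xcorr_peak(clean, denoised):
    """Peak normalised cross-correlation between two equal-length signals."""
    a = (clean - clean.mean()) / clean.std()
    b = (denoised - denoised.mean()) / denoised.std()
    return np.max(np.correlate(a, b, mode="full")) / len(a)

t = np.linspace(0, 2, 512)
eeg = np.sin(2 * np.pi * 10 * t)               # stand-in 10 Hz EEG component
denoised = eeg + 0.05 * np.random.randn(512)   # stand-in AICA-WT output

print(f"PSNR = {psnr(eeg, denoised):.1f} dB, XCorr = {xcorr_peak(eeg, denoised):.3f}")
```
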
  10. Albahri OS, Zaidan AA, Albahri AS, Zaidan BB, Abdulkareem KH, Al-Qaysi ZT, et al.
    J Infect Public Health, 2020 Oct;13(10):1381-1396.
    PMID: 32646771 DOI: 10.1016/j.jiph.2020.06.028
    This study presents a systematic review of artificial intelligence (AI) techniques used in the detection and classification of coronavirus disease 2019 (COVID-19) medical images, in terms of evaluation and benchmarking. Five reliable databases, namely IEEE Xplore, Web of Science, PubMed, ScienceDirect and Scopus, were used to obtain relevant studies. Several filtering and scanning stages were performed according to the inclusion/exclusion criteria to screen the 36 studies obtained; only 11 studies met the criteria. A taxonomy was constructed, and the 11 studies were classified into two categories: review studies and research studies. A deep analysis and critical review were then performed to highlight the challenges and critical gaps in the academic literature on the subject. The results showed that no relevant study has evaluated and benchmarked the AI techniques utilised in the classification tasks (i.e., binary, multi-class, multi-labelled and hierarchical classification) of COVID-19 medical images. If such evaluation and benchmarking are conducted, three challenges will be encountered: multiple evaluation criteria within each classification task, trade-offs amongst criteria, and the relative importance of these criteria. Given these challenges, the evaluation and benchmarking of AI techniques used in the classification of COVID-19 medical images is a complex multi-attribute problem, and adopting multi-criteria decision analysis (MCDA) is an essential and effective approach to tackle that complexity. Moreover, this study proposes a detailed methodology, in three sequential phases, for the evaluation and benchmarking of AI techniques used in all classification tasks of COVID-19 medical images as future directions. Firstly, the identification procedure for the construction of four decision matrices, namely binary, multi-class, multi-labelled and hierarchical, is presented on the basis of the intersection of the evaluation criteria of each classification task and the AI classification techniques. Secondly, the development of the MCDA approach for benchmarking AI classification techniques is provided on the basis of the integrated analytic hierarchy process and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) methods. Lastly, objective and subjective validation procedures are described to validate the proposed benchmarking solutions.
    Matched MeSH terms: Benchmarking*
  11. Alex Kim RJ, Chin ZH, Sharlyn P, Priscilla B, Josephine S
    Med J Malaysia, 2019 Oct;74(5):385-388.
    PMID: 31649213
    INTRODUCTION: Patient safety is defined as 'the prevention of harm caused by errors of commission and omission'. Patient safety culture is one of the important determining factors of safety and quality in healthcare. The purpose of this study is to assess the views and perceptions of healthcare professionals about the patient safety culture in Sarawak General Hospital (SGH).

    METHODS: A cross-sectional study using the 'Hospital Survey on Patient Safety Culture' (HSOPSC) questionnaire was carried out in SGH from March to April 2018. Random sampling was used to select a wide range of staff, and a self-administered questionnaire was distributed to 500 hospital staff consisting of doctors, nurses, pharmacists, and other clinical and non-clinical staff. A total of 407 respondents completed the questionnaire, giving a final response rate of 81.4%. Statistical analysis of the survey data was performed using SPSS 22.0 for Windows and the Hospital Data Entry and Analysis Tool for Microsoft Excel developed by the United States Agency for Healthcare Research and Quality (AHRQ).

    RESULTS: The majority of respondents graded the overall patient safety as acceptable (63.1%), while only 3.4% graded it as excellent. The overall patient safety score was 50.1%, and most of the dimension scores were lower than the benchmark scores (64.8%). Generally, the mean positive response rates for all dimensions were lower than the AHRQ composite data, except for 'Organizational Learning - Continuous Improvement', which also had the highest positive response rate (80%), above the AHRQ figure (73%). This result suggests that SGH has a good opportunity to improve over time as it gains experience and accumulates knowledge. On the other hand, the lowest percentage of positive responses was for 'Non-punitive response to error' (18%), meaning that most staff perceived that they would be punished for medical errors.

    CONCLUSIONS: The level of patient safety culture in SGH is acceptable, although most of the dimension scores were lower than the benchmark scores. As a learning organisation, SGH should address the issues of staffing, improve handoffs and transitions, and develop a non-punitive culture in response to error.

    Matched MeSH terms: Benchmarking
  12. Ali GA, Abubakar H, Alzaeemi SAS, Almawgani AHM, Sulaiman A, Tay KG
    PLoS One, 2023;18(9):e0286874.
    PMID: 37747876 DOI: 10.1371/journal.pone.0286874
    This study proposes a novel hybrid computational approach that integrates the artificial dragonfly algorithm (ADA) with the Hopfield neural network (HNN) to achieve an optimal representation of the Exact Boolean kSatisfiability (EBkSAT) logical rule. The primary objective is to investigate the effectiveness and robustness of the ADA algorithm in expediting the training phase of the HNN to attain an optimized EBkSAT logic representation. To assess the performance of the proposed hybrid computational model, a specific Exact Boolean kSatisfiability problem is constructed, and simulated data sets are generated. The evaluation metrics employed include the global minimum ratio (GmR), root mean square error (RMSE), mean absolute percentage error (MAPE), and network computational time (CT) for EBkSAT representation. Comparative analyses are conducted between the results obtained from the proposed model and existing models in the literature. The findings demonstrate that the proposed hybrid model, ADA-HNN-EBkSAT, surpasses existing models in terms of accuracy and computational time. This suggests that the ADA algorithm exhibits effective compatibility with the HNN for achieving an optimal representation of the EBkSAT logical rule. These outcomes carry significant implications for addressing intricate optimization problems across diverse domains, including computer science, engineering, and business.
    Matched MeSH terms: Benchmarking
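
    The RMSE and MAPE metrics listed in this abstract are standard; a small sketch of how they could be computed for retrieved versus target network states (the values are illustrative only, not the paper's EBkSAT outputs):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    """Mean absolute percentage error (y_true must be non-zero)."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Illustrative bipolar target/retrieved states standing in for network outputs.
target    = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
retrieved = np.array([0.9, -1.0, 0.8, 1.0, -0.7])
print(f"RMSE={rmse(target, retrieved):.3f}  MAPE={mape(target, retrieved):.1f}%")
```
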
  13. Alnoor A, Chew X, Khaw KW, Muhsen YR, Sadaa AM
    Environ Sci Pollut Res Int, 2024 Jan;31(4):5762-5783.
    PMID: 38133762 DOI: 10.1007/s11356-023-31645-8
    Greenhouse gas emissions and global warming are issues of growing concern. This study sought to underline the causal relationships between engagement modes with green technology, the environmental, social, and governance (ESG) ratio, and the circular economy. Our investigation also benchmarked energy companies' circular economy behaviors. A hybrid-stage analysis combining partial least squares structural equation modeling (PLS-SEM) and multi-criteria decision-making (MCDM) was adopted, with 713 questionnaires collected from heads of departments and managers of energy companies. The findings indicate that engagement modes with green technology affect the circular economy and sustainability, and that ESG ratings play a mediating role in the nexus between engagement modes with green technology and the circular economy. The MCDM application identified the best and worst energy companies in terms of circular economy behaviors. This study is exceptional because it is among the first to address the issue of greenhouse gas emissions by providing decisive evidence about the level of circular economy behaviors in energy companies.
    Matched MeSH terms: Benchmarking*
  14. Alsalem MA, Zaidan AA, Zaidan BB, Hashim M, Albahri OS, Albahri AS, et al.
    J Med Syst, 2018 Sep 19;42(11):204.
    PMID: 30232632 DOI: 10.1007/s10916-018-1064-9
    This study aims to systematically review prior research on the evaluation and benchmarking of automated acute leukaemia classification tasks. The review depends on three reliable search engines: ScienceDirect, Web of Science and IEEE Xplore. A research taxonomy developed for the review takes a wide perspective on automated detection and classification of acute leukaemia research and reflects usage trends in the evaluation criteria of this field. The taxonomy consists of three main research directions in this domain and involves two phases: the first covers all three research directions, while the second sets out all the criteria used for evaluating acute leukaemia classification. The final set comprises 83 studies, most of which focused on enhancing the accuracy and performance of detection and classification through proposed methods or systems; few efforts addressed the evaluation issues themselves. Within the final set, three groups of articles represent the main research directions in this domain: 56 articles highlight proposed methods, 22 involve proposals for system development and 5 centre on evaluation and comparison. The other side of the taxonomy comprises 16 main evaluation and benchmarking criteria and sub-criteria. This review highlights three serious issues in the evaluation and benchmarking of multiclass classification of acute leukaemia, namely conflicting criteria, evaluation criteria and criteria importance, and it also identifies the weaknesses of current benchmarking tools. To solve these issues, multicriteria decision-making (MCDM) analysis techniques are proposed as recommended methodological solutions, in the form of a decision support system based on MCDM for the evaluation and benchmarking of multiclass classification models for acute leukaemia. The proposed support system has three sequential phases. Phase One presents the identification procedure and process for establishing a decision matrix based on a crossover of the evaluation criteria and the acute leukaemia multiclass classification models. Phase Two describes the development of the decision matrix for selecting acute leukaemia classification models, based on the integrated best-worst method (BWM) and VIKOR. Phase Three entails the validation of the proposed system.
    Matched MeSH terms: Benchmarking*
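
    VIKOR, one half of the integrated BWM-VIKOR step, follows a fixed recipe; a minimal sketch ranking hypothetical classifiers over benefit-type criteria (the weights and scores are illustrative, and the BWM weighting step is not shown):

```python
import numpy as np

def vikor(matrix, weights, v=0.5):
    """Rank alternatives (rows) with VIKOR; all criteria treated as benefit-type."""
    f_best, f_worst = matrix.max(axis=0), matrix.min(axis=0)
    norm = (f_best - matrix) / (f_best - f_worst)   # regret per criterion
    S = (weights * norm).sum(axis=1)                # group utility
    R = (weights * norm).max(axis=1)                # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q                                        # lower Q = better alternative

# Hypothetical classifier scores on three evaluation criteria (illustrative).
scores = np.array([[0.92, 0.88, 0.75],
                   [0.85, 0.95, 0.80],
                   [0.90, 0.90, 0.70]])
Q = vikor(scores, weights=np.array([0.5, 0.3, 0.2]))
print(np.argsort(Q))   # classifiers ranked best-first
```
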
  15. Angers-Loustau A, Petrillo M, Bengtsson-Palme J, Berendonk T, Blais B, Chan KG, et al.
    F1000Res, 2018;7.
    PMID: 30026930 DOI: 10.12688/f1000research.14509.2
    Next-Generation Sequencing (NGS) technologies are expected to play a crucial role in the surveillance of infectious diseases, with their unprecedented capabilities for the characterisation of the genetic information underlying the virulence and antimicrobial resistance (AMR) properties of microorganisms. In the implementation of any novel technology for regulatory purposes, important considerations such as harmonisation, validation and quality assurance need to be addressed. NGS technologies pose unique challenges in these regards, in part due to their reliance on bioinformatics for the processing and proper interpretation of the data produced. Well-designed benchmark resources are thus needed to evaluate, validate and ensure continued quality control over the bioinformatics component of the process. This concept was explored as part of a workshop on "Next-generation sequencing technologies and antimicrobial resistance" held on 4-5 October 2017. Challenges involved in the development of such a benchmark resource, with a specific focus on identifying the molecular determinants of AMR, were identified. For each challenge, a set of unsolved questions that will need to be tackled was compiled. These take into consideration the requirement for monitoring of AMR bacteria in humans, animals, food and the environment, which is aligned with the principles of a "One Health" approach.
    Matched MeSH terms: Benchmarking
  16. Anuar, I., Zahedi, F., Kadir, A., Mokhtar, A.B.
    MyJurnal
    Background: Occupationally acquired accidents and injuries in Malaysian medical laboratories were still largely unexplored prior to this survey. This survey attempts to address some of these questions and to serve as a source of reference on the number of accidents and injuries in medical laboratories in the Klang Valley area and in Malaysia more broadly.
    Methods: This survey was carried out on recordable cases throughout the calendar years 2001 to 2005 from three main medical laboratories: Hospital Kuala Lumpur (HKL), Hospital Universiti Kebangsaan Malaysia (HUKM) and Pusat Perubatan Universiti Malaya (PPUM).
    Results: The average annual incident rate for these three medical laboratories is 2.05 per 100 full-time equivalent (FTE) employees; the annual incident rates for the individual laboratories are 2.04/100 FTE (HKL), 2.07/100 FTE (HUKM) and 2.04/100 FTE (PPUM). The most common injury, accounting for 25.3% of the total reported cases, was cuts by sharp objects, and the second most common was exposure to biohazardous and chemical substances, at 19.9% of the total. Needle prick injuries (16.8%), fire (8.4%) and falls/slips (6.3%) followed, while a gas leak and being locked in the cold room were each reported once.
    Conclusion: The average incident rate from this study is remarkably similar to the incident injury rate reported by the BLS (2006), 2.1/100 FTE for medical and diagnostic laboratories of average size. Besides serving as a comparison, this incident rate of injury and illness can also be used as a benchmark to evaluate safety performance among medical laboratories in Malaysia.
    Matched MeSH terms: Benchmarking
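
    The incident rate per 100 full-time equivalent employees used throughout these results is a simple ratio; a one-line sketch (the case and FTE counts are illustrative, chosen to reproduce the reported 2.05 average):

```python
# Incident rate per 100 full-time-equivalent (FTE) employees:
#   rate = (recorded cases / total FTE employees) * 100
# The figures below are illustrative, not the surveyed laboratories' raw counts.
cases, fte_employees = 41, 2000
rate = cases / fte_employees * 100
print(f"{rate:.2f} incidents per 100 FTE")   # 2.05, matching the reported average
```
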
  17. Anwar M, Abdullah AH, Altameem A, Qureshi KN, Masud F, Faheem M, et al.
    Sensors (Basel), 2018 Sep 26;18(10).
    PMID: 30261628 DOI: 10.3390/s18103237
    Recent technological advancements in wireless communication have led to the invention of wireless body area networks (WBANs), a cutting-edge technology in healthcare applications. WBANs interconnect intelligent, miniaturized biomedical sensor nodes placed on the human body to enable unattended monitoring of a patient's physiological parameters. These sensors are equipped with limited resources in terms of computation, storage, and battery power, and data communication in WBANs is a resource-hungry process, especially in terms of energy. One of the most significant challenges in this network is to design an energy-efficient next-hop node selection framework. Therefore, this paper presents a green communication framework focusing on an energy-aware, link-efficient routing approach for WBANs (ELR-W). Firstly, a link-efficiency-oriented network model is presented, considering beaconing information and the network initialization process. Secondly, a path cost calculation model is derived, focusing on energy-aware link efficiency. The complete operational framework ELR-W is then developed, considering energy-aware next-hop link selection and utilizing the network and path cost models. A comparative performance evaluation attests to the energy-oriented benefit of the proposed framework compared with state-of-the-art techniques, revealing a significant enhancement in body area networking in terms of various energy-oriented metrics under medical environments.
    Matched MeSH terms: Benchmarking
  18. Arasteh-Rad, H., Khairulmizam Samsudin, Abdul Rahman Ramli, Mohammad Ali Tavallaie
    MyJurnal
    The rapid development of roads and the increasing number of vehicles have complicated road traffic enforcement in many countries, given the limited resources of the traffic police, specifically where traffic infraction registration is done manually. The efficiency of the traffic police can be improved by a computer-based method. This study focused on benchmarking a mobile traffic infraction registration system, which is used to evaluate server performance under load. The study attempts to provide a clear guideline for the performance evaluation of mobile road traffic infraction registration systems, on whose basis the traffic police can decide whether to migrate from the manual method toward a computer-based method. A closed-model benchmark tool was used to evaluate system performance. The tool was configured to imitate ramp scenarios, and statistics were gathered while the server was monitored at different times and workloads. Contributing factors include bottlenecks, traffic, and response time, which relate to the criteria and measurements. System resources were also monitored during the tests.
    Matched MeSH terms: Benchmarking
  19. Asghar MA, Khan MJ, Rizwan M, Shorfuzzaman M, Mehmood RM
    Multimed Syst, 2021 Apr 21.
    PMID: 33897112 DOI: 10.1007/s00530-021-00782-w
    The classification of human emotions based on electroencephalography (EEG) is a very popular topic nowadays in the provision of human health care and well-being. Fast and effective emotion recognition can play an important role in understanding a patient's emotions and in monitoring stress levels in real time. Due to the noisy and non-linear nature of the EEG signal, emotions remain difficult to characterize and can generate large feature vectors. In this article, we propose an efficient spatial feature extraction and feature selection method with a short processing time. The raw EEG signal is first divided into a smaller set of intrinsic mode functions (IMFs) using the empirical mode decomposition proposed in our work, known as intensive multivariate empirical mode decomposition (iMEMD). Spatio-temporal analysis is performed with the Complex Continuous Wavelet Transform (CCWT) to collect all the information in the time and frequency domains. The multiple-model extraction method uses three deep neural networks (DNNs) to extract features, which are fused into a combined feature vector. To overcome the computational burden, we propose a differential entropy and mutual information method, which further reduces feature size by selecting high-quality features and pooling the k-means results to produce lower-dimensional qualitative feature vectors. The system seems complex, but once the network is trained with this model, real-time application testing and validation with good classification performance are fast. The proposed feature selection method is validated for benchmarking with two publicly available data sets, SEED and DEAP. The method is computationally cheaper than more modern emotion recognition methods, provides real-time emotion analysis, and offers good classification accuracy.
    Matched MeSH terms: Benchmarking
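
    The differential entropy and mutual information step is described only at a high level; a hedged sketch of one plausible reading, using a Gaussian entropy estimate and scikit-learn's mutual information scorer on stand-in feature vectors (not the paper's iMEMD/CCWT/DNN pipeline):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def differential_entropy(x):
    """Differential entropy of a feature under a Gaussian assumption:
    h = 0.5 * log(2 * pi * e * var)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 64))          # stand-in DNN feature vectors
y = rng.integers(0, 2, 200)                 # stand-in emotion labels

# Keep the features carrying the most mutual information with the labels.
mi = mutual_info_classif(X, y, random_state=0)
top_features = np.argsort(-mi)[:16]
X_reduced = X[:, top_features]
print(X_reduced.shape, differential_entropy(X[:, 0]))
```
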
  20. Bahashwan AA, Anbar M, Manickam S, Issa G, Aladaileh MA, Alabsi BA, et al.
    PLoS One, 2024;19(2):e0297548.
    PMID: 38330004 DOI: 10.1371/journal.pone.0297548
    Software Defined Networking (SDN) has alleviated traditional network limitations but faces a significant challenge in the risk of Distributed Denial of Service (DDoS) attacks against the SDN controller, with current detection methods lacking evaluation on realistic SDN datasets and standard DDoS attacks (i.e., high-rate DDoS attacks). Therefore, a realistic dataset called HLD-DDoSDN is introduced, encompassing prevalent DDoS attacks specifically aimed at an SDN controller, such as Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), and User Datagram Protocol (UDP) flooding. This SDN dataset also incorporates diverse levels of traffic fluctuation, representing different traffic variation rates (i.e., high and low rates) of DDoS attacks. It is qualitatively compared with existing SDN datasets and quantitatively evaluated across all eight scenarios to ensure its superiority. Furthermore, it fulfils the requirements of a benchmark dataset in terms of size and variety of attacks and scenarios, with significant features that contribute strongly to detecting realistic SDN attacks. The features of HLD-DDoSDN are evaluated using a Deep Multilayer Perceptron (D-MLP) based detection approach. Experimental findings indicate that the employed features achieve high accuracy, recall, and precision in detecting high- and low-rate DDoS flooding attacks.
    Matched MeSH terms: Benchmarking*
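
    The accuracy, recall, and precision used to evaluate the D-MLP detector are standard classification metrics; a minimal sketch with illustrative labels (1 = DDoS flooding, 0 = benign; not the HLD-DDoSDN data):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Illustrative ground-truth and predicted labels for a flow classifier.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 1])

print(f"accuracy={accuracy_score(y_true, y_pred):.2f}",
      f"precision={precision_score(y_true, y_pred):.2f}",
      f"recall={recall_score(y_true, y_pred):.2f}")
```
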