Displaying publications 1 - 20 of 29 in total

  1. Izadi D, Abawajy JH, Ghanavati S, Herawan T
    Sensors (Basel), 2015;15(2):2964-79.
    PMID: 25635417 DOI: 10.3390/s150202964
    The success of a Wireless Sensor Network (WSN) deployment strongly depends on the quality of service (QoS) it provides regarding issues such as data accuracy, data aggregation delays and network lifetime maximisation. This is especially challenging in data fusion mechanisms, where a small fraction of low-quality data in the fusion input may negatively impact the overall fusion result. In this paper, we present a fuzzy-based data fusion approach for WSNs with the aim of increasing the QoS whilst reducing the energy consumption of the sensor network. The proposed approach is able to distinguish and aggregate only the true values of the collected data, thus reducing the burden of processing the entire data set at the base station (BS). It is also able to eliminate redundant data and consequently reduce energy consumption, thus increasing the network lifetime. We studied the effectiveness of the proposed data fusion approach experimentally and compared it with two baseline approaches in terms of data collection, number of transferred data packets and energy consumption. The results of the experiments show that the proposed approach achieves better results than the baseline approaches.
    Matched MeSH terms: Data Accuracy
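The fusion idea described in the abstract above can be illustrated with a minimal sketch. This is not the authors' algorithm: it assumes a triangular membership function centred on the running median, a window width of 5.0 and a membership threshold of 0.5, all of which are illustrative choices. Readings with low membership (outliers) are discarded before a membership-weighted aggregation.

```python
# Hedged sketch of fuzzy-style sensor fusion (illustrative, not the paper's
# method): weight each reading by a triangular membership function centred
# on the median, drop low-membership outliers, then aggregate.

def triangular_membership(x, centre, width):
    """Degree (0..1) to which x belongs to the fuzzy set 'near centre'."""
    d = abs(x - centre)
    return max(0.0, 1.0 - d / width) if width > 0 else 0.0

def fuzzy_fuse(readings, width=5.0, threshold=0.5):
    """Fuse sensor readings, discarding low-membership (outlier) values."""
    centre = sorted(readings)[len(readings) // 2]  # median as reference
    scored = [(r, triangular_membership(r, centre, width)) for r in readings]
    kept = [(r, m) for r, m in scored if m >= threshold]
    total = sum(m for _, m in kept)
    return sum(r * m for r, m in kept) / total  # membership-weighted mean
```

For example, fusing `[20.1, 19.9, 20.0, 35.0]` excludes the 35.0 outlier entirely and returns a value close to 20, which is the sense in which only "true values" reach the base station.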
  2. Ahmed A, Sadullah AFM, Yahya AS
    Accid Anal Prev, 2019 Sep;130:3-21.
    PMID: 28764851 DOI: 10.1016/j.aap.2017.07.018
    Most of the decisions taken to improve road safety are based on accident data, which makes it the backbone of any country's road safety system. Errors in this data will lead to misidentification of black spots and hazardous road segments, projection of false estimates pertinent to accident and fatality rates, and detection of wrong parameters responsible for accident occurrence, thereby making the entire road safety exercise ineffective. The extent of error varies from country to country depending upon various factors. Knowing the type of error in the accident data and the factors causing it enables the application of the correct method for its rectification. Therefore, there is a need for a systematic literature review that addresses the topic at a global level. This paper fills this research gap by providing a synthesis of the literature on the different types of errors found in the accident data of 46 countries across the six regions of the world. The errors are classified and discussed with respect to each type and analysed with respect to income level; an assessment of the magnitude of each type is provided, followed by the different causes that result in their occurrence, and the various methods used to address each type of error. Among high-income countries the extent of error in reporting slight, severe, non-fatal and fatal injury accidents varied between 39-82%, 16-52%, 12-84%, and 0-31% respectively. For middle-income countries the error for the same categories varied between 93-98%, 32.5-96%, 34-99% and 0.5-89.5% respectively. The only four studies available for low-income countries showed that the error in reporting non-fatal and fatal accidents varied between 69-80% and 0-61% respectively. The logistic relation of error in accident data reporting, dichotomised at 50%, indicated that as the income level of a country increases, the probability of having less error in accident data also increases.
    Average error in recording information related to the variables in the categories of location, victim's information, vehicle's information, and environment was 27%, 37%, 16% and 19% respectively. Among the causes identified for errors in accident data reporting, the policing system was found to be the most important. Overall, 26 causes of errors in accident data were discussed, of which 12 were related to reporting and 14 were related to recording. "Capture-recapture" was the most widely used of the 11 different methods that can be used for the rectification of under-reporting. There were 12 studies pertinent to the rectification of accident location, and almost all of them utilised a Geographical Information System (GIS) platform coupled with a matching algorithm to estimate the correct location. It is recommended that the policing system be reformed and public awareness be created to help reduce errors in accident data.
    Matched MeSH terms: Data Accuracy*
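The "capture-recapture" method mentioned above for correcting under-reporting can be sketched with the classic two-source Lincoln-Petersen estimator in its bias-corrected (Chapman) form. The counts below are illustrative, not figures from the review: given accidents reported by police (n1), by a second source such as hospitals (n2), and the number matched in both (m), it estimates the true total, from which each source's under-reporting rate follows.

```python
# Hedged sketch of two-source capture-recapture (Chapman's bias-corrected
# Lincoln-Petersen estimator). Counts below are illustrative only.

def chapman_estimate(n1, n2, m):
    """Bias-corrected capture-recapture estimate of the true count N."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

police, hospital, matched = 800, 600, 480   # hypothetical counts
n_hat = chapman_estimate(police, hospital, matched)
police_underreporting = 1 - police / n_hat  # share of accidents police missed
```

With these hypothetical counts the estimated true total is roughly 1,000 accidents, implying the police source misses about 20% of them; real applications also require that the two sources be independent and that matching be reliable.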
  3. Sharmini S, Jamaiyah H, Jaya Purany SP
    Malays Fam Physician, 2010;5(1):13-8.
    PMID: 25606180 MyJurnal
    This survey set out to describe patient registries available in the country, to determine their security features, data confidentiality, extent of outputs produced and data quality of the registries.
    Matched MeSH terms: Data Accuracy
  4. Wong LP
    Malays Fam Physician, 2008;3(1):14-20.
    PMID: 25606106 MyJurnal
    Qualitative data are often subjective, rich, and consist of in-depth information normally presented in the form of words. Analysing qualitative data entails reading a large number of transcripts looking for similarities or differences, and subsequently finding themes and developing categories. Traditionally, researchers 'cut and paste' and use coloured pens to categorise data. Recently, the use of software specifically designed for qualitative data management has greatly reduced the technical sophistication required and eased this laborious task, making the process relatively easier. A number of computer software packages have been developed to mechanise this 'coding' process as well as to search and retrieve data. This paper illustrates the ways in which NVivo can be used in the qualitative data analysis process. The basic features and primary tools of NVivo which assist qualitative researchers in managing and analysing their data are described.
    Matched MeSH terms: Data Accuracy
  5. Nolida Yussup, Nur Aira Abd. Rahman, Ismail Mustapha, Jaafar Abdullah, Mohd. Ashhar Khalid, Hearie Hassan, et al.
    Data transmission in fieldwork, especially in the industrial, gas and chemical sectors, is of paramount importance to ensure data accuracy and delivery time. The development of a wireless detector system for remote data acquisition for industrial fieldwork is described in this paper. Wireless communication is a viable and cost-effective method of transmitting data from the detector to an on-site laptop, facilitating automatic data storage and analysis for applications such as column scanning. The project involves hardware design for the detector and electronics, as well as programming for the control board and user interface. A prototype of a wireless gamma scintillation detector is developed, capable of transmitting data to a computer via radio frequency (RF) within the 433 MHz band at a baud rate of 19,200 and recording the data.
    Matched MeSH terms: Data Accuracy
  6. Ismail A, Idris MYI, Ayub MN, Por LY
    Sensors (Basel), 2018 Dec 10;18(12).
    PMID: 30544660 DOI: 10.3390/s18124353
    Smart manufacturing enables an efficient manufacturing process by optimizing production and product transactions. The optimization is performed through data analytics that requires reliable and informative data as input. Therefore, in this paper, an accurate data capture approach based on a vision sensor is proposed. Three image recognition methods are studied to determine the best vision-based classification technique, namely Bag of Words (BOW), Spatial Pyramid Matching (SPM) and Convolutional Neural Network (CNN). The vision-based classifiers categorize apples as defective or non-defective, which can be used for automatic inspection, sorting and further analytics. A total of 550 apple images were collected to test the classifiers, consisting of 275 non-defective and 275 defective apples. The defective category includes various types of defect and severity. The vision-based classifiers are trained and evaluated using K-fold cross-validation, and their performances for 2-fold, 3-fold, 4-fold, 5-fold and 10-fold are compared. From the evaluation, SPM with an SVM classifier attained 98.15% classification accuracy for 10-fold and outperformed the others. In terms of computational time, CNN with an SVM classifier is the fastest; however, a minimal time difference is observed between the computational times of CNN and SPM, which were separated by only 0.05 s.
    Matched MeSH terms: Data Accuracy
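The K-fold evaluation procedure used in the abstract above has a simple mechanical core. The sketch below is not the paper's SPM/CNN pipeline: it substitutes a trivial nearest-centroid classifier on 1-D synthetic "defect scores" so that the fold splitting and accuracy averaging are explicit; the data and class labels are invented for illustration.

```python
# Hedged sketch of K-fold cross-validation (illustrative classifier and
# data, not the paper's vision pipeline).

def nearest_centroid_predict(train, x):
    """Classify x by the closest class mean of the training samples."""
    groups = {}
    for value, label in train:
        groups.setdefault(label, []).append(value)
    centroids = {lab: sum(v) / len(v) for lab, v in groups.items()}
    return min(centroids, key=lambda lab: abs(x - centroids[lab]))

def k_fold_accuracy(data, k):
    """Mean accuracy over k folds; data is a list of (value, label)."""
    fold_size = len(data) // k
    accs = []
    for i in range(k):
        test = data[i * fold_size:(i + 1) * fold_size]
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]
        correct = sum(nearest_centroid_predict(train, x) == y for x, y in test)
        accs.append(correct / len(test))
    return sum(accs) / k

# 10 "ok" (low score) and 10 "bad" (high score) samples, interleaved so
# every fold contains both classes.
data = [(0.1 * i, "ok") if i % 2 == 0 else (5 + 0.1 * i, "bad")
        for i in range(20)]
acc = k_fold_accuracy(data, 5)
```

On this cleanly separable toy data every fold scores 100%; the paper's comparison of 2-fold through 10-fold simply repeats this averaging with different values of k on the 550 apple images.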
  7. Nuryazmin Ahmat Zainuri, Abdul Aziz Jemain, Nora Muda
    Sains Malaysiana, 2015;44:449-456.
    This paper presents various imputation methods for air quality data, specifically in Malaysia. The main objective was to select the best method of imputation and to compare whether there was any difference in the methods' performance between stations in Peninsular Malaysia. Missing data for various cases were randomly simulated at 5, 10, 15, 20, 25 and 30% missing. The six methods used in this paper were mean and median substitution, the expectation-maximization (EM) method, singular value decomposition (SVD), the K-nearest neighbour (KNN) method and the sequential K-nearest neighbour (SKNN) method. The performance of the imputations is compared using three performance indicators: the correlation coefficient (R), the index of agreement (d) and the mean absolute error (MAE). Based on the results obtained, it can be concluded that EM, KNN and SKNN are the three best methods. The same results were obtained for all eight monitoring stations used in this study.
    Matched MeSH terms: Data Accuracy
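The mean-substitution baseline and the MAE performance indicator from the study above can be sketched in a few lines. The series and the simulated 20% missing pattern below are illustrative, not the study's air quality data: values at known positions are blanked out, imputed with the observed mean, and scored with the mean absolute error at exactly those positions.

```python
# Hedged sketch: mean substitution and the MAE indicator on a synthetic
# series (illustrative values, not the study's data).

def mean_impute(series):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in series if v is not None]
    m = sum(observed) / len(observed)
    return [m if v is None else v for v in series]

def mae(truth, filled, missing_idx):
    """Mean absolute error at the positions that were missing."""
    return sum(abs(truth[i] - filled[i]) for i in missing_idx) / len(missing_idx)

truth = [50.0, 52.0, 48.0, 51.0, 49.0, 53.0, 47.0, 50.0, 52.0, 48.0]
missing_idx = [2, 7]                                  # simulate 20% missing
series = [None if i in missing_idx else v for i, v in enumerate(truth)]
filled = mean_impute(series)
err = mae(truth, filled, missing_idx)                 # lower is better
```

The study's stronger methods (EM, KNN, SKNN) differ only in how the blanks are filled; the simulation-then-score loop is the same.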
  8. Norshahida Shaadan, Sayang Mohd Deni, Abdul Aziz Jemain
    Sains Malaysiana, 2015;44:1531-1540.
    In most research, including environmental research, missing recorded data often exist and have become a common problem for data quality. In this study, several imputation methods designed based on techniques for functional data analysis are introduced, and the capability of the methods for estimating missing values is investigated. Single imputation methods and iterative imputation methods are conducted by means of curve estimation using regression and roughness penalty smoothing approaches. The performance of the methods is compared using a reference data set, the real PM10 data from an air quality monitoring station, namely the Petaling Jaya station located in the western part of Peninsular Malaysia. A hundred missing data sets generated from the reference data set with six different patterns of missing values are used to investigate the performance of the considered methods. The patterns are simulated according to three percentages (5, 10 and 15) of missing values with respect to two different sizes (3 and 7) of maximum gap length (consecutive missing points). By means of the mean absolute error, the index of agreement and the coefficient of determination as performance indicators, the results showed that the iterative imputation method using the roughness penalty approach is more flexible and superior to the other methods.
    Matched MeSH terms: Data Accuracy
    According to San Fillipo (2006), death is not the end of one's existence, but rather a transition from one life to another. However, this differs based on how societies and individuals see the concept of death itself and how they understand it. Thus, this article aims to explore the understanding of the relationship between culture and religion, which forms their identity, in terms of death and the life after. A qualitative approach was adopted for this study; interviews and empirical observation were used to obtain quality data.
    Matched MeSH terms: Data Accuracy
  10. Abdulrauf Sharifai G, Zainol Z
    Genes (Basel), 2020 06 27;11(7).
    PMID: 32605144 DOI: 10.3390/genes11070717
    Training a machine learning algorithm on an imbalanced data set is an inherently challenging task. It becomes more demanding with limited samples but a massive number of features (high dimensionality). High-dimensional and imbalanced data sets have posed severe challenges in many real-world applications, such as biomedical data sets. Numerous researchers have investigated either imbalanced classes or high-dimensional data sets and come up with various methods. Nonetheless, few approaches reported in the literature have addressed the intersection of the high-dimensionality and imbalanced-class problems due to their complicated interactions. Lately, feature selection has become a well-known technique used to overcome this problem by selecting discriminative features that represent the minority and majority classes. This paper proposes a new method called Robust Correlation-Based Redundancy and Binary Grasshopper Optimization Algorithm (rCBR-BGOA); rCBR-BGOA employs an ensemble of multi-filters coupled with the correlation-based redundancy method to select optimal feature subsets. A binary grasshopper optimisation algorithm (BGOA) is used to formulate the feature selection process as an optimisation problem and to select the best (near-optimal) combination of features from the majority and minority classes. The obtained results, supported by proper statistical analysis, indicate that rCBR-BGOA can improve the classification performance for high-dimensional and imbalanced data sets in terms of the G-mean and Area Under the Curve (AUC) performance metrics.
    Matched MeSH terms: Data Accuracy*
  11. Brown S, Muhamad N, C Pedley K, C Simcock D
    Mol Biol Res Commun, 2014 Mar;3(1):21-32.
    PMID: 27843974
    Even purified enzyme preparations are often heterogeneous. For example, preparations of aspartate aminotransferase or cytochrome oxidase can consist of several different forms of the enzyme. For this reason we consider how different the kinetics of the reactions catalysed by a mixture of forms of an enzyme must be to provide some indication of the characteristics of the species present. Based on the standard Michaelis-Menten model, we show that if the Michaelis constants (Km) of two isoforms differ by a factor of at least 20 the steady-state kinetics can be used to characterise the mixture. However, even if heterogeneity is reflected in the kinetic data, the proportions of the different forms of the enzyme cannot be estimated from the kinetic data alone. Consequently, the heterogeneity of enzyme preparations is rarely reflected in measurements of their steady-state kinetics unless the species present have significantly different kinetic properties. This has two implications: (1) it is difficult, but not impossible, to detect molecular heterogeneity using kinetic data and (2) even when it is possible, a considerable quantity of high quality data is required.
    Matched MeSH terms: Data Accuracy
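The two-isoform scenario analysed above follows directly from the standard Michaelis-Menten model: the mixture's rate is the sum of the rates of the independent species. The sketch below uses illustrative constants with the Km values differing by the factor of 20 that the authors identify as the detection threshold.

```python
# Hedged sketch: rate of a two-isoform mixture under the standard
# Michaelis-Menten model, v = sum_i Vmax_i * S / (Km_i + S).
# Constants are illustrative; the Km ratio is 20, the paper's threshold.

def mm_rate(s, vmax, km):
    """Michaelis-Menten rate for one enzyme species at substrate conc. s."""
    return vmax * s / (km + s)

def mixture_rate(s, species):
    """Total rate of a mixture of independent species [(vmax, km), ...]."""
    return sum(mm_rate(s, vmax, km) for vmax, km in species)

species = [(1.0, 0.1), (1.0, 2.0)]    # Km values differ by a factor of 20
v_low = mixture_rate(0.1, species)     # substrate near the smaller Km
v_high = mixture_rate(100.0, species)  # near saturation of both species
```

At low substrate the low-Km species dominates the observed rate while at saturation both contribute fully, which is why a sufficiently large Km ratio makes the mixture's steady-state curve deviate from any single hyperbola; as the authors note, the proportions of the two forms still cannot be recovered from such data alone.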
  12. Majdi HS, Saud AN, Saud SN
    Materials (Basel), 2019 May 29;12(11).
    PMID: 31146451 DOI: 10.3390/ma12111752
    Porous γ-alumina is widely used as a catalyst carrier due to its chemical properties. These properties are strongly correlated with the physical properties of the material, such as porosity, density, shrinkage, and surface area. This study presents a technique that is less time-consuming than other techniques to predict the values of the above-mentioned physical properties of porous γ-alumina via an artificial neural network (ANN) numerical model. The experimental data set comprised 30 samples that varied in terms of sintering temperature, yeast concentration, and soaking time. Of the 30 experimental samples, 25 were used for training, while the other five were used for testing against the experimental procedure. The results showed that the predicted and experimental data were in good agreement, and it was concluded that the proposed model is proficient at providing high-accuracy estimates without recourse to complex analytical equations.
    Matched MeSH terms: Data Accuracy
  13. Norhaiza K, Rozainee K, Mohd Noor Abdul H, Phillip H, McGill T, Zainah Ahmad Z
    Jurnal Psikologi Malaysia, 2016;30:102-113.
    This research examined how managers in universities incorporate non-financial measures in their Learning Management System (LMS) decision-making processes, with a particular focus on the importance of the Human Capital perspective. A mixed-methods approach to data collection was used, involving both interviews and questionnaires. The qualitative data from the interviews were coded and analysed using a descriptive coding method with thematic analysis, following an inductive approach in which the categories of criteria and indicators were not determined before the interviews. The participants in this research were five members of LMS decision-making teams at two different universities in Australia and 24 participants from different universities in Malaysia who were involved in LMS decision-making processes at their universities. The results indicated substantial support for using a multi-dimensional decision-making model among IT decision makers at universities, particularly the Human Capital perspective, and participants believed that Human Capital measures are important and should be considered in an LMS decision-making process. The research has implications both for theory and for practitioners: it contributes to knowledge on LMS decision making in universities and IT decision making in general, and also to improving actual decision-making practices.
    Matched MeSH terms: Data Accuracy
  14. Alanazi HO, Abdullah AH, Qureshi KN, Ismail AS
    Ir J Med Sci, 2018 May;187(2):501-513.
    PMID: 28756541 DOI: 10.1007/s11845-017-1655-3
    INTRODUCTION: Information and communication technologies (ICTs) have brought new integrated operations and methods to all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care have likewise been influenced by new technologies to predict different disease outcomes. However, existing predictive models still suffer from some limitations in terms of predictive performance.

    AIMS AND OBJECTIVES: In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To assess the model's performance, this paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its severe impact on human life.

    CONCLUSION: The proposed predictive model improves the predictive performance for TBI. The TBI data set was developed and its features approved by neurologists. The experimental results show that the proposed model achieved significant results in terms of accuracy, sensitivity, and specificity.

    Matched MeSH terms: Data Accuracy
  15. Nor Azira Ayob, Sity Daud, Nurul Nadia Abu Hassan
    Human resource development comprises skills, abilities, creativity and talent, which are among the factors in human capital as well as in competitiveness. Hence, the emphasis on human capital development and competitiveness is important for becoming a good leader for family, community and country. It is also important in ensuring that entrepreneurs can compete in today's market economy and are able to meet customer demand. Thus, the objective is to identify the factors that can contribute to improving the human capital and competitiveness of women, because identifying the right factors will enable the government to act in accordance with them. To obtain the factors contributing to human capital development, a survey was conducted on 145 respondents among Bumiputera women entrepreneurs in Melaka state, supported by qualitative data from 10 informants. The findings, through exploratory factor analysis, show that four main factors contribute to human capital development among Bumiputera women entrepreneurs, namely education and training, experience, social support and creativity, while three main factors contribute to their competitiveness, namely financial assistance, facilities and infrastructure, and commitment. Thus, the government is advised to emphasise education and training as well as financial assistance to improve the human capital and competitiveness needed to support women entrepreneurs in increasing their performance.
    Matched MeSH terms: Data Accuracy
  16. Chuah, S.Y., Thong, M.K.
    JUMMEC, 2018;21(2):53-58.
    There has been increasing and strong public interest in rare diseases and orphan drugs, as well as in the issue of compulsory licensing for expensive medications in Malaysia, in the mass media and social media. We reviewed the issues of orphan drugs and the challenges faced in many countries in developing appropriate health-financing models as well as in obtaining accurate data on rare diseases. We also reviewed older off-patent medications and developments in how policy-makers can intervene to make expensive treatment affordable and sustainable for patients and the country.
    Matched MeSH terms: Data Accuracy
  17. He C, Levis B, Riehm KE, Saadat N, Levis AW, Azar M, et al.
    Psychother Psychosom, 2020;89(1):25-37.
    PMID: 31593971 DOI: 10.1159/000502294
    BACKGROUND: Screening for major depression with the Patient Health Questionnaire-9 (PHQ-9) can be done using a cutoff or the PHQ-9 diagnostic algorithm. Many primary studies publish results for only one approach, and previous meta-analyses of the algorithm approach included only a subset of primary studies that collected data and could have published results.

    OBJECTIVE: To use an individual participant data meta-analysis to evaluate the accuracy of two PHQ-9 diagnostic algorithms for detecting major depression and compare accuracy between the algorithms and the standard PHQ-9 cutoff score of ≥10.

    METHODS: Medline, Medline In-Process and Other Non-Indexed Citations, PsycINFO, and Web of Science were searched (January 1, 2000, to February 7, 2015). Eligible studies classified current major depression status using a validated diagnostic interview.

    RESULTS: Data were included for 54 of 72 identified eligible studies (n participants = 16,688, n cases = 2,091). Among studies that used a semi-structured interview, pooled sensitivity and specificity (95% confidence interval) were 0.57 (0.49, 0.64) and 0.95 (0.94, 0.97) for the original algorithm and 0.61 (0.54, 0.68) and 0.95 (0.93, 0.96) for a modified algorithm. Algorithm sensitivity was 0.22-0.24 lower compared to fully structured interviews and 0.06-0.07 lower compared to the Mini International Neuropsychiatric Interview. Specificity was similar across reference standards. For PHQ-9 cutoff of ≥10 compared to semi-structured interviews, sensitivity and specificity (95% confidence interval) were 0.88 (0.82-0.92) and 0.86 (0.82-0.88).

    CONCLUSIONS: The cutoff score approach appears to be a better option than a PHQ-9 algorithm for detecting major depression.

    Matched MeSH terms: Data Accuracy*
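The sensitivity and specificity figures reported above come from a 2x2 confusion table of screening result against the diagnostic-interview reference standard. The sketch below shows the computation; the counts are illustrative numbers chosen to echo the abstract's cutoff-score figures, not the meta-analysis data.

```python
# Hedged sketch: sensitivity and specificity from a 2x2 confusion table
# (screen vs. reference interview). Counts are illustrative only.

def sensitivity(tp, fn):
    """Proportion of true (interview-diagnosed) cases the screen detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of non-cases the screen correctly rules out."""
    return tn / (tn + fp)

# e.g. 88 of 100 cases screen positive; 860 of 1,000 non-cases screen negative
sens = sensitivity(tp=88, fn=12)
spec = specificity(tn=860, fp=140)
```

The abstract's comparison amounts to computing these two quantities for each scoring rule (algorithm vs. cutoff ≥10) against each reference standard and pooling them across studies.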
  18. Sepucha KR, Matlock DD, Wills CE, Ropka M, Joseph-Williams N, Stacey D, et al.
    Med Decis Making, 2014 07;34(5):560-6.
    PMID: 24713692 DOI: 10.1177/0272989X14528381
    BACKGROUND: This review systematically appraises the quality of reporting of measures used in trials to evaluate the effectiveness of patient decision aids (PtDAs) and presents recommendations for minimum reporting standards.

    METHODS: We reviewed measures of decision quality and decision process in 86 randomized controlled trials (RCTs) from the 2011 Cochrane Collaboration systematic review of PtDAs. Data on development of the measures, reliability, validity, responsiveness, precision, interpretability, feasibility, and acceptability were independently abstracted by 2 reviewers.

    RESULTS: Information from 178 instances of use of measures was abstracted. Very few studies reported data on the performance of measures, with reliability (21%) and validity (16%) being the most common. Studies using new measures were less likely to include information about their psychometric performance. The review was limited to reporting of measures in studies included in the Cochrane review and did not consult prior publications.

    CONCLUSIONS: Very little is reported about the development or performance of measures used to evaluate the effectiveness of PtDAs in published trials. Minimum reporting standards are proposed to enable authors to prepare study reports, editors and reviewers to evaluate submitted papers, and readers to appraise published studies.

    Matched MeSH terms: Data Accuracy
  19. Mohd Said Nurumal, Sarah Sheikh Abdul Karim
    Information regarding out-of-hospital cardiac arrest incidence, including outcomes, in Malaysia is limited and fragmented. This study aims to identify the incidence of, and adherence to protocol in, out-of-hospital cardiac arrest, and also to explore the issues faced by pre-hospital personnel with regard to the management of cardiac arrest victims in Kuala Lumpur, Malaysia. A mixed-method approach combining qualitative and quantitative study designs was used. Two hundred and eighty-five (285) pre-hospital care data sheets for out-of-hospital cardiac arrest during 2011 were examined using checklists to identify the incidence and adherence to protocol. Nine semi-structured interviews and two focus group discussions were performed. Based on the overall incidence of out-of-hospital cardiac arrest cases in 2011 (n=285), the survival rate was 16.8%. On adherence to protocol, only 89 (41.8%) of the cases adhered to the given protocol and 124 did not. All the relevant qualitative data were merged into a few categories relating to issues that could affect the management of out-of-hospital cardiac arrest by the pre-hospital care team. The essential element in the handling of out-of-hospital cardiac arrest by the pre-hospital care team was to ensure increased survival rates and excellent outcomes. Measures are needed to strengthen the quick activation of the pre-hospital care service, prompt bystander cardiopulmonary resuscitation, early defibrillation and timely advanced cardiac life support, and also to address all other issues highlighted in the qualitative results of this study.
    Matched MeSH terms: Data Accuracy
  20. Rusli R, Haque MM, Saifuzzaman M, King M
    Traffic Inj Prev, 2018;19(7):741-748.
    PMID: 29932734 DOI: 10.1080/15389588.2018.1482537
    OBJECTIVE: Traffic crashes along mountainous highways may lead to injuries and fatalities more often than along highways on plain topography; however, research focusing on the injury outcome of such crashes is relatively scant. The objective of this study was to investigate the factors affecting the likelihood that traffic crashes along rural mountainous highways result in injuries.

    METHOD: This study proposes a combination of decision tree and logistic regression techniques to model crash severity (injury vs. noninjury), because the combined approach allows the specification of nonlinearities and interactions in addition to main effects. Both a scobit model and a random parameters logit model, respectively accounting for an imbalanced response variable and unobserved heterogeneities, are tested and compared. The study data set contains a total of 5 years of crash data (2008-2012) on selected mountainous highways in Malaysia. To enrich the data quality, an extensive field survey was conducted to collect detailed information on horizontal alignment, longitudinal grades, cross-section elements, and roadside features. In addition, weather condition data from the meteorology department were merged using the time stamp and proximity measures in AutoCAD-Geolocation.

    RESULTS: The random parameters logit model is found to outperform both the standard logit and scobit models, suggesting the importance of accounting for unobserved heterogeneity in crash severity models. Results suggest that the proportion of segment length with simple curves, the presence of horizontal curves along steep gradients, highway segments with unsealed shoulders, and highway segments with cliffs along both sides are positively associated with injury-producing crashes along rural mountainous highways. Interestingly, crashes during rainy conditions are less likely to involve injury. It is also found that the likelihood of injury-producing crashes decreases for rear-end collisions but increases for head-on collisions and crashes involving heavy vehicles. A higher-order interaction suggests that single-vehicle crashes involving light and medium-sized vehicles are less severe along straight sections compared to road sections with horizontal curves. On the other hand, crash severity is higher when heavy vehicles are involved in single-vehicle crashes along straight segments of rural mountainous highways.

    CONCLUSION: In addition to unobserved heterogeneity, it is important to account for higher order interactions to have a better understanding of factors that influence crash severity. A proper understanding of these factors will help develop targeted countermeasures to improve road safety along rural mountainous highways.

    Matched MeSH terms: Data Accuracy