Displaying publications 1 - 20 of 34 in total

  1. Sanjaya GY, Fauziah K, Pratama RA, Fitriani NA, Setiawan MY, Fauziah IA, et al.
    Med J Malaysia, 2024 Mar;79(2):176-183.
    PMID: 38553923
    INTRODUCTION: Assessment of data quality in the era of big data is crucial for effective data management and use. However, there are gaps in data quality assessment for routine health data to ensure accountability. Therefore, this research aims to improve the quality of routine health data that have been collected and integrated into Aplikasi Satu Data Kesehatan (ASDK), the primary health data system in Indonesia.

    MATERIALS AND METHODS: This descriptive study utilises a desk review approach and employs the WHO Data Quality Assurance (DQA) Tool to assess data quality of ASDK. The analysis involves measuring eight health indicators from ASDK and Survei Status Gizi Indonesia (SSGI) conducted in 2022. The assessment focuses on various dimensions of data quality, including completeness of variables, consistency over time, consistency between indicators, outliers and external consistency.

    RESULTS: The current study shows that routine health data in Indonesia demonstrate high quality in terms of completeness and internal consistency. The completeness dimension shows high levels of variable completeness, with most variables achieving 100% completeness.

    CONCLUSION: Based on the analysis of eight routine health data variables across five dimensions of data quality, namely completeness of variables, consistency over time, consistency between indicators, outliers, and external consistency, the completeness and internal consistency of data in ASDK demonstrate high data quality.

    Matched MeSH terms: Data Accuracy*
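
    As an illustration of the kind of checks the WHO DQA dimensions imply, the sketch below computes completeness of variables and a simple consistency-over-time ratio on a hypothetical table of yearly indicator values. The district names, coverage figures, and the 33% flagging threshold are illustrative assumptions, not values from the paper or the WHO tool.

    ```python
    import pandas as pd

    # Hypothetical routine health data: one row per district-year for a single indicator.
    records = pd.DataFrame({
        "district": ["A", "A", "A", "B", "B", "B"],
        "year":     [2020, 2021, 2022, 2020, 2021, 2022],
        "coverage": [81.0, 84.0, 85.0, 70.0, None, 95.0],  # e.g. immunisation coverage (%)
    })

    # Completeness of variables: share of non-missing values per column.
    completeness = records.notna().mean() * 100
    print("Completeness (%):\n", completeness.round(1))

    # Consistency over time: compare the latest year against the mean of preceding years
    # and flag districts whose ratio deviates by more than an illustrative 33%.
    latest = records[records["year"] == 2022].set_index("district")["coverage"]
    baseline = records[records["year"] < 2022].groupby("district")["coverage"].mean()
    ratio = latest / baseline
    flagged = ratio[(ratio < 0.67) | (ratio > 1.33)]
    print("Year-on-year ratio:\n", ratio.round(2))
    print("Districts flagged for inconsistency:", list(flagged.index))
    ```
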
  2. Thong KM, Jalalonmuhali M, Choo CL, Yee SY, Yahya R, Jeremiah PN, et al.
    Med J Malaysia, 2024 Mar;79(2):234-236.
    PMID: 38553931
    Diabetes mellitus is the main aetiology of end stage kidney disease (ESKD) in Malaysia. However, there may be concerns of over-reporting of diabetes mellitus as the cause of ESKD in the Malaysian Dialysis and Transplant Registry (MDTR). The objective of this audit is to assess the accuracy of data collected in the MDTR. There were 151 centres/source data providers (SDP) with a total of 1977 patients included in this audit. The audit showed that 80.2% of doctors' records matched the MDTR data. The results were comparable with published validation studies in other countries.
    Matched MeSH terms: Data Accuracy
  3. Lin GSS, Goh SM, Halil MHM
    Health Res Policy Syst, 2023 Sep 12;21(1):95.
    PMID: 37700266 DOI: 10.1186/s12961-023-01048-9
    BACKGROUND: The dental workforce plays a crucial role in delivering quality oral healthcare services, requiring continuous training and education to meet evolving professional demands. Understanding the impact of dental workforce training and education programmes on policy evolution is essential for refining existing policies, implementing evidence-based reforms and ensuring the growth of the dental profession. Therefore, this study protocol aims to assess the influence of dental workforce training and education programmes on policy evolution in Malaysia.

    METHODS: A mixed-method research design will be employed, combining quantitative surveys and qualitative interviews. Stakeholder theory and policy change models will form the theoretical framework of the study. Participants from various stakeholder groups will be recruited using purposive sampling. Data collection will involve surveys and one-on-one semi-structured interviews. Descriptive statistics, inferential analysis and thematic analysis will be used to analyse the data. Integration of quantitative and qualitative data will be used to provide a comprehensive understanding of the data.

    DISCUSSION: This study will shed light on factors influencing policy decisions related to dental education and workforce development in Malaysia. The findings will inform evidence-based decision-making, guide the enhancement of dental education programmes and improve the quality of oral healthcare services. Challenges related to participant recruitment and data collection should be considered, and the study's unique contribution to the existing body of knowledge in the Malaysian context will be discussed.

    Matched MeSH terms: Data Accuracy*
  4. Ni Chin WH, Li Z, Jiang N, Lim EH, Suang Lim JY, Lu Y, et al.
    J Mol Diagn, 2021 Oct;23(10):1359-1372.
    PMID: 34365011 DOI: 10.1016/j.jmoldx.2021.07.013
    Despite the immense genetic heterogeneity of B-lymphoblastic leukemia [or precursor B-cell acute lymphoblastic leukemia (B-ALL)], RNA sequencing (RNA-Seq) could comprehensively interrogate its genetic drivers, assigning a specific molecular subtype in >90% of patients. However, study groups have only started to use RNA-Seq. For broader clinical use, technical validation, quality control, and appropriate performance assessment are needed. We describe the development and validation of an RNA-Seq workflow for subtype classification, TPMT/NUDT15/TP53 variant discovery, and immunoglobulin heavy chain (IGH) disease clone identification for Malaysia-Singapore acute lymphoblastic leukemia (ALL) 2020. We validated this workflow in 377 patients in our preceding Malaysia-Singapore ALL 2003/Malaysia-Singapore ALL 2010 studies and proposed quality control measures for RNA quality, library size, sequencing, and data analysis using the International Organization for Standardization 15189 quality and competence standard for medical laboratories. Compared with conventional methods, we achieved >95% accuracy in oncogene fusion identification, digital karyotyping, and TPMT and NUDT15 variant discovery. We found seven pathogenic TP53 mutations, confirmed with Sanger sequencing, which conferred a poorer outcome. Applying this workflow prospectively to the first 21 patients in Malaysia-Singapore ALL 2020, we identified the genetic drivers and IGH disease clones in >90% of patients, with concordant TPMT, NUDT15, and TP53 variants using PCR-based methods. The median turnaround time was 12 days, which was clinically actionable. In conclusion, this RNA-Seq workflow could be used clinically in the management of B-cell ALL patients.
    Matched MeSH terms: Data Accuracy
  5. Cuttiford L, Pimsler ML, Heo CC, Zheng L, Karunaratne I, Trissini G, et al.
    J Med Entomol, 2021 Jul 16;58(4):1654-1662.
    PMID: 33970239 DOI: 10.1093/jme/tjab081
    A basic tenet of forensic entomology is that development data of an insect can be used to predict the time of colonization (TOC) by insect specimens collected from remains, and this prediction is related to the time of death and/or time of placement (TOP). However, few datasets have been evaluated to determine their accuracy or precision. The black soldier fly, Hermetia illucens (L.) (Diptera: Stratiomyidae), is recognized as an insect of forensic importance. This study examined the accuracy and precision of several development datasets for the black soldier fly by estimating the TOP of five sets of human and three sets of swine remains in San Marcos and College Station, TX, respectively. Data generated from this study indicate that only one of these datasets consistently (time-to-prepupae 52%; time-to-eclosion 75%) produced TOP estimations that occurred within a day of the actual TOP of the remains. It is unknown if the precolonization interval (PreCI) of this species is long, but it has been observed that the species can colonize within 6 d after death. This assumption remains untested by validation studies. Accounting for this PreCI improved accuracy for the time-to-prepupae group, but reduced accuracy in the time-to-eclosion group. The findings presented here highlight a need for detailed, forensic-based development data for the black soldier fly that can reliably and accurately be used in casework. Finally, this study outlines the need for a basic understanding of the timing of resource utilization (i.e., duration of the PreCI) for forensically relevant taxa so that reasonable corrections may be made to TOC as related to minimum postmortem interval (mPMI) estimates.
    Matched MeSH terms: Data Accuracy
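
    For readers unfamiliar with how insect development data yield a TOC estimate, the sketch below applies the standard accumulated-degree-day (ADD) approach of summing daily thermal units above a base temperature backwards from the collection date. The base temperature, ADD requirement, and temperature record are hypothetical and are not taken from this study.

    ```python
    # Hypothetical example: estimate time of colonization (TOC) by accumulating
    # degree-days backwards from the collection date until the thermal requirement
    # of the oldest observed stage is met. All numbers are illustrative.
    BASE_TEMP_C = 12.0      # assumed developmental threshold temperature
    ADD_REQUIRED = 250.0    # assumed accumulated degree-days to reach the observed stage

    # Mean daily temperatures (deg C), most recent day first (collection day backwards).
    daily_mean_temp = [30.5, 31.0, 29.8, 28.5, 30.2, 31.5, 29.0, 30.0, 28.8, 29.5,
                       30.1, 31.2, 29.9, 30.4]

    accumulated = 0.0
    days_back = 0
    for temp in daily_mean_temp:
        accumulated += max(temp - BASE_TEMP_C, 0.0)  # no development below base temperature
        days_back += 1
        if accumulated >= ADD_REQUIRED:
            break

    print(f"Estimated TOC: {days_back} days before collection "
          f"({accumulated:.1f} ADD accumulated, {ADD_REQUIRED} required)")
    ```
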
  6. Zahidi I, Wilson G, Brown K, Hou FKK
    J Health Pollut, 2020 Dec;10(28):201207.
    PMID: 33324504 DOI: 10.5696/2156-9614-10.28.201207
    Background: Rivers are susceptible to pollution and water pollution is a growing problem in low- and middle-income countries (LMIC) with rapid development and minimal environmental protections. There are universal pollutant threshold values, but they are not directly linked to river activities such as sand mining and aquaculture. Water quality modelling can support assessments of river pollution and provide information on this important environmental issue.

    Objectives: The objective of the present study was to demonstrate water quality modelling methodology in reviewing existing policies for Malaysian river catchments based on an example case study.

    Methods: The MIKE 11 software developed by the Danish Hydraulic Institute was used to model the main pollutant point sources within the study area - sand mining and aquaculture. Water quality data were obtained for six river stations from 2000 to 2015. All sand mining and aquaculture locations and approximate production capacities were quantified by ground survey. Modelling of the sand washing effluents was undertaken with the advection-dispersion module due to the nature of the fine sediment. Modelling of the fates of aquaculture deposits required both advection-dispersion and Danish Hydraulic Institute ECO Lab modules to simulate the detailed interactions between water quality determinants.

    Results: According to the Malaysian standard, biochemical oxygen demand (BOD) and ammonium (NH4) parameters fell under Class IV at most of the river reaches, while the dissolved oxygen (DO) parameter varied between Classes II to IV. Total suspended solids (TSS) fell within Classes IV to V along the mid river reaches of the catchment.

    Discussion: Comparison between corresponding constituents and locations showed that the water quality model reproduced the long-term duration exceedance for the main body of the curves. However, the water quality model underestimated the infrequent high concentration observations. A standard effluent disposal was proposed for the development of legislation and regulations by authorities in the district that could be replicated for other similar catchments.

    Conclusions: Modelling pollutants enables observation of trends over the years and of the percentage of time a certain class is exceeded for each individual pollutant. The catchment did not meet Class II requirements and may not be able to reach Class I without extensive improvements in the quality, and reductions in the quantity, of both point and non-point effluent sources within the catchment.

    Competing Interests: The authors declare no competing financial interests.

    Matched MeSH terms: Data Accuracy
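
    For context, advection-dispersion modules of the kind used above generally solve the one-dimensional advection-dispersion equation with first-order decay. The form below is the textbook equation, given for illustration rather than quoted from the paper or the MIKE 11 documentation.

    ```latex
    % 1-D advection-dispersion equation with first-order decay (illustrative form):
    %   C = concentration, u = flow velocity, D = longitudinal dispersion coefficient,
    %   K = first-order decay rate, x = distance along the river, t = time.
    \frac{\partial C}{\partial t} + u \frac{\partial C}{\partial x}
      = D \frac{\partial^{2} C}{\partial x^{2}} - K C
    ```
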
  7. Serebruany V, Tanguay JF, Benavides MA, Cabrera-Fuentes H, Eisert W, Kim MH, et al.
    Am J Ther, 2020 Oct 29;27(6):e563-e572.
    PMID: 33109913 DOI: 10.1097/MJT.0000000000001286
    BACKGROUND: Excess vascular deaths in the PLATO trial comparing ticagrelor to clopidogrel have been repeatedly challenged by the Food and Drug Administration (FDA) reviewers and academia. Based on the Freedom of Information Act, BuzzFeed won a court order and shared with us the complete list of reported deaths for the ticagrelor FDA New Drug Application (NDA) 22-433. This dataset was matched against local patient-level records from PLATO sites monitored by the sponsor.

    STUDY QUESTION: Whether FDA death data in the PLATO trial matched the local site records.

    STUDY DESIGN: The NDA spreadsheet contains 938 precisely detailed PLATO deaths. We obtained and validated local evidence for 52 deaths among 861 PLATO patients from 14 enrolling sites in 8 countries and matched those with the official NDA dataset submitted to the FDA.

    MEASURES AND OUTCOMES: Existence, precise time, and primary cause of deaths in PLATO.

    RESULTS: In contrast to the NDA document, sites confirmed 2 extra unreported deaths (Poland and Korea) and failed to confirm 4 deaths (Malaysia). Of the remaining 46 deaths, dates were reported correctly for 42 patients, and earlier (2 clopidogrel) or later (2 ticagrelor) than the actual occurrence of death for the other 4. In 12 clopidogrel patients, cause of death was changed to "vascular," whereas 6 NDA ticagrelor "nonvascular" or "unknown" deaths were site-reported as of "vascular" origin. Sudden death was incorrectly reported in 4 clopidogrel patients, but omitted in 4 ticagrelor patients, directly affecting the primary efficacy PLATO endpoint.

    CONCLUSIONS: Many deaths were inaccurately reported in PLATO, favoring ticagrelor. The full extent of mortality misreporting is currently unclear, while especially worrisome is a mismatch in identifying primary death cause. Because all PLATO events are kept in the cloud electronic Medidata Rave capture system, securing the database content, examining the dataset changes and/or repeated entries, identifying potential interference origin, and assessing the full magnitude of the problem are warranted.

    Matched MeSH terms: Data Accuracy*
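
    A minimal sketch of the kind of record-matching audit described above, comparing a registry-style death listing against local site records on patient identifier, death date, and cause of death. The column names and rows are hypothetical and are not drawn from the NDA spreadsheet.

    ```python
    import pandas as pd

    # Hypothetical extracts: one row per reported death in each source.
    nda = pd.DataFrame({
        "patient_id": [101, 102, 103, 104],
        "death_date": ["2008-03-01", "2008-05-10", "2008-07-22", "2008-09-02"],
        "cause":      ["vascular", "vascular", "non-vascular", "unknown"],
    })
    site = pd.DataFrame({
        "patient_id": [101, 102, 103, 105],
        "death_date": ["2008-03-01", "2008-05-12", "2008-07-22", "2008-11-15"],
        "cause":      ["vascular", "non-vascular", "non-vascular", "vascular"],
    })

    merged = nda.merge(site, on="patient_id", how="outer",
                       suffixes=("_nda", "_site"), indicator=True)

    unconfirmed = merged[merged["_merge"] == "left_only"]    # in NDA, not confirmed by site
    unreported  = merged[merged["_merge"] == "right_only"]   # at site, missing from NDA
    both = merged[merged["_merge"] == "both"]
    date_mismatch  = both[both["death_date_nda"] != both["death_date_site"]]
    cause_mismatch = both[both["cause_nda"] != both["cause_site"]]

    print("Deaths not confirmed by site:", len(unconfirmed))
    print("Site deaths missing from NDA:", len(unreported))
    print("Date mismatches:", len(date_mismatch))
    print("Cause-of-death mismatches:", len(cause_mismatch))
    ```
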
  8. Lou J, Kc S, Toh KY, Dabak S, Adler A, Ahn J, et al.
    Int J Technol Assess Health Care, 2020 Oct;36(5):474-480.
    PMID: 32928330 DOI: 10.1017/S0266462320000628
    There is growing interest globally in using real-world data (RWD) and real-world evidence (RWE) for health technology assessment (HTA). Optimal collection, analysis, and use of RWD/RWE to inform HTA require a conceptual framework to standardize processes and ensure consistency. However, such a framework is currently lacking in Asia, a region that is likely to benefit from RWD/RWE for at least two reasons. First, there is often limited Asian representation in clinical trials unless specifically conducted in Asian populations, and RWD may help to fill the evidence gap. Second, in a few Asian health systems, reimbursement decisions are not made at market entry, thus allowing RWD/RWE to be collected to give more certainty about the effectiveness of technologies in the local setting and inform their appropriate use. Furthermore, an alignment of RWD/RWE policies across Asia would equip decision makers with context-relevant evidence, and improve timely patient access to new technologies. Using data collected from eleven health systems in Asia, this paper provides a review of the current landscape of RWD/RWE in Asia to inform HTA and explores a way forward to align policies within the region. This paper concludes with a proposal to establish an international collaboration among academics and HTA agencies in the region: the REAL World Data In ASia for HEalth Technology Assessment in Reimbursement (REALISE) working group, which seeks to develop a non-binding guidance document on the use of RWD/RWE to inform HTA for decision making in Asia.
    Matched MeSH terms: Data Accuracy
  9. Abdulrauf Sharifai G, Zainol Z
    Genes (Basel), 2020 Jun 27;11(7).
    PMID: 32605144 DOI: 10.3390/genes11070717
    Training a machine learning algorithm on an imbalanced data set is an inherently challenging task. It becomes more demanding with limited samples but a massive number of features (high dimensionality). High dimensional and imbalanced data sets have posed severe challenges in many real-world applications, such as biomedical data sets. Numerous researchers have investigated either the imbalanced class or the high dimensional data set problem and come up with various methods. Nonetheless, few approaches reported in the literature have addressed the intersection of the high dimensional and imbalanced class problem due to their complicated interactions. Lately, feature selection has become a well-known technique that has been used to overcome this problem by selecting discriminative features that represent the minority and majority classes. This paper proposes a new method called Robust Correlation Based Redundancy and Binary Grasshopper Optimization Algorithm (rCBR-BGOA); rCBR-BGOA employs an ensemble of multi-filters coupled with the Correlation-Based Redundancy method to select optimal feature subsets. A binary Grasshopper optimisation algorithm (BGOA) is used to construct the feature selection process as an optimisation problem to select the best (near-optimal) combination of features from the majority and minority classes. The obtained results, supported by proper statistical analysis, indicate that rCBR-BGOA can improve the classification performance for high dimensional and imbalanced datasets in terms of G-mean and the Area Under the Curve (AUC) performance metrics.
    Matched MeSH terms: Data Accuracy*
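
    A minimal sketch of the two evaluation metrics named above, G-mean and AUC, computed for an imbalanced two-class problem. The synthetic data and the plain logistic-regression classifier stand in for the proposed rCBR-BGOA pipeline purely for illustration.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix, roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic high-dimensional, imbalanced data set (5% minority class).
    X, y = make_classification(n_samples=1000, n_features=200, n_informative=20,
                               weights=[0.95, 0.05], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    y_pred = clf.predict(X_te)
    y_score = clf.predict_proba(X_te)[:, 1]

    # G-mean: geometric mean of sensitivity (recall on the minority class) and specificity.
    tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    g_mean = np.sqrt(sensitivity * specificity)

    print(f"G-mean: {g_mean:.3f}, AUC: {roc_auc_score(y_te, y_score):.3f}")
    ```
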
  10. Htay MNN, McMonnies K, Kalua T, Ferley D, Hassanein M
    PMID: 32489996 DOI: 10.4103/jehp.jehp_321_18
    CONTEXT: In the era of technology, social networking has become a platform for the teaching-learning process. Exploring international students' perspective on using Twitter would reveal the barriers and potential for its use in higher educational activities.

    AIMS: This study aimed to explore the postgraduate students' perspective on using Twitter as a learning resource.

    SUBJECTS AND METHODS: This qualitative study was conducted as part of a postgraduate program at a university in the United Kingdom. A focus group discussion and five in-depth interviews were conducted after receiving informed consent. The qualitative data were analyzed using the R package for Qualitative Data Analysis software.

    ANALYSIS USED: Deductive content analysis was used in this study.

    RESULTS: Qualitative analysis revealed four salient themes, which were (1) background knowledge about Twitter, (2) factors influencing the usage of Twitter, (3) master's students' experiences of using Twitter for education, and (4) the potential of using Twitter in postgraduate study. The students preferred to use Twitter for sharing links and appreciated the benefit of immediate dissemination of information. Meanwhile, privacy concerns, unfamiliarity, and hesitation to participate in discussions discouraged the students from using Twitter as a learning platform.

    CONCLUSIONS: Using social media platforms in education could be challenging for both the learners and the educators. Our study revealed that Twitter was mainly used for social communication among postgraduate students; however, most could see a benefit of using Twitter for their learning if they received adequate guidance on how to use the platform. The multiple barriers to using Twitter were mainly related to unfamiliarity, which should be addressed early in the learning process.

    Matched MeSH terms: Data Accuracy
  11. He C, Levis B, Riehm KE, Saadat N, Levis AW, Azar M, et al.
    Psychother Psychosom, 2020;89(1):25-37.
    PMID: 31593971 DOI: 10.1159/000502294
    BACKGROUND: Screening for major depression with the Patient Health Questionnaire-9 (PHQ-9) can be done using a cutoff or the PHQ-9 diagnostic algorithm. Many primary studies publish results for only one approach, and previous meta-analyses of the algorithm approach included only a subset of primary studies that collected data and could have published results.

    OBJECTIVE: To use an individual participant data meta-analysis to evaluate the accuracy of two PHQ-9 diagnostic algorithms for detecting major depression and compare accuracy between the algorithms and the standard PHQ-9 cutoff score of ≥10.

    METHODS: Medline, Medline In-Process and Other Non-Indexed Citations, PsycINFO, and Web of Science were searched (January 1, 2000, to February 7, 2015). Eligible studies classified current major depression status using a validated diagnostic interview.

    RESULTS: Data were included for 54 of 72 identified eligible studies (n participants = 16,688, n cases = 2,091). Among studies that used a semi-structured interview, pooled sensitivity and specificity (95% confidence interval) were 0.57 (0.49, 0.64) and 0.95 (0.94, 0.97) for the original algorithm and 0.61 (0.54, 0.68) and 0.95 (0.93, 0.96) for a modified algorithm. Algorithm sensitivity was 0.22-0.24 lower compared to fully structured interviews and 0.06-0.07 lower compared to the Mini International Neuropsychiatric Interview. Specificity was similar across reference standards. For PHQ-9 cutoff of ≥10 compared to semi-structured interviews, sensitivity and specificity (95% confidence interval) were 0.88 (0.82-0.92) and 0.86 (0.82-0.88).

    CONCLUSIONS: The cutoff score approach appears to be a better option than a PHQ-9 algorithm for detecting major depression.

    Matched MeSH terms: Data Accuracy*
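
    A small sketch of the two scoring approaches compared above, assuming the usual nine PHQ-9 items scored 0 to 3. The cutoff rule (total score of 10 or more) matches the abstract; the algorithm shown is the commonly described PHQ-9 diagnostic algorithm and should be read as an illustrative rendering rather than the exact definition used in this meta-analysis.

    ```python
    from typing import List

    def phq9_cutoff_positive(items: List[int], cutoff: int = 10) -> bool:
        """Cutoff approach: screen positive if the total of the 9 items (0-3 each) >= cutoff."""
        return sum(items) >= cutoff

    def phq9_algorithm_positive(items: List[int]) -> bool:
        """Commonly described diagnostic algorithm (illustrative): at least five symptoms
        endorsed at 'more than half the days' (score >= 2), item 9 counting at >= 1,
        with at least one of items 1-2 (anhedonia / depressed mood) among them."""
        endorsed = [score >= 2 for score in items[:8]] + [items[8] >= 1]
        return sum(endorsed) >= 5 and (items[0] >= 2 or items[1] >= 2)

    responses = [2, 2, 1, 3, 2, 0, 1, 2, 0]   # hypothetical answers to items 1-9
    print("Cutoff >= 10:", phq9_cutoff_positive(responses))   # total is 13 -> True
    print("Algorithm:   ", phq9_algorithm_positive(responses))
    ```
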
  12. Ahmed A, Sadullah AFM, Yahya AS
    Accid Anal Prev, 2019 Sep;130:3-21.
    PMID: 28764851 DOI: 10.1016/j.aap.2017.07.018
    Most of the decisions taken to improve road safety are based on accident data, which makes it the backbone of any country's road safety system. Errors in this data will lead to misidentification of black spots and hazardous road segments, projection of false estimates pertinent to accidents and fatality rates, and detection of wrong parameters responsible for accident occurrence, thereby making the entire road safety exercise ineffective. Its extent varies from country to country depending upon various factors. Knowing the type of error in the accident data and the factors causing it enables the application of the correct method for its rectification. Therefore there is a need for a systematic literature review that addresses the topic at a global level. This paper fulfils the above research gap by providing a synthesis of literature for the different types of errors found in the accident data of 46 countries across the six regions of the world. The errors are classified and discussed with respect to each type and analysed with respect to income level; an assessment with regard to the magnitude of each type is provided, followed by the different causes that result in their occurrence, and the various methods used to address each type of error. Among high-income countries the extent of error in reporting slight, severe, non-fatal and fatal injury accidents varied between 39-82%, 16-52%, 12-84%, and 0-31% respectively. For middle-income countries the error for the same categories varied between 93-98%, 32.5-96%, 34-99% and 0.5-89.5% respectively. The only four studies available for low-income countries showed that the error in reporting non-fatal and fatal accidents varied between 69-80% and 0-61% respectively. The logistic relation of error in accident data reporting, dichotomised at 50%, indicated that as the income level of a country increases, the probability of having less error in accident data also increases. Average error in recording information related to the variables in the categories of location, victim's information, vehicle's information, and environment was 27%, 37%, 16% and 19% respectively. Among the causes identified for errors in accident data reporting, the Policing System was found to be the most important. Overall, 26 causes of errors in accident data were discussed, of which 12 were related to reporting and 14 were related to recording. "Capture-Recapture" was the most widely used method among the 11 different methods that can be used for the rectification of under-reporting. There were 12 studies pertinent to the rectification of accident location and almost all of them utilised a Geographical Information System (GIS) platform coupled with a matching algorithm to estimate the correct location. It is recommended that the policing system should be reformed and public awareness should be created to help reduce errors in accident data.
    Matched MeSH terms: Data Accuracy*
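
    As a worked illustration of the capture-recapture correction mentioned above, the sketch below uses the classic two-source Lincoln-Petersen estimator with Chapman's small-sample adjustment. The counts are hypothetical; real applications typically match police and hospital records as the two sources.

    ```python
    def chapman_estimate(n1: int, n2: int, m: int) -> float:
        """Two-source capture-recapture estimate of the true number of accidents.
        n1, n2 = accidents reported in each source, m = accidents found in both."""
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1

    # Hypothetical counts: 400 police-reported, 300 hospital-recorded, 150 in both.
    n_true = chapman_estimate(400, 300, 150)
    reported = 400 + 300 - 150                 # distinct accidents actually captured
    print(f"Estimated total accidents: {n_true:.0f}")
    print(f"Estimated under-reporting: {(1 - reported / n_true) * 100:.1f}%")
    ```
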
  13. Majdi HS, Saud AN, Saud SN
    Materials (Basel), 2019 May 29;12(11).
    PMID: 31146451 DOI: 10.3390/ma12111752
    Porous γ-alumina is widely used as a catalyst carrier due to its chemical properties. These properties are strongly correlated with the physical properties of the material, such as porosity, density, shrinkage, and surface area. This study presents a technique that is less time consuming than other techniques to predict the values of the above-mentioned physical properties of porous γ-alumina via an artificial neural network (ANN) numerical model. The experimental data used were determined based on 30 samples that varied in terms of sintering temperature, yeast concentration, and soaking time. Of the 30 experimental samples, 25 samples were used for training purposes, while the other five samples were used for the execution of the experimental procedure. The results showed that the prediction and experimental data were in good agreement, and it was concluded that the proposed model is proficient at providing high accuracy estimation data derived from any complex analytical equation.
    Matched MeSH terms: Data Accuracy
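
    A minimal sketch of the kind of ANN property-prediction model described above, assuming three inputs (sintering temperature, yeast concentration, soaking time) and one output. Scikit-learn's MLPRegressor is used as a stand-in for the study's network, and all data values are synthetic rather than the study's 30 samples.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Synthetic stand-in data: [sintering temp (deg C), yeast conc. (wt%), soaking time (h)].
    X = rng.uniform([1100, 5, 12], [1500, 30, 48], size=(30, 3))
    # Synthetic target, e.g. apparent porosity (%), with noise (illustrative relationship only).
    y = 60 - 0.02 * (X[:, 0] - 1100) + 0.8 * X[:, 1] - 0.1 * X[:, 2] + rng.normal(0, 1, 30)

    X_train, X_test, y_train, y_test = X[:25], X[25:], y[:25], y[25:]

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
    model.fit(X_train, y_train)

    print("Predicted vs. synthetic porosity for the 5 held-out samples:")
    for pred, actual in zip(model.predict(X_test), y_test):
        print(f"  {pred:6.2f}  vs  {actual:6.2f}")
    ```
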
  14. Daniyal WMEMM, Fen YW, Abdullah J, Sadrolhosseini AR, Saleviter S, Omar NAS
    PMID: 30594850 DOI: 10.1016/j.saa.2018.12.031
    Surface plasmon resonance (SPR) is a label-free optical spectroscopy that is widely used for biomolecular interaction analysis. In this work, SPR was used to characterize the binding properties of a highly sensitive nanocrystalline cellulose-graphene oxide based nanocomposite (CTA-NCC/GO) towards nickel ion. The formation of the CTA-NCC/GO nanocomposite was confirmed by FT-IR. The SPR analysis shows that the CTA-NCC/GO has high binding affinity towards Ni2+ from 0.01 until 0.1 ppm with a binding affinity constant of 1.620 × 103 M-1. The calculated sensitivity for the CTA-NCC/GO was 1.509° ppm-1. The full width at half maximum (FWHM), data accuracy (DA), and signal-to-noise ratio (SNR) have also been determined using the obtained SPR curve. For the FWHM, the value was 2.25° at 0.01 until 0.08 ppm and decreased to 2.12° at 0.1 until 10 ppm. The DA for the SPR curves is highest at 0.01 until 0.08 ppm and lowest at 0.1 until 10 ppm. The SNR curve mirrors the SPR angle shift curve, where the SNR increases with Ni2+ concentration. For the selectivity test, the CTA-NCC/GO has the ability to differentiate Ni2+ in a mixture of metal ions.
    Matched MeSH terms: Data Accuracy
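
    A minimal sketch of how FWHM can be read off an SPR reflectance dip and how DA and SNR are then often derived from it. The conventions used here (DA as the inverse of FWHM, SNR as the resonance-angle shift divided by FWHM) are common in the SPR sensing literature but should be checked against the paper's own definitions; the curve and angle shift below are synthetic.

    ```python
    import numpy as np

    # Synthetic SPR reflectance dip: a Lorentzian-shaped minimum around 54 degrees.
    theta = np.linspace(50, 60, 2001)                 # incidence angle (degrees)
    dip_centre, half_width = 54.0, 1.1
    reflectance = 1 - 0.8 / (1 + ((theta - dip_centre) / half_width) ** 2)

    # FWHM: width of the dip at half of its depth.
    depth = reflectance.max() - reflectance.min()
    half_level = reflectance.min() + depth / 2
    below = theta[reflectance <= half_level]
    fwhm = below.max() - below.min()

    # One common convention: DA = 1 / FWHM, SNR = (resonance-angle shift) / FWHM.
    angle_shift = 0.15                                # hypothetical shift after analyte binding
    da = 1 / fwhm
    snr = angle_shift / fwhm

    print(f"FWHM = {fwhm:.2f} deg, DA = {da:.2f} deg^-1, SNR = {snr:.3f}")
    ```
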
  15. Ayodele FO, Yao L, Haron H
    Sci Eng Ethics, 2019 Apr;25(2):357-382.
    PMID: 29441445 DOI: 10.1007/s11948-017-9941-z
    In management academic research, academic advancement, job security, and the securing of research funds at one's university are judged mainly by one's output of publications in high impact journals. With bogus resumes filled with published journal articles, universities and other allied institutions are keen to recruit or sustain the appointment of such academics. This often places undue pressure on aspiring academics and on those already recruited to engage in research misconduct, which often compromises research integrity. This structured review focuses on the ethics and integrity of management research through an analysis of retracted articles published from 2005 to 2016. The study employs a structured literature review methodology whereby retracted articles published between 2005 and 2016 in the field of management science were found using Crossref and Google Scholar. The searched articles were then streamlined by selecting articles based on their relevance and content in accordance with the inclusion criteria. Based on the analysed retracted articles, the study shows evidence of ethical misconduct among researchers of management science. Such misconduct includes data falsification, the duplication of submitted articles, plagiarism, data irregularity and incomplete citation practices. Interestingly, the analysed results indicate that the field of knowledge management includes the highest number of retracted articles, with plagiarism constituting the most significant ethical issue. Furthermore, the findings of this study show that ethical misconduct is not restricted to a particular geographic location; it occurs in numerous countries. In turn, avenues of further study on research misconduct in management research are proposed.
    Matched MeSH terms: Data Accuracy
  16. Mohd Nor NA, Taib NA, Saad M, Zaini HS, Ahmad Z, Ahmad Y, et al.
    BMC Bioinformatics, 2019 Feb 04;19(Suppl 13):402.
    PMID: 30717675 DOI: 10.1186/s12859-018-2406-9
    BACKGROUND: Advances in the medical domain have led to an increase in clinical data production, which offers enhancement opportunities for the clinical research sector. In this paper, we propose to expand the scope of Electronic Medical Records in the University Malaya Medical Center (UMMC) using different techniques in establishing interoperability functions between multiple clinical departments involving diagnosis, screening and treatment of breast cancer and building automatic systems for clinical audits as well as for potential data mining to enhance clinical breast cancer research in the future.

    RESULTS: Quality Implementation Framework (QIF) was adopted to develop the breast cancer module as part of the in-house EMR system used at UMMC, called i-Pesakit©. The completion of the i-Pesakit© Breast Cancer Module requires management of clinical data electronically, integration of clinical data from multiple internal clinical departments towards setting up of a research focused patient data governance model. The 14 QIF steps were performed in four main phases involved in this study which are (i) initial considerations regarding host setting, (ii) creating structure for implementation, (iii) ongoing structure once implementation begins, and (iv) improving future applications. The architectural framework of the module incorporates both clinical and research needs that comply to the Personal Data Protection Act.

    CONCLUSION: The completion of the UMMC i-Pesakit© Breast Cancer Module required populating EMR including management of clinical data access, establishing information technology and research focused governance model and integrating clinical data from multiple internal clinical departments. This multidisciplinary collaboration has enhanced the quality of data capture in clinical service, benefited hospital data monitoring, quality assurance, audit reporting and research data management, as well as a framework for implementing a responsive EMR for a clinical and research organization in a typical middle-income country setting. Future applications include establishing integration with external organization such as the National Registration Department for mortality data, reporting of institutional data for national cancer registry as well as data mining for clinical research. We believe that integration of multiple clinical visit data sources provides a more comprehensive, accurate and real-time update of clinical data to be used for epidemiological studies and audits.

    Matched MeSH terms: Data Accuracy
  17. Najam M, Rasool RU, Ahmad HF, Ashraf U, Malik AW
    Biomed Res Int, 2019;2019:7074387.
    PMID: 31111064 DOI: 10.1155/2019/7074387
    Storing and processing large DNA sequences has always been a major problem due to the increasing volume of DNA sequence data. A number of solutions have been proposed, but they require significant computation and memory. Therefore, an efficient storage and pattern matching solution is required for DNA sequencing data. Bloom filters (BFs) represent an efficient data structure, which is mostly used in the domain of bioinformatics for classification of DNA sequences. In this paper, we explore more dimensions where BFs can be used other than classification. The proposed solution is based on Multiple Bloom Filters (MBFs) that find all the locations and the number of repetitions of the specified pattern inside a DNA sequence. Both of these factors are extremely important in determining the type and intensity of any disease. This paper serves as a first effort towards optimizing the search for the location and frequency of substrings in DNA sequences using MBFs. We expect that further optimizations in the proposed solution can bring remarkable results, as this paper presents a proof of concept implementation for a given set of data using the proposed MBFs technique. Performance evaluation shows improved accuracy and time efficiency of the proposed approach.
    Matched MeSH terms: Data Accuracy
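
    A minimal sketch of the underlying Bloom-filter idea for k-mer membership queries on a DNA sequence. This is a generic single Bloom filter, not the paper's Multiple Bloom Filter scheme, and the sizes, hash construction, and toy sequence are illustrative assumptions.

    ```python
    import hashlib

    class BloomFilter:
        """Simple Bloom filter: probabilistic set with no false negatives."""
        def __init__(self, size_bits: int = 8192, num_hashes: int = 4):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits)          # one byte per bit, for simplicity

        def _positions(self, item: str):
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, item: str) -> None:
            for pos in self._positions(item):
                self.bits[pos] = 1

        def __contains__(self, item: str) -> bool:
            return all(self.bits[pos] for pos in self._positions(item))

    # Index all k-mers of a (toy) DNA sequence, then query a pattern's k-mers.
    sequence, k = "ACGTACGGTTAGCCATTACGT", 5
    bf = BloomFilter()
    for i in range(len(sequence) - k + 1):
        bf.add(sequence[i:i + k])

    print("ACGGT possibly present:", "ACGGT" in bf)   # True (actually present)
    print("AAAAA possibly present:", "AAAAA" in bf)   # almost certainly False
    ```
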
  18. Ismail A, Idris MYI, Ayub MN, Por LY
    Sensors (Basel), 2018 Dec 10;18(12).
    PMID: 30544660 DOI: 10.3390/s18124353
    Smart manufacturing enables an efficient manufacturing process by optimizing production and product transaction. The optimization is performed through data analytics that requires reliable and informative data as input. Therefore, in this paper, an accurate data capture approach based on a vision sensor is proposed. Three image recognition methods are studied to determine the best vision-based classification technique, namely Bag of Words (BOW), Spatial Pyramid Matching (SPM) and Convolutional Neural Network (CNN). The vision-based classifiers categorize the apples as defective or non-defective, which can be used for automatic inspection, sorting and further analytics. A total of 550 apple images are collected to test the classifiers. The images consist of 275 non-defective and 275 defective apples. The defective category includes various types of defect and severity. The vision-based classifiers are trained and evaluated according to K-fold cross-validation. The performances of the classifiers from 2-fold, 3-fold, 4-fold, 5-fold and 10-fold are compared. From the evaluation, SPM with SVM classifier attained 98.15% classification accuracy for 10-fold and outperformed the others. In terms of computational time, CNN with SVM classifier is the fastest. However, minimal time difference is observed between the computational time of CNN and SPM, which were separated by only 0.05 s.
    Matched MeSH terms: Data Accuracy
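
    A minimal sketch of the K-fold evaluation protocol described above, comparing mean accuracy across different numbers of folds. The bundled scikit-learn digits dataset and a plain SVM stand in for the apple images and the BOW/SPM/CNN feature extractors.

    ```python
    from sklearn import datasets
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Stand-in data: the digits dataset replaces the 550 apple images for illustration.
    X, y = datasets.load_digits(return_X_y=True)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

    # Compare mean accuracy across different numbers of folds, as in the study.
    for k in (2, 3, 4, 5, 10):
        scores = cross_val_score(clf, X, y, cv=k, scoring="accuracy")
        print(f"{k:2d}-fold accuracy: {scores.mean() * 100:.2f}% (+/- {scores.std() * 100:.2f})")
    ```
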
  19. Alanazi HO, Abdullah AH, Qureshi KN, Ismail AS
    Ir J Med Sci, 2018 May;187(2):501-513.
    PMID: 28756541 DOI: 10.1007/s11845-017-1655-3
    INTRODUCTION: Information and communication technologies (ICTs) have changed the trend towards new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care are also influenced by new technologies to predict different disease outcomes. However, existing predictive models still suffer from limitations in terms of predictive performance.

    AIMS AND OBJECTIVES: In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To achieve this, the paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious conditions worldwide and needs more attention due to its severe impact on human life.

    CONCLUSION: The proposed predictive model improves the predictive performance for TBI. The TBI data set was developed and approved by neurologists to set its features. The experimental results show that the proposed model achieved significant results in terms of accuracy, sensitivity, and specificity.

    Matched MeSH terms: Data Accuracy
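
    For reference, the three reported metrics follow directly from a binary confusion matrix. The short illustration below uses hypothetical counts, not the study's TBI results.

    ```python
    # Hypothetical confusion-matrix counts for a binary TBI-outcome classifier.
    tp, fn = 45, 5      # true positives, false negatives
    tn, fp = 80, 10     # true negatives, false positives

    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)        # recall on the positive (poor-outcome) class
    specificity = tn / (tn + fp)

    print(f"Accuracy:    {accuracy:.2%}")     # 89.29%
    print(f"Sensitivity: {sensitivity:.2%}")  # 90.00%
    print(f"Specificity: {specificity:.2%}")  # 88.89%
    ```
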
  20. Lim YMF, Yusof M, Sivasampu S
    Int J Health Care Qual Assur, 2018 Apr 16;31(3):203-213.
    PMID: 29687760 DOI: 10.1108/IJHCQA-08-2016-0111
    PURPOSE: The purpose of this paper is to assess National Medical Care Survey data quality.

    DESIGN/METHODOLOGY/APPROACH: Data completeness and representativeness were computed for all observations, while other data quality measures were assessed using a 10 per cent sample from the National Medical Care Survey database; i.e., 12,569 primary care records from 189 public and private practices were included in the analysis.

    FINDINGS: Data field completion ranged from 69 to 100 per cent. Error rates for data transfer from paper to web-based application varied between 0.5 and 6.1 per cent. Error rates arising from diagnosis and clinical process coding were higher than for medication coding. Data fields that involved free text entry were more prone to errors than those involving selection from menus. The authors found that completeness, accuracy, coding reliability and representativeness were generally good, while data timeliness needs to be improved.

    RESEARCH LIMITATIONS/IMPLICATIONS: Only data entered into a web-based application were examined. Data omissions and errors in the original questionnaires were not covered.

    PRACTICAL IMPLICATIONS: Results from this study provided informative and practicable approaches to improve primary health care data completeness and accuracy, especially in developing nations where resources are limited.

    ORIGINALITY/VALUE: Primary care data quality studies in developing nations are limited. Understanding errors and missing data enables researchers and health service administrators to prevent quality-related problems in primary care data.

    Matched MeSH terms: Data Accuracy*
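
    A minimal sketch of the two headline measures reported above, data field completion and paper-to-web transcription error rate, computed on hypothetical survey records. The field names and values are invented for illustration.

    ```python
    import pandas as pd

    # Hypothetical web-entered records (None = field left blank).
    web = pd.DataFrame({
        "record_id": [1, 2, 3, 4, 5],
        "diagnosis_code": ["J06", "E11", None, "I10", "J06"],
        "medication_code": ["R05", "A10", "R05", None, "R05"],
    })
    # The same records as captured on the original paper forms.
    paper = pd.DataFrame({
        "record_id": [1, 2, 3, 4, 5],
        "diagnosis_code": ["J06", "E11", "K21", "I10", "J45"],
        "medication_code": ["R05", "A10", "R05", "C09", "R05"],
    })

    # Data field completion: percentage of non-missing entries per field in the web data.
    completion = web.drop(columns="record_id").notna().mean() * 100
    print("Field completion (%):\n", completion.round(1))

    # Transcription error rate: share of completed web entries that differ from the paper form.
    merged = web.merge(paper, on="record_id", suffixes=("_web", "_paper"))
    for field in ("diagnosis_code", "medication_code"):
        entered = merged[merged[f"{field}_web"].notna()]
        errors = (entered[f"{field}_web"] != entered[f"{field}_paper"]).mean() * 100
        print(f"{field} error rate: {errors:.1f}%")
    ```
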