Displaying publications 1 - 20 of 65 in total

  1. Zaki R, Bulgiba A, Ismail NA
    Prev Med, 2013;57 Suppl:S80-2.
    PMID: 23313586 DOI: 10.1016/j.ypmed.2013.01.003
    The Bland-Altman method is the most popular method used to assess the agreement of medical instruments. The main concern about this method is the presence of proportional bias. The slope of the regression line fitted to the Bland-Altman plot should be tested to exclude proportional bias. The aim of this study was to determine whether the overestimation of bias in the Bland-Altman analysis is still present even when the proportional bias has been excluded.
    Matched MeSH terms: Data Interpretation, Statistical
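The Bland-Altman analysis and the regression-based check for proportional bias described in this abstract can be sketched as follows (a minimal illustration, not the authors' code; function and variable names are our own):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement analysis for two measurement methods.

    Returns the bias (mean difference), the 95% limits of agreement,
    and the slope of the regression of differences on means; a slope
    significantly different from zero indicates proportional bias."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b                      # per-subject differences
    mean = (a + b) / 2.0              # per-subject means
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    slope, _intercept = np.polyfit(mean, diff, 1)
    return bias, loa, slope
```

With a constant offset between methods, the bias equals the offset and the slope is zero (no proportional bias).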
  2. Zabidin N, Mohamed AM, Zaharim A, Marizan Nor M, Rosli TI
    Int Orthod, 2018 03;16(1):133-143.
    PMID: 29478934 DOI: 10.1016/j.ortho.2018.01.009
    OBJECTIVES: To evaluate the relationship between human evaluation of the dental-arch form and mathematical analysis via two different methods of quantifying the arch form, and to establish agreement with the fourth-order polynomial equation.

    MATERIALS AND METHODS: This study included 64 sets of digitised maxilla and mandible dental casts obtained from a sample of dental arches with normal occlusion. For human evaluation, a convenience sample of orthodontic practitioners ranked photo images of the dental casts from the most tapered to the least tapered (square). In the mathematical analysis, dental arches were interpolated using the fourth-order polynomial equation with millimetric acetate paper and AutoCAD software. Finally, the relations between the human evaluation and the mathematical objective analyses were evaluated.

    RESULTS: Human evaluations were found to be generally in agreement, but only at the extremes of tapered and square arch forms; this indicated general human error and observer bias. The two methods used to plot the arch form were comparable.

    CONCLUSION: The use of the fourth-order polynomial equation may facilitate obtaining a smooth curve, which can produce a template for an individual arch that represents all potential tooth positions for the dental arch.

    Matched MeSH terms: Data Interpretation, Statistical*
  3. Yvonne-Tee GB, Rasool AH, Halim AS, Rahman AR
    J Pharmacol Toxicol Methods, 2005 Sep-Oct;52(2):286-92.
    PMID: 16125628
    Postocclusive reactive hyperemia in forearm skin is a commonly used model for studying microvascular reactivity function, particularly in the assessment of vascular effect of topically applied pharmacological substances. In this study, we investigated the reproducibility of several different laser-Doppler-derived parameters in the measurement of postocclusive reactive hyperemia at forearm skin in healthy subjects.
    Matched MeSH terms: Data Interpretation, Statistical
  4. Yin LK, Rajeswari M
    Biomed Mater Eng, 2014;24(6):3333-41.
    PMID: 25227043 DOI: 10.3233/BME-141156
    To segment an image using the random walks algorithm, users are often required to initialize the approximate locations of the objects and background in the image. Because its segmentation model mainly reflects the relationship among neighborhood pixels and its boundary conditions, the random walks algorithm is sensitive to the input seeds. Instead of considering the relationship between neighborhood pixels solely, an attempt has been made to modify the weighting function so that it accounts for the intensity changes between neighboring nodes. Local affiliation within the defined neighborhood region of the two nodes is taken into consideration by incorporating an extra penalty term into the weighting function. In addition, to better segment images, particularly medical images with texture features, GLCM variance is incorporated into the weighting function through kernel density estimation (KDE). The probability density of each pixel belonging to the initialized seeds is estimated and integrated into the weighting function. To test the performance of the proposed weighting model, experiments were carried out on a set of medical images consisting mainly of 174 brain-tumor images. These experiments establish that the proposed method produces better segmentation results than the original random walks.
    Matched MeSH terms: Data Interpretation, Statistical*
  5. Wahab AA, Salim MI, Ahamat MA, Manaf NA, Yunus J, Lai KW
    Med Biol Eng Comput, 2016 Sep;54(9):1363-73.
    PMID: 26463520 DOI: 10.1007/s11517-015-1403-7
    Breast cancer is the most common cancer among women globally, and the number of young women diagnosed with this disease is gradually increasing over the years. Mammography is the current gold-standard technique, although it is known to be less sensitive in detecting tumors in women with dense breast tissue. Detecting an early-stage tumor in young women is crucial for better survival chances and treatment. The thermography technique can provide additional functional information on physiological changes to complement mammography by describing thermal and vascular properties of the tissues. Studies on breast thermography have been carried out to improve the accuracy of the thermography technique from various perspectives. However, the difficulty of gathering women affected by cancer across different age groups necessitated this comprehensive study, which aimed to investigate the effect of different density levels on the surface temperature distribution profile of breast models. These models, namely extremely dense (ED), heterogeneously dense (HD), scattered fibroglandular (SF), and predominantly fatty (PF), with embedded tumors, were developed using the finite element method. A conventional Pennes bioheat model was used to perform the numerical simulation on different case studies, and the results obtained were then compared, using a hypothesis-testing statistical method, to the reference breast model developed previously. The results show that the ED, SF, and PF breast models had significant mean differences in surface temperature profile (p < 0.025), while the HD breast model data pair agreed with the null hypothesis owing to its tissue composition percentage being comparable to that of the reference model. The findings suggest that breast density level should be considered as a contributing factor to alteration of the surface thermal distribution profile in both breast cancer detection and analysis when using the thermography technique.
    Matched MeSH terms: Data Interpretation, Statistical
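For reference, the conventional Pennes bioheat model used in the simulation above balances heat conduction, blood perfusion and metabolic heat (standard textbook form; the symbols below are generic, not taken from the paper):

```latex
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot (k \nabla T)
  + \rho_b c_b \omega_b \,(T_a - T)
  + Q_m
```

where \(\rho\), \(c\) and \(k\) are the tissue density, specific heat and thermal conductivity, \(\rho_b\), \(c_b\) and \(\omega_b\) the blood density, specific heat and perfusion rate, \(T_a\) the arterial temperature, and \(Q_m\) the metabolic heat generation.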
  6. Teoh AB, Goh A, Ngo DC
    IEEE Trans Pattern Anal Mach Intell, 2006 Dec;28(12):1892-901.
    PMID: 17108365
    Biometric analysis for identity verification is becoming a widespread reality. Such implementations necessitate large-scale capture and storage of biometric data, which raises serious issues in terms of data privacy and (if such data is compromised) identity theft. These problems stem from the essential permanence of biometric data, which (unlike secret passwords or physical tokens) cannot be refreshed or reissued if compromised. Our previously presented biometric-hash framework prescribes the integration of external (password or token-derived) randomness with user-specific biometrics, resulting in bitstring outputs with security characteristics (i.e., noninvertibility) comparable to cryptographic ciphers or hashes. The resultant BioHashes are hence cancellable, i.e., straightforwardly revoked and reissued (via refreshed password or reissued token) if compromised. BioHashing furthermore enhances recognition effectiveness, which is explained in this paper as arising from the Random Multispace Quantization (RMQ) of biometric and external random inputs.
    Matched MeSH terms: Data Interpretation, Statistical
  7. Tan CS, Ting WS, Mohamad MS, Chan WH, Deris S, Shah ZA
    Biomed Res Int, 2014;2014:213656.
    PMID: 25250315 DOI: 10.1155/2014/213656
    When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method.
    Matched MeSH terms: Data Interpretation, Statistical*
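As a minimal illustration of the PCA-style feature extraction this review covers (our own SVD-based sketch, not code from any of the reviewed packages):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project samples onto the top principal components via SVD.

    X: (n_samples, n_genes) expression matrix. Returns the reduced
    representation (component scores), ordered by explained variance."""
    Xc = X - X.mean(axis=0)              # center each gene (column)
    _U, _S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T      # scores in the reduced space
```

Downstream analysis then uses the small score matrix in place of the full expression matrix.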
  8. Sudarshan VK, Acharya UR, Oh SL, Adam M, Tan JH, Chua CK, et al.
    Comput Biol Med, 2017 04 01;83:48-58.
    PMID: 28231511 DOI: 10.1016/j.compbiomed.2017.01.019
    Identification of alarming features in the electrocardiogram (ECG) signal is extremely significant for the prediction of congestive heart failure (CHF). ECG signal analysis carried out using computer-aided techniques can speed up the diagnosis process and aid in the proper management of CHF patients. Therefore, in this work, a dual-tree complex wavelet transform (DTCWT)-based methodology is proposed for automated identification of ECG signals exhibiting CHF versus normal. In the experiment, we performed a DTCWT on ECG segments of 2 s duration up to six levels to obtain the coefficients. From these DTCWT coefficients, statistical features are extracted and ranked using Bhattacharyya, entropy, minimum redundancy maximum relevance (mRMR), receiver operating characteristic (ROC), Wilcoxon, t-test and ReliefF methods. The ranked features are fed to k-nearest neighbor (KNN) and decision tree (DT) classifiers for automated differentiation of CHF and normal ECG signals. We achieved 99.86% accuracy, 99.78% sensitivity and 99.94% specificity in the identification of CHF-affected ECG signals using 45 features. The proposed method can detect CHF patients accurately using only 2 s of ECG signal, giving clinicians sufficient time to investigate further the severity of CHF and possible treatments.
    Matched MeSH terms: Data Interpretation, Statistical
  9. Soleimani Amiri M, Ramli R
    Sensors (Basel), 2021 May 03;21(9).
    PMID: 34063574 DOI: 10.3390/s21093171
    It is necessary to control the movement of a complex multi-joint structure such as a robotic arm in order to reach a target position accurately in various applications. In this paper, a hybrid optimal Genetic-Swarm solution for the Inverse Kinematic (IK) solution of a robotic arm is presented. Each joint is controlled by a Proportional-Integral-Derivative (PID) controller optimized with the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), called Genetic-Swarm Optimization (GSO). GSO solves the IK of each joint while the dynamic model is determined by the Lagrangian. The tuning of the PID is defined as an optimization problem and is solved by PSO for the simulated model in a virtual environment. A Graphical User Interface has been developed as a front-end application. Based on the combination of hybrid optimal GSO and PID control, it is ascertained that the system works efficiently. Finally, we compare the hybrid optimal GSO with conventional optimization methods by statistical analysis.
    Matched MeSH terms: Data Interpretation, Statistical
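The PID control law tuned in this paper has the standard form u = Kp·e + Ki·∫e dt + Kd·de/dt. A minimal discrete-time sketch (our own illustration, not the authors' GSO-tuned controller; the gains here are arbitrary):

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0   # running integral of the error
        self.prev_err = 0.0   # previous error, for the derivative term

    def step(self, setpoint, measured):
        """Return the control output for one sample period."""
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In the paper's setting, GA/PSO would search over (Kp, Ki, Kd) to minimize a tracking-error cost for each joint.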
  10. Sim SZ, Gupta RC, Ong SH
    Int J Biostat, 2018 Jan 09;14(1).
    PMID: 29306919 DOI: 10.1515/ijb-2016-0070
    In this paper, we study the zero-inflated Conway-Maxwell Poisson (ZICMP) distribution and develop a regression model. Score and likelihood ratio tests are also implemented for testing the inflation/deflation parameter. Simulation studies are carried out to examine the performance of these tests. A data example is presented to illustrate the concepts. In this example, the proposed model is compared to the well-known zero-inflated Poisson (ZIP) and zero-inflated generalized Poisson (ZIGP) regression models. It is shown that the fit by ZICMP is comparable to or better than these models.
    Matched MeSH terms: Data Interpretation, Statistical*
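For context, the zero-inflated Poisson (ZIP) baseline mentioned above mixes a point mass at zero with an ordinary Poisson count; the ZICMP replaces the Poisson component with a Conway-Maxwell Poisson. A minimal sketch of the ZIP probability mass function (our own illustration):

```python
import math

def zip_pmf(k, lam, pi):
    """P(Y = k) under a zero-inflated Poisson: with probability pi the
    outcome is a structural zero; otherwise Y ~ Poisson(lam)."""
    pois = math.exp(-lam) * lam**k / math.factorial(k)
    return pi + (1 - pi) * pois if k == 0 else (1 - pi) * pois
```

Setting pi = 0 recovers the plain Poisson; pi > 0 inflates the probability at zero, which is what the score and likelihood ratio tests in the paper detect.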
  11. Ser G, Keskin S, Can Yilmaz M
    Sains Malaysiana, 2016;45:1755-1761.
    Multiple imputation is a widely used method in missing-data analysis. The method consists of a three-stage process: imputation, analysis and pooling. The number of imputations selected in the first stage is important. Hence, this study aimed to examine the performance of the multiple imputation method at different numbers of imputations. A monotone missing-data pattern was created by deleting approximately 24% of the observations from a continuous outcome variable with complete data. In the first stage of the multiple imputation method, monotone regression imputation was performed at different numbers of imputations (m = 3, 5, 10 and 50). In the second stage, parameter estimates and their standard errors were obtained by applying a general linear model to each completed data set. In the final stage, the results were pooled, and the effect of the number of imputations on the parameter estimates and their standard errors was evaluated. In conclusion, the efficiency of the parameter estimates at m = 50 imputations was about 99%. Hence, at the given missing-observation rate, the efficiency and performance of the multiple imputation method increased as the number of imputations increased.
    Matched MeSH terms: Data Interpretation, Statistical
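The pooling stage described above follows Rubin's rules. A minimal sketch (our own illustration, not the authors' code), together with the standard approximate relative efficiency 1/(1 + γ/m), which gives roughly 99% at m = 50 when the fraction of missing information γ is around 0.24:

```python
import math
from statistics import mean, variance

def pool_rubin(estimates, std_errors):
    """Pool m completed-data estimates and standard errors (Rubin's rules)."""
    m = len(estimates)
    q_bar = mean(estimates)                   # pooled point estimate
    w = mean(se**2 for se in std_errors)      # within-imputation variance
    b = variance(estimates)                   # between-imputation variance
    t = w + (1 + 1 / m) * b                   # total variance
    return q_bar, math.sqrt(t)                # pooled estimate and SE

def relative_efficiency(gamma, m):
    """Approximate efficiency of m imputations relative to m = infinity,
    where gamma is the fraction of missing information."""
    return 1 / (1 + gamma / m)
```

For example, relative_efficiency(0.24, 50) exceeds 0.99, matching the ~99% efficiency the abstract reports at m = 50.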
  12. Sanagi MM, Ling SL, Nasir Z, Hermawan D, Ibrahim WA, Abu Naim A
    J AOAC Int, 2010 2 20;92(6):1833-8.
    PMID: 20166602
    LOD and LOQ are two important performance characteristics in method validation. This work compares three methods based on the International Conference on Harmonization and EURACHEM guidelines, namely, signal-to-noise, blank determination, and linear regression, to estimate the LOD and LOQ for volatile organic compounds (VOCs) by experimental methodology using GC. Five VOCs, toluene, ethylbenzene, isopropylbenzene, n-propylbenzene, and styrene, were chosen for the experimental study. The results indicated that the estimated LODs and LOQs were not equivalent and could vary by a factor of 5 to 6 for the different methods. It is, therefore, essential to have a clearly described procedure for estimating the LOD and LOQ during method validation to allow interlaboratory comparisons.
    Matched MeSH terms: Data Interpretation, Statistical
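The linear-regression approach compared in this paper estimates LOD = 3.3σ/S and LOQ = 10σ/S from a calibration line (the ICH convention), where σ is the residual standard deviation and S the slope. A minimal sketch with hypothetical calibration data (our own illustration):

```python
import math

def lod_loq(conc, resp):
    """ICH-style LOD/LOQ from a least-squares calibration line:
    LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(resp) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, resp))
    slope = sxy / sxx                          # S
    intercept = my - slope * mx
    resid = [y - (intercept + slope * x) for x, y in zip(conc, resp)]
    sigma = math.sqrt(sum(r * r for r in resid) / (n - 2))
    return 3.3 * sigma / slope, 10 * sigma / slope
```

A perfectly linear calibration gives σ = 0 and hence LOD = LOQ = 0; real data yield LOQ/LOD = 10/3.3 by construction.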
  13. Rohman A, Ariani R
    ScientificWorldJournal, 2013;2013:740142.
    PMID: 24319381 DOI: 10.1155/2013/740142
    Fourier transform infrared (FTIR) spectroscopy combined with multivariate calibration by partial least squares (PLS) was developed and optimized for the analysis of Nigella seed oil (NSO) in binary and ternary mixtures with corn oil (CO) and soybean oil (SO). Based on the PLS modeling performed, quantitative analysis of NSO in binary mixtures with CO using the second-derivative FTIR spectra at the combined frequencies of 2977-3028, 1666-1739, and 740-1446 cm⁻¹ revealed the highest coefficient of determination (R² = 0.9984) and the lowest root mean square error of calibration (RMSEC = 1.34% v/v). NSO in binary mixtures with SO was successfully determined at the combined frequencies of 2985-3024 and 752-1755 cm⁻¹ using the first-derivative FTIR spectra, with R² and RMSEC values of 0.9970 and 0.47% v/v, respectively. Meanwhile, the second-derivative FTIR spectra at the combined frequencies of 2977-3028, 1666-1739, and 740-1446 cm⁻¹ were selected for quantitative analysis of NSO in a ternary mixture with CO and SO, with R² and RMSEC values of 0.9993 and 0.86% v/v, respectively. The results showed that FTIR spectrophotometry is an accurate technique for the quantitative analysis of NSO in binary and ternary mixtures with CO and SO.
    Matched MeSH terms: Data Interpretation, Statistical
  14. Razali R, Ahmad F, Rahman FN, Midin M, Sidi H
    Clin Neurol Neurosurg, 2011 Oct;113(8):639-43.
    PMID: 21684679 DOI: 10.1016/j.clineuro.2011.05.008
    Parkinson disease (PD) affects the lives of both the individuals and their family members. This study aims to investigate the clinical and socio-demographic factors associated with the perception of burden among caregivers of individuals with PD in Malaysia.
    Matched MeSH terms: Data Interpretation, Statistical
  15. Poh YW, Gan SY, Tan EL
    Exp Oncol, 2012 Jul;34(2):85-9.
    PMID: 23013758
    The aim of this study is to investigate whether IL-6, IL-10 and TGF-β are able to confer resistance to apoptosis in nasopharyngeal carcinoma cells by upregulating the expression of survivin.
    Matched MeSH terms: Data Interpretation, Statistical
  16. Ovesen C, Jakobsen JC, Gluud C, Steiner T, Law Z, Flaherty K, et al.
    BMC Res Notes, 2018 Jun 13;11(1):379.
    PMID: 29895329 DOI: 10.1186/s13104-018-3481-8
    OBJECTIVE: We present the statistical analysis plan of a prespecified Tranexamic Acid for Hyperacute Primary Intracerebral Haemorrhage (TICH)-2 sub-study aiming to investigate whether tranexamic acid has a different effect in intracerebral haemorrhage patients with the spot sign on admission compared to spot-sign-negative patients. The TICH-2 trial recruited over 2000 participants with intracerebral haemorrhage arriving at hospital within 8 h after symptom onset. They were included irrespective of radiological signs of ongoing haematoma expansion. Participants were randomised to tranexamic acid versus matching placebo. In this subgroup analysis, we will include all participants in TICH-2 with a computed tomography angiography on admission allowing adjudication of the participants' spot sign status.

    RESULTS: The primary outcome will be the ability of tranexamic acid to limit absolute haematoma volume on computed tomography at 24 h (± 12 h) after randomisation among spot-sign-positive and spot-sign-negative participants, respectively. Within all outcome measures, the effect of tranexamic acid in spot-sign-positive/negative participants will be compared using tests of interaction. This sub-study will investigate the important clinical hypothesis that spot-sign-positive patients might benefit more from administration of tranexamic acid than spot-sign-negative patients. Trial registration: ISRCTN93732214 (http://www.isrctn.com).

    Matched MeSH terms: Data Interpretation, Statistical*
  17. Ong HC, Soo KL
    Med J Malaysia, 2006 Dec;61(5):616-20.
    PMID: 17623964 MyJurnal
    It has been almost two decades since the first AIDS case was reported in Malaysia, and approximately eight years since the method of backcalculation was used to estimate the past HIV infection rate from AIDS incidence data and an estimate of the incubation period distribution. This method is used because it makes use of the Malaysian AIDS incidence data, which are fairly reliable and reflect the trend of the epidemic, as compared to the recorded HIV infection rate. The latest results show a slowdown in the increase of the estimated number of HIV-infected cases in the late 1990s, and this trend is supported by a slowdown in the increase of the number of recorded AIDS cases.
    Matched MeSH terms: Data Interpretation, Statistical
  18. Norsa'adah B
    Med J Malaysia, 2004 Dec;59(5):692; author reply 693-5.
    PMID: 15889579
    Matched MeSH terms: Data Interpretation, Statistical*
  19. Noor NM, Yunus A, Bakar SA, Hussin A, Rijal OM
    Comput Med Imaging Graph, 2011 Apr;35(3):186-94.
    PMID: 21036539 DOI: 10.1016/j.compmedimag.2010.10.002
    This paper investigates a novel statistical discrimination procedure to detect PTB when the gold standard requirement is taken into consideration. Archived data were used to establish two groups of patients which are the control and test group. The control group was used to develop the statistical discrimination procedure using four vectors of wavelet coefficients as feature vectors for the detection of pulmonary tuberculosis (PTB), lung cancer (LC), and normal lung (NL). This discrimination procedure was investigated using the test group where the number of sputum positive and sputum negative cases that were correctly classified as PTB cases were noted. The proposed statistical discrimination method is able to detect PTB patients and LC with high true positive fraction. The method is also able to detect PTB patients that are sputum negative and therefore may be used as a complement to the gold standard.
    Matched MeSH terms: Data Interpretation, Statistical*
  20. Noh, C.H.C., Azmin, N.F.M., Amid, A., Asnawi, A.L.
    MyJurnal
    Bioactive compounds are natural products used especially in medicinal, pharmaceutical and food applications. Research on the extraction, isolation and identification of bioactive compounds is increasing; however, none to date has explored the identification of flavonoid classes. Therefore, this study focused on the development of an algorithm for rapid identification of the flavonoid classes flavanone, flavone and flavonol, as well as their derivatives. Fourier transform infrared (FTIR) spectroscopy coupled with multivariate statistical data analysis, namely principal component analysis (PCA), was utilized. The results showed that a few significant wavenumber ranges provide the identification and characterization of the flavonoid classes based on the PCA algorithm. The study concluded that FTIR coupled with PCA analysis can be used as a molecular fingerprint for rapid identification of flavonoids.
    Matched MeSH terms: Data Interpretation, Statistical