Displaying publications 1 - 20 of 313 in total

  1. Abdar M, Książek W, Acharya UR, Tan RS, Makarenkov V, Pławiak P
    Comput Methods Programs Biomed, 2019 Oct;179:104992.
    PMID: 31443858 DOI: 10.1016/j.cmpb.2019.104992
    BACKGROUND AND OBJECTIVE: Coronary artery disease (CAD) is one of the commonest diseases around the world. An early and accurate diagnosis of CAD allows timely administration of appropriate treatment and helps to reduce mortality. Herein, we describe an innovative machine learning methodology that enables an accurate detection of CAD and apply it to data collected from Iranian patients.

    METHODS: We first tested ten traditional machine learning algorithms, and then the three best-performing algorithms (three types of SVM) were used in the rest of the study. To improve the performance of these algorithms, data preprocessing with normalization was carried out. Moreover, a genetic algorithm and particle swarm optimization, coupled with stratified 10-fold cross-validation, were used twice: for optimization of classifier parameters and for parallel selection of features.

    RESULTS: The presented approach enhanced the performance of all traditional machine learning algorithms used in this study. We also introduced a new optimization technique called the N2Genetic optimizer (a new genetic training approach). Our experiments demonstrated that N2Genetic-nuSVM achieved an accuracy of 93.08% and an F1-score of 91.51% when predicting CAD outcomes among the patients included in the well-known Z-Alizadeh Sani dataset. These results are competitive and comparable to the best results in the field.

    CONCLUSIONS: We showed that machine-learning techniques optimized by the proposed approach can lead to highly accurate models intended for both clinical and research use.

    Matched MeSH terms: Databases, Factual/statistics & numerical data
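
A minimal sketch of the evaluation pattern described in entry 1 above: a nu-SVM tuned with stratified 10-fold cross-validation after feature normalization. The paper's N2Genetic optimizer is not public, so a plain scikit-learn grid search stands in for the parameter-search step, and the synthetic feature matrix is a placeholder for the Z-Alizadeh Sani data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVC

# Synthetic stand-in for the Z-Alizadeh Sani feature matrix and CAD labels.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Normalization followed by a nu-SVM, mirroring the preprocessing described above.
pipe = make_pipeline(StandardScaler(), NuSVC())

# A grid search stands in for the paper's genetic-algorithm/PSO parameter optimization.
search = GridSearchCV(
    pipe,
    param_grid={"nusvc__nu": [0.25, 0.5, 0.75], "nusvc__gamma": ["scale", 0.01, 0.1]},
    scoring="f1",  # the paper reports both accuracy and F1-score
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 4))
```
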
  2. Abdo A, Salim N
    J Chem Inf Model, 2011 Jan 24;51(1):25-32.
    PMID: 21155550 DOI: 10.1021/ci100232h
    Many of the conventional similarity methods assume that molecular fragments that do not relate to biological activity carry the same weight as the important ones. One possible approach to this problem is to use the Bayesian inference network (BIN), which models molecules and reference structures as probabilistic inference networks. The relationships between molecules and reference structures in the Bayesian network are encoded using a set of conditional probability distributions, which can be estimated by the fragment weighting function, a function of the frequencies of the fragments in the molecule or the reference structure as well as throughout the collection. The weighting function combines one or more fragment weighting schemes. In this paper, we investigated five different weighting functions and present a new fragment weighting scheme. These functions were then modified to incorporate the new weighting scheme. Simulated virtual screening experiments with the MDL Drug Data Report (MDDR) and maximum unbiased validation data sets show that the use of the new weighting scheme can provide significantly more effective screening compared with the use of current weighting schemes.
    Matched MeSH terms: Databases, Factual
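
As a rough, non-authoritative illustration of the fragment-weighting idea in entry 2, the toy script below weights fragments by their within-structure frequency and their rarity across the collection (a TF-IDF-like scheme, not the paper's exact BIN formulation) and ranks database molecules against a reference structure. The fragment dictionaries are invented, not MDDR data.

```python
import math
from collections import Counter

# Toy fragment counts per molecule; real systems derive these from 2D fingerprints.
database = {
    "mol_A": Counter({"c1ccccc1": 2, "C=O": 1}),
    "mol_B": Counter({"c1ccccc1": 1, "NC=O": 3}),
    "mol_C": Counter({"CCO": 2}),
}
reference = Counter({"c1ccccc1": 1, "C=O": 2})

n_mols = len(database)
doc_freq = Counter(frag for frags in database.values() for frag in frags)

def weight(freq, frag):
    # Within-structure frequency damped by a log, scaled by collection rarity.
    idf = math.log((n_mols + 1) / (doc_freq.get(frag, 0) + 1)) + 1.0
    return (1.0 + math.log(freq)) * idf

def score(mol_frags):
    # Sum weighted evidence over fragments shared with the reference structure.
    return sum(weight(f, frag) * weight(reference[frag], frag)
               for frag, f in mol_frags.items() if frag in reference)

ranking = sorted(database, key=lambda m: score(database[m]), reverse=True)
print(ranking)
```
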
  3. Abdul Hamid M
    Med J Malaysia, 2008 Sep;63 Suppl C:vii.
    PMID: 19227668
    Matched MeSH terms: Databases, Factual*
  4. Abdul-Kadir NA, Mat Safri N, Othman MA
    Int J Cardiol, 2016 Nov 01;222:504-8.
    PMID: 27505342 DOI: 10.1016/j.ijcard.2016.07.196
    BACKGROUND: The feasibility of using the natural frequency (ω) obtained from a second-order dynamic system applied to an ECG signal was demonstrated recently. The heart rate for different ECG signals generates different ω values. Heart rate variability (HRV) and the autonomic nervous system (ANS) are associated and represent cardiovascular variations for each individual. This study further analyzed ω for different ECG signals together with HRV for atrial fibrillation classification.

    METHODS: This study used the MIT-BIH Normal Sinus Rhythm (nsrdb) and MIT-BIH Atrial Fibrillation (afdb) databases for healthy human (NSR) and atrial fibrillation patient (N and AF) ECG signals, respectively. The extraction of features was based on the dynamic system concept to determine the ω of the ECG signals. There were 35,031 samples used for classification.

    RESULTS: There were significant differences between the N & NSR, N & AF, and NSR & AF groups as determined by the statistical t-test (p<0.0001). There was a linear separation at 0.4 s⁻¹ for ω of both databases upon using the thresholding method. The feature ω for afdb and nsrdb falls within the high-frequency (HF) band and above the HF band, respectively. The feature classification between the nsrdb and afdb ECG signals was 96.53% accurate.

    CONCLUSIONS: This study found that features of the ω of atrial fibrillation patients and healthy humans were associated with the frequency analysis of the ANS during parasympathetic activity. The feature ω is significant for different databases, and the classification between afdb and nsrdb was determined.

    Matched MeSH terms: Databases, Factual/classification*
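
The decision rule reported in entry 4 (a linear separation of the natural-frequency feature ω at 0.4 s⁻¹) can be illustrated with a few lines of Python. The ω values below are synthetic draws, not the MIT-BIH nsrdb/afdb records, so the printed accuracy is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
omega_nsr = rng.normal(0.55, 0.08, 500)   # healthy records: omega above the threshold (assumed distribution)
omega_af = rng.normal(0.28, 0.07, 500)    # AF records: omega within the HF band (assumed distribution)

omega = np.concatenate([omega_nsr, omega_af])
labels = np.concatenate([np.zeros(500), np.ones(500)])  # 0 = NSR, 1 = AF

pred = (omega < 0.4).astype(int)          # linear separation at 0.4 s^-1, as reported
accuracy = (pred == labels).mean()
print(f"accuracy = {accuracy:.4f}")
```
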
  5. Abdulhussain SH, Ramli AR, Saripan MI, Mahmmod BM, Al-Haddad SAR, Jassim WA
    Entropy (Basel), 2018 Mar 23;20(4).
    PMID: 33265305 DOI: 10.3390/e20040214
    The recent increase in the number of videos available in cyberspace is due to the availability of multimedia devices, highly developed communication technologies, and low-cost storage devices. These videos are simply stored in databases through text annotation. Content-based video browsing and retrieval are inefficient due to the method used to store videos in databases. Video databases are large in size and contain voluminous information, and these characteristics emphasize the need for automated video structure analyses. Shot boundary detection (SBD) is considered a substantial process of video browsing and retrieval. SBD aims to detect transitions and their boundaries between consecutive shots; hence, shots with rich information are used in content-based video indexing and retrieval. This paper presents a review of an extensive set of SBD approaches and their development. The advantages and disadvantages of each approach are comprehensively explored. The developed algorithms are discussed, and challenges and recommendations are presented.
    Matched MeSH terms: Databases, Factual
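
As a rough sketch of the simplest family of SBD methods surveyed in entry 5, the snippet below flags a hard cut when the grey-level histogram difference between consecutive frames exceeds a threshold. The frames are synthetic arrays and the threshold value is an assumption, not taken from any reviewed algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
frames = [rng.integers(0, 100, (64, 64)) for _ in range(10)]      # shot 1 (dark frames)
frames += [rng.integers(150, 256, (64, 64)) for _ in range(10)]   # shot 2 (bright frames, abrupt cut at index 10)

def histogram(frame, bins=32):
    # Normalized grey-level histogram of one frame.
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

THRESHOLD = 0.5  # assumed value; real systems tune this per video or adaptively
boundaries = [
    i for i in range(1, len(frames))
    if np.abs(histogram(frames[i]) - histogram(frames[i - 1])).sum() > THRESHOLD
]
print(boundaries)  # expected: [10]
```
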
  6. Abousaeidi M, Fauzi R, Muhamad R
    J Environ Biol, 2016 09;37(5 Spec No):1167-1176.
    PMID: 29989749
    Perishable products must be transported quickly from their production areas to the markets due to the climatic conditions of Malaysia. Deterioration of fresh produce is affected by temperature and delivery time. The cost to achieve such timely delivery of perishable food can affect the revenue of suppliers and retailers. Choosing an efficient delivery route at the right time can reduce the total transportation cost. However, insufficient attention has been given to transportation issues with regard to fresh food delivery in greater Kuala Lumpur. The present study involves adoption of the Geographic Information System (GIS) modelling approach to determine the fastest delivery routes for fresh products to several hypermarkets. For this purpose, ArcGIS software was adopted for solving the problem of a complex road network. With the goal of realizing the shortest time for delivery route planning, an impedance function was integrated, taking into account the time emphasized in the study. The main findings of this study include determination of efficient routes for delivery of fresh vegetables based on minimal drive time. It is proposed that the fastest route model for delivery of fresh products be based on comparing two time frames within a day. The final output of this research was a map of the quickest routes with the best delivery times based on the two time frames.
    Matched MeSH terms: Databases, Factual
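
A minimal sketch of the fastest-route idea in entry 6: treat the road network as a graph whose edge weights are drive times (the impedance) and run a shortest-path query. The study uses ArcGIS network analysis; networkx with an invented toy graph stands in here, and the node names and times are assumptions.

```python
import networkx as nx

roads = nx.DiGraph()
roads.add_weighted_edges_from(
    [
        ("depot", "junction_1", 12.0),   # minutes of drive time for this time frame (invented)
        ("depot", "junction_2", 8.0),
        ("junction_1", "hypermarket", 7.0),
        ("junction_2", "hypermarket", 15.0),
    ],
    weight="drive_time",
)

# Shortest path by drive time plays the role of the impedance-based fastest route.
route = nx.shortest_path(roads, "depot", "hypermarket", weight="drive_time")
minutes = nx.shortest_path_length(roads, "depot", "hypermarket", weight="drive_time")
print(route, minutes)   # ['depot', 'junction_1', 'hypermarket'] 19.0
```
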
  7. Abu A, Leow LK, Ramli R, Omar H
    BMC Bioinformatics, 2016 Dec 22;17(Suppl 19):505.
    PMID: 28155645 DOI: 10.1186/s12859-016-1362-5
    BACKGROUND: Taxonomists frequently identify specimens from various populations based on morphological characteristics and molecular data. This study looks into another invasive process in the identification of the house shrew (Suncus murinus) using image analysis and machine learning approaches. Thus, an automated identification system is developed to assist and simplify this task. In this study, seven descriptors, namely area, convex area, major axis length, minor axis length, perimeter, equivalent diameter and extent, which are based on shape, are used as features to represent the digital image of the skull, consisting of dorsal, lateral and jaw views for each specimen. An Artificial Neural Network (ANN) is used as the classifier to classify the skulls of S. murinus based on region (northern and southern populations of Peninsular Malaysia) and sex (adult male and female). Thus, specimen classification using the training data set and identification using the testing data set were performed through two stages of ANNs.

    RESULTS: At present, the classifier used has achieved an accuracy of 100% based on skull views. Classification and identification to regions and sexes have also attained accuracies of 72.5%, 87.5% and 80.0% for the dorsal, lateral, and jaw views, respectively. These results show that the shape characteristic features used are substantial because they can differentiate the specimens based on regions and sexes with accuracies of 80% and above. Finally, an application was developed and can be used by the scientific community.

    CONCLUSIONS: This automated system demonstrates the practicability of using computer-assisted systems to provide an interesting alternative approach for quick and easy identification of unknown species.

    Matched MeSH terms: Databases, Factual
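
A rough analogue of the classification stage in entry 7: the seven shape descriptors per skull view feed a small neural network that predicts region or sex. The feature matrix and labels below are random placeholders; the study's two-stage ANN and real descriptor values are not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 7))       # 7 shape descriptors per specimen (synthetic stand-ins)
y = rng.integers(0, 2, size=120)    # 0 = northern, 1 = southern (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
clf.fit(X_train, y_train)
print(f"test accuracy = {clf.score(X_test, y_test):.2f}")
```
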
  8. Abumalloh RA, Nilashi M, Samad S, Ahmadi H, Alghamdi A, Alrizq M, et al.
    Ageing Res Rev, 2024 Apr;96:102285.
    PMID: 38554785 DOI: 10.1016/j.arr.2024.102285
    Parkinson's Disease (PD) is a progressive neurodegenerative illness triggered by decreased dopamine secretion. Deep Learning (DL) has gained substantial attention in PD diagnosis research, with an increase in the number of published papers in this discipline. PD detection using DL has presented more promising outcomes as compared with common machine learning approaches. This article aims to conduct a bibliometric analysis and a literature review focusing on the prominent developments taking place in this area. To achieve the target of the study, we retrieved and analyzed the available research papers in the Scopus database. Following that, we conducted a bibliometric analysis to inspect the structure of keywords, authors, and countries in the surveyed studies by providing visual representations of the bibliometric data using VOSviewer software. The study also provides an in-depth review of the literature focusing on different indicators of PD, deployed approaches, and performance metrics. The outcomes indicate the firm development of PD diagnosis using DL approaches over time and a large diversity of studies worldwide. Additionally, the literature review presented a research gap in DL approaches related to incremental learning, particularly in relation to big data analysis.
    Matched MeSH terms: Databases, Factual
  9. Acharya UR, Oh SL, Hagiwara Y, Tan JH, Adam M, Gertych A, et al.
    Comput Biol Med, 2017 10 01;89:389-396.
    PMID: 28869899 DOI: 10.1016/j.compbiomed.2017.08.022
    The electrocardiogram (ECG) is a standard test used to monitor the activity of the heart. Many cardiac abnormalities will be manifested in the ECG, including arrhythmia, which is a general term that refers to an abnormal heart rhythm. The basis of arrhythmia diagnosis is the identification of normal versus abnormal individual heart beats, and their correct classification into different diagnoses, based on ECG morphology. Heartbeats can be sub-divided into five categories, namely non-ectopic, supraventricular ectopic, ventricular ectopic, fusion, and unknown beats. It is challenging and time-consuming to distinguish these heartbeats on ECG as these signals are typically corrupted by noise. We developed a 9-layer deep convolutional neural network (CNN) to automatically identify 5 different categories of heartbeats in ECG signals. Our experiment was conducted on original and noise-attenuated sets of ECG signals derived from a publicly available database. This set was artificially augmented to even out the number of instances of the 5 classes of heartbeats and filtered to remove high-frequency noise. The CNN was trained using the augmented data and achieved an accuracy of 94.03% and 93.47% in the diagnostic classification of heartbeats in original and noise-free ECGs, respectively. When the CNN was trained with highly imbalanced data (original dataset), the accuracy of the CNN reduced to 89.07% and 89.3% in noisy and noise-free ECGs. When properly trained, the proposed CNN model can serve as a tool for screening of ECGs to quickly identify different types and frequencies of arrhythmic heartbeats.
    Matched MeSH terms: Databases, Factual*
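
A much-simplified sketch of the kind of 1D CNN described in entry 9: convolutional layers over a fixed-length ECG segment ending in a 5-way softmax over the heartbeat categories. The layer sizes, the assumed segment length, and the synthetic training call below are illustrative simplifications, not the paper's 9-layer architecture or its augmented training data.

```python
import numpy as np
import tensorflow as tf

SEGMENT_LEN = 260  # assumed beat-segment length; the paper's value may differ

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEGMENT_LEN, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),   # 5 heartbeat categories
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Random beats only to show the training call; real work uses denoised,
# class-balanced segments from a public ECG database.
X = np.random.randn(64, SEGMENT_LEN, 1).astype("float32")
y = np.random.randint(0, 5, size=64)
model.fit(X, y, epochs=1, batch_size=16, verbose=0)
```
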
  10. Addepalli P, Sawangsri W, Ghani SAC
    Injury, 2024 Apr;55(4):111458.
    PMID: 38432100 DOI: 10.1016/j.injury.2024.111458
    This study undertakes a Scientometric analysis of bone-cutting tools, investigating a corpus of 735 papers from the Scopus database between 1941 and 2023. It employs bibliometric methodologies such as keyword coupling, co-citation, and co-authorship analysis to map the intellectual landscape and collaborative networks within this research domain. The analysis highlights a growing interest and significant advancements in bone-cutting tools, focusing on their design, the materials used, and the cutting processes involved. It identifies key research fronts and trends, such as the emphasis on surgical precision, material innovation, and the optimization of tool performance. Further, the study reveals a broad collaboration among researchers from various disciplines, including engineering, materials science, and medical sciences, reflecting the field's interdisciplinary nature. Despite the progress, the analysis points out several gaps, notably in tool design optimization and the impact of materials on bone health. This comprehensive review not only charts the evolution of bone-cutting tool research but also calls attention to areas requiring further investigation, aiming to inspire future studies that address these identified gaps and enhance surgical outcomes.
    Matched MeSH terms: Databases, Factual
  11. Afolabi LT, Saeed F, Hashim H, Petinrin OO
    PLoS One, 2018;13(1):e0189538.
    PMID: 29329334 DOI: 10.1371/journal.pone.0189538
    Pharmacologically active molecules can provide remedies for a range of different illnesses and infections. Therefore, the search for such bioactive molecules has been an enduring mission. As such, there is a need to employ a more suitable, reliable, and robust classification method for enhancing the prediction of the existence of new bioactive molecules. In this paper, we adopt a recently developed combination of different boosting methods (Adaboost) for the prediction of new bioactive molecules. We conducted the research experiments utilizing the widely used MDL Drug Data Report (MDDR) database. The proposed boosting method generated better results than other machine learning methods. This finding suggests that the method is suitable for inclusion among the in silico tools for use in cheminformatics, computational chemistry and molecular biology.
    Matched MeSH terms: Databases, Factual
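
A minimal, hedged analogue of the classifier used in entry 11: scikit-learn's AdaBoostClassifier trained on binary fingerprint-like vectors to separate active from inactive molecules. Random bit vectors stand in for the MDDR fingerprints, so the cross-validated accuracy printed here only demonstrates the usage, not the paper's results.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(400, 166))   # 166-bit fingerprint-like features (synthetic)
y = rng.integers(0, 2, size=400)          # 1 = active, 0 = inactive (synthetic labels)

clf = AdaBoostClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # simple cross-validated estimate
print(f"mean CV accuracy = {scores.mean():.3f}")
```
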
  12. Agbolade O, Nazri A, Yaakob R, Ghani AAA, Cheah YK
    PeerJ Comput Sci, 2020;6:e249.
    PMID: 33816901 DOI: 10.7717/peerj-cs.249
    Over the years, neuroscientists and psychophysicists have been asking whether data acquisition for facial analysis should be performed holistically or with local feature analysis. This has led to various advanced methods of face recognition being proposed, and especially techniques using facial landmarks. The current facial landmark methods in 3D involve a mathematically complex and time-consuming workflow involving semi-landmark sliding tasks. This paper proposes a homologous multi-point warping for 3D facial landmarking, which is verified experimentally on each of the target objects in a given dataset using 500 landmarks (16 anatomical fixed points and 484 sliding semi-landmarks). This is achieved by building a template mesh as a reference object and applying this template to each of the targets in three datasets using an artificial deformation approach. The semi-landmarks are subjected to sliding along tangents to the curves or surfaces until the bending energy between a template and a target form is minimal. The results indicate that our method can be used to investigate shape variation for multiple datasets when implemented on three databases (Stirling, FRGC and Bosphorus).
    Matched MeSH terms: Databases, Factual
  13. Ahmad P, Dummer PMH, Noorani TY, Asif JA
    Int Endod J, 2019 Jun;52(6):803-818.
    PMID: 30667524 DOI: 10.1111/iej.13083
    AIM: To analyse the main characteristics of the top 50 most-cited articles published in the International Endodontic Journal from 1967 to 2018.

    METHODOLOGY: The Clarivate Analytics' Web of Science 'All Databases', Elsevier's Scopus, Google Scholar and PubMed Central were searched to retrieve the 50 most-cited articles in the IEJ published from April 1967 to December 2018. The articles were analysed and information including number of citations, year of publication, contributing authors, institutions and countries, study design, study topic, impact factor and keywords was extracted.

    RESULTS: The number of citations of the 50 selected papers varied from 575 to 130 (Web of Science), 656 to 164 (Elsevier's Scopus), 1354 to 199 (Google Scholar) and 123 to 3 (PubMed). The majority of papers were published in the year 2001 (n = 7). Amongst 102 authors, the greatest contribution was made by four contributors that included Gulabivala K (n = 4), Ng YL (n = 4), Pitt Ford TR (n = 4) and Wesselink PR (n = 4). The majority of papers originated from the United Kingdom (n = 8) with most contributions from King's College London Dental Institute (UK) and Eastman Dental Hospital, London. Reviews were the most common study design (n = 19) followed by Clinical Research (n = 16) and Basic Research (n = 15). The majority of topics covered by the most-cited articles were Outcome Studies (n = 9), Intracanal medicaments (n = 8), Endodontic microbiology (n = 7) and Canal instrumentation (n = 7). Amongst 76 unique keywords, Endodontics (n = 7), Mineral Trioxide Aggregate (MTA) (n = 7) and Root Canal Treatment (n = 7) were the most frequently used.

    CONCLUSION: This is the first study to identify and analyse the top 50 most-cited articles in a specific professional journal within Dentistry. The analysis has revealed information regarding the development of the IEJ over time as well as scientific progress in the field of Endodontology.

    Matched MeSH terms: Databases, Factual
  14. Ahmad P, Elgamal HAM
    J Endod, 2020 Aug;46(8):1042-1051.
    PMID: 32417289 DOI: 10.1016/j.joen.2020.04.014
    INTRODUCTION: Bibliometric analysis is the quantitative measure of the impact of a scientific article in its respective field of research. The aim of this study was to identify and analyze the main features of the top 50 most cited articles published in the Journal of Endodontics since its inception as well as the top 50 most downloaded articles in 2017 and 2018 in order to evaluate the changing trends and other bibliometric parameters of the contemporary literature compared with the classic literature.

    METHODS: An electronic search was conducted on the Clarivate Analytics Web of Science "All Databases" to identify and analyze the top 50 most frequently cited scientific articles. After ranking the articles in a descending order based on their citation counts, each article was then crossmatched with the citation counts in Scopus, Google Scholar, and PubMed.

    RESULTS: The citation counts of the 50 selected most cited articles ranged between 218 and 731 (Clarivate Analytics Web of Science). The years in which most top 50 articles were published were 2004 and 2008 (n = 5). Among 131 authors, the greatest contribution was made by M. Torabinejad (n = 14). Most of the articles originated from the United States (n = 38) with the greatest contributions from the School of Dentistry, Loma Linda University, Loma Linda, CA (n = 15). Basic research-technology was the most frequent study design (n = 18). A negative, significant correlation occurred between citation density and publication age (correlation coefficient = -0.708, P < .01).

    CONCLUSIONS: Several interesting differences were found between the main characteristics of the most cited articles and the most downloaded articles.

    Matched MeSH terms: Databases, Factual
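
The negative correlation reported in entry 14 between citation density and publication age (correlation coefficient = -0.708) is a plain Pearson correlation; the sketch below shows how such a value would be computed once the two columns are available. The numbers are invented for illustration only.

```python
import numpy as np
from scipy import stats

age_years = np.array([3, 5, 8, 12, 15, 20, 25, 30])                            # years since publication (toy values)
citation_density = np.array([40.1, 33.6, 25.0, 20.3, 15.2, 12.8, 10.4, 9.1])   # citations per year (toy values)

r, p = stats.pearsonr(age_years, citation_density)  # Pearson correlation and its p-value
print(f"r = {r:.3f}, p = {p:.4f}")
```
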
  15. Ahmad WA, Ali RM, Khanom M, Han CK, Bang LH, Yip AF, et al.
    Int J Cardiol, 2013 Apr 30;165(1):161-4.
    PMID: 21920614 DOI: 10.1016/j.ijcard.2011.08.015
    The Malaysian National Cardiovascular Disease Database (NCVD) team presents the Percutaneous Coronary Intervention (PCI) Registry report for the years 2007 to 2009. It provides comprehensive information regarding the practice and outcomes of PCI in Malaysia.
    Matched MeSH terms: Databases, Factual/trends*
  16. Ahmed A, Saeed F, Salim N, Abdo A
    J Cheminform, 2014;6:19.
    PMID: 24883114 DOI: 10.1186/1758-2946-6-19
    BACKGROUND: It is known that any individual similarity measure will not always give the best recall of active molecule structures for all types of activity classes. Recently, it has been shown that the effectiveness of ligand-based virtual screening approaches can be enhanced by using data fusion. Data fusion can be implemented using two different approaches: group fusion and similarity fusion. Similarity fusion involves searching using multiple similarity measures. The similarity scores, or rankings, for each similarity measure are combined to obtain the final ranking of the compounds in the database.

    RESULTS: The Condorcet fusion method was examined. This approach combines the outputs of similarity searches from eleven association and distance similarity coefficients, and the winner measure for each class of molecules, based on Condorcet fusion, was chosen as the best method of searching. The recall of retrieved active molecules at the top 5% and a significance test were used to evaluate our proposed method. The MDL Drug Data Report (MDDR), maximum unbiased validation (MUV) and Directory of Useful Decoys (DUD) data sets were used for the experiments and were represented by 2D fingerprints.

    CONCLUSIONS: Simulated virtual screening experiments with the standard two data sets show that the use of Condorcet fusion provides a very simple way of improving ligand-based virtual screening, especially when the active molecules being sought have a low degree of structural heterogeneity. However, the effectiveness of the Condorcet fusion increased only slightly when structural sets of high-diversity activities were being sought.

    Matched MeSH terms: Databases, Factual
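
A small sketch of Condorcet-style rank fusion in the spirit of entry 16: each similarity coefficient contributes a best-to-worst ranking, every pair of compounds is decided by majority vote, and compounds are ordered by wins minus losses (a Copeland-style score standing in for the papers' exact Condorcet procedure). The rankings are toy data.

```python
from itertools import combinations

rankings = [                      # best-to-worst lists from three similarity coefficients (invented)
    ["c1", "c3", "c2", "c4"],
    ["c3", "c1", "c4", "c2"],
    ["c1", "c2", "c3", "c4"],
]
compounds = rankings[0]

def prefers(ranking, a, b):
    # True if this measure ranks compound a above compound b.
    return ranking.index(a) < ranking.index(b)

score = {c: 0 for c in compounds}
for a, b in combinations(compounds, 2):
    votes_a = sum(prefers(r, a, b) for r in rankings)
    votes_b = len(rankings) - votes_a
    if votes_a > votes_b:
        score[a] += 1
        score[b] -= 1
    elif votes_b > votes_a:
        score[b] += 1
        score[a] -= 1

fused = sorted(compounds, key=lambda c: score[c], reverse=True)
print(fused)   # ['c1', 'c3', 'c2', 'c4'] for the toy rankings above
```
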
  17. Ahmed A, Abdo A, Salim N
    ScientificWorldJournal, 2012;2012:410914.
    PMID: 22623895 DOI: 10.1100/2012/410914
    Many of the similarity-based virtual screening approaches assume that molecular fragments that are not related to the biological activity carry the same weight as the important ones. This was the reason that led to the use of Bayesian networks as an alternative to existing tools for similarity-based virtual screening. In our recent work, the retrieval performance of the Bayesian inference network (BIN) was observed to improve significantly when molecular fragments were reweighted using relevance feedback information. In this paper, a set of active reference structures was used to reweight the fragments in the reference structure. In this approach, higher weights were assigned to those fragments that occur more frequently in the set of active reference structures while others were penalized. Simulated virtual screening experiments with MDL Drug Data Report datasets showed that the proposed approach significantly improved the retrieval effectiveness of ligand-based virtual screening, especially when the active molecules being sought had a high degree of structural heterogeneity.
    Matched MeSH terms: Databases, Factual
  18. Ajay R, JafarAbdulla MU, Sivakumar JS, Baburajan K, Rakshagan V, Eyeswarya J
    J Contemp Dent Pract, 2023 Aug 01;24(8):521-544.
    PMID: 38193174 DOI: 10.5005/jp-journals-10024-3514
    AIM: The present systematic review aimed to report studies concerning primers for improving bond strength and to identify pertinent primers for particular dental alloys, adhering to PRISMA precepts.

    MATERIALS AND METHODS: PubMed and Semantic Scholar databases were scoured for articles using 10 search terms. In vitro studies satisfying the inclusion criteria were meticulously screened and scrutinized for eligibility against the 11 exclusion criteria. The quality assessment tool for in vitro studies (QUIN Tool), containing 12 criteria, was employed to assess the risk of bias (RoB).

    RESULTS: A total of 48 studies assessing shear bond strength (SBS) and 15 studies evaluating tensile bond strength (TBS) were included in the qualitative synthesis. Concerning SBS, 33.4% moderate and 66.6% high RoB was observed. Concerning TBS, 26.8% moderate and 73.2% high RoB was discerned. Seventeen and two studies assessing SBS and TBS, respectively, were included in meta-analyses.

    CONCLUSIONS: Shear bond strength and TBS increased for the primed alloys. Cyclic disulfide primer is best-suited for noble alloys when compared with thiol/thione primers. Phosphoric acid- and phosphonic acid ester-based primers are opportune for base alloys.

    CLINICAL SIGNIFICANCE: The alloy-resin interface (ARI) would fail if an inappropriate primer was selected. Therefore, the selection of an appropriate alloy adhesive primer for an alloy plays a crucial role in prosthetic success. This systematic review would help in the identification and selection of a congruous primer for a selected alloy.

    Matched MeSH terms: Databases, Factual
  19. Al-Ahdal WM, Farhan NHS, Vishwakarma R, Hashim HA
    Environ Sci Pollut Res Int, 2023 Aug;30(36):85803-85821.
    PMID: 37393591 DOI: 10.1007/s11356-023-28499-5
    The study proposes to examine how environmental, social and governance disclosure (ESG) affects the financial performance (FP) of Indian firms. Furthermore, it aims to evaluate the moderating impact of CEO power (CEOP) on the association between ESG and FP. The study's target population is all firms indexed in the NIFTY 100, representing the top one hundred firms by market capitalisation from 2017 to 2021. Data relating to ESG were collected and built based on the available data in the Refinitiv Eikon database. Results reveal that EDI positively and significantly impacts the ROE and TQ of Indian firms. Furthermore, SDI and GDI negatively and significantly affect the ROE and TQ of Indian firms. Moreover, ESG and CEOP have a significant impact on ROE. Nevertheless, ESG has a negative but highly significant impact on ROE, whilst it has a negative and less considerable impact on the TQ of Indian firms. Nonetheless, CEOP does not moderate the association between ESG and FP measured by ROE and TQ. This research contributes to the existing literature by introducing a moderator variable that has not been used in the Indian context, namely CEO power, and provides stakeholders and regulators with useful findings that would encourage firms to create an ESG committee to enhance ESG disclosure, compete on the world market, and reach the United Nations (UN) Sustainable Development Goals by 2030. Furthermore, this paper provides insightful recommendations for creating an ESG legal framework for decision-makers.
    Matched MeSH terms: Databases, Factual
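
A generic moderation-analysis sketch matching the design described in entry 19: regress a performance measure such as ROE on an ESG disclosure score, CEO power, and their interaction. The variable names and the random data frame are placeholders rather than the NIFTY 100 / Refinitiv Eikon panel, and the pooled OLS below ignores the panel structure the authors would account for.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "roe": rng.normal(10, 3, 200),         # firm performance (toy values)
    "esg": rng.normal(50, 10, 200),        # ESG disclosure score (toy values)
    "ceo_power": rng.normal(0, 1, 200),    # CEO power index (toy values)
})

# "esg * ceo_power" expands to both main effects plus the interaction (moderation) term.
model = smf.ols("roe ~ esg * ceo_power", data=df).fit()
print(model.params)
```
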
  20. Al-Dhaqm A, Razak S, Othman SH, Ngadi A, Ahmed MN, Ali Mohammed A
    PLoS One, 2017;12(2):e0170793.
    PMID: 28146585 DOI: 10.1371/journal.pone.0170793
    Database Forensics (DBF) is a widespread area of knowledge. It has many complex features and is well known amongst database investigators and practitioners. Several models and frameworks have been created specifically to allow knowledge-sharing and effective DBF activities. However, these are often narrow in focus and address specific database incident types. We have analysed 60 such models in an attempt to uncover how many DBF activities are common across models even when the actions vary. We then generate a unified abstract view of DBF in the form of a metamodel. We identified and extracted common concepts and reconciled their definitions to propose the metamodel. We have applied a metamodelling process to guarantee that this metamodel is comprehensive and consistent.
    Matched MeSH terms: Databases, Factual*