Displaying publications 1 - 20 of 113 in total

  1. Arif AS, Mansor S, Logeswaran R, Karim HA
    J Med Syst, 2015 Feb;39(2):5.
    PMID: 25628161 DOI: 10.1007/s10916-015-0200-z
    The massive number of medical images produced by fluoroscopic and other conventional diagnostic imaging devices demands a considerable amount of storage space. This paper proposes an effective method for lossless compression of fluoroscopic images. The main contribution is the extraction of the regions of interest (ROI) in fluoroscopic images using appropriate shapes. The extracted ROI is then compressed using customized correlation and a combination of Run Length and Huffman coding to increase the compression ratio. The experimental results show that the proposed method improves the compression ratio by 400% compared with traditional methods.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
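The coding chain named in this abstract (run-length encoding followed by Huffman coding) can be illustrated with a minimal Python sketch. The paper's ROI extraction and customized correlation steps are not reproduced here, and the toy pixel row is an assumption for demonstration only.
```python
# Minimal sketch of run-length encoding followed by Huffman coding,
# illustrating the general coding chain mentioned in the abstract above.
# The paper's ROI extraction and customized correlation are not modeled here.
import heapq
from collections import Counter
from itertools import count

def run_length_encode(pixels):
    """Collapse consecutive identical values into (value, run_length) pairs."""
    runs = []
    prev, length = pixels[0], 1
    for p in pixels[1:]:
        if p == prev:
            length += 1
        else:
            runs.append((prev, length))
            prev, length = p, 1
    runs.append((prev, length))
    return runs

def huffman_code(symbols):
    """Build a prefix code (symbol -> bit string) from symbol frequencies."""
    freq = Counter(symbols)
    tiebreak = count()  # avoids comparing dicts when frequencies are equal
    heap = [(f, next(tiebreak), {s: ""}) for s, f in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                     # degenerate case: a single symbol
        return {next(iter(freq)): "0"}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

row = [12, 12, 12, 40, 40, 7, 7, 7, 7, 12]   # one image row (toy example)
runs = run_length_encode(row)
codebook = huffman_code(runs)
bitstream = "".join(codebook[r] for r in runs)
print(runs, len(bitstream), "bits")
```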
  2. Sim KS, Ting HY, Lai MA, Tso CP
    J Microsc, 2009 Jun;234(3):243-50.
    PMID: 19493101 DOI: 10.1111/j.1365-2818.2009.03167.x
    An improvement to the previously proposed Canny optimization technique for scanning electron microscope image colorization is reported. The additional process is adaptive tuning, where colour tuning is performed adaptively, based on comparing the original luminance values with calculated luminance values. The complete adaptive Canny optimization technique gives significantly better mechanical contrast on scanning electron microscope grey-scale images than do existing methods.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
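A minimal sketch of the adaptive-tuning idea described above, assuming the tuning amounts to rescaling each colourized pixel so that its computed luminance matches the original grey level; the paper's actual Canny-based optimization is not reproduced.
```python
# Minimal sketch of the adaptive-tuning idea described above: after an SEM
# grey-scale image has been pseudo-coloured, rescale each pixel so that the
# luminance computed from its RGB values matches the original grey level.
# This is an illustrative interpretation, not the paper's exact algorithm.
import numpy as np

def adaptive_luminance_tuning(original_gray, colored_rgb, eps=1e-6):
    """original_gray: HxW floats in [0, 1]; colored_rgb: HxWx3 floats."""
    weights = np.array([0.299, 0.587, 0.114])     # Rec.601 luma weights
    calc_luma = colored_rgb @ weights             # luminance of the colourized image
    gain = original_gray / (calc_luma + eps)      # per-pixel correction factor
    tuned = colored_rgb * gain[..., None]         # apply the same gain to R, G, B
    return np.clip(tuned, 0.0, 1.0)

# toy usage: a 2x2 grey image and an arbitrary colourization of it
gray = np.array([[0.2, 0.8], [0.5, 0.1]])
rgb = np.random.rand(2, 2, 3)
tuned = adaptive_luminance_tuning(gray, rgb)
print(np.abs(tuned @ np.array([0.299, 0.587, 0.114]) - gray).max())
```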
  3. Sim KS, Thong LW, Ting HY, Tso CP
    J Microsc, 2010 Feb;237(2):111-8.
    PMID: 20096041 DOI: 10.1111/j.1365-2818.2009.03325.x
    Interpolation techniques used for image magnification to obtain more useful surface details, such as morphology and mechanical contrast, usually rely on the signal information distributed around edges and areas of sharp change; this signal information can also be used to predict missing details from the sample image. However, many of these interpolation methods tend to smooth or blur image details around the edges. In the present study, a Lagrange time delay estimation interpolator is proposed; this method requires only a small filter order and has no noticeable estimation bias. Compared with the original scanning electron microscope magnification and with various other interpolation methods, the Lagrange time delay estimation interpolator is found to be more efficient, more robust and easier to execute.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
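The Lagrange fractional-delay interpolation underlying the method above can be sketched as follows; the filter order, the way the window is slid along an image row and the toy data are illustrative assumptions rather than the paper's implementation.
```python
# Minimal sketch of Lagrange fractional-delay interpolation, the basic idea
# behind the interpolator discussed above; here it is applied to a single
# image row to estimate samples between pixels.
import numpy as np

def lagrange_fd_coeffs(delay, order=3):
    """FIR coefficients of an order-N Lagrange fractional-delay filter."""
    h = np.ones(order + 1)
    for n in range(order + 1):
        for i in range(order + 1):
            if i != n:
                h[n] *= (delay - i) / (n - i)
    return h

def upsample_row(row, factor=2, order=3):
    """Insert (factor - 1) Lagrange-interpolated samples between adjacent pixels."""
    row = np.asarray(row, dtype=float)
    out = []
    for start in range(len(row) - order):
        window = row[start:start + order + 1]
        for f in range(factor):
            d = (order - 1) / 2 + f / factor      # delay centred in the window
            out.append(window @ lagrange_fd_coeffs(d, order))
    return np.array(out)

row = [10, 12, 18, 30, 31, 29, 20, 12]            # toy image row
print(upsample_row(row, factor=2))
```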
  4. Sudarshan VK, Mookiah MR, Acharya UR, Chandran V, Molinari F, Fujita H, et al.
    Comput Biol Med, 2016 Feb 1;69:97-111.
    PMID: 26761591 DOI: 10.1016/j.compbiomed.2015.12.006
    Ultrasound is an important and low cost imaging modality used to study the internal organs of the human body and blood flow through blood vessels. It uses high frequency sound waves to acquire images of internal organs. It is used to screen normal, benign and malignant tissues of various organs. Healthy and malignant tissues generate different echoes under ultrasound. Hence, it provides useful information about potential tumor tissues that can be analyzed for diagnostic purposes before therapeutic procedures. Ultrasound images are affected by speckle noise due to the air gap between the transducer probe and the body. The challenge is to design and develop robust image preprocessing, segmentation and feature extraction algorithms to locate the tumor region and to extract subtle information from the isolated tumor region for diagnosis. This information can be revealed using a scale space technique such as the Discrete Wavelet Transform (DWT). It decomposes an image into images at different scales using low pass and high pass filters. These filters help to identify details or sudden changes in intensity in the image. These changes are reflected in the wavelet coefficients. Various texture, statistical and image based features can be extracted from these coefficients. The extracted features are subjected to statistical analysis to identify the significant features that discriminate normal and malignant ultrasound images using supervised classifiers. This paper presents a review of wavelet techniques used for preprocessing, segmentation and feature extraction of breast, thyroid, ovarian and prostate cancer ultrasound images.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
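A minimal sketch of the wavelet feature extraction surveyed above: a single-level 2-D DWT followed by simple sub-band statistics. It assumes the PyWavelets package and a toy patch; the review covers many more wavelet families and feature sets.
```python
# Minimal sketch of the wavelet feature-extraction idea surveyed above: a
# one-level 2-D DWT splits the image into approximation and detail sub-bands,
# and simple statistics of the coefficients serve as texture features.
import numpy as np
import pywt

def wavelet_features(image, wavelet="db4"):
    """Return per-sub-band energy, mean absolute value and standard deviation."""
    cA, (cH, cV, cD) = pywt.dwt2(np.asarray(image, dtype=float), wavelet)
    features = {}
    for name, band in [("approx", cA), ("horiz", cH), ("vert", cV), ("diag", cD)]:
        features[f"{name}_energy"] = float(np.sum(band ** 2))
        features[f"{name}_mean"] = float(np.mean(np.abs(band)))
        features[f"{name}_std"] = float(np.std(band))
    return features

# toy usage on a synthetic 64x64 "ultrasound" patch
patch = np.random.rand(64, 64)
for key, value in wavelet_features(patch).items():
    print(key, round(value, 3))
```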
  5. Chong JWR, Khoo KS, Chew KW, Vo DN, Balakrishnan D, Banat F, et al.
    Bioresour Technol, 2023 Feb;369:128418.
    PMID: 36470491 DOI: 10.1016/j.biortech.2022.128418
    The identification of microalgae species is an important tool in scientific research and commercial application, both to prevent harmful algae blooms (HABs) and to recognize potential microalgae strains for the bioaccumulation of valuable bioactive ingredients. The aim of this study is to incorporate rapid, high-accuracy, reliable, low-cost, simple, and state-of-the-art identification methods, thus increasing the possibility of developing recognition applications that could identify both toxin-producing and valuable microalgae strains. Recently, deep learning (DL) has brought the study of microalgae species identification to a much higher level of efficiency and accuracy. Accordingly, this review paper emphasizes the significance of microalgae identification and various forms of machine learning algorithms for image classification, followed by image pre-processing techniques, feature extraction, and selection for further classification accuracy. Future prospects regarding the challenges and improvements of potential DL classification model development, application in microalgae recognition, and image capturing technologies are discussed accordingly.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  6. Chong JWR, Khoo KS, Chew KW, Ting HY, Show PL
    Biotechnol Adv, 2023;63:108095.
    PMID: 36608745 DOI: 10.1016/j.biotechadv.2023.108095
    Identification of microalgae species is of importance due to the rise of harmful algae blooms affecting both the aquatic habitat and human health. Despite this occurrence, microalgae have been identified as a green biomass and alternative source due to their promising accumulation of bioactive compounds that play a significant role in many industrial applications. Recently, microalgae species identification has been conducted through DNA analysis and various microscopy techniques such as light, scanning electron, transmission electron, and atomic force microscopy. The aforementioned procedures have encouraged researchers to consider alternate ways due to limitations such as costly validation, the need for skilled taxonomists, prolonged analysis, and low accuracy. This review highlights the potential innovations in digital microscopy, incorporating both hardware and software, that can produce reliable recognition, detection, enumeration, and real-time acquisition of microalgae species. Several steps such as image acquisition, processing, feature extraction, and selection are discussed for the purpose of generating high image quality by removing unwanted artifacts and noise from the background. Identification of microalgae species is then performed by reliable image classification through machine learning as well as deep learning algorithms such as artificial neural networks, support vector machines, and convolutional neural networks. Overall, this review provides comprehensive insights into numerous possibilities for microalgae image identification, image pre-processing, and machine learning techniques to address the challenges in developing a robust digital classification tool for the future.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  7. Abisha S, Mutawa AM, Murugappan M, Krishnan S
    PLoS One, 2023;18(4):e0284021.
    PMID: 37018344 DOI: 10.1371/journal.pone.0284021
    Different diseases are observed in vegetables, fruits, cereals, and commercial crops by farmers and agricultural experts. Nonetheless, this evaluation process is time-consuming, and initial symptoms are primarily visible only at microscopic levels, limiting the possibility of an accurate diagnosis. This paper proposes an innovative method for identifying and classifying infected brinjal leaves using Deep Convolutional Neural Networks (DCNN) and Radial Basis Feed Forward Neural Networks (RBFNN). We collected 1100 images of brinjal leaf disease caused by five different pathogens (Pseudomonas solanacearum, Cercospora solani, Alternaria melongenea, Pythium aphanidermatum, and Tobacco Mosaic Virus) and 400 images of healthy leaves from agricultural farms in India. First, the original leaf image is preprocessed by a Gaussian filter to reduce the noise and improve the quality of the image through image enhancement. A segmentation method based on expectation-maximization (EM) is then utilized to segment the leaf's diseased regions. Next, the discrete Shearlet transform is used to extract the main features of the images, such as texture, color, and structure, which are then merged to produce feature vectors. Lastly, DCNN and RBFNN are used to classify brinjal leaves based on their disease types. The DCNN achieved a mean accuracy of 93.30% (with fusion) and 76.70% (without fusion), compared to the RBFNN (82% without fusion, 87% with fusion) in classifying leaf diseases.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
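The preprocessing and EM-based segmentation stages of the pipeline above can be sketched as below, using Gaussian smoothing and a two-component Gaussian mixture fitted by expectation-maximization; the Shearlet features and the DCNN/RBFNN classifiers are omitted, and the synthetic leaf image is an assumption.
```python
# Minimal sketch of the preprocessing and EM-based segmentation steps in the
# pipeline above: Gaussian smoothing followed by a two-component Gaussian
# mixture (fitted by expectation-maximization) on pixel intensities.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.mixture import GaussianMixture

def em_segment(gray_image, sigma=1.5, n_regions=2):
    """Smooth the image, then label each pixel by its most likely mixture component."""
    smoothed = gaussian_filter(np.asarray(gray_image, dtype=float), sigma=sigma)
    pixels = smoothed.reshape(-1, 1)                   # one intensity feature per pixel
    gmm = GaussianMixture(n_components=n_regions, random_state=0).fit(pixels)
    labels = gmm.predict(pixels).reshape(smoothed.shape)
    return smoothed, labels

# toy usage: a synthetic leaf image with a brighter "lesion" patch
leaf = np.full((64, 64), 0.3) + 0.02 * np.random.randn(64, 64)
leaf[20:35, 25:45] += 0.4
_, mask = em_segment(leaf)
print("pixels per region:", np.bincount(mask.ravel()))
```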
  8. Al-Masni MA, Lee S, Al-Shamiri AK, Gho SM, Choi YH, Kim DH
    Comput Biol Med, 2023 Feb;153:106553.
    PMID: 36641933 DOI: 10.1016/j.compbiomed.2023.106553
    Patient movement during a Magnetic Resonance Imaging (MRI) scan can cause severe degradation of image quality. In Susceptibility Weighted Imaging (SWI), several echoes are typically measured during a single repetition period, where the earliest echoes show less contrast between various tissues, while the later echoes are more susceptible to artifacts and signal dropout. In this paper, we propose a knowledge interaction paradigm that jointly learns feature details from multiple distorted echoes by sharing their knowledge with unified training parameters, thereby simultaneously reducing motion artifacts in all echoes. This is accomplished by developing a new scheme built on a Single Encoder with Multiple Decoders (SEMD), which ensures that the generated features are not only fused but also learned together. We call the proposed method Knowledge Interaction Learning between Multi-Echo data (KIL-ME-based SEMD). The proposed KIL-ME-based SEMD allows information to be shared and the correlations between the multiple echoes to be exploited. The main purpose of this work is to correct the motion artifacts and maintain the image quality and structural details of all motion-corrupted echoes towards generating high-resolution susceptibility enhanced contrast images, i.e., SWI, using a weighted average of multi-echo motion-corrected acquisitions. We also compare various potential strategies that might be used to address the problem of reducing artifacts in multi-echo data. The experimental results demonstrate the feasibility and effectiveness of the proposed method, reducing the severity of motion artifacts and improving the overall clinical image quality of all echoes and their associated SWI maps. Significant improvement in image quality is observed using both motion-simulated test data and actual volunteer data with various motion severities. Ultimately, by enhancing the overall image quality, the proposed network can increase the effectiveness of physicians' capability to evaluate and correctly diagnose brain MR images.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
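A minimal sketch of the single-encoder, multiple-decoder layout (SEMD) described above, written with PyTorch as an assumed framework; layer sizes, the absence of skip connections and the toy echo tensors are simplifying assumptions, not the authors' network.
```python
# Minimal sketch of the Single Encoder with Multiple Decoders (SEMD) idea:
# one shared encoder extracts features from each motion-corrupted echo, and a
# separate lightweight decoder reconstructs each echo; all parts train jointly.
import torch
import torch.nn as nn

class SEMDSketch(nn.Module):
    def __init__(self, n_echoes=3, channels=16):
        super().__init__()
        # shared encoder: downsample once and extract features
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # one decoder per echo, all trained on features from the shared encoder
        self.decoders = nn.ModuleList(
            nn.Sequential(
                nn.ConvTranspose2d(channels, channels, kernel_size=2, stride=2),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            )
            for _ in range(n_echoes)
        )

    def forward(self, echoes):
        # echoes: list of (batch, 1, H, W) tensors, one per echo
        return [dec(self.encoder(x)) for x, dec in zip(echoes, self.decoders)]

# toy usage: three 64x64 echoes from one slice
model = SEMDSketch(n_echoes=3)
outputs = model([torch.randn(2, 1, 64, 64) for _ in range(3)])
print([tuple(o.shape) for o in outputs])
```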
  9. Aly CA, Abas FS, Ann GH
    Sci Prog, 2021;104(2):368504211005480.
    PMID: 33913378 DOI: 10.1177/00368504211005480
    INTRODUCTION: Action recognition is a challenging time series classification task that has received much attention in the recent past due to its importance in critical applications, such as surveillance, visual behavior study, topic discovery, security, and content retrieval.

    OBJECTIVES: The main objective of the research is to develop robust and high-performance human action recognition techniques. A combination of local and holistic feature extraction methods is used, analyzing which features are most effective to extract in order to reach the objective, followed by the use of simple and high-performance machine learning algorithms.

    METHODS: This paper presents three robust action recognition techniques based on a series of image analysis methods to detect activities in different scenes. The general scheme architecture consists of shot boundary detection, shot frame rate re-sampling, and compact feature vector extraction. This process is achieved by emphasizing variations and extracting strong patterns in feature vectors before classification.

    RESULTS: The proposed schemes are tested on datasets with cluttered backgrounds, low- or high-resolution videos, different viewpoints, and different camera motion conditions, namely, the Hollywood-2, KTH, UCF11 (YouTube actions), and Weizmann datasets. The proposed schemes produced highly accurate video analysis results compared to those of other works on these four widely used datasets. The First, Second, and Third Schemes provide recognition accuracies of 57.8%, 73.6%, and 52.0% on Hollywood2; 94.5%, 97.0%, and 59.3% on KTH; 94.5%, 95.6%, and 94.2% on UCF11; and 98.9%, 97.8%, and 100% on Weizmann, respectively.

    CONCLUSION: Each of the proposed schemes provides high recognition accuracy compared to other state-of-the-art methods, and the Second Scheme in particular gives excellent results, comparable to other benchmarked approaches.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
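The shot-boundary-detection stage of the general scheme above can be sketched with a simple histogram-difference test; the threshold, bin count and toy frames are illustrative assumptions, and the later feature-extraction and classification stages are not reproduced.
```python
# Minimal sketch of shot boundary detection: declare a cut wherever the
# histogram difference between consecutive frames exceeds a threshold.
import numpy as np

def shot_boundaries(frames, bins=32, threshold=0.4):
    """frames: list of 2-D grey-scale arrays scaled to [0, 1]."""
    cuts = []
    prev_hist = None
    for idx, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0))
        hist = hist / hist.sum()                       # normalize to a distribution
        if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
            cuts.append(idx)                           # L1 distance signals a cut
        prev_hist = hist
    return cuts

# toy usage: a "video" whose content changes abruptly at frame 5
frames = [np.full((32, 32), 0.2) for _ in range(5)] + \
         [np.full((32, 32), 0.8) for _ in range(5)]
print(shot_boundaries(frames))   # -> [5]
```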
  10. Oyelade ON, Ezugwu AE, Almutairi MS, Saha AK, Abualigah L, Chiroma H
    Sci Rep, 2022 Apr 13;12(1):6166.
    PMID: 35418566 DOI: 10.1038/s41598-022-09929-9
    Deep learning (DL) models are becoming pervasive and applicable to computer vision, image processing, and synthesis problems. The performance of these models is often improved through architectural configuration, tweaks, the use of enormous training data, and skillful selection of hyperparameters. The application of deep learning models to medical image processing has yielded interesting performance, capable of correctly detecting abnormalities in medical digital images, making them surpass human physicians. However, advancing research in this domain largely relies on the availability of training datasets. These datasets are sometimes not publicly accessible, insufficient for training, and may also be characterized by a class imbalance among samples. As a result, inadequate training samples and difficulty in accessing new datasets for training deep learning models limit performance and research into new domains. Hence, generative adversarial networks (GANs) have been proposed to mediate this gap by synthesizing data similar to real sample images. However, we observed that benchmark datasets with regions of interest (ROIs) for characterizing abnormalities in breast cancer using digital mammography do not contain sufficient data with a fair distribution of all cases of abnormalities. For instance, the architectural distortion and breast asymmetry in digital mammograms are sparsely distributed across most publicly available datasets. This paper proposes a GAN model, named ROImammoGAN, which synthesizes ROI-based digital mammograms. Our approach involves the design of a GAN model consisting of both a generator and a discriminator to learn a hierarchy of representations for abnormalities in digital mammograms. Attention is given to architectural distortion, asymmetry, mass, and microcalcification abnormalities so that training distinctively learns the features of each abnormality and generates sufficient images for each category. The proposed GAN model was applied to MIAS datasets, and the performance evaluation yielded a competitive accuracy for the synthesized samples. In addition, the quality of the images generated was also evaluated using PSNR, SSIM, FSIM, BRISQUE, PQUE, NIQUE, FID, and geometry scores. The results showed that ROImammoGAN performed competitively with state-of-the-art GANs. The outcome of this study is a model for augmenting CNN models with ROI-centric image samples for the characterization of abnormalities in breast images.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
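A minimal sketch of the generator/discriminator pairing on which a GAN such as ROImammoGAN is built, written with PyTorch as an assumed framework; the 64x64 patch size, layer sizes and the lack of conditioning on abnormality type are simplifying assumptions, and no training loop is shown.
```python
# Minimal sketch of a GAN pairing: the generator maps a noise vector to a
# synthetic ROI patch, the discriminator scores patches as real or synthetic.
import torch
import torch.nn as nn

latent_dim = 100

generator = nn.Sequential(                 # noise (B, 100, 1, 1) -> patch (B, 1, 64, 64)
    nn.ConvTranspose2d(latent_dim, 128, kernel_size=4, stride=1),      # 4x4
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=4),              # 16x16
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(64, 1, kernel_size=4, stride=4),                # 64x64
    nn.Tanh(),
)

discriminator = nn.Sequential(             # patch (B, 1, 64, 64) -> realness score (B, 1)
    nn.Conv2d(1, 64, kernel_size=4, stride=4),                         # 16x16
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 128, kernel_size=4, stride=4),                       # 4x4
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(128, 1, kernel_size=4, stride=1),                        # 1x1
    nn.Flatten(),
    nn.Sigmoid(),
)

noise = torch.randn(8, latent_dim, 1, 1)
fake_patches = generator(noise)
scores = discriminator(fake_patches)
print(fake_patches.shape, scores.shape)    # (8, 1, 64, 64), (8, 1)
```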
  11. Shamim S, Awan MJ, Mohd Zain A, Naseem U, Mohammed MA, Garcia-Zapirain B
    J Healthc Eng, 2022;2022:6566982.
    PMID: 35422980 DOI: 10.1155/2022/6566982
    The coronavirus (COVID-19) pandemic has had a terrible impact on human lives globally, with far-reaching consequences for the health and well-being of many people around the world. Statistically, 305.9 million people worldwide tested positive for COVID-19, and 5.48 million people died due to COVID-19 up to 10 January 2022. CT scans can be used as an alternative to time-consuming RT-PCR testing for COVID-19. This research work proposes a segmentation approach for identifying ground glass opacity (GGO), the region of interest caused by coronavirus in CT images, with a modified structure of the Unet model used to classify the region of interest at the pixel level. The problem with segmentation is that the GGO often appears indistinguishable from healthy lung tissue in the initial stages of COVID-19; to cope with this, an increased set of weights is used in the contracting and expanding Unet paths, and an improved convolutional module is added to establish the connection between the encoder and decoder pipeline. This markedly improves the capacity to segment the GGO in the case of COVID-19, and the proposed model is referred to as "convUnet." The experiment was performed on the Medseg1 dataset, and the addition of a set of weights at each layer of the model and the modification of the connecting module in Unet led to an improvement in overall segmentation results. The quantitative results obtained for accuracy, recall, precision, dice coefficient, F1-score, and IoU were 93.29%, 93.01%, 93.67%, 92.46%, 93.34%, and 86.96%, respectively, which is better than those obtained using Unet and other state-of-the-art models. Therefore, this segmentation approach proved to be more accurate, fast, and reliable in helping doctors to diagnose COVID-19 quickly and efficiently.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
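Two of the evaluation metrics reported above, the Dice coefficient and IoU, can be computed for a binary GGO mask as in the sketch below; the convUnet model itself is not reproduced, and the toy masks are assumptions.
```python
# Minimal sketch of the Dice coefficient and IoU for a binary segmentation mask.
import numpy as np

def dice_and_iou(pred_mask, true_mask, eps=1e-7):
    """Both inputs are boolean arrays of the same shape (True = GGO pixel)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
    iou = (intersection + eps) / (union + eps)
    return dice, iou

# toy usage with two overlapping square masks
pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
true = np.zeros((64, 64), dtype=bool); true[15:45, 15:45] = True
print("Dice %.3f, IoU %.3f" % dice_and_iou(pred, true))
```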
  12. Liu H, Huang J, Li Q, Guan X, Tseng M
    Artif Intell Med, 2024 Feb;148:102776.
    PMID: 38325925 DOI: 10.1016/j.artmed.2024.102776
    This study proposes a deep convolutional neural network for the automatic segmentation of glioblastoma brain tumors, aiming at replacing the manual segmentation method that is both time-consuming and labor-intensive. There are many challenges for automatic segmentation in finely segmenting sub-regions from multi-sequence magnetic resonance images because of the complexity and variability of glioblastomas, such as the loss of boundary information, misclassified regions, and sub-region size. To overcome these challenges, this study introduces a spatial pyramid module and an attention mechanism into the automatic segmentation algorithm, which focuses on multi-scale spatial details and context information. The proposed method has been tested on the public benchmark BraTS 2018, BraTS 2019, BraTS 2020 and BraTS 2021 datasets. The Dice scores on the enhanced tumor, whole tumor, and tumor core were 79.90%, 89.63%, and 85.89% on BraTS 2018, 77.14%, 89.58%, and 83.33% on BraTS 2019, 77.80%, 90.04%, and 83.18% on BraTS 2020, and 83.48%, 90.70%, and 88.94% on BraTS 2021, respectively, offering performance on par with that of state-of-the-art methods with only 1.90 M parameters. In addition, our approach significantly reduced the requirements for experimental equipment, and the average time taken to segment one case was only 1.48 s; these two benefits render the proposed network highly competitive for clinical practice.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
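A minimal sketch of a spatial-pyramid-style module of the kind mentioned above, with PyTorch as an assumed framework: features are pooled at several scales, reduced, upsampled and concatenated so later layers see multi-scale context. Pool sizes and channel counts are illustrative, and the attention mechanism is omitted.
```python
# Minimal sketch of a spatial-pyramid-style module: pool features at several
# scales, project with 1x1 convolutions, upsample and concatenate.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPyramidSketch(nn.Module):
    def __init__(self, channels, pool_sizes=(1, 2, 4)):
        super().__init__()
        self.pool_sizes = pool_sizes
        # a 1x1 conv per scale keeps the concatenated width manageable
        self.reduce = nn.ModuleList(
            nn.Conv2d(channels, channels // len(pool_sizes), kernel_size=1)
            for _ in pool_sizes
        )

    def forward(self, x):
        h, w = x.shape[-2:]
        branches = [x]
        for size, conv in zip(self.pool_sizes, self.reduce):
            pooled = F.adaptive_avg_pool2d(x, output_size=size)   # coarse context
            branches.append(F.interpolate(conv(pooled), size=(h, w),
                                          mode="bilinear", align_corners=False))
        return torch.cat(branches, dim=1)

features = torch.randn(1, 32, 40, 40)             # toy MRI feature map
print(SpatialPyramidSketch(32)(features).shape)   # (1, 32 + 3*10, 40, 40)
```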
  13. Khan MB, Lee XY, Nisar H, Ng CA, Yeap KH, Malik AS
    Adv Exp Med Biol, 2015;823:227-48.
    PMID: 25381111 DOI: 10.1007/978-3-319-10984-8_13
    The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). These tests are conducted in the laboratory and take many hours to yield a final measurement. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge wastewater treatment. In the latter part, additional procedures such as z-stacking and image stitching, not previously used in the context of activated sludge, are introduced for wastewater image preprocessing. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and their correlation with the monitoring and prediction of activated sludge are discussed. It is observed that image analysis can play a very useful role in monitoring activated sludge wastewater treatment plants.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  14. Jusman Y, Ng SC, Abu Osman NA
    ScientificWorldJournal, 2014;2014:810368.
    PMID: 24955419 DOI: 10.1155/2014/810368
    The advent of medical image digitization has led to image processing and computer-aided diagnosis systems in numerous clinical applications. These technologies could be used to automatically diagnose patients or to serve as a second opinion for pathologists. This paper briefly reviews cervical screening techniques and their advantages and disadvantages. The digital data produced by the screening techniques are used as input to the computer screening system in place of expert analysis. The four stages of the computer system (enhancement, feature extraction, feature selection, and classification) are reviewed in detail. Computer systems based on cytology data and electromagnetic spectra data achieved better accuracy than those based on other data.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  15. Arigbabu OA, Ahmad SM, Adnan WA, Yussof S, Iranmanesh V, Malallah FL
    ScientificWorldJournal, 2014;2014:460973.
    PMID: 25121120 DOI: 10.1155/2014/460973
    Soft biometrics can be used as a prescreening filter, either by using a single trait or by combining several traits, to aid the performance of recognition systems in an unobtrusive way. In many practical visual surveillance scenarios, facial information is difficult to construct effectively due to several varying challenges. However, from a distance the visual appearance of a subject can be efficiently inferred, providing the possibility of estimating body-related information. This paper presents an approach for estimating body-related soft biometrics; specifically, we propose a new approach based on body measurements and an artificial neural network for predicting the body weight of subjects, and incorporate an existing single-view metrology technique for height estimation in videos with low frame rates. Our evaluation on 1120 frame sets of 80 subjects from a newly compiled dataset shows that the mentioned soft biometric information of human subjects can be adequately predicted from sets of frames.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  16. Sim KS, Tan YY, Lai MA, Tso CP, Lim WK
    J Microsc, 2010 Apr 1;238(1):44-56.
    PMID: 20384837 DOI: 10.1111/j.1365-2818.2009.03328.x
    An exponential contrast stretching (ECS) technique is developed to reduce the charging effects on scanning electron microscope images. Compared to some of the conventional histogram equalization methods, such as bi-histogram equalization and recursive mean-separate histogram equalization, the proposed ECS method yields better image compensation. Diode sample chips with insulating and conductive surfaces are used as test samples to evaluate the efficiency of the developed algorithm. The algorithm is implemented in software with a frame grabber card, forming the front-end video capture element.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
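An exponential grey-level mapping of the general kind named above (ECS) can be sketched as follows; the exact transfer function and parameter choice in the paper may differ, so alpha and the toy patch are assumptions.
```python
# Minimal sketch of an exponential contrast-stretching style mapping: normalized
# intensities pass through a monotone exponential transfer curve that
# redistributes grey levels. Illustrative only; the paper's exact curve may differ.
import numpy as np

def exponential_contrast_stretch(image, alpha=2.0):
    """image: uint8 array; returns a uint8 array with exponentially remapped levels."""
    x = np.asarray(image, dtype=float) / 255.0
    y = (np.exp(alpha * x) - 1.0) / (np.exp(alpha) - 1.0)   # monotone map on [0, 1]
    return np.round(255.0 * y).astype(np.uint8)

# toy usage: a bright, low-contrast SEM-like patch
patch = np.random.randint(180, 230, size=(64, 64), dtype=np.uint8)
stretched = exponential_contrast_stretch(patch)
print(patch.min(), patch.max(), "->", stretched.min(), stretched.max())
```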
  17. Noor NM, Rijal OM, Yunus A, Abu-Bakar SA
    Comput Med Imaging Graph, 2010 Mar;34(2):160-6.
    PMID: 19758785 DOI: 10.1016/j.compmedimag.2009.08.005
    This paper presents a statistical method for the detection of lobar pneumonia using digitized chest X-ray films. Each region of interest was represented by a vector of wavelet texture measures, which was then multiplied by the orthogonal matrix Q(2). The first two elements of the transformed vectors were shown to have a bivariate normal distribution. Misclassification probabilities were estimated using probability ellipsoids and discriminant functions. The results of this study recommend detecting pneumonia by constructing probability ellipsoids or discriminant functions using the maximum energy and maximum column sum energy texture measures, for which misclassification probabilities were less than 0.15.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
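One plausible reading of the two texture measures named above (maximum energy and maximum column-sum energy), computed from a one-level 2-D DWT of a region of interest; the exact definitions, the orthogonal matrix Q(2) and the discriminant analysis are not reproduced, and PyWavelets is an assumed dependency.
```python
# Minimal sketch of two wavelet texture measures, under an assumed reading of
# "maximum energy" and "maximum column sum energy" over the detail sub-bands.
import numpy as np
import pywt

def texture_measures(roi, wavelet="haar"):
    cA, (cH, cV, cD) = pywt.dwt2(np.asarray(roi, dtype=float), wavelet)
    detail_bands = [cH, cV, cD]
    max_energy = max(float(np.sum(b ** 2)) for b in detail_bands)
    max_col_sum_energy = max(float(np.max(np.sum(b ** 2, axis=0))) for b in detail_bands)
    return max_energy, max_col_sum_energy

roi = np.random.rand(128, 128)          # toy lung-field region of interest
print(texture_measures(roi))
```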
  18. Reza AW, Eswaran C, Hati S
    J Med Syst, 2008 Apr;32(2):147-55.
    PMID: 18461818
    Blood vessel detection in retinal images is a fundamental step for feature extraction and interpretation of image content. This paper proposes a novel computational paradigm for detection of blood vessels in fundus images based on RGB components and quadtree decomposition. The proposed algorithm employs median filtering, quadtree decomposition, post filtration of detected edges, and morphological reconstruction on retinal images. The preprocessing algorithm helps enhance the image to make it better suited for subsequent analysis and is a vital phase before decomposing the image. Quadtree decomposition provides information on the different types of blocks and the intensities of the pixels within the blocks. The post filtration and morphological reconstruction assist in filling the edges of the blood vessels and removing false alarms and unwanted objects from the background, while restoring the original shape of the connected vessels. The proposed method, which makes use of the three color components (RGB), is tested on various images from a publicly available database. The results are compared with those obtained by other known methods as well as with the results obtained by using the proposed method with the green color component only. It is shown that the proposed method can yield true positive fraction values as high as 0.77, which are comparable to or somewhat higher than the results obtained by other known methods. It is also shown that the effect of noise can be reduced if the proposed method is implemented using only the green color component.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
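Two early stages of the pipeline above, median filtering of the green channel and a quadtree decomposition, can be sketched as below; the variance-based homogeneity test, threshold and toy image are assumptions, and the edge post-filtration and morphological reconstruction stages are omitted.
```python
# Minimal sketch of median filtering plus a quadtree decomposition that splits
# blocks whose intensities are not homogeneous (variance-based test assumed).
import numpy as np
from scipy.ndimage import median_filter

def quadtree_blocks(img, x=0, y=0, size=None, threshold=0.01, min_size=8):
    """Return (x, y, size) blocks; homogeneous blocks are kept, others are split."""
    if size is None:
        size = img.shape[0]                      # assumes a square, power-of-two image
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= threshold:
        return [(x, y, size)]
    half = size // 2
    blocks = []
    for dy in (0, half):
        for dx in (0, half):
            blocks += quadtree_blocks(img, x + dx, y + dy, half, threshold, min_size)
    return blocks

rgb = np.random.rand(128, 128, 3)                # toy fundus image
green = median_filter(rgb[:, :, 1], size=3)      # vessels contrast best in the green channel
print(len(quadtree_blocks(green)), "quadtree blocks")
```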
  19. Logeswaran R, Eswaran C
    J Med Syst, 2006 Apr;30(2):133-8.
    PMID: 16705998
    Many medical examinations involve acquisition of a large series of slice images for 3D reconstruction of the organ of interest. With the paperless hospital concept and telemedicine, there is very heavy utilization of limited electronic storage and transmission bandwidth. This paper proposes model-based compression to reduce the load on such resources, as well as to aid diagnosis through the 3D reconstruction of the structures of interest, for images acquired by various modalities, such as MRI, ultrasound, CT, and PET, and stored in the DICOM file format. An example implementation for the biliary tract in MRCP images is illustrated in the paper. Significant compression gains may be derived from the proposed method, and a suitable mixture of models and raw images would enhance patient medical history archives, as the models may be stored in the DICOM file format used in most medical archiving systems.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  20. Abu A, Susan LL, Sidhu AS, Dhillon SK
    BMC Bioinformatics, 2013;14:48.
    PMID: 23398696 DOI: 10.1186/1471-2105-14-48
    Digitised monogenean images are usually stored in file system directories in an unstructured manner. In this paper we propose a semantic representation of these images in the form of a Monogenean Haptoral Bar Image (MHBI) ontology, in which images are annotated with taxonomic classification, diagnostic hard part and image properties. The data we used are of monogenean species found in fish; we therefore built a simple Fish ontology to demonstrate how the host (fish) ontology can be linked to the MHBI ontology. This enables linking of information from the monogenean ontology to the host species found in the fish ontology without changing the underlying schema of either ontology.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*