Displaying all 7 publications

  1. Muniyandi RC, Zin AM
    Pak J Biol Sci, 2011 Dec 15;14(24):1100-8.
    PMID: 22335049
    Ligand-receptor networks of TGF-beta play an essential role in transmitting a wide range of extracellular signals that affect many cellular processes, such as cell growth. However, modeling these networks with conventional approaches such as ordinary differential equations does not take into account the spatial structure and stochastic behavior of the processes involved in these networks. Membrane computing, as an alternative approach, provides a spatial structure for molecular computation in which processes are evaluated in a non-deterministic and maximally parallel way. This study evaluates the membrane computing model of the ligand-receptor network of TGF-beta with a model checking approach. The results show that the membrane computing model preserves the behaviors and properties of the ligand-receptor network of TGF-beta. This reinforces that membrane computing is capable of analyzing processes and behaviors in the hierarchical structure of the cell, such as the ligand-receptor network of TGF-beta, better than the deterministic approach of conventional mathematical models.
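    The maximally parallel, non-deterministic evaluation mentioned above can be illustrated with a minimal sketch of a single-membrane rewriting step; the objects and rules below are hypothetical and not taken from the paper.

      import random
      from collections import Counter

      # Hypothetical objects and rewrite rules for one membrane region:
      # each rule consumes a multiset of objects (lhs) and produces another (rhs).
      RULES = [
          (Counter(ligand=1, receptor=1), Counter(complex=1)),      # binding
          (Counter(complex=1), Counter(ligand=1, receptor=1)),      # dissociation
      ]

      def max_parallel_step(state, rules):
          """One maximally parallel step: rules are chosen non-deterministically
          and applied until no rule can fire on the remaining objects; products
          become available only after the step completes."""
          remaining = state.copy()
          produced = Counter()
          while True:
              applicable = [(lhs, rhs) for lhs, rhs in rules
                            if all(remaining[obj] >= n for obj, n in lhs.items())]
              if not applicable:
                  break
              lhs, rhs = random.choice(applicable)   # non-deterministic choice
              remaining -= lhs
              produced += rhs
          return remaining + produced

      print(max_parallel_step(Counter(ligand=5, receptor=3), RULES))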
  2. Muniyandi RC, Zin AM, Sanders JW
    Biosystems, 2013 Dec;114(3):219-26.
    PMID: 24120990 DOI: 10.1016/j.biosystems.2013.09.008
    This paper presents a method to convert the deterministic, continuous representation of a biological system by ordinary differential equations into a non-deterministic, discrete membrane computation. The dynamics of the membrane computation are governed by rewrite rules operating at certain rates. This has the advantage of applying accurately to small systems and of expressing rates of change that are determined locally, by region, but not necessarily globally. Such spatial information augments the standard differentiable approach to provide a more realistic model. A biological case study of the ligand-receptor network of the protein TGF-β is used to validate the effectiveness of the conversion method. It demonstrates the sense in which the behaviours and properties of the system are better preserved in the membrane computing model, suggesting that the proposed conversion method may prove useful for biological systems in particular.
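    Rewrite rules operating at certain rates, as described above, are commonly executed with a Gillespie-style stochastic simulation; the sketch below shows that idea under assumed species, rules, and rate constants, and is not the authors' conversion procedure.

      import random
      from collections import Counter

      # Hypothetical rewrite rules with rate constants (not the paper's values):
      # (lhs multiset, rhs multiset, rate constant)
      RULES = [
          (Counter(L=1, R=1), Counter(C=1), 0.01),       # ligand + receptor -> complex
          (Counter(C=1), Counter(L=1, R=1), 0.1),        # complex -> ligand + receptor
      ]

      def propensity(state, lhs, k):
          """Mass-action propensity computed from the local object counts."""
          a = k
          for obj, n in lhs.items():
              for i in range(n):
                  a *= max(state[obj] - i, 0)
          return a

      def gillespie(state, rules, t_end):
          t, state = 0.0, state.copy()
          while t < t_end:
              props = [propensity(state, lhs, k) for lhs, _, k in rules]
              total = sum(props)
              if total == 0:
                  break
              t += random.expovariate(total)     # time until the next rule fires
              r = random.uniform(0, total)       # choose which rule fires
              for (lhs, rhs, _), a in zip(rules, props):
                  if r < a:
                      state = state - lhs + rhs
                      break
                  r -= a
          return state

      print(gillespie(Counter(L=100, R=80), RULES, t_end=10.0))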
  3. Usman OL, Muniyandi RC, Omar K, Mohamad M
    PLoS One, 2021;16(2):e0245579.
    PMID: 33630876 DOI: 10.1371/journal.pone.0245579
    Achieving biologically interpretable neural-biomarkers and features from neuroimaging datasets is a challenging task in an MRI-based dyslexia study. This challenge becomes more pronounced when the needed MRI datasets are collected from multiple heterogeneous sources with inconsistent scanner settings. This study presents a method for improving the biological interpretation of dyslexia's neural-biomarkers from MRI datasets sourced from publicly available open databases. The proposed system used a modified histogram normalization (MHN) method to improve dyslexia neural-biomarker interpretations by mapping the pixel intensities of low-quality input neuroimages to the range between the low-intensity region of interest (ROIlow) and the high-intensity region of interest (ROIhigh) of the high-quality image. This was done after initial image smoothing using a Gaussian filter with an isotropic kernel of size 4 mm. The performance of the proposed smoothing and normalization methods was evaluated in three image post-processing experiments: ROI segmentation, gray matter (GM) tissue volume estimation, and deep learning (DL) classification using the Computational Anatomy Toolbox (CAT12) and pre-trained models in a MATLAB working environment. The three experiments were preceded by pre-processing tasks such as image resizing, labelling, patching, and non-rigid registration. Our results showed that the best smoothing was achieved at a scale value of σ = 1.25, with a 0.9% increment in the peak signal-to-noise ratio (PSNR). Results from the three image post-processing experiments confirmed the efficacy of the proposed methods. Evidence from our analysis showed that the proposed MHN and Gaussian smoothing methods can improve the comparability of image features and neural-biomarkers of dyslexia, with a statistically significant, high Dice similarity coefficient (DSC) index, a low mean square error (MSE), and improved tissue volume estimations. After 10 repeated 10-fold cross-validations, the highest accuracy achieved by the DL models was 94.7% at a 95% confidence interval (CI) level. Finally, our findings confirmed that the proposed MHN method significantly outperformed the state-of-the-art histogram-matching normalization method.
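    A minimal sketch of the smoothing-then-normalization idea is shown below: Gaussian smoothing at σ = 1.25 followed by mapping slice intensities into the [ROIlow, ROIhigh] range of a high-quality reference. The linear rescaling is an assumption for illustration; the paper defines its own MHN mapping.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def smooth_and_normalize(image, ref_roi_low, ref_roi_high, sigma=1.25):
          """Gaussian-smooth a low-quality slice and map its intensities into the
          [ROIlow, ROIhigh] range taken from a high-quality reference image.
          The linear mapping below is an assumption, not the paper's MHN formula."""
          smoothed = gaussian_filter(image.astype(np.float64), sigma=sigma)
          lo, hi = smoothed.min(), smoothed.max()
          if hi == lo:                            # flat image: nothing to rescale
              return np.full_like(smoothed, ref_roi_low)
          scaled = (smoothed - lo) / (hi - lo)
          return ref_roi_low + scaled * (ref_roi_high - ref_roi_low)

      # Example on a synthetic slice with hypothetical reference ROI intensities.
      slice_ = np.random.rand(128, 128) * 255
      out = smooth_and_normalize(slice_, ref_roi_low=40.0, ref_roi_high=200.0)
      print(out.min(), out.max())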
  4. Rahman MA, Muniyandi RC, Albashish D, Rahman MM, Usman OL
    PeerJ Comput Sci, 2021;7:e344.
    PMID: 33816995 DOI: 10.7717/peerj-cs.344
    Artificial neural networks (ANNs) perform well in real-world classification problems. In this paper, a robust classification model using an ANN was constructed to enhance the accuracy of breast cancer classification. The Taguchi method was used to determine a suitable number of neurons for the single hidden layer of the ANN, since selecting a suitable number of neurons helps to avoid the overfitting that degrades an ANN's classification performance. With this, a robust classification model was built for breast cancer classification. Based on the Taguchi method results, the suitable number of neurons selected for the hidden layer in this study is 15, which was used for training the proposed ANN model. The developed model was benchmarked on the Wisconsin Diagnostic Breast Cancer dataset, popularly known as the UCI dataset. Finally, the proposed model was compared with seven other existing classification models and achieved the best breast cancer classification accuracy, at 98.8%, confirming that the proposed model significantly improves performance.
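    A minimal sketch of the resulting architecture, a single hidden layer of 15 neurons trained on the Wisconsin Diagnostic Breast Cancer dataset, is given below using scikit-learn; the Taguchi design-of-experiments step is not reproduced, and the remaining hyperparameters are assumptions rather than the paper's settings.

      from sklearn.datasets import load_breast_cancer
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import cross_val_score

      # Wisconsin Diagnostic Breast Cancer dataset as distributed with scikit-learn.
      X, y = load_breast_cancer(return_X_y=True)

      # Single hidden layer with 15 neurons, as reported in the abstract; the
      # other hyperparameters below are illustrative assumptions.
      model = make_pipeline(
          StandardScaler(),
          MLPClassifier(hidden_layer_sizes=(15,), max_iter=2000, random_state=0),
      )

      scores = cross_val_score(model, X, y, cv=10)
      print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")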
  5. Rahman MM, Usman OL, Muniyandi RC, Sahran S, Mohamed S, Razak RA
    Brain Sci, 2020 Dec 07;10(12).
    PMID: 33297436 DOI: 10.3390/brainsci10120949
    Autism Spectrum Disorder (ASD), according to the American Psychiatric Association's DSM-5, is a neurodevelopmental disorder characterized by deficits in social communication and social interaction together with restricted and repetitive behaviors. Children with ASD have difficulties in joint attention and social reciprocity and in using non-verbal and verbal behavior for communication. Due to these deficits, children with autism are often socially isolated. Researchers have emphasized the importance of early identification and early intervention to improve the level of functioning in language, communication, and well-being of children with autism. However, because of limited local assessment tools to diagnose these children, limited speech-language therapy services in rural areas, and similar constraints, these children do not get the rehabilitation they need until they enter compulsory schooling at the age of seven. Hence, efficient approaches towards early identification and intervention through speedy diagnostic procedures for ASD are required. In recent years, advanced technologies like machine learning have been used to analyze and investigate ASD to improve diagnostic accuracy, time, and quality without added complexity. These machine learning methods include artificial neural networks, support vector machines, Apriori algorithms, and decision trees, most of which have been applied to datasets connected with autism to construct predictive models. Meanwhile, the selection of features remains an essential task before developing a predictive model for ASD classification. This review investigates and analyzes up-to-date studies on machine learning methods for feature selection and classification of ASD. We recommend methods to speed up the execution of machine learning when processing complex data for conceptualization and implementation in ASD diagnostic research. This study can significantly benefit future research in autism that uses a machine learning approach for feature selection, classification, and the handling of imbalanced data.
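    The feature-selection-then-classification workflow discussed in the review can be sketched as follows; the synthetic data, the number of selected features, and the SVM classifier are illustrative assumptions, not recommendations drawn from the paper.

      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in for an ASD screening dataset (hypothetical shape).
      X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                                 random_state=0)

      # Select the most discriminative features first, then classify; k=10 and
      # the RBF-kernel SVM are illustrative choices.
      model = make_pipeline(
          StandardScaler(),
          SelectKBest(f_classif, k=10),
          SVC(kernel="rbf"),
      )

      print("10-fold CV accuracy:", cross_val_score(model, X, y, cv=10).mean())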
  6. Al-Jumaili AHA, Muniyandi RC, Hasan MK, Paw JKS, Singh MJ
    Sensors (Basel), 2023 Mar 08;23(6).
    PMID: 36991663 DOI: 10.3390/s23062952
    Traditional parallel computing for power management systems faces key challenges such as execution time, computational complexity, and efficiency, including processing time and delays in power system condition monitoring, particularly when consumer power consumption, weather data, and power generation data are mined centrally for detection, prediction, and diagnosis. Due to these constraints, data management has become a critical research consideration and bottleneck. To cope with them, cloud computing-based methodologies have been introduced for managing data efficiently in power management systems. This paper reviews cloud computing architectures that can meet multi-level real-time requirements to improve monitoring and performance, designed for different power system monitoring scenarios. Cloud computing solutions are then discussed against the background of big data, and emerging parallel programming models such as Hadoop, Spark, and Storm are briefly described to analyze their advances, constraints, and innovations. Key performance aspects of cloud computing applications, such as core data sampling, modeling, and analyzing the competitiveness of big data, were modeled by applying related hypotheses. Finally, the paper introduces a new design concept based on cloud computing and offers recommendations on cloud computing infrastructure and on methods for managing real-time big data in power management systems that address the data mining challenges.
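    As a rough illustration of the parallel programming models mentioned above, the sketch below aggregates consumer power consumption with Spark; the file path and column names (meter_id, timestamp, kwh) are assumptions, not drawn from the paper.

      from pyspark.sql import SparkSession, functions as F

      # Minimal Spark job aggregating consumer power consumption readings;
      # the CSV path and column names are hypothetical.
      spark = SparkSession.builder.appName("power-consumption-monitoring").getOrCreate()

      readings = spark.read.csv("hdfs:///power/consumption/*.csv",
                                header=True, inferSchema=True)

      # Hourly average consumption per meter, computed in parallel across the cluster.
      hourly = (readings
                .withColumn("hour", F.date_trunc("hour", F.col("timestamp")))
                .groupBy("meter_id", "hour")
                .agg(F.avg("kwh").alias("avg_kwh")))

      hourly.show(10)
      spark.stop()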
  7. Rahman MM, Muniyandi RC, Sahran S, Usman OL, Moniruzzaman M
    Sci Rep, 2024 Jul 09;14(1):15763.
    PMID: 38982129 DOI: 10.1038/s41598-024-66603-y
    The timely identification of autism spectrum disorder (ASD) in children is imperative to prevent potential challenges as they grow. When sharing data related to autism for an accurate diagnosis, safeguarding its security and privacy is a paramount concern, to fend off unauthorized access, modification, or theft during transmission. Researchers have devised diverse security and privacy models or frameworks, most of which leverage proprietary algorithms or adapt existing ones to address data leakage. However, conventional anonymization methods, although effective in the sanitization process, have proved inadequate for the restoration process. Furthermore, despite numerous scholarly contributions aimed at refining the restoration process, restoration accuracy remains notably deficient. Based on the problems identified above, this paper presents a novel approach to data restoration for sanitized sensitive autism datasets with improved performance. In a prior study, we constructed an optimal key for the sanitization process using the proposed Enhanced Combined PSO-GWO framework. This key was used to conceal sensitive autism data in the database, thus avoiding information leakage. In this research, the same key was employed during the data restoration process to improve the accuracy with which the original data are recovered. The study therefore enhances the security and privacy of ASD data by improving the restoration process with an optimal key produced via the Enhanced Combined PSO-GWO framework. Compared with existing meta-heuristic algorithms, the simulation results of the autism data restoration experiments demonstrated highly competitive accuracies of 99.90%, 99.60%, 99.50%, 99.25%, and 99.70%. Among the four types of datasets used, the method outperforms existing methods most clearly on the 30-month autism children dataset.
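    The sanitize-and-restore symmetry described above, in which the same key conceals and later recovers the data, can be sketched with a simple keyed modular masking; this is an illustrative assumption, and the actual optimal key produced by the Enhanced Combined PSO-GWO framework is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(seed=42)

      def sanitize(values, key, modulus=256):
          """Conceal sensitive values by adding a key stream modulo a fixed base.
          A stand-in for key-based sanitization; the real key would come from
          the Enhanced Combined PSO-GWO framework, which is not shown here."""
          return (values + key) % modulus

      def restore(sanitized, key, modulus=256):
          """Recover the original values with the same key (the restoration step)."""
          return (sanitized - key) % modulus

      # Hypothetical sensitive feature vector and an equally sized key.
      original = rng.integers(0, 256, size=10)
      key = rng.integers(0, 256, size=10)          # placeholder for the optimized key

      masked = sanitize(original, key)
      recovered = restore(masked, key)
      assert np.array_equal(recovered, original)   # lossless restoration with the key
      print(masked, recovered)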