  1. Lee HW, Ramayah T, Zakaria N
    J Med Syst, 2012 Aug;36(4):2129-40.
    PMID: 21384267 DOI: 10.1007/s10916-011-9675-4
    Studies of healthcare ICT integration in Malaysia are relatively scarce; this paper therefore provides a literature review of the integration of information and communication technologies (ICT) in the Malaysian healthcare sector through the hospital information system (HIS). Our study draws on secondary data to investigate the factors related to ICT integration in healthcare through HIS. It aims to build an in-depth understanding of the issues surrounding HIS adoption and to contribute to fostering HIS adoption in Malaysia and other countries. The paper also provides a direction for future research into the correlation of factors affecting HIS adoption. Finally, a research model is proposed using current adoption theories and external factors from the human, technology, and organization perspectives.
  2. Hussien HM, Yasin SM, Udzir SNI, Zaidan AA, Zaidan BB
    J Med Syst, 2019 Sep 14;43(10):320.
    PMID: 31522262 DOI: 10.1007/s10916-019-1445-8
    Blockchain in healthcare applications requires robust security and privacy mechanisms for high-level authentication, interoperability and medical records sharing to comply with the strict legal requirements of the Health Insurance Portability and Accountability Act of 1996. Blockchain technology in the healthcare industry has received considerable research attention in recent years. This study conducts a review to substantially analyse and map the research landscape of current technologies, mainly the use of blockchain in healthcare applications, into a coherent taxonomy. The present study systematically searches all relevant research articles on blockchain in healthcare applications in three accessible databases, namely, ScienceDirect, IEEE and Web of Science, by using the defined keywords 'blockchain', 'healthcare' and 'electronic health records' and their variations. The final set of collected articles related to the use of blockchain in healthcare applications is divided into three categories. The first category includes articles (i.e., 43/58 scientific articles) that attempted to develop and design healthcare applications integrating blockchain, particularly those on new architectures, system designs, frameworks, schemes, models, platforms, approaches, protocols and algorithms. The second category includes studies (i.e., 6/58 scientific articles) that attempted to evaluate and analyse the adoption of blockchain in the healthcare system. Finally, the third category comprises review and survey articles (i.e., 6/58 scientific articles) related to the integration of blockchain into healthcare applications. The final articles for review are discussed on the basis of five aspects: (1) year of publication, (2) nationality of authors, (3) publishing house or journal, (4) purpose of using blockchain in health applications and the corresponding contributions and (5) problem types and proposed solutions. Additionally, this study provides identified motivations, open challenges and recommendations on the use of blockchain in healthcare applications. The current research contributes to the literature by providing a detailed review of feasible alternatives and identifying the research gaps. Accordingly, researchers and developers are provided with appealing opportunities to further develop decentralised healthcare applications through a comprehensive discussion of the importance of blockchain and its integration into various healthcare applications.
  3. Kiah ML, Nabi MS, Zaidan BB, Zaidan AA
    J Med Syst, 2013 Oct;37(5):9971.
    PMID: 24037086 DOI: 10.1007/s10916-013-9971-2
    This study aims to provide security solutions for implementing electronic medical records (EMRs). E-Health organizations could utilize the proposed method and implement the recommended solutions in medical/health systems. The majority of the required security features of EMRs were noted, and the methods used were tested against each of these security features. In implementing the system, the combination that satisfied all of the security features of EMRs was selected. Secure implementation and management of EMRs facilitate the safeguarding of the confidentiality, integrity, and availability of e-health organization systems. Health practitioners, patients, and visitors can use the information system facilities safely and with confidence anytime and anywhere. After critically reviewing security and data transmission methods, a new hybrid method was proposed for implementation on EMR systems. This method enhances the robustness, security, and integration of EMR systems. The hybrid of the simple object access protocol/extensible markup language (SOAP/XML) with the advanced encryption standard (AES) and secure hash algorithm version 1 (SHA-1) achieved the security requirements of an EMR system with the capability of integrating with other systems through the design of XML messages.
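
As a rough illustration of the kind of hybrid protection described in this abstract (a sketch under assumptions, not the authors' exact protocol), the following Python snippet encrypts an XML payload with AES and attaches a SHA-1 digest for integrity; the message format, key handling and function name are invented for illustration.

```python
# Hedged sketch: AES-encrypted XML message with a SHA-1 integrity digest.
# Requires the third-party 'cryptography' package; the message layout and
# key handling here are assumptions, not the paper's design.
import os, hashlib, base64
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def protect_emr_message(xml_bytes: bytes, key: bytes) -> dict:
    digest = hashlib.sha1(xml_bytes).hexdigest()       # SHA-1 integrity value
    iv = os.urandom(16)                                # fresh IV per message
    padder = padding.PKCS7(128).padder()
    padded = padder.update(xml_bytes) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = enc.update(padded) + enc.finalize()
    return {"iv": base64.b64encode(iv).decode(),
            "payload": base64.b64encode(ciphertext).decode(),
            "sha1": digest}

key = os.urandom(16)                                   # demo key only
msg = protect_emr_message(b"<record><patient>anon</patient></record>", key)
print(msg["sha1"], msg["payload"][:24])
```
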
  4. Hamada M, Zaidan BB, Zaidan AA
    J Med Syst, 2018 Jul 24;42(9):162.
    PMID: 30043178 DOI: 10.1007/s10916-018-1020-8
    The study of electroencephalography (EEG) signals is not a new topic. However, the analysis of human emotions upon exposure to music is considered an important research direction. Although distributed across various academic databases, research on this concept is limited. To extend research in this area, we explored and analysed the academic articles published within the mentioned scope. Thus, this paper carries out a systematic review to map and draw the research landscape of EEG-based human emotion into a taxonomy. We systematically searched all articles on EEG-based human emotion and music in three main databases, ScienceDirect, Web of Science and IEEE Xplore, from 1999 to 2016. These databases feature academic studies that used EEG to measure brain signals, with a focus on the effects of music on human emotions. The screening and filtering of articles were performed in three iterations. In the first iteration, duplicate articles were excluded. In the second iteration, the articles were filtered according to their titles and abstracts, and articles outside the scope of our domain were excluded. In the third iteration, the articles were filtered by reading the full text and excluding those outside the scope of our domain or not meeting our criteria. Based on the inclusion and exclusion criteria, 100 articles were selected and separated into five classes. The first class (39 articles, 39%) concerns emotion, wherein various emotions are classified using artificial intelligence (AI). The second class (21 articles, 21%) is composed of studies that use EEG techniques; this class is named 'brain condition'. The third class (eight articles, 8%) relates to feature extraction, a step preceding emotion classification. It should be noted that this process makes use of classifiers; however, these articles are not listed under the first class because they focus on feature extraction rather than classifier accuracy. The fourth class (26 articles, 26%) comprises studies that compare two or more groups to identify and discover human emotion from EEG. The final class (six articles, 6%) comprises articles that study music as a stimulus and its impact on brain signals. We then discuss five main categories: action types, age of the participants, sample size, duration of recording and listening to music, and the countries or nationalities of the authors who published these studies. Afterwards, the paper identifies the main characteristics of this promising area of science: the motivation for using the EEG process to measure human brain signals, the open challenges obstructing its employment, and recommendations to improve its utilization.
  5. Acharya UR, Fernandes SL, WeiKoh JE, Ciaccio EJ, Fabell MKM, Tanik UJ, et al.
    J Med Syst, 2019 Aug 09;43(9):302.
    PMID: 31396722 DOI: 10.1007/s10916-019-1428-9
    The aim of this work is to develop a Computer-Aided-Brain-Diagnosis (CABD) system that can determine if a brain scan shows signs of Alzheimer's disease. The method utilizes Magnetic Resonance Imaging (MRI) for classification with several feature extraction techniques. MRI is a non-invasive procedure, widely adopted in hospitals to examine cognitive abnormalities. Images are acquired using the T2 imaging sequence. The paradigm consists of a series of quantitative techniques: filtering, feature extraction, Student's t-test based feature selection, and k-Nearest Neighbor (KNN) based classification. Additionally, a comparative analysis is performed by implementing other feature extraction procedures described in the literature. Our findings suggest that the Shearlet Transform (ST) feature extraction technique offers improved results for Alzheimer's diagnosis compared to alternative methods. The proposed CABD tool with the ST + KNN technique provided an accuracy of 94.54%, precision of 88.33%, sensitivity of 96.30% and specificity of 93.64%. Furthermore, this tool also offered an accuracy, precision, sensitivity and specificity of 98.48%, 100%, 96.97% and 100%, respectively, with the benchmark MRI database.
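
The classification stage described here (Student's t-test feature selection followed by k-NN) is straightforward to sketch; the snippet below uses synthetic feature vectors in place of the MRI-derived Shearlet features, so the data and numbers are illustrative only.

```python
# Sketch: t-test based feature selection + k-NN, on synthetic stand-ins
# for the MRI-derived features (toy data, not the paper's).
import numpy as np
from scipy.stats import ttest_ind
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))        # 200 scans x 50 candidate features
y = rng.integers(0, 2, size=200)      # 0 = normal, 1 = Alzheimer's (toy)
X[y == 1, :5] += 1.0                  # make five features informative

_, pvals = ttest_ind(X[y == 0], X[y == 1], axis=0)
top = np.argsort(pvals)[:10]          # keep the ten most discriminative

Xtr, Xte, ytr, yte = train_test_split(X[:, top], y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
print("toy accuracy:", knn.score(Xte, yte))
```
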
  6. Hariharan M, Chee LS, Ai OC, Yaacob S
    J Med Syst, 2012 Jun;36(3):1821-30.
    PMID: 21249515 DOI: 10.1007/s10916-010-9641-6
    The goal of this paper is to discuss and compare three feature extraction methods, Linear Predictive Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC) and Weighted Linear Prediction Cepstral Coefficients (WLPCC), for recognizing stuttered events. Speech samples from the University College London Archive of Stuttered Speech (UCLASS) were used for our analysis. The stuttered events were identified through manual segmentation and were used for feature extraction. Two simple classifiers, k-nearest neighbour (kNN) and Linear Discriminant Analysis (LDA), were employed for classifying speech dysfluencies. A conventional validation method was used for testing the reliability of the classifier results. The effects of different frame lengths, percentages of overlap, values of the coefficient α in a first-order pre-emphasizer and different orders p are discussed. The classification accuracy for speech dysfluencies was found to improve when statistical normalization was applied before feature extraction. The experimental investigation elucidated that LPC, LPCC and WLPCC features can all be used for identifying stuttered events, with WLPCC features slightly outperforming LPCC and LPC features.
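
For readers unfamiliar with these features, here is a minimal sketch of LPC estimation via the autocorrelation method (Levinson-Durbin) and the standard LPC-to-cepstrum recursion behind LPCC; the WLPCC weighting step is omitted and the frame below is synthetic, so this is an assumption-laden illustration rather than the authors' implementation.

```python
# Sketch: LPC via Levinson-Durbin, then the LPC-to-cepstrum recursion.
# Uses the convention x_hat[n] = sum_k a_k * x[n-k]; signs vary across texts.
import numpy as np

def lpc(frame, order):
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
    a = np.zeros(order + 1); a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err   # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= (1.0 - k * k)
    return -a[1:]                     # predictor coefficients a_1..a_p

def lpcc(a, n_ceps):
    p = len(a); c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = a[n - 1] if n <= p else 0.0
        for k in range(1, n):
            if n - k <= p:
                acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c

frame = np.hamming(240) * np.random.default_rng(1).normal(size=240)
print(np.round(lpcc(lpc(frame, 12), 12), 3))
```
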
  7. Hariharan M, Chee LS, Yaacob S
    J Med Syst, 2012 Jun;36(3):1309-15.
    PMID: 20844933 DOI: 10.1007/s10916-010-9591-z
    Acoustic analysis of infant cry signals has been proven to be an excellent tool for automatic detection of the pathological status of an infant. This paper investigates the application of parameter weighting for linear prediction cepstral coefficients (LPCCs) to provide a robust representation of infant cry signals. Three classes of infant cry signals were considered: normal cry signals, cry signals from deaf babies, and cries of babies with asphyxia. A Probabilistic Neural Network (PNN) is suggested to classify the infant cry signals into normal and pathological cries. The PNN is trained with different spread factors (smoothing parameters) to obtain better classification accuracy. The experimental results demonstrate that the suggested features and classification algorithms give very promising classification accuracy, above 98%, indicating that the suggested method can help medical professionals diagnose the pathological status of an infant from cry signals.
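
A probabilistic neural network is, in essence, a Parzen-window classifier with a Gaussian kernel whose width is the 'spread' (smoothing) parameter mentioned above; the toy sketch below, on synthetic feature vectors, shows where that parameter enters.

```python
# Sketch: PNN as a Gaussian Parzen-window classifier (toy data, not the
# paper's cry features; the spread value is an assumption).
import numpy as np

def pnn_predict(X_train, y_train, X_test, spread=0.5):
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores.append(np.exp(-d2 / (2 * spread ** 2)).mean(axis=1))
    return classes[np.argmax(scores, axis=0)]   # highest class density wins

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(2, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)               # 0 = normal, 1 = pathological
print(pnn_predict(X, y, X[:5] + 0.1, spread=0.7))
```
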
  8. Maarop N, Win KT
    J Med Syst, 2012 Oct;36(5):2881-92.
    PMID: 21826500 DOI: 10.1007/s10916-011-9766-2
    The aim of this study was to explore the importance of service need, along with perceived technology attributes, in potentially influencing the acceptance of teleconsultation. The study was based on a concurrent triangulation design involving qualitative and quantitative methods. These entailed interviews with key informants and a questionnaire survey of health care providers who practiced in the participating hospitals in Malaysia. Thematic analysis involving iterative coding was conducted on the qualitative data. Scale reliability testing and hypothesis testing procedures were performed on the quantitative data. Subsequently, both data sets were merged, compared and interpreted. In particular, this study adopted a qualitative priority, placing greater emphasis on the qualitative method to develop an overall understanding. Based on the responses of 20 key informants, there was a significant need for teleconsultation as a tool to extend health services to patients under constrained resources and critical conditions. The latest attributes of teleconsultation technology have generally met users' expectations but are perceived as supportive facets in encouraging usage. Concurrently, based on the survey of 72 health care providers, teleconsultation acceptance was statistically shown to be strongly associated with service need and not to originate exclusively from technological attributes. The results of this study can be used to promote teleconsultation as an effective means of delivering better health services, and the categories that emerged from this study may be further revised and examined to explain the acceptance of teleconsultation technology in other relevant contexts.
  9. Oung QW, Muthusamy H, Basah SN, Lee H, Vijean V
    J Med Syst, 2017 Dec 29;42(2):29.
    PMID: 29288342 DOI: 10.1007/s10916-017-0877-2
    Parkinson's disease (PD) is a progressive neurodegenerative disorder that affects a large part of the population. Symptoms of PD include tremor, rigidity, slowness of movement and vocal impairment. In order to develop an effective diagnostic system, a number of algorithms have been proposed, mainly to distinguish healthy individuals from those with PD. However, most previous work was based on binary classification, with the early PD stage and the advanced ones treated equally. Therefore, in this work, we propose a multiclass classification with three classes of PD severity level (mild, moderate, severe) and a healthy control. The focus is to detect and classify PD using signals from wearable motion and audio sensors based on the empirical wavelet transform (EWT) and empirical wavelet packet transform (EWPT) respectively. The EWT/EWPT was applied to decompose both speech and motion data signals up to five levels. Next, several features were extracted after obtaining the instantaneous amplitudes and frequencies from the coefficients of the decomposed signals by applying the Hilbert transform. The performance of the algorithm was analysed using three classifiers: K-nearest neighbour (KNN), probabilistic neural network (PNN) and extreme learning machine (ELM). Experimental results demonstrated that our proposed approach can differentiate PD from non-PD subjects, including their severity level, with classification accuracies of more than 90% using EWT/EWPT-ELM on signals from motion and audio sensors respectively. Additionally, classification accuracy of more than 95% was achieved when EWT/EWPT-ELM was applied to the combined information from both types of signals.
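
The EWT/EWPT decomposition itself is specialised, but the Hilbert-transform step it feeds is standard; a sketch of extracting instantaneous amplitude and frequency features from one decomposed sub-band (here a synthetic chirp standing in for a real sub-band signal):

```python
# Sketch: instantaneous amplitude/frequency features via the Hilbert
# transform, applied to a synthetic stand-in for one EWT sub-band.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
band = np.sin(2 * np.pi * (50 + 30 * t) * t)   # synthetic sub-band signal

analytic = hilbert(band)
inst_amp = np.abs(analytic)                     # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)

# Summary statistics of the kind fed to KNN/PNN/ELM classifiers
features = [inst_amp.mean(), inst_amp.std(), inst_freq.mean(), inst_freq.std()]
print(np.round(features, 2))
```
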
  10. Abdulhay E, Mohammed MA, Ibrahim DA, Arunkumar N, Venkatraman V
    J Med Syst, 2018 Feb 17;42(4):58.
    PMID: 29455440 DOI: 10.1007/s10916-018-0912-y
    Blood leucocyte segmentation in medical images is viewed as a difficult process due to the variability of blood cells in shape and size and the difficulty of determining the location of the leucocytes. Manual analysis of blood tests to recognize leukocytes is tedious, time-consuming and liable to error because of the varied morphological components of the cells. Segmentation of medical imagery is considered difficult because of the complexity of the images and the non-availability of leucocyte models that entirely capture the probable shapes of each structure, incorporate cell overlapping and the expansive variety of blood cells in shape and size, account for the various elements influencing the outer appearance of the leucocytes, and cope with the low contrast of static microscope images and further issues arising from noise. We suggest a strategy for segmenting blood leucocytes in static microscope images that combines three prevailing computer vision techniques: image enhancement, support vector machine based segmentation, and filtering out non-ROI (region of interest) regions on the basis of local binary patterns (LBP) and texture features. Each of these strategies is adapted to the blood leucocyte segmentation problem, making the resulting techniques more robust than their individual components. Finally, we assess the framework by comparing its output with manual segmentation. The findings of this study present a new approach that automatically segments blood leucocytes and identifies them in static microscope images. Initially, the method uses a trainable segmentation procedure and a trained support vector machine classifier to accurately identify the position of the ROI. After that, non-ROI regions are filtered out based on histogram analysis, to reject non-ROI areas and choose the right objects. Finally, the blood leucocyte type is identified using texture features. The performance of the proposed approach was tested by comparing the system against manual examination by a gynaecologist using diverse scales. A total of 100 microscope images were used for the comparison, and the results showed that the proposed solution is a viable alternative to the manual segmentation method for accurately determining the ROI. We evaluated blood leucocyte identification using the ROI texture (LBP features). The identification accuracy of the technique is about 95.3%, with 100% sensitivity and 91.66% specificity.
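
To make the LBP-texture / SVM idea concrete, here is a hedged sketch on a synthetic grayscale image; the window size, LBP parameters, training labels and image are all invented for illustration and are not the paper's settings.

```python
# Sketch: LBP window histograms + SVM to separate ROI from non-ROI
# (synthetic image; all parameters are assumptions).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

rng = np.random.default_rng(0)
img = rng.random((64, 64))                        # noisy background texture
yy, xx = np.mgrid[0:64, 0:64]
blob = np.exp(-((yy - 30) ** 2 + (xx - 30) ** 2) / 120.0)
img[18:42, 18:42] = blob[18:42, 18:42]            # smooth "cell" region

img8 = (img * 255).astype(np.uint8)
lbp = local_binary_pattern(img8, P=8, R=1, method="uniform")

def patch_hist(cy, cx, half=4):
    win = lbp[cy - half:cy + half, cx - half:cx + half]
    return np.histogram(win, bins=10, range=(0, 10), density=True)[0]

coords = [(30, 30), (26, 34), (8, 8), (55, 10), (10, 55), (52, 52)]
labels = [1, 1, 0, 0, 0, 0]                       # 1 = ROI (smooth "cell")
svm = SVC(kernel="rbf").fit([patch_hist(*c) for c in coords], labels)
# Classify two new windows: one inside, one outside the smooth region
print(svm.predict([patch_hist(31, 29), patch_hist(6, 58)]))
```
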
  11. Sim KS, Lai MA, Tso CP, Teo CC
    J Med Syst, 2011 Feb;35(1):39-48.
    PMID: 20703587 DOI: 10.1007/s10916-009-9339-9
    A novel technique to quantify the signal-to-noise ratio (SNR) of magnetic resonance images is developed. The image SNR is quantified by estimating the amplitude of the signal spectrum using the autocorrelation function of a single magnetic resonance image. To test the performance of the quantification, SNR measurement data are fitted to theoretically expected curves. It is shown that the technique can be implemented in a highly efficient way for the magnetic resonance imaging system.
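
As a back-of-envelope illustration of the autocorrelation idea (not the authors' exact estimator): for a signal corrupted by white noise, the zero-lag autocorrelation contains signal-plus-noise power while nearby lags retain essentially only signal power, so extrapolating the near-lag values back to lag zero separates the two.

```python
# Sketch: autocorrelation-based SNR estimate on a synthetic 1-D signal
# (a stand-in for MR image data; the extrapolation rule is an assumption).
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20 * np.pi, 4096))
noisy = signal + rng.normal(0, 0.3, signal.size)

x = noisy - noisy.mean()
r = np.correlate(x, x, mode="full")[x.size - 1:] / x.size

sig_power = 2 * r[1] - r[2]        # linear extrapolation of r to lag 0
noise_power = r[0] - sig_power     # white noise only contributes at lag 0
print("estimated SNR (dB):", round(10 * np.log10(sig_power / noise_power), 2))
```
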
  12. Tan CH, Teh YW
    J Med Syst, 2013 Aug;37(4):9950.
    PMID: 23709190 DOI: 10.1007/s10916-013-9950-7
    The main obstacles to mass adoption of cloud computing for database operations in healthcare organizations are data security and privacy issues. In this paper, it is shown that IT services, particularly hardware performance evaluation in a virtual machine, can be accomplished effectively without IT personnel gaining access to actual data for diagnostic and remediation purposes. The proposed mechanisms utilize synthetic data from the TPC-H benchmark to achieve two objectives. First, the underlying hardware performance and consistency are monitored via a control system constructed using TPC-H queries. Second, a mechanism to construct stress-testing scenarios is envisaged in the host, using a single TPC-H query or a combination of them, so that the resource threshold point can be verified, i.e., whether the virtual machine is still capable of serving critical transactions at this constraining juncture. This threshold point uses server run queue size as its input parameter and serves two purposes: it provides the boundary threshold to the control system, so that periodic learning of the synthetic data sets for performance evaluation does not reach the host's constraint level; and, when the host undergoes hardware change, stress-testing scenarios are simulated in the host by loading it up to this resource threshold level, for subsequent response-time verification against real and critical transactions.
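
A hedged sketch of the run-queue guard described above: the snippet uses the Unix 1-minute load average as a crude stand-in for run queue size, and the threshold, probe query and timings are placeholders rather than the paper's mechanism.

```python
# Sketch: load the host until a calibrated run-queue proxy is reached
# (Unix-only; threshold and probe are placeholders, not the paper's values).
import os, time

RUN_QUEUE_THRESHOLD = 4.0           # assumed calibrated boundary

def run_tpch_probe():
    """Placeholder for issuing one TPC-H query against the test database."""
    time.sleep(0.1)                 # pretend work

def stress_until_threshold(max_rounds=100):
    for _ in range(max_rounds):
        load1, _, _ = os.getloadavg()   # 1-min load avg ~ run queue size
        if load1 >= RUN_QUEUE_THRESHOLD:
            print("threshold reached at load", load1)
            return
        run_tpch_probe()
    print("stopped after", max_rounds, "rounds; load:", os.getloadavg()[0])

stress_until_threshold()
```
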
  13. Noor NM, Than JC, Rijal OM, Kassim RM, Yunus A, Zeki AA, et al.
    J Med Syst, 2015 Mar;39(3):22.
    PMID: 25666926 DOI: 10.1007/s10916-015-0214-6
    Interstitial Lung Disease (ILD) encompasses a wide array of diseases that share some common radiologic characteristics. When diagnosing such diseases, radiologists can be affected by heavy workload and fatigue, thus decreasing diagnostic accuracy. Automatic segmentation is the first step in implementing a Computer Aided Diagnosis (CAD) system that will help radiologists improve diagnostic accuracy, thereby reducing manual interpretation. The proposed automatic segmentation uses an initial thresholding and morphology based segmentation coupled with feedback that detects large deviations with a corrective segmentation. This feedback is analogous to a control system: it allows detection of abnormal or severe lung disease and feeds back into an online segmentation, improving the overall performance of the system. The feedback system is built on a texture paradigm. In this study we examined 48 male and 48 female patients, consisting of 15 normal and 81 abnormal cases. A senior radiologist chose the five levels needed for ILD diagnosis. The results of segmentation were displayed by comparing the automated and ground truth boundaries (courtesy of ImgTracer™ 1.0, AtheroPoint™ LLC, Roseville, CA, USA). For the left lung, segmentation performance was 96.52% for the Jaccard Index, 98.21% for Dice Similarity, 0.61 mm for the Polyline Distance Metric (PDM), -1.15% for Relative Area Error and 4.09% for Area Overlap Error. For the right lung, performance was 97.24% for the Jaccard Index, 98.58% for Dice Similarity, 0.61 mm for PDM, -0.03% for Relative Area Error and 3.53% for Area Overlap Error. Overall, the segmentation has a similarity of 98.4%. The proposed segmentation is accurate and fully automated.
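
For reference, the overlap metrics reported above have compact definitions; a minimal sketch on toy binary masks, using the usual formulas (the paper's exact error definitions may differ slightly):

```python
# Sketch: common binary-mask segmentation metrics on toy data.
import numpy as np

def seg_metrics(auto, truth):
    a, t = auto.astype(bool), truth.astype(bool)
    inter = np.logical_and(a, t).sum()
    union = np.logical_or(a, t).sum()
    jaccard = inter / union
    dice = 2 * inter / (a.sum() + t.sum())
    rel_area_err = (a.sum() - t.sum()) / t.sum()   # signed area bias
    area_overlap_err = 1 - inter / union           # = 1 - Jaccard
    return jaccard, dice, rel_area_err, area_overlap_err

auto = np.zeros((100, 100), bool); auto[10:60, 10:60] = True
truth = np.zeros((100, 100), bool); truth[12:62, 12:62] = True
print([round(m, 4) for m in seg_metrics(auto, truth)])
```
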
  14. Saba L, Than JC, Noor NM, Rijal OM, Kassim RM, Yunus A, et al.
    J Med Syst, 2016 Jun;40(6):142.
    PMID: 27114353 DOI: 10.1007/s10916-016-0504-7
    Human interaction has become almost mandatory for an automated medical system wishing to be accepted by clinical regulatory agencies such as the Food and Drug Administration. Since this interaction causes variability in the gathered data, the inter-observer and intra-observer variability must be analyzed in order to validate the accuracy of the system. This study focuses on the variability from different observers who interact with an automated lung delineation system that relies on human interaction in the form of delineation of the lung borders. The database consists of High Resolution Computed Tomography (HRCT) images: 15 normal and 81 diseased patients, imaged retrospectively at five levels per patient. Three observers independently delineated the lung borders manually, using software called ImgTracer™ (AtheroPoint™, Roseville, CA, USA), at all five levels of the 3-D lung volume. The observers were Observer-1, a less experienced novice tracer who is a radiology resident working under the guidance of a radiologist, and Observer-2 and Observer-3, lung image scientists trained by a lung radiologist and a biomedical imaging scientist. The inter-observer variability can be shown by comparing each observer's tracings to the automated delineation and also by comparing the manual tracings of the observers with one another. The normality of the tracings was tested using the D'Agostino-Pearson test, and all observers' tracings showed a normal P-value higher than 0.05. The analysis of variance (ANOVA) test between the three observers and the automated system showed P-values higher than 0.89 and 0.81 for the right lung (RL) and left lung (LL), respectively. The performance of the automated system was evaluated using Dice Similarity Coefficient (DSC), Jaccard Index (JI) and Hausdorff (HD) Distance measures. Although Observer-1 has less experience than Observer-2 and Observer-3, the Observer Deterioration Factor (ODF) shows that Observer-1 differs by less than 10% from the other two, which is within the acceptable range as per our analysis. To compare the observers, this study used regression plots, Bland-Altman plots, and two-tailed t-test, Mann-Whitney and Chi-squared tests, which showed the following P-values for RL and LL: (i) Observer-1 and Observer-3: 0.55, 0.48, 0.29 for RL and 0.55, 0.59, 0.29 for LL; (ii) Observer-1 and Observer-2: 0.57, 0.50, 0.29 for RL and 0.54, 0.59, 0.29 for LL; (iii) Observer-2 and Observer-3: 0.98, 0.99, 0.29 for RL and 0.99, 0.99, 0.29 for LL. Further, CC and R-squared coefficients computed between observers came out to be 0.9 for both RL and LL. All three observers, however, captured the feature that diseased lungs are smaller than normal lungs in terms of area.
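
Of the agreement analyses listed, the Bland-Altman limits of agreement are the least routine to compute by hand; a minimal sketch on toy paired measurements from two observers (synthetic numbers, not the study's data):

```python
# Sketch: Bland-Altman bias and 95% limits of agreement on toy data.
import numpy as np

rng = np.random.default_rng(0)
obs1 = rng.normal(100, 15, 40)             # toy lung areas, Observer-1
obs2 = obs1 + rng.normal(0.5, 2.0, 40)     # Observer-2: slight bias + noise

diff = obs1 - obs2
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)              # half-width of the 95% limits

print(f"bias = {bias:.2f}, limits of agreement = "
      f"[{bias - loa:.2f}, {bias + loa:.2f}]")
```
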
  15. Salman OH, Rasid MF, Saripan MI, Subramaniam SK
    J Med Syst, 2014 Sep;38(9):103.
    PMID: 25047520 DOI: 10.1007/s10916-014-0103-4
    The healthcare industry is streamlining processes to offer more timely and effective services to all patients. Computerized software algorithms and smart devices can streamline the relation between users and doctors by providing more services inside healthcare telemonitoring systems. This paper proposes a multi-source framework to support advanced healthcare applications. The proposed framework, named Multi Sources Healthcare Architecture (MSHA), considers multiple sources: sensors (ECG, SpO2 and blood pressure) and text-based inputs from wireless and pervasive devices of a Wireless Body Area Network. The framework is used to improve healthcare scalability and efficiency by enhancing the remote triaging and remote prioritization processes for patients, and to provide intelligent services over telemonitoring healthcare systems by using a data fusion method and a prioritization technique. As the telemonitoring system consists of three tiers (sensors/sources, base station and server), the simulation of the MSHA algorithm in the base station is demonstrated in this paper. Achieving a high level of accuracy in remotely prioritizing and triaging patients is our main goal. Meanwhile, the role of multi-source data fusion in telemonitoring healthcare systems is demonstrated, and we discuss how the proposed framework can be applied in a healthcare telemonitoring scenario. Simulation results for different symptoms, related to different emergency levels of chronic heart disease, demonstrate the superiority of our algorithm over conventional algorithms in remotely classifying and prioritizing patients.
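
MSHA's actual fusion and prioritization rules are not reproduced here, but the general shape of a multi-source triage queue can be sketched; the vital-sign thresholds and priority scores below are invented purely for illustration.

```python
# Sketch: toy multi-source triage queue (thresholds/scores are invented).
import heapq

def emergency_level(hr, spo2, sbp):
    """Map fused vital signs to a priority score (lower = more urgent)."""
    if hr > 120 or sbp < 90:
        return 1                       # critical
    if spo2 < 92:
        return 2                       # urgent
    return 3                           # routine

patients = [("P1", 80, 97, 120), ("P2", 130, 95, 85), ("P3", 88, 90, 110)]
queue = []
for pid, hr, spo2, sbp in patients:
    heapq.heappush(queue, (emergency_level(hr, spo2, sbp), pid))

while queue:                           # serve the most urgent patient first
    prio, pid = heapq.heappop(queue)
    print(f"serve {pid} (priority {prio})")
```
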
  16. Shyamsunder R, Eswaran C, Sriraam N
    J Med Syst, 2007 Apr;31(2):109-16.
    PMID: 17489503
    The volume of patient monitoring video acquired in hospitals is very large, and hence better compression is needed for effective storage and transmission. This paper presents a new motion segmentation technique which improves the compression of patient monitoring video. The proposed technique makes use of a binary mask, obtained by thresholding the standard deviation values of the pixels along the temporal axis. Two compression methods that make use of the proposed motion segmentation technique are presented. The first method uses an MPEG-4 coder and the 9/7-biorthogonal wavelet for compressing the moving and stationary portions of the video, respectively. The second method uses the 5/3-biorthogonal wavelet for compressing both the moving and the stationary portions. The performance of these compression algorithms is evaluated in terms of PSNR and bitrate. From the experimental results, it is found that the proposed motion segmentation technique improves the performance of the MPEG-4 coder. Of the two compression methods presented, the MPEG-4 based method performs better for bitrates below 767 Kbps, whereas above 767 Kbps the wavelet based method is superior.
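
The binary motion mask described above is simple to sketch: compute each pixel's standard deviation along the temporal axis and threshold it (synthetic frames below; the threshold value is an assumption):

```python
# Sketch: temporal std-dev motion mask on a synthetic video cube.
import numpy as np

rng = np.random.default_rng(0)
video = np.repeat(rng.random((1, 64, 64)), 30, axis=0)   # 30 static frames
video[:, 20:30, 20:30] = rng.random((30, 10, 10))        # one moving region

std_map = video.std(axis=0)          # per-pixel std along the time axis
mask = std_map > 0.1                 # binary motion mask (threshold assumed)

# 'mask' selects the moving portion (MPEG-4 coded in the first method);
# its complement marks the stationary portion (wavelet coded).
print("moving pixels:", int(mask.sum()), "of", mask.size)
```
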
  17. Srinivasan V, Eswaran C, Sriraam N
    J Med Syst, 2005 Dec;29(6):647-60.
    PMID: 16235818
    The electroencephalogram (EEG) signal plays an important role in the diagnosis of epilepsy. The long-term EEG recordings of an epileptic patient obtained from ambulatory recording systems contain a large volume of EEG data. Detection of epileptic activity requires a time-consuming analysis of the entire length of the EEG data by an expert. As traditional methods of analysis are tedious, many automated diagnostic systems for epilepsy have emerged in recent years. This paper discusses an automated diagnostic method for epileptic detection using a special type of recurrent neural network known as the Elman network. The experiments are carried out using time-domain as well as frequency-domain features of the EEG signal. Experimental results show that the Elman network yields epileptic detection accuracy rates as high as 99.6% with a single input feature, which is better than the results obtained using other types of neural networks with two or more input features.
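
An Elman network is a simple recurrent network whose context layer copies the previous hidden state back into the hidden layer's input; a forward-pass sketch in NumPy (dimensions, weights and the decision head are arbitrary, not the paper's configuration):

```python
# Sketch: Elman-network forward pass (untrained toy weights).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                          # toy sizes
W_xh = rng.normal(0, 0.3, (n_hid, n_in))    # input -> hidden
W_hh = rng.normal(0, 0.3, (n_hid, n_hid))   # context (previous hidden) -> hidden
w_out = rng.normal(0, 0.3, n_hid)           # hidden -> output

def elman_forward(seq):
    h = np.zeros(n_hid)                     # context starts empty
    for x in seq:                           # one feature vector per EEG window
        h = np.tanh(W_xh @ x + W_hh @ h)    # hidden state feeds back next step
    return 1 / (1 + np.exp(-(w_out @ h)))   # sigmoid "epileptic" score

seq = rng.normal(size=(20, n_in))           # 20 windows of 4 features each
print("toy epileptic score:", round(float(elman_forward(seq)), 3))
```
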