Displaying publications 21 - 40 of 766 in total

  1. Kiah ML, Haiqi A, Zaidan BB, Zaidan AA
    Comput Methods Programs Biomed, 2014 Nov;117(2):360-82.
    PMID: 25070757 DOI: 10.1016/j.cmpb.2014.07.002
The use of open source software in health informatics is increasingly advocated by authors in the literature. Although there is no clear evidence of the superiority of current open source applications in the healthcare field, the number of open source applications available online is growing and they are gaining greater prominence. This repertoire of open source options is of great value for any planner interested in adopting an electronic medical/health record system, whether selecting an existing application or building a new one. The following questions arise. How do the available open source options compare to each other with respect to functionality, usability and security? Can an implementer of an open source application find sufficient support, both as a user and as a developer, and to what extent? Does the available literature provide adequate answers to such questions? This review attempts to shed some light on these aspects.
    Matched MeSH terms: Software*; Software Design*
  2. Kolda L, Krejcar O, Selamat A, Kuca K, Fadeyi O
    Sensors (Basel), 2019 Aug 26;19(17).
    PMID: 31455045 DOI: 10.3390/s19173709
Biometric verification methods have gained significant popularity in recent times, which has brought about their extensive usage. In light of theoretical evidence surrounding the development of biometric verification, we proposed an experimental multi-biometric system for laboratory testing. First, the proposed system was designed such that it was able to identify and verify a user through the hand contour and the blood flow (blood stream) at the upper part of the hand. Next, we detailed the hardware and software solutions for the system. A total of 40 subjects agreed to be part of the data generation team, which produced 280 hand images. The core of this paper lies in evaluating individual metrics, which are functions of the frequency comparison of the two fault types with the EER (Equal Error Rate) values. The lowest value was measured for the case of the modified Hausdorff distance metric - Maximally Helicity Violating (MHV). Furthermore, for the verified biometric characteristics (Hamming distance and MHV), appropriate and suitable metrics have been proposed and tested to optimize system precision. Thus, the EER value for the designed multi-biometric system in the context of this work was found to be 5%, which proves that metrics consolidation increases the precision of the multi-biometric system. Algorithms used for the proposed multi-biometric device show that the individual metrics exhibit significant accuracy but perform better on consolidation, with a few shortcomings.
    Matched MeSH terms: Software
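The modified Hausdorff distance mentioned in the preceding abstract can be computed directly from two point sets. Below is a minimal, illustrative sketch in Python/NumPy; the circular point sets are synthetic stand-ins for the paper's hand-contour data, not its actual inputs.

```python
import numpy as np

def modified_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Modified Hausdorff distance (Dubuisson & Jain, 1994) between
    point sets of shape (n, d) and (m, d)."""
    # Pairwise Euclidean distances between every point in a and every point in b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    # Mean nearest-neighbour distance in each direction; MHD is the larger one.
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

# Toy example: two noisy samplings of the same contour.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100)
contour = np.c_[np.cos(t), np.sin(t)]
print(modified_hausdorff(contour, contour + rng.normal(0, 0.01, contour.shape)))
```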
  3. Elias BBQ, Soh PJ, Al-Hadi AA, Akkaraekthalin P, Vandenbosch GAE
    Sensors (Basel), 2021 Apr 04;21(7).
    PMID: 33916507 DOI: 10.3390/s21072516
This work presents the design and optimization of an antenna with defected ground structure (DGS) using characteristic mode analysis (CMA) to enhance bandwidth. This DGS is integrated with a rectangular patch with circular meandered rings (RPCMR) in a wearable format fully using textiles for wireless body area network (WBAN) application. For this integration process, both CMA and the method of moments (MoM) were applied using the same electromagnetic simulation software. This work characterizes and estimates the final shape and dimensions of the DGS using the CMA method, aimed at enhancing antenna bandwidth. The optimization of the dimensions and shape of the DGS is simplified, as the influence of the substrates and excitation is first excluded. This reduces the time and resources required in the design process, in contrast to conventional optimization approaches that rely on full-wave "trial and error" simulations of a complete antenna structure. To validate the performance of the antenna on the body, the specific absorption rate is studied. Simulated and measured results indicate that the proposed antenna meets the requirements of wideband on-body operation.
    Matched MeSH terms: Software
  4. May Z, Alam MK, Husain K, Hasan MK
    PLoS One, 2020;15(8):e0238073.
    PMID: 32845901 DOI: 10.1371/journal.pone.0238073
Transmission opportunity (TXOP) is a key factor in enabling efficient channel bandwidth utilization over wireless campus networks (WCN) for interactive multimedia (IMM) applications. It facilitates resource allocation for the transmission of multiple packets of similar categories until the allocated time expires. Static TXOP limits are defined for the various categories of IMM traffic in the IEEE 802.11e standard. Due to the variation of traffic load in WCN, static TXOP limits are not sufficient to guarantee the quality of service (QoS) for IMM traffic flows. In order to address this issue, several existing works allocate TXOP limits dynamically to ensure QoS for IMM traffic based on the current associated queue size and pre-set threshold values. However, existing works do not take into account all the medium access control (MAC) overheads while estimating the current queue size, which in turn is required for dynamic TXOP limit allocation. Not considering MAC overheads appropriately results in inaccurate queue size estimation, thereby leading to inappropriate allocation of dynamic TXOP limits. In this article, an enhanced dynamic TXOP (EDTXOP) scheme is proposed that takes into account all the MAC overheads while estimating the current queue size, thereby allocating appropriate dynamic TXOP limits within the pre-set threshold values. In addition, the article presents an analytical estimation of the EDTXOP scheme to compute the dynamic TXOP limits for the current high-priority traffic queues. Simulations were carried out by varying the traffic load in terms of packet size and packet arrival rate. The results show that the proposed EDTXOP scheme achieves overall performance gains in the range of 4.41%-8.16%, 8.72%-11.15%, 14.43%-32% and 26.21%-50.85% for throughput, PDR, average ETE delay and average jitter, respectively, when compared to the existing work, hence offering a better TXOP limit allocation solution than the rest.
    Matched MeSH terms: Software
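The core idea in the preceding abstract, sizing a dynamic TXOP limit from the estimated time to drain the current queue including per-frame MAC overheads, can be illustrated schematically. This is not the paper's EDTXOP algorithm; the overhead constants and the clamping rule below are illustrative assumptions only.

```python
# Illustrative 802.11e-style constants (microseconds); values are assumptions.
SIFS_US = 16.0
ACK_US = 44.0
MAC_HEADER_BYTES = 36

def txop_limit_us(queue_frames, payload_bytes, phy_rate_mbps, max_txop_us):
    """Estimate the time to drain `queue_frames` frames of `payload_bytes`
    each, counting MAC header, SIFS and ACK overhead per frame, then clamp
    the result to the pre-set threshold `max_txop_us`."""
    per_frame_bits = (payload_bytes + MAC_HEADER_BYTES) * 8
    tx_time_us = per_frame_bits / phy_rate_mbps      # bits / (Mbit/s) = microseconds
    per_frame_us = tx_time_us + SIFS_US + ACK_US     # add MAC overheads per frame
    return min(queue_frames * per_frame_us, max_txop_us)

print(txop_limit_us(queue_frames=20, payload_bytes=1500, phy_rate_mbps=54,
                    max_txop_us=6016.0))
```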
  5. Hannah Nadiah Abdul Razak, Mohd. Azdi Maasar, Nur Hafidzah Hafidzuddin, Ernie Syufina Chun Lee
    MyJurnal
The aim of this research is to apply variance and conditional value at risk (CVaR) as risk measures in the portfolio selection problem. We are motivated to compare the behavior of two different types of risk measure (variance and CVaR) as the expected return of a portfolio varies from low to high. To obtain an optimum portfolio of assets, we minimize the risks using mean-variance and mean-CVaR models. A dataset of FBMKLCI stocks is used to generate our scenario returns. Both models and the dataset are coded and implemented in AMPL software. We compare the performance of the optimized portfolios constructed from the two models in terms of risk measures and realized returns. The optimal portfolios are evaluated across three different target returns representing low-risk/low-return, medium-risk/medium-return and high-risk/high-return portfolios. Numerical results show that the mean-variance portfolios are generally more diversified than the mean-CVaR portfolios. The in-sample results show that the seven optimal mean-CVaR0.05 portfolios have lower CVaR0.05 values than their optimal mean-variance counterparts. Correspondingly, the standard deviation of the mean-variance optimal portfolios is lower than that of their mean-CVaR0.05 counterparts. For the out-of-sample analysis, we conclude that the mean-variance portfolio only minimizes standard deviation at the low target return, while the mean-CVaR portfolios are favorable in minimizing risk at the high target return.
    Matched MeSH terms: Software
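The study above solves its mean-CVaR model in AMPL. For readers without AMPL, the Rockafellar-Uryasev linear program behind mean-CVaR optimization can be sketched in Python with SciPy; the scenario returns and target below are synthetic assumptions, and the tail level 0.05 mirrors the CVaR0.05 level used in the study.

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_weights(returns, target, tail=0.05):
    """Rockafellar-Uryasev LP: minimize CVaR at the given tail level of the
    portfolio loss, subject to a minimum expected return and long-only
    weights summing to 1. `returns` has shape (scenarios, assets)."""
    s, n = returns.shape
    # Decision variables: [w (n), alpha (1), u (s)]; minimize alpha + mean(u)/tail.
    c = np.r_[np.zeros(n), 1.0, np.ones(s) / (tail * s)]
    # u_j >= loss_j - alpha, with loss_j = -r_j.w  ->  -r_j.w - alpha - u_j <= 0
    A_ub = np.hstack([-returns, -np.ones((s, 1)), -np.eye(s)])
    b_ub = np.zeros(s)
    # Expected return >= target  ->  -mean_returns.w <= -target
    A_ub = np.vstack([A_ub, np.r_[-returns.mean(axis=0), 0.0, np.zeros(s)]])
    b_ub = np.r_[b_ub, -target]
    A_eq = np.r_[np.ones(n), 0.0, np.zeros(s)].reshape(1, -1)   # sum of w = 1
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * s
    res = linprog(c, A_ub, b_ub, A_eq, [1.0], bounds=bounds)
    return res.x[:n]

rng = np.random.default_rng(1)
scenarios = rng.normal(0.001, 0.02, size=(500, 5))   # synthetic scenario returns
print(min_cvar_weights(scenarios, target=0.0005).round(3))
```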
  6. ASSUNTA MALAR PATRICK VINCENT, HASSILAH SALLEH
    MyJurnal
A wide range of studies has been conducted on deep learning to forecast time series data. However, very few studies have discussed the optimal number of hidden layers and of nodes in each hidden layer of the architecture. It is crucial to study these numbers, as they control the performance of the architecture. Apart from that, in the presence of an activation function, diverse computation between the hidden layers and the output layer can take place. Therefore, in this study, a multilayer perceptron (MLP) architecture is developed in Python to forecast time series data. The developed architecture is applied to the Apple Inc. stock price due to its volatile character. Using historical prices, the accuracy of the forecast is measured across different activation functions, numbers of hidden layers and sizes of data. The Keras deep learning library for Python is used to develop the MLP architecture. The developed model is then applied to different cases, namely different sizes of data, different activation functions, different numbers of hidden layers of up to nine layers, and different numbers of nodes in each hidden layer. The metrics mean squared error (MSE), mean absolute error (MAE) and root-mean-square error (RMSE) are employed to test the accuracy of the forecast. It is found that architectures using the rectified linear unit (ReLU) activation achieved the highest accuracy in every case and for every number of hidden layers. To conclude, the optimal number of hidden layers differs in every case, as there are other influencing factors.
    Matched MeSH terms: Software
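A minimal sketch of the kind of Keras MLP forecaster the abstract describes, using a sliding window over a univariate price series. The synthetic random-walk data, window length and layer sizes here are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def make_windows(series, window=10):
    """Turn a 1-D series into (samples, window) inputs and next-step targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return X, series[window:]

# Synthetic random-walk "price" series standing in for historical prices.
prices = np.cumsum(np.random.default_rng(0).normal(0, 1, 1000)) + 100.0
X, y = make_windows(prices)

# Two hidden layers with ReLU, echoing the best-performing configurations
# reported above; the layer sizes are arbitrary.
model = Sequential([
    Dense(32, activation="relu", input_shape=(X.shape[1],)),
    Dense(16, activation="relu"),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
print("in-sample MSE:", model.evaluate(X, y, verbose=0))
```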
  7. Kamal, Z.Z., Daud, A.H.M., Ashidi, M.I.N., Fadel, J.K.M.
    ASM Science Journal, 2007;1(2):87-100.
    MyJurnal
Accounting for as much as 25% to 35% of the development cost, software testing is an integral part of the software development lifecycle. Despite its importance, current software testing practice is still based on highly manual processes, from the generation of test cases (i.e. from specifications) up to the actual execution of the test. These manually generated tests are sometimes executed using ad hoc approaches, typically requiring the construction of a test driver for the particular application under test. In addition, test engineers are under pressure to test ever-increasing lines of code in order to meet market demand for more software functionality. While helpful testing tools and research prototypes have proliferated in the market, many of them do not adequately provide the right level of abstraction and automation required by test engineers. In order to facilitate and address some of the aforementioned issues, an automated testing tool called SFIT was developed based on Java® technology. This paper describes the development, implementation and evaluation of SFIT. Two case studies, involving the robustness assessment of an adder module and a Linda-based distributed shared memory implementation, are described to demonstrate the applicability of SFIT as a helpful automated testing tool.
    Matched MeSH terms: Software
  8. Melisa Anak Adeh, Mohd Ibrahim Shapiai, Ayman Maliha, Muhammad Hafiz Md Zaini
    MyJurnal
Nowadays, applications that track moving objects are commonly used in various areas, especially in computer vision. Many tracking algorithms have been introduced, and they are divided into three groups: generative trackers, discriminative trackers and hybrid trackers. One such method is the Tracking-Learning-Detection (TLD) framework, a hybrid tracker that combines generative and discriminative trackers. In TLD, the detector consists of three stages: patch variance, an ensemble classifier and a K-Nearest Neighbor classifier. In the second stage, the ensemble classifier depends on simple pixel comparisons and hence is likely to fail to offer a good generalization of the appearances of the target object during detection. In this paper, an Online-Sequential Extreme Learning Machine (OS-ELM) was used to replace the ensemble classifier in the TLD framework. Besides that, different types of Haar-like features were used for feature extraction instead of raw pixel values. The objectives of this study are to improve the second-stage classifier of the TLD detector by using Haar-like features as its input, and to obtain a more generalized detector by using an OS-ELM-based detector. The results showed that the proposed method performs better on Pedestrian 1 in terms of F-measure and also offers good Precision in four out of six videos.
    Matched MeSH terms: Software
  9. Maizon Mohd Darus, Haslinda Ibrahim, Sharmila Karim
    MATEMATIKA, 2017;33(1):113-118.
    MyJurnal
A new method to construct the distinct Hamiltonian circuits in complete graphs, called the Half Butterfly Method, is presented. The Half Butterfly Method uses the concept of isomorphism in developing the distinct Hamiltonian circuits. Some theoretical results obtained in the course of developing this method are also presented.
    Matched MeSH terms: Software
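For context on the preceding abstract: in the complete graph K_n there are (n-1)!/2 distinct Hamiltonian circuits, obtained by fixing a start vertex and identifying each circuit with its reversal. The brute-force enumeration below illustrates that count; it is not the Half Butterfly Method itself.

```python
from itertools import permutations

def distinct_hamiltonian_circuits(n):
    """Enumerate distinct Hamiltonian circuits of K_n: fix vertex 0 as the
    start and keep one member of each reversal pair. Count is (n-1)!/2."""
    circuits = []
    for perm in permutations(range(1, n)):
        if perm[0] < perm[-1]:           # discard the reversed duplicate
            circuits.append((0,) + perm + (0,))
    return circuits

for c in distinct_hamiltonian_circuits(4):
    print(c)                              # 3 circuits: (4-1)!/2 = 3
```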
  10. Nur Ashida Salim, Muhammad Azizi Kaprowi, Ahmad Asri Abd Samat
    MyJurnal
The Space Vector Pulse Width Modulation (SVPWM) method is widely used as a modulation technique to drive a three-phase inverter. It is an advanced, computationally intensive pulse width modulation (PWM) algorithm for the three-phase voltage source inverter. Compared with other PWM techniques, SVPWM is easier to implement and is thus the most preferred technique. A mathematical model for SVPWM was developed using MATLAB/Simulink software. In this paper, an interface between MATLAB/Simulink and the three-phase inverter using an Arduino Uno microcontroller is proposed, in which the Arduino Uno generates the SVPWM signals for a Permanent Magnet Synchronous Motor (PMSM). This work consists of software and hardware implementations. Simulation was done in MATLAB/Simulink to verify the effectiveness of the system and to measure the percentage of Total Harmonic Distortion (THD). The results show that the SVPWM technique is able to drive the three-phase inverter with the Arduino Uno.
    Matched MeSH terms: Software
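Per switching period, the SVPWM computation the abstract refers to reduces to locating the reference vector's sector and solving for the dwell times of the two adjacent active vectors. A minimal textbook-style sketch follows; it is not the paper's Simulink/Arduino implementation, and the example voltages are arbitrary.

```python
import math

def svpwm_dwell_times(v_ref, theta, v_dc, t_s):
    """Sector and dwell times T1, T2, T0 for a reference vector of magnitude
    v_ref at angle theta (rad), DC-link voltage v_dc, switching period t_s."""
    sector = int(theta // (math.pi / 3)) % 6 + 1     # sectors numbered 1..6
    alpha = theta % (math.pi / 3)                    # angle within the sector
    m = math.sqrt(3) * v_ref / v_dc                  # modulation index
    t1 = t_s * m * math.sin(math.pi / 3 - alpha)     # first adjacent active vector
    t2 = t_s * m * math.sin(alpha)                   # second adjacent active vector
    return sector, t1, t2, t_s - t1 - t2             # remainder goes to zero vectors

print(svpwm_dwell_times(v_ref=100.0, theta=math.radians(50), v_dc=400.0, t_s=1e-4))
```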
  11. Ahmed BS, Sahib MA, Gambardella LM, Afzal W, Zamli KZ
    PLoS One, 2016;11(11):e0166150.
    PMID: 27829025 DOI: 10.1371/journal.pone.0166150
Combinatorial test design is a test-planning technique that aims to reduce the number of test cases systematically by choosing a subset of the test cases based on combinations of input variables. The subset covers all possible combinations of a given strength and hence tries to match the effectiveness of the exhaustive set. This reduction mechanism has been used successfully in software testing research as t-way testing (where t indicates the interaction strength of the combinations). Potentially, other systems may exhibit many similarities with this approach; hence, it could form an emerging application in different areas of research. More recently, it has been applied successfully in a few research areas. In this paper, we explore the applicability of the combinatorial test design technique to the parameter design of a Fractional-Order Proportional-Integral-Derivative (FOPID) controller for an automatic voltage regulator (AVR) system. Throughout the paper, we justify this new application theoretically and practically through simulations. In addition, we report on first experiments indicating its practical use in this field. We design different algorithms and adapt other strategies to cover all the combinations with an optimum and effective test set. Our findings indicate that combinatorial test design can find the combinations that lead to an optimum design. Besides this, we also found that by increasing the strength of the combinations we can approach the optimum design: with only a 4-way combinatorial set, we obtain the effectiveness of an exhaustive test set. This significantly reduces the number of tests needed and thus leads to an approach that optimizes the design of parameters quickly.
    Matched MeSH terms: Software
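To make the t-way idea in the preceding abstract concrete, here is a small greedy generator for a 2-way (pairwise) covering array over a handful of parameters. This is a generic textbook strategy, not one of the paper's algorithms, and the parameter names and values are made up for illustration.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedily pick test cases until every value pair of every parameter
    pair is covered. `params` maps parameter name -> list of values."""
    names = list(params)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in params[a] for vb in params[b]}
    suite = []
    while uncovered:
        # Choose the full assignment that covers the most uncovered pairs.
        best = max(product(*params.values()),
                   key=lambda vals: sum(
                       ((names[i], vals[i]), (names[j], vals[j])) in uncovered
                       for i, j in combinations(range(len(names)), 2)))
        case = dict(zip(names, best))
        uncovered -= {((a, case[a]), (b, case[b]))
                      for a, b in combinations(names, 2)}
        suite.append(case)
    return suite

# Hypothetical controller-gain levels, just to show the reduction.
tests = pairwise_suite({"Kp": [0.5, 1.0, 2.0], "Ki": [0.1, 0.5], "Kd": [0.0, 0.2]})
print(len(tests), "tests instead of", 3 * 2 * 2)
```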
  12. Al-Bashiri H, Abdulgabber MA, Romli A, Kahtan H
    PLoS One, 2018;13(10):e0204434.
    PMID: 30286123 DOI: 10.1371/journal.pone.0204434
    This paper describes an approach for improving the accuracy of memory-based collaborative filtering, based on the technique for order of preference by similarity to ideal solution (TOPSIS) method. Recommender systems are used to filter the huge amount of data available online based on user-defined preferences. Collaborative filtering (CF) is a commonly used recommendation approach that generates recommendations based on correlations among user preferences. Although several enhancements have increased the accuracy of memory-based CF through the development of improved similarity measures for finding successful neighbors, there has been less investigation into prediction score methods, in which rating/preference scores are assigned to items that have not yet been selected by a user. A TOPSIS solution for evaluating multiple alternatives based on more than one criterion is proposed as an alternative to prediction score methods for evaluating and ranking items based on the results from similar users. The recommendation accuracy of the proposed TOPSIS technique is evaluated by applying it to various common CF baseline methods, which are then used to analyze the MovieLens 100K and 1M benchmark datasets. The results show that CF based on the TOPSIS method is more accurate than baseline CF methods across a number of common evaluation metrics.
    Matched MeSH terms: Software
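A compact sketch of the generic TOPSIS ranking step that the paper adapts for prediction: alternatives (here, candidate items) are scored on several criteria, normalized, compared against ideal and anti-ideal points, and ranked by relative closeness. The score matrix and weights below are made up for illustration; this is not the paper's exact CF integration.

```python
import numpy as np

def topsis(matrix, weights):
    """Rank alternatives (rows) on benefit criteria (columns) by relative
    closeness to the ideal solution. Returns closeness scores in [0, 1]."""
    # Vector-normalize each criterion, then apply the criterion weights.
    norm = matrix / np.linalg.norm(matrix, axis=0)
    v = norm * weights
    ideal, anti = v.max(axis=0), v.min(axis=0)       # best / worst per criterion
    d_pos = np.linalg.norm(v - ideal, axis=1)        # distance to ideal point
    d_neg = np.linalg.norm(v - anti, axis=1)         # distance to anti-ideal point
    return d_neg / (d_pos + d_neg)

# Four candidate items scored on three criteria (e.g. neighbor-derived ratings).
scores = np.array([[4.0, 3.5, 5.0],
                   [3.0, 4.0, 4.5],
                   [5.0, 2.5, 3.0],
                   [2.0, 4.5, 4.0]])
closeness = topsis(scores, weights=np.array([0.5, 0.3, 0.2]))
print(closeness.argsort()[::-1])   # item indices ranked best-first
```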
  13. Abdulazeez Uba Muhammad, Kassim Abdulrahman Abdullah, Waleed Fekry Faris
    MyJurnal
The most commonly applied approach in seating ergonomics is the concept that the seat must fit the sitter. An understanding of population anthropometry is necessary because, in the mass vehicle market, a single seat should fit a huge portion of the population. This research work proposes automotive seat fit parameters based on representative Nigerian anthropometric data, to ensure an optimum fit between vehicle seats and occupants, as well as to provide adequate accommodation. Anthropometric data of 863 Nigerians were captured, with special emphasis on the dimensions applicable to automotive seat design. A comparison was made between the data obtained and those of five other countries. The proposed dimensions include: seat cushion width (475 mm); seat cushion length (394 mm); seat height (340 mm); seat lateral location (583 mm); seat back height (480 mm); seat back width (427 mm); armrest height (246 mm); headrest height (703 mm); armrest surface length (345 mm); backrest width at thoracic level (524 mm); seat adjustment (186 mm); backrest width at lumbar level (475 mm); and distance between armrests (475 mm). A comparison was also made between the proposed dimensions and those recommended by four other scholars for other populations. Finally, an ergonomic automotive seat suitable for the Nigerian population was designed using AutoCAD 2016 software based on the proposed dimensions.
    Matched MeSH terms: Software
  14. Higman S, Dwivedi V, Nsaghurwe A, Busiga M, Sotter Rulagirwa H, Smith D, et al.
    Int J Health Plann Manage, 2019 Jan;34(1):e85-e99.
    PMID: 30182517 DOI: 10.1002/hpm.2634
    BACKGROUND: Enterprise Architecture (EA) integrates business and technical processes in health information systems (HIS). Low-income and middle-income countries (LMIC) use EA to combine management components with disease tracking and health care service monitoring. Using an EA approach differs by country, addressing specific needs.

    METHODS: Articles in this review referenced EA, were peer-reviewed or gray literature reports published in 2010 to 2016 in English, and were identified using PubMed, Scopus, Web of Science, and Google Scholar.

RESULTS: Fourteen articles described EA use in LMICs. India, Sierra Leone, South Africa, Mozambique, and Rwanda reported building the system to meet country needs and implement a cohesive HIS framework. Jordan and Taiwan focused on specific HIS aspects, ie, disease surveillance and electronic medical records. Five studies informed the context. The Millennium Villages Project employed a "uniform but contextualized" approach to guide systems in 10 countries; Malaysia, Indonesia, and Tanzania used interviews and mapping of existing components to improve HIS; and Namibia used Activity Theory to identify technology-associated activities to better understand EA frameworks. South Africa, Burundi, Kenya, and Democratic Republic of Congo used EA to move from paper-based to electronic systems.

    CONCLUSIONS: Four themes emerged: the importance of multiple sectors and data sources, the need for interoperability, the ability to incorporate system flexibility, and the desirability of open group models, data standards, and software. Themes mapped to EA frameworks and operational components and to health system building blocks and goals. Most articles focused on processes rather than outcomes, as countries are engaged in implementation.

    Matched MeSH terms: Software Design
  15. Din IU, Kim BS, Hassan S, Guizani M, Atiquzzaman M, Rodrigues JJPC
    Sensors (Basel), 2018 Nov 15;18(11).
    PMID: 30445723 DOI: 10.3390/s18113957
Information Centric Network (ICN) is expected to be the favorable deployable future Internet paradigm. ICN intends to replace the current IP-based model with a name-based, content-centric model, as it aims at providing better security, scalability, and content distribution. However, it is a challenging task to conceive how ICN can be linked with the other most emerging paradigm, i.e., the Vehicular Ad hoc Network (VANET). In this article, we present an overview of the ICN-based VANET approach in line with its contributions and research challenges. In addition, the connectivity issues of the vehicular ICN model are presented together with some other emerging paradigms, such as Software Defined Network (SDN), Cloud, and Edge computing. Moreover, some ICN-based VANET research opportunities, in terms of security, mobility, routing, naming, caching, and fifth generation (5G) communications, are also covered at the end of the paper.
    Matched MeSH terms: Software
  16. Lo SK, Liew CS, Tey KS, Mekhilef S
    Sensors (Basel), 2019 Oct 09;19(20).
    PMID: 31600904 DOI: 10.3390/s19204354
The advancement of the Internet of Things (IoT) as a solution in diverse application domains has nurtured the expansion in the number of devices and data volume. Multiple platforms and protocols have been introduced and have resulted in high device ubiquity and heterogeneity. However, currently available IoT architectures face challenges in accommodating the diversity of IoT devices and services operating under different operating systems and protocols. In this paper, we propose a new IoT architecture that utilizes the component-based design approach to create and define loosely-coupled, standalone but interoperable service components for IoT systems. Furthermore, a data-driven feedback function is included as a key feature of the proposed architecture to enable a greater degree of system automation and to reduce the dependency on human intervention for data analysis and decision-making. The proposed architecture aims to tackle device interoperability, system reusability and the lack of data-driven functionality. Using a real-world use case on a proof-of-concept prototype, we examined the viability and usability of the proposed architecture.
    Matched MeSH terms: Software
  17. Naderipour A, Abdul-Malek Z, Ramachandaramurthy VK, Kalam A, Miveh MR
    ISA Trans, 2019 Nov;94:352-369.
    PMID: 31078293 DOI: 10.1016/j.isatra.2019.04.025
This paper proposes an improved hierarchical control strategy, consisting of a primary and a secondary layer, for a three-phase 4-wire microgrid under unbalanced and nonlinear load conditions. The primary layer comprises a multi-loop control strategy to provide balanced output voltages, a harmonic compensator to reduce the total harmonic distortion (THD), and a droop-based scheme to achieve accurate power sharing. At the secondary control layer, a reactive power compensator and a frequency restoration loop are designed to improve the accuracy of reactive power sharing and to restore the frequency deviation, respectively. Simulation studies and practical tests are carried out using the DIgSILENT PowerFactory software and laboratory testing to verify the effectiveness of the control strategy in both islanded and grid-connected modes. Zero reactive power sharing error and zero frequency steady-state error give this control strategy an edge over the conventional control scheme. Furthermore, the proposed scheme presents outstanding voltage control performance, such as fast transient response and low voltage THD. The superiority of the proposed control strategy over the conventional filter-based control scheme is confirmed by a decrease of 2 line cycles in the transient response. Additionally, the voltage THD in islanded mode is reduced from above 5.1% to below 2.7% with the proposed control strategy under nonlinear load conditions. The current THD is also reduced from above 21% to below 2.4% at the connection point of the microgrid with the offered control scheme in grid-connected mode.
    Matched MeSH terms: Software
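The THD figures quoted above follow the standard definition: the RMS of the harmonic content relative to the fundamental. A minimal FFT-based computation on a synthetic waveform (not the paper's DIgSILENT model) might look like this:

```python
import numpy as np

def thd_percent(signal, fs, f0):
    """Total harmonic distortion of `signal` (sampled at `fs` Hz) relative to
    the fundamental at `f0` Hz; assumes an integer number of cycles."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    bin_of = lambda f: np.argmin(np.abs(freqs - f))
    fundamental = spectrum[bin_of(f0)]
    harmonics = [spectrum[bin_of(k * f0)] for k in range(2, 21) if k * f0 < fs / 2]
    return 100 * np.sqrt(sum(h**2 for h in harmonics)) / fundamental

fs, f0 = 10_000, 50
t = np.arange(0, 0.2, 1 / fs)                        # 10 full cycles of 50 Hz
v = (np.sin(2*np.pi*f0*t) + 0.05*np.sin(2*np.pi*3*f0*t)
     + 0.03*np.sin(2*np.pi*5*f0*t))                  # 5% third, 3% fifth harmonic
print(f"THD = {thd_percent(v, fs, f0):.2f} %")       # about 5.83 %
```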
  18. Mohd Agos Salim Nasir, Ahmad Izani Md Ismail
    Sains Malaysiana, 2013;42:341-346.
A high-order compact finite difference scheme on a uniform Cartesian grid is developed for the Goursat problem. The basic idea of high-order compact schemes is to find compact approximations to the derivative terms by centrally differentiating the governing equations. Our compact scheme approximates the derivative terms by involving higher-order terms while reducing the number of grid points. The compact finite difference scheme is given for the general form of the Goursat problem on a uniform domain, and its performance is illustrated on a linear problem. Numerical experiments have been conducted with the new scheme and encouraging results have been obtained. With the aid of computational software, the scheme was programmed to determine the relative errors for the linear Goursat problem.
    Matched MeSH terms: Software
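For orientation on the preceding abstract: the Goursat problem prescribes u_xy = f(x, y, u) with data along the two characteristics x = 0 and y = 0. The classical second-order marching scheme below (not the authors' higher-order compact scheme) shows the structure such solvers share; it is tested on the linear problem u_xy = u with boundary data e^x and e^y, whose exact solution is e^(x+y).

```python
import numpy as np

def goursat_linear(n=40, L=1.0):
    """March u_xy = u on [0, L]^2 with u(x, 0) = e^x and u(0, y) = e^y,
    using the second-order cell-average scheme. Returns the max error
    against the exact solution e^(x+y)."""
    h = L / n
    x = np.linspace(0, L, n + 1)
    u = np.zeros((n + 1, n + 1))
    u[:, 0] = np.exp(x)                  # data on the characteristic y = 0
    u[0, :] = np.exp(x)                  # data on the characteristic x = 0
    c = h * h / 4                        # cell integral of u approximated by
    for i in range(n):                   # the average of the four corners
        for j in range(n):
            rhs = (u[i + 1, j] + u[i, j + 1] - u[i, j]
                   + c * (u[i, j] + u[i + 1, j] + u[i, j + 1]))
            u[i + 1, j + 1] = rhs / (1 - c)
    exact = np.exp(x[:, None] + x[None, :])
    return np.abs(u - exact).max()

print(f"max abs error: {goursat_linear():.2e}")   # shrinks ~4x when n doubles
```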
  19. NUR IZZI MD.YUSOFF, MOHD ROSLI HAININ, MOUNIER D, AIREY GD
    Sains Malaysiana, 2013;42:1647-1654.
According to the classical theory of viscoelasticity, a linear viscoelastic (LVE) function can be converted into another viscoelastic function even though they emphasize different information. In this study, dynamic tests were conducted on different conventional penetration grade bitumens using a dynamic shear rheometer (DSR) in the LVE region. The results showed that dynamic data in the frequency domain can be converted into time domain functions using a numerical technique. This was done with the aid of the non-linear regularization (NLREG) computer program. NLREG is a computer program for solving nonlinear ill-posed problems and is based on the non-linear Tikhonov regularization method. The data interconversion equation is found suitable for converting conventional penetration grade bitumens from the frequency domain into the time domain.
    Matched MeSH terms: Software
  20. Govindasamy P, Del Carmen Salazar M, Lerner J, Green KE
    Front Psychol, 2019;10:1363.
    PMID: 31258502 DOI: 10.3389/fpsyg.2019.01363
    This manuscript reports results of an empirical assessment of a newly developed measure designed to assess apprentice teaching proficiency. In this study, Many Facets Rasch model software was used to evaluate the psychometric quality of the Framework for Equitable and Effective Teaching (FEET), a rater-mediated assessment. The analysis focused on examining variability in (1) supervisor severity in ratings, (2) level of item difficulty, (3) time of assessment, and (4) teacher apprentice proficiency. Added validity evidence showed moderate correlation with self-reports of apprentice teaching. The findings showed support for the FEET as yielding reliable ratings with a need for added rater training.
    Matched MeSH terms: Software