Displaying all 8 publications

  1. Madni SHH, Abd Latiff MS, Abdullahi M, Abdulhamid SM, Usman MJ
    PLoS ONE, 2017;12(5):e0176321.
    PMID: 28467505 DOI: 10.1371/journal.pone.0176321
    Cloud computing infrastructure is suitable for meeting the computational needs of large task sets. Optimal task scheduling in a cloud computing environment has been proven to be an NP-complete problem, hence the need for heuristic methods. Several heuristic algorithms have been developed to address this problem, but choosing the appropriate algorithm for a task assignment problem of a particular nature is difficult because the methods were developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments, with the aim of comparing their performance in terms of cost, degree of imbalance, makespan, and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min, and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing (see the sketch after this entry).
    Matched MeSH terms: Cloud Computing*
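    The Min-min heuristic named above is simple enough to sketch. A minimal Python illustration follows, assuming the standard expected-time-to-compute (ETC) formulation in which etc[i][j] gives task i's execution time on VM j; the paper's CloudSim setup is not reproduced here.
```python
# Min-min heuristic: repeatedly pick the task whose earliest possible
# completion time is smallest and assign it to the VM achieving it.
def min_min(etc):
    num_tasks, num_vms = len(etc), len(etc[0])
    ready = [0.0] * num_vms            # time at which each VM becomes free
    unassigned = set(range(num_tasks))
    schedule = {}                      # task index -> VM index
    while unassigned:
        ct, task, vm = min((ready[j] + etc[i][j], i, j)
                           for i in unassigned for j in range(num_vms))
        schedule[task] = vm
        ready[vm] = ct
        unassigned.remove(task)
    return schedule, max(ready)        # assignment and its makespan

# Example: 3 tasks on 2 heterogeneous VMs.
etc = [[4.0, 8.0],
       [2.0, 3.0],
       [6.0, 5.0]]
print(min_min(etc))                    # -> ({1: 0, 2: 1, 0: 0}, 6.0)
```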
  2. Aldeen YA, Salleh M, Aljeroudi Y
    J Biomed Inform, 2016 Aug;62:107-16.
    PMID: 27369566 DOI: 10.1016/j.jbi.2016.06.011
    Cloud computing (CC) is a service-based delivery model offering vast processing power and data storage over connected communication channels. It has given strong technological impetus to the internet-mediated IT industry, where users can easily share private data for further analysis and mining, and user-friendly CC services make it economical to deploy diverse applications. At the same time, easy data sharing has invited phishing attacks and malware-assisted security threats. Privacy-sensitive applications such as cloud-hosted health services, built for their economic and operational benefits, require enhanced security, so strong cyberspace security and mitigation against phishing attacks are mandatory to protect overall data privacy. Typically, application datasets are anonymized to give owners better privacy, but without providing full secrecy guarantees for newly added records. Some proposed techniques address this issue by re-anonymizing the dataset from scratch, yet full privacy protection over incremental datasets on CC remains unachieved, and distributing large dataset volumes across multiple storage nodes further limits privacy preservation. In this view, we propose a new anonymization technique to attain better privacy protection with high data utility over distributed, incremental datasets on CC. The data privacy preservation and improved confidentiality requirements are demonstrated through performance evaluation (see the illustrative sketch after this entry).
    Matched MeSH terms: Cloud Computing*
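    The abstract does not name its anonymization model, but the incremental problem it describes can be illustrated with k-anonymity, a common baseline in this literature: every combination of quasi-identifier values must be shared by at least k records, and appending new records can silently break that property. A hypothetical sketch (column names invented for illustration):
```python
from collections import Counter

def violating_groups(records, qi_columns, k):
    """Return quasi-identifier combinations shared by fewer than k records."""
    counts = Counter(tuple(r[c] for c in qi_columns) for r in records)
    return [qi for qi, n in counts.items() if n < k]

# Three generalized records satisfy 2-anonymity on (age, zip)...
base = [{"age": "30-39", "zip": "120**"} for _ in range(3)]
# ...but one appended record with a fresh combination breaks it, which is
# why an incremental scheme must re-generalize the new record rather than
# re-anonymize the whole dataset from scratch.
new = [{"age": "40-49", "zip": "121**"}]
print(violating_groups(base + new, ["age", "zip"], k=2))
# -> [('40-49', '121**')]
```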
  3. Al-Absi AA, Al-Sammarraie NA, Shaher Yafooz WM, Kang DK
    Biomed Res Int, 2018;2018:7501042.
    PMID: 30417014 DOI: 10.1155/2018/7501042
    MapReduce is the preferred cloud computing framework for large-scale data analysis and application processing. MapReduce frameworks currently in place suffer performance degradation because they adopt sequential processing approaches with little modification and thus underutilize cloud resources. To overcome this drawback and reduce costs, we introduce a Parallel MapReduce (PMR) framework in this paper. We design a novel parallel execution strategy for Map and Reduce worker nodes: Map and Reduce functions are executed on the multicore environments available on computing nodes, enabling further performance improvement and efficient utilization of cloud resources (see the toy illustration after this entry). We explain in detail the makespan modeling and the working principle of the PMR framework. The performance of PMR is compared with Hadoop through experiments on three biomedical applications. Experiments on the BLAST, CAP3, and DeepBind applications report makespan reductions of 38.92%, 18.00%, and 34.62%, respectively, for the PMR framework against Hadoop. The results show that the proposed PMR cloud computing platform is robust, cost-effective, and scalable, sufficiently supporting diverse applications on public and private cloud platforms, and that the theoretical makespan model agrees well with the experimental values.
    Matched MeSH terms: Cloud Computing
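    The PMR idea of exploiting the cores within each worker node can be mimicked with Python's standard multiprocessing module. This is only a toy word-count analogue of parallel Map workers followed by a merge-style Reduce, not the authors' Hadoop-compatible framework:
```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_chunk(lines):
    """Map task: count words in one input split."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def merge(a, b):
    """Reduce step: merge two partial word counts."""
    a.update(b)
    return a

if __name__ == "__main__":
    data = ["cloud map reduce", "map reduce cloud", "parallel map"]
    splits = [data[:2], data[2:]]          # one split per Map worker
    with Pool(processes=2) as pool:        # Map tasks run on separate cores
        partials = pool.map(map_chunk, splits)
    print(reduce(merge, partials))
```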
  4. Yıldırım Ö, Pławiak P, Tan RS, Acharya UR
    Comput Biol Med, 2018 Nov 1;102:411-420.
    PMID: 30245122 DOI: 10.1016/j.compbiomed.2018.09.009
    This article presents a new deep learning approach for cardiac arrhythmia (17 classes) detection based on long-duration electrocardiography (ECG) signal analysis. Cardiovascular disease prevention is one of the most important tasks of any health care system, as about 50 million people worldwide are at risk of heart disease. Although automatic analysis of ECG signals is very popular, current methods are not satisfactory. The goal of our research was to design a new deep learning method to classify cardiac arrhythmias efficiently and quickly. The described research is based on 1000 ECG signal fragments from the MIT-BIH Arrhythmia database for one lead (MLII) from 45 persons. An approach based on the analysis of 10-s ECG signal fragments (rather than single QRS complexes) is applied, requiring on average 13 times fewer classifications/analyses. A complete end-to-end structure was designed in place of the hand-crafted feature extraction and selection used in traditional methods (see the architecture sketch after this entry). Our main contribution is a new 1D Convolutional Neural Network model (1D-CNN). The proposed method is 1) efficient, 2) fast (real-time classification), 3) non-complex, and 4) simple to use (feature extraction, selection, and classification combined in one stage). The deep 1D-CNN achieved an overall recognition accuracy of 91.33% across the 17 cardiac arrhythmia classes, with a classification time of 0.015 s per sample. Compared with current research, our results are among the best to date, and our solution can be implemented in mobile devices and cloud computing.
    Matched MeSH terms: Cloud Computing
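    The published architecture is in the paper; the sketch below only illustrates the end-to-end 1D-CNN idea in Keras. It assumes 10-s single-lead fragments at MIT-BIH's 360 Hz sampling rate (3600 samples) and 17 classes, with layer sizes chosen for illustration rather than taken from the article:
```python
import tensorflow as tf

SAMPLES = 3600   # 10 s at 360 Hz (MIT-BIH sampling rate)
CLASSES = 17     # arrhythmia classes

# End-to-end model: convolutions learn the features directly from the
# raw signal, replacing hand-crafted feature extraction and selection.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SAMPLES, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=13, activation="relu"),
    tf.keras.layers.MaxPooling1D(3),
    tf.keras.layers.Conv1D(32, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(3),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```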
  5. Abdulhamid SM, Abd Latiff MS, Abdul-Salaam G, Hussain Madni SH
    PLoS ONE, 2016;11(7):e0158102.
    PMID: 27384239 DOI: 10.1371/journal.pone.0158102
    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scheduling scientific applications in the cloud computing environment is an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to the challenges of application scheduling in the cloud, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) is presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable improvement in makespan, ranging from 14.44% to 46.41% (see the fitness-evaluation sketch after this entry), and a significant reduction in the time taken to securely schedule applications, measured as response time. In view of these results, the proposed technique provides a better-quality scheduling solution for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA), and Ant Colony Optimization (ACO) scheduling techniques.
    Matched MeSH terms: Cloud Computing*
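    The GBLCA internals are in the paper, but any such scheduling metaheuristic repeatedly evaluates candidate solutions with a fitness function, typically the makespan of a task-to-VM assignment. A minimal sketch of that evaluation, assuming CloudSim-style units (task lengths in instructions, VM capacities in MIPS):
```python
def makespan(assignment, task_lengths, vm_mips):
    """Makespan of a candidate schedule: the latest VM finishing time.

    assignment[i] is the VM index that task i is mapped to.
    """
    finish = [0.0] * len(vm_mips)
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_mips[vm]
    return max(finish)

tasks = [4000, 8000, 2000, 6000]   # instruction counts (assumed)
vms = [1000, 2000]                 # MIPS ratings (assumed)
print(makespan([0, 1, 0, 1], tasks, vms))   # -> 7.0
```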
  6. Nagrath V, Morel O, Malik A, Saad N, Meriaudeau F
    Springerplus, 2015;4:103.
    PMID: 25763310 DOI: 10.1186/s40064-015-0810-4
    The dot-com bubble burst in the year 2000, followed by a swift movement towards resource virtualization and the cloud computing business model. Cloud computing emerged not as a new form of computing or network technology but as a remoulding of existing technologies to suit a new business model. Cloud robotics is the adaptation of cloud computing ideas to robotic applications. Current efforts in cloud robotics focus on developing robots that utilize the computing and service infrastructure of the cloud, without debating the underlying business model. HTM5 is an OMG MDA-based meta-model for agent-oriented development of cloud robotic systems. The trade view of HTM5 promotes peer-to-peer trade among software agents: HTM5 agents represent various cloud entities and implement their business logic in cloud interactions, and trade in a peer-to-peer cloud robotic system is based on relationships and contracts among several agent subsets. Electronic Institutions are associations of heterogeneous intelligent agents which interact with each other following predefined norms. In Dynamic Electronic Institutions (DEIs), the formation, reformation, and dissolution of institutions is automated, leading to run-time adaptation in groups of agents. DEIs bring order and group intellect to agent-oriented cloud robotic ecosystems. This article presents DEI implementations through the HTM5 methodology.
    Matched MeSH terms: Cloud Computing
  7. Shukla S, Hassan MF, Khan MK, Jung LT, Awang A
    PLoS ONE, 2019;14(11):e0224934.
    PMID: 31721807 DOI: 10.1371/journal.pone.0224934
    Fog computing (FC) is an evolving computing technology that operates in a distributed environment, aiming to bring cloud computing features close to edge devices. The approach is expected to fulfill the minimum latency requirements of healthcare Internet-of-Things (IoT) devices, which generate large volumes of healthcare data. This large data volume creates high traffic that causes network congestion and high latency, and the increase in round-trip delay owing to large data transmissions and large hop counts between IoT devices and cloud servers renders healthcare data meaningless and inadequate for end-users. Time-sensitive healthcare applications require real-time data, and traditional cloud servers cannot fulfill the minimum latency demands of healthcare IoT devices and end-users. Therefore, communication latency, computation latency, and network latency must all be reduced for IoT data transmission. FC moves the storage, processing, and analysis of data from cloud computing to the network edge to reduce this latency. A novel solution is proposed herein: an analytical model and a hybrid fuzzy-based reinforcement learning algorithm in an FC environment, aimed at reducing latency among healthcare IoT devices, end-users, and cloud servers. The proposed intelligent FC analytical model and algorithm combine a fuzzy inference system with reinforcement learning and neural network evolution strategies for data packet allocation and selection in an IoT-FC environment (see the toy illustration after this entry). The approach is tested using the iFogSim simulator (NetBeans) and Spyder (Python), and the results indicate better performance than existing methods.
    Matched MeSH terms: Cloud Computing*
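    The paper's hybrid is far richer, but its flavor can be hinted at with a toy: latency observations are fuzzified into a degree of "low latency", and that degree serves as the reward for a bandit-style learner choosing between a fog and a cloud node per packet. All latency figures and the reward shaping below are illustrative assumptions:
```python
import random

def mu_low(latency_ms, lo=10.0, hi=100.0):
    """Fuzzy membership of 'low latency': 1 at <= lo ms, 0 at >= hi ms."""
    return max(0.0, min(1.0, (hi - latency_ms) / (hi - lo)))

ACTIONS = ["fog", "cloud"]
LATENCY = {"fog": lambda: random.gauss(20, 5),      # assumed ms
           "cloud": lambda: random.gauss(120, 20)}  # assumed ms
q = {a: 0.0 for a in ACTIONS}   # value estimate per destination
alpha, eps = 0.1, 0.1           # learning rate, exploration rate

for _ in range(2000):
    a = random.choice(ACTIONS) if random.random() < eps else max(q, key=q.get)
    reward = mu_low(LATENCY[a]())       # fuzzified latency as reward
    q[a] += alpha * (reward - q[a])     # stateless Q-style update

print(q)   # the fog node should win for latency-sensitive traffic
```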
  8. Wang LY, Lew SL, Lau SH, Leow MC
    Heliyon, 2019 Jun;5(6):e01788.
    PMID: 31198866 DOI: 10.1016/j.heliyon.2019.e01788
    In this ever-progressive digital era, conventional e-learning methods have become inadequate for the requirements of upgraded learning processes, especially in higher education. E-learning adopting cloud computing can transform e-learning into a flexible, shareable, content-reusable, and scalable learning methodology. Although plentiful Cloud e-learning frameworks have been proposed in the literature, limited research has studied the usability factors predicting continuance intention to use Cloud e-learning applications. In this study, five usability factors, namely Computer Self-Efficacy (CSE), Enjoyment (E), Perceived Ease of Use (PEU), Perceived Usefulness (PU), and User Perception (UP), were identified for factor analysis. All five independent variables were hypothesized to be positively associated with the dependent variable, Continuance Intention (CI). A survey was conducted on 170 IT students at a private university in Malaysia; the students were given one trimester to experience the usability of a Cloud e-learning application. A questionnaire consisting of thirty questions was formulated and used as the instrument to analyse the usability factors towards continuance intention, and the collected data were analysed using SMARTPLS 3.0. The results show that computer self-efficacy and enjoyment, as intrinsic motivations, significantly predict continuance intention, while perceived ease of use, perceived usefulness, and user perception were insignificant. This outcome implies that computer self-efficacy and enjoyment significantly affect students' willingness to continue using a Cloud e-learning application in their studies. The discussions and implications of this study are vital for researchers and practitioners of educational technologies in higher education.
    Matched MeSH terms: Cloud Computing