Displaying publications 1 - 20 of 25 in total

  1. Nagrath V, Morel O, Malik A, Saad N, Meriaudeau F
    Springerplus, 2015;4:103.
    PMID: 25763310 DOI: 10.1186/s40064-015-0810-4
    The dot-com bubble burst in the year 2000, followed by a swift movement towards resource virtualization and the cloud computing business model. Cloud computing emerged not as a new form of computing or network technology but as a mere remoulding of existing technologies to suit a new business model. Cloud robotics is understood as the adaptation of cloud computing ideas for robotic applications. Current efforts in cloud robotics stress developing robots that utilize the computing and service infrastructure of the cloud, without debating the underlying business model. HTM5 is an OMG MDA-based meta-model for agent-oriented development of cloud robotic systems. The trade view of HTM5 promotes peer-to-peer trade amongst software agents. HTM5 agents represent various cloud entities and implement their business logic on cloud interactions. Trade in a peer-to-peer cloud robotic system is based on relationships and contracts amongst several agent subsets. Electronic Institutions are associations of heterogeneous intelligent agents which interact with each other following predefined norms. In Dynamic Electronic Institutions (DEIs), the process of formation, reformation and dissolution of institutions is automated, leading to run-time adaptations in groups of agents. DEIs in agent-oriented cloud robotic ecosystems bring order and group intellect. This article presents DEI implementations through the HTM5 methodology.
    Matched MeSH terms: Cloud Computing
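To make the peer-to-peer trade idea in entry 1 concrete, the following minimal Python sketch shows two agents exchanging a service contract. The class names, services, and prices are invented for illustration and are not part of the HTM5 meta-model or the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Contract:
    provider: str
    consumer: str
    service: str
    price: float

@dataclass
class Agent:
    name: str
    services: dict = field(default_factory=dict)   # service name -> asking price
    contracts: list = field(default_factory=list)

    def request(self, other, service, budget):
        """Strike a peer-to-peer deal if the provider offers the service within budget."""
        price = other.services.get(service)
        if price is not None and price <= budget:
            deal = Contract(other.name, self.name, service, price)
            self.contracts.append(deal)
            other.contracts.append(deal)
            return deal
        return None

robot = Agent("robot-17")
mapper = Agent("map-service", services={"slam-map": 4.0})
print(robot.request(mapper, "slam-map", budget=5.0))
```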
  2. Aldeen YA, Salleh M, Aljeroudi Y
    J Biomed Inform, 2016 08;62:107-16.
    PMID: 27369566 DOI: 10.1016/j.jbi.2016.06.011
    Cloud computing (CC) is a magnificent service-based delivery model with gigantic computer processing power and data storage across connected communication channels. It has imparted an overwhelming technological impetus to the internet (web) mediated IT industry, where users can easily share private data for further analysis and mining. Furthermore, user-friendly CC services enable sundry applications to be deployed economically. Meanwhile, simple data sharing has impelled various phishing attacks and malware-assisted security threats. Some privacy-sensitive applications, such as health services on the cloud, which are built with several economic and operational benefits, necessitate enhanced security. Thus, absolute cyberspace security and mitigation against phishing blitzes became mandatory to protect overall data privacy. Typically, datasets from diverse applications are anonymized to give owners better privacy, without providing all secrecy requirements for newly added records. Some proposed techniques have addressed this issue by re-anonymizing the datasets from scratch. The utmost privacy protection over incremental datasets on CC is far from being achieved. Certainly, the distribution of huge data volumes across multiple storage nodes limits privacy preservation. In this view, we propose a new anonymization technique to attain better privacy protection with high data utility over distributed and incremental datasets on CC. The proficiency of data privacy preservation and improved confidentiality requirements is demonstrated through performance evaluation.
    Matched MeSH terms: Cloud Computing*
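A minimal sketch of the incremental-anonymization problem discussed in entry 2: a k-anonymity check over quasi-identifiers shows how newly appended records can break the guarantee unless they are generalised too. The records, quasi-identifiers, and value of k are illustrative assumptions, not the authors' technique.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True if every quasi-identifier combination appears at least k times."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

# Toy records with generalised (age-range, zip-prefix) quasi-identifiers.
dataset = [
    {"age": "30-39", "zip": "471**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "471**", "diagnosis": "asthma"},
    {"age": "40-49", "zip": "472**", "diagnosis": "diabetes"},
    {"age": "40-49", "zip": "472**", "diagnosis": "flu"},
]
print(is_k_anonymous(dataset, ("age", "zip"), k=2))   # True

# An incremental batch can break the guarantee without re-anonymising everything:
dataset.append({"age": "50-59", "zip": "473**", "diagnosis": "flu"})
print(is_k_anonymous(dataset, ("age", "zip"), k=2))   # False -> new records need generalising
```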
  3. Abdulhamid SM, Abd Latiff MS, Abdul-Salaam G, Hussain Madni SH
    PLoS One, 2016;11(7):e0158102.
    PMID: 27384239 DOI: 10.1371/journal.pone.0158102
    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scheduling scientific applications in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement in makespan, ranging between 14.44% and 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, suitable for scientific application task execution in the cloud computing environment, than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.
    Matched MeSH terms: Cloud Computing*
  4. Abdullahi M, Ngadi MA
    PLoS One, 2016;11(6):e0158229.
    PMID: 27348127 DOI: 10.1371/journal.pone.0158229
    Cloud computing has attracted significant attention from the research community because of the rapid rate of migration of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of the easier deployment of application services. Tasks are submitted to cloud datacenters to be processed on a pay-as-you-go basis. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of the task scheduling problem has been shown to be NP-complete; hence, finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS), in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses few parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding local exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), reducing the makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
    Matched MeSH terms: Cloud Computing
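The core of the SASOS hybrid in entry 4 is the simulated-annealing acceptance rule applied to candidate schedules. The toy sketch below anneals a random task-to-VM assignment towards a lower makespan; the task lengths, VM speeds, and cooling schedule are illustrative assumptions, and the SOS operators themselves are omitted.

```python
import math
import random

# Toy problem: task lengths (MI) and VM speeds (MIPS) are illustrative assumptions.
TASKS = [400, 250, 900, 120, 600, 300]
VM_SPEEDS = [500, 750, 1000]

def makespan(schedule):
    """Completion time of the busiest VM for a task->VM assignment."""
    load = [0.0] * len(VM_SPEEDS)
    for task, vm in zip(TASKS, schedule):
        load[vm] += task / VM_SPEEDS[vm]
    return max(load)

def sa_accept(current, candidate, temperature):
    """Metropolis criterion: always accept improvements, sometimes accept worse moves."""
    delta = makespan(candidate) - makespan(current)
    return delta < 0 or random.random() < math.exp(-delta / temperature)

def anneal(iterations=500, t0=1.0, cooling=0.99):
    schedule = [random.randrange(len(VM_SPEEDS)) for _ in TASKS]
    temperature = t0
    for _ in range(iterations):
        candidate = schedule[:]
        candidate[random.randrange(len(TASKS))] = random.randrange(len(VM_SPEEDS))
        if sa_accept(schedule, candidate, temperature):
            schedule = candidate
        temperature *= cooling
    return schedule, makespan(schedule)

if __name__ == "__main__":
    best, span = anneal()
    print("schedule:", best, "makespan:", round(span, 3))
```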
  5. Madni SHH, Abd Latiff MS, Abdullahi M, Abdulhamid SM, Usman MJ
    PLoS One, 2017;12(5):e0176321.
    PMID: 28467505 DOI: 10.1371/journal.pone.0176321
    Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in the cloud computing environment has been proved to be an NP-complete problem, hence the need for heuristic methods. Several heuristic algorithms have been developed and used to address this problem, but choosing the appropriate algorithm for a task assignment problem of a particular nature is difficult, since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments, with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
    Matched MeSH terms: Cloud Computing*
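Of the six heuristics compared in entry 5, Min-min is the easiest to illustrate: it repeatedly schedules the task with the smallest minimum completion time. The expected-time-to-compute (ETC) matrix below is an illustrative assumption, not data from the paper.

```python
ETC = [  # ETC[task][vm] = expected execution time of the task on that VM (illustrative)
    [14.0, 16.0, 9.0],
    [13.0, 19.0, 18.0],
    [11.0, 13.0, 19.0],
    [13.0, 8.0, 17.0],
]

def min_min(etc):
    n_tasks, n_vms = len(etc), len(etc[0])
    ready = [0.0] * n_vms                  # current ready time of each VM
    unscheduled = set(range(n_tasks))
    assignment = {}
    while unscheduled:
        # Minimum completion time over every remaining (task, vm) pair.
        completion, task, vm = min(
            (ready[v] + etc[t][v], t, v)
            for t in unscheduled for v in range(n_vms)
        )
        assignment[task] = vm
        ready[vm] = completion
        unscheduled.remove(task)
    return assignment, max(ready)          # task->VM mapping and makespan

if __name__ == "__main__":
    mapping, span = min_min(ETC)
    print("assignment:", mapping, "makespan:", span)
```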
  6. Ahmed AA, Xue Li C
    J Forensic Sci, 2018 Jan;63(1):112-121.
    PMID: 28397244 DOI: 10.1111/1556-4029.13506
    A cloud storage service allows users to store their data online, so that they can remotely access, maintain, manage, and back up data from anywhere via the Internet. Although helpful, this storage creates a challenge for digital forensic investigators and practitioners in collecting, identifying, acquiring, and preserving evidential data. This study proposes an investigation scheme for analyzing data remnants and determining probative artifacts in a cloud environment. Using pCloud as a case study, this research collected the data remnants available on end-user device storage following the storing, uploading, and accessing of data in the cloud storage. Data remnants are collected from several sources, including client software files, directory listings, prefetch, registry, network PCAP, browser, memory, and link files. Results demonstrate that the collected remnant data are beneficial in determining a sufficient number of artifacts about the investigated cybercrime.
    Matched MeSH terms: Cloud Computing
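A small, generic helper in the spirit of the evidence-preservation step in entry 6: hashing every collected remnant file so its integrity can be verified later. The directory name is an assumption for the sketch; the paper's actual acquisition procedure is more involved.

```python
import hashlib
from pathlib import Path

# Hypothetical folder holding the acquired remnant files (client files, prefetch,
# registry exports, PCAPs, etc.); adjust to the actual acquisition location.
EVIDENCE_DIR = Path("collected_remnants")

def hash_artifacts(directory):
    """Yield (file name, SHA-256 digest) pairs for every collected artifact."""
    for path in sorted(directory.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            yield path.name, digest

if __name__ == "__main__":
    for name, digest in hash_artifacts(EVIDENCE_DIR):
        print(f"{digest}  {name}")
```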
  7. Yıldırım Ö, Pławiak P, Tan RS, Acharya UR
    Comput Biol Med, 2018 11 01;102:411-420.
    PMID: 30245122 DOI: 10.1016/j.compbiomed.2018.09.009
    This article presents a new deep learning approach for cardiac arrhythmia (17 classes) detection based on long-duration electrocardiography (ECG) signal analysis. Cardiovascular disease prevention is one of the most important tasks of any health care system, as about 50 million people in the world are at risk of heart disease. Although automatic analysis of the ECG signal is very popular, current methods are not satisfactory. The goal of our research was to design a new method based on deep learning to efficiently and quickly classify cardiac arrhythmias. The described research is based on 1000 ECG signal fragments from the MIT-BIH Arrhythmia database for one lead (MLII) from 45 persons. An approach based on the analysis of 10-s ECG signal fragments (not a single QRS complex) is applied (on average, 13 times fewer classifications/analyses). A complete end-to-end structure was designed instead of the hand-crafted feature extraction and selection used in traditional methods. Our main contribution is the design of a new 1D Convolutional Neural Network model (1D-CNN). The proposed method is (1) efficient, (2) fast (real-time classification), (3) non-complex, and (4) simple to use (combined feature extraction, selection, and classification in one stage). The deep 1D-CNN achieved an overall recognition accuracy for the 17 cardiac arrhythmia disorders (classes) of 91.33% and a classification time per single sample of 0.015 s. Compared to current research, our results are among the best to date, and our solution can be implemented in mobile devices and cloud computing.
    Matched MeSH terms: Cloud Computing
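A compact tf.keras sketch of a 1D-CNN over 10-second, single-lead ECG fragments, in the spirit of entry 7. The layer counts, filter sizes, and the sampling-rate assumption (360 Hz for MIT-BIH, giving 3600 samples) are illustrative and do not reproduce the authors' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# 10-s fragment at 360 Hz (MIT-BIH sampling rate) -> 3600 samples, 1 lead (MLII).
N_SAMPLES, N_CLASSES = 3600, 17

model = models.Sequential([
    layers.Input(shape=(N_SAMPLES, 1)),
    layers.Conv1D(16, kernel_size=13, strides=2, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(32, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),   # one output per arrhythmia class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, ...) would follow once labelled fragments are loaded.
```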
  8. Al-Absi AA, Al-Sammarraie NA, Shaher Yafooz WM, Kang DK
    Biomed Res Int, 2018;2018:7501042.
    PMID: 30417014 DOI: 10.1155/2018/7501042
    MapReduce is the preferred cloud computing framework for large-scale data analysis and application processing. MapReduce frameworks currently in place suffer performance degradation due to the adoption of sequential processing approaches with little modification, and thus exhibit underutilization of cloud resources. To overcome this drawback and reduce costs, we introduce a Parallel MapReduce (PMR) framework in this paper. We design a novel parallel execution strategy for Map and Reduce worker nodes. Our strategy enables further performance improvement and efficient utilization of cloud resources by executing Map and Reduce functions so as to exploit the multicore environments available on computing nodes. We explain the makespan modeling and working principle of the PMR framework in detail in the paper. The performance of PMR is compared with Hadoop through experiments considering three biomedical applications. Experiments conducted for the BLAST, CAP3, and DeepBind biomedical applications report makespan time reductions of 38.92%, 18.00%, and 34.62%, respectively, for the PMR framework against the Hadoop framework. The experimental results prove that the proposed PMR cloud computing platform is robust, cost-effective, and scalable, and sufficiently supports diverse applications on public and private cloud platforms. Consequently, the overall presentation and results indicate good agreement between the theoretical makespan modeling presented and the experimental values investigated.
    Matched MeSH terms: Cloud Computing
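The parallel Map/Reduce idea in entry 8 can be miniaturised with Python's multiprocessing pool: map tasks run on all available cores and partial results are merged in a reduce step. The word-count workload is a stand-in; this is not the PMR framework or its biomedical applications.

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool

# Toy corpus standing in for a biomedical dataset (illustrative only).
CHUNKS = [
    "gene sequence alignment gene",
    "protein fold protein structure",
    "sequence alignment structure gene",
]

def map_chunk(chunk):
    """Map phase: emit per-chunk term counts."""
    return Counter(chunk.split())

def reduce_counts(total, partial):
    """Reduce phase: merge partial counts into the running total."""
    total.update(partial)
    return total

if __name__ == "__main__":
    with Pool() as pool:                         # one worker per available core
        partials = pool.map(map_chunk, CHUNKS)   # Map tasks run in parallel
    result = reduce(reduce_counts, partials, Counter())
    print(result)
```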
  9. Wang LY, Lew SL, Lau SH, Leow MC
    Heliyon, 2019 Jun;5(6):e01788.
    PMID: 31198866 DOI: 10.1016/j.heliyon.2019.e01788
    In this ever-progressive digital era, conventional e-learning methods have become inadequate to handle the requirements of upgraded learning processes, especially in higher education. E-learning adopting Cloud computing is able to transform e-learning into a flexible, shareable, content-reusable, and scalable learning methodology. Although plentiful Cloud e-learning frameworks have been proposed in the literature, limited research has been conducted to study the usability factors predicting continuance intention to use Cloud e-learning applications. In this study, five usability factors, namely Computer Self-Efficacy (CSE), Enjoyment (E), Perceived Ease of Use (PEU), Perceived Usefulness (PU), and User Perception (UP), were identified for factor analysis. All five independent variables were hypothesized to be positively associated with a dependent variable, namely Continuance Intention (CI). A survey was conducted on 170 IT students in one of the private universities in Malaysia. The students were given one trimester to experience the usability of a Cloud e-learning application. As an instrument to analyse the usability factors towards continuance intention of the application, a questionnaire consisting of thirty questions was formulated and used. The collected data were analysed using SMARTPLS 3.0. The results obtained from this study show that computer self-efficacy and enjoyment as intrinsic motivations significantly predict continuance intention, while perceived ease of use, perceived usefulness and user perception were insignificant. This outcome implies that computer self-efficacy and enjoyment significantly affect the willingness of students to continue using the Cloud e-learning application in their studies. The discussions and implications of this study are vital for researchers and practitioners of educational technologies in higher education.
    Matched MeSH terms: Cloud Computing
  10. Shukla S, Hassan MF, Khan MK, Jung LT, Awang A
    PLoS One, 2019;14(11):e0224934.
    PMID: 31721807 DOI: 10.1371/journal.pone.0224934
    Fog computing (FC) is an evolving computing technology that operates in a distributed environment. FC aims to bring cloud computing features close to edge devices. The approach is expected to fulfill the minimum latency requirement of healthcare Internet-of-Things (IoT) devices. Healthcare IoT devices generate various volumes of healthcare data. This large volume of data results in high data traffic that causes network congestion and high latency. An increase in round-trip time delay owing to large data transmissions and large hop counts between IoT devices and cloud servers renders healthcare data meaningless and inadequate for end-users. Time-sensitive healthcare applications require real-time data. Traditional cloud servers cannot fulfill the minimum latency demands of healthcare IoT devices and end-users. Therefore, communication latency, computation latency, and network latency must be reduced for IoT data transmission. FC moves the storage, processing, and analysis of data from cloud computing to the network edge to reduce high latency. A novel solution to the abovementioned problem is proposed herein. It includes an analytical model and a hybrid fuzzy-based reinforcement learning algorithm in an FC environment. The aim is to reduce the high latency among healthcare IoT devices, end-users, and cloud servers. The proposed intelligent FC analytical model and algorithm use a fuzzy inference system combined with reinforcement learning and neural network evolution strategies for data packet allocation and selection in an IoT-FC environment. The approach is tested in the simulators iFogSim (NetBeans) and Spyder (Python). The obtained results indicate the better performance of the proposed approach compared with existing methods.
    Matched MeSH terms: Cloud Computing*
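A heavily stripped-down sketch of the idea in entry 10: a fuzzy load label plus a bandit-style value update decide whether each packet is served at the fog or sent to the cloud. The latency ranges, membership functions, and update rule are illustrative assumptions, not the authors' hybrid fuzzy reinforcement-learning model.

```python
import random

# Illustrative latency ranges in ms; not measurements from the paper.
LATENCY = {"fog": (5, 25), "cloud": (40, 120)}

def fuzzy_load_state(queue_len, capacity=10):
    """Crude triangular memberships for 'low'/'high' fog load, defuzzified to a label."""
    x = min(queue_len / capacity, 1.0)
    low, high = 1.0 - x, x
    return "high" if high > low else "low"

def simulate(packets=2000, eps=0.1, alpha=0.2):
    q = {(s, a): 0.0 for s in ("low", "high") for a in ("fog", "cloud")}
    queue = 0
    for _ in range(packets):
        state = fuzzy_load_state(queue)
        action = (random.choice(("fog", "cloud")) if random.random() < eps
                  else max(("fog", "cloud"), key=lambda a: q[(state, a)]))
        latency = random.uniform(*LATENCY[action])
        queue = min(queue + 1, 10) if action == "fog" else max(queue - 1, 0)
        reward = -latency                       # lower latency -> higher reward
        q[(state, action)] += alpha * (reward - q[(state, action)])
    return q

if __name__ == "__main__":
    print(simulate())   # learned values favour fog offloading while its queue is low
```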
  11. Vadla PK, Kolla BP, Perumal T
    MyJurnal
    Cloud computing provides a solution to enterprise applications by resolving their services at all levels of Software, Platform, and Infrastructure. The current demand for resources by large enterprises, and their specific requirements to solve critical service issues for their clients, such as avoiding resource contention and vendor lock-in problems and achieving high QoS (Quality of Service), has made them move towards the federated cloud. The reliability of the cloud has become a challenge for cloud providers in provisioning resources at an instant request while satisfying all SLA (Service Level Agreement) requirements for different consumer applications. To achieve better collation among cloud providers, FLAs (Federated Level Agreements) are given much importance in reaching consensus on the various KPIs (Key Performance Indicators) of the individual cloud providers. This paper proposes an FLA-SLA Aware Cloud Collation Formation algorithm (FS-ACCF) considering both FLA and SLA as major features affecting collation formation to satisfy consumer requests instantly. In the FS-ACCF algorithm, a fuzzy preference relationship multi-decision approach was used to validate the preferences among cloud providers for forming collations and gaining maximum profit. Finally, the results of FS-ACCF were compared with the S-ACCF (SLA Aware Collation Formation) algorithm for 6 to 10 consecutive requests from cloud consumers with varied VM configurations for different SLA parameters such as response time, processing time and availability.
    Matched MeSH terms: Cloud Computing
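A minimal sketch of ranking cloud providers from a fuzzy preference relation, which is the kind of multi-decision input FS-ACCF in entry 11 builds on. The matrix values are illustrative; p[i][j] in [0, 1] expresses how strongly provider i is preferred over provider j, with p[i][j] + p[j][i] = 1.

```python
# Illustrative fuzzy preference relation over three providers (diagonal = 0.5).
P = [
    [0.5, 0.7, 0.4],
    [0.3, 0.5, 0.6],
    [0.6, 0.4, 0.5],
]

def dominance_scores(p):
    """Average preference of each provider over the others (self-comparison excluded)."""
    n = len(p)
    return [sum(p[i][j] for j in range(n) if j != i) / (n - 1) for i in range(n)]

scores = dominance_scores(P)
ranking = sorted(range(len(P)), key=lambda i: scores[i], reverse=True)
print("scores:", scores)     # [0.55, 0.45, 0.5]
print("ranking:", ranking)   # provider 0 ranked first for collation formation
```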
  12. Nur Ahada Kamaruddin, Ibrahim Mohamed, Ahmad Dahari Jarno, Maslina Daud
    MyJurnal
    Cloud computing technology has succeeded in attracting the interest of both academia and industry because of its ability to provide flexible, cost-effective, and adaptable services in IT solution deployment. The services offered to the Cloud Service Subscriber (CSS) are based on the concepts of on-demand self-service, scalability, and rapid elasticity, which allow fast deployment of IT solutions but also lead to possible misconfigurations, un-patched systems, etc., which allow security threats to compromise cloud service operations. From the viewpoint of the Cloud Service Provider (CSP), incidents such as data loss and information breaches tarnish their reputation, while leading them to keep the issues internal, so that there is no transparency between the CSP and the CSS. In terms of information security, CSPs are encouraged to practise cybersecurity in their cloud services by adopting ISO/IEC 27017:2015, inclusive of all additional security controls, as mandatory requirements. This study was conducted to identify the factors influencing the CSP readiness level in the cybersecurity implementation of their cloud services, by leveraging the developed pre-assessment model to determine the level of cloud security readiness. The approach of the study is based on a combination of qualitative and quantitative assessment methods, validating the proposed model through interviews and prototype testing. The findings of this study show that the factors influencing the CSP level of cloud security readiness fall under the following domains: technology, organisation, policy, stakeholders, culture, knowledge, and environment. The contribution of the study is a pre-assessment model for CSPs which is suitable for use as a guideline to provide a safer cloud computing environment.
    Matched MeSH terms: Cloud Computing
  13. Alnajrani HM, Norman AA, Ahmed BH
    PLoS One, 2020;15(6):e0234312.
    PMID: 32525944 DOI: 10.1371/journal.pone.0234312
    As a result of a shift in the world of technology, the combination of ubiquitous mobile networks and cloud computing produced the mobile cloud computing (MCC) domain. As a consequence of a major concern of cloud users, privacy and data protection are receiving substantial attention in the field. Currently, a considerable number of papers have been published on MCC, with a growing interest in privacy and data protection. Along with this advance in MCC, however, no specific investigation highlights the results of the existing studies in privacy and data protection. In addition, no particular exploration highlights trends and open issues in the domain. Accordingly, the objective of this paper is to highlight the results of the existing primary studies published on privacy and data protection in MCC and to identify current trends and open issues. In this investigation, a systematic mapping study was conducted with a set of six research questions. A total of 1711 studies published from 2009 to 2019 were obtained. Following a filtering process, a collection of 74 primary studies was selected. As a result, the present data privacy threats, attacks, and solutions were identified. Also, the ongoing trends in data privacy practice were observed. Moreover, the most utilized measures, research type, and contribution type facets were emphasized. Additionally, the current open research issues in privacy and data protection in MCC were highlighted. Furthermore, the results demonstrate the current state of the art of privacy and data protection in MCC, and the conclusion will help researchers identify research trends and open issues in MCC and offer useful information on MCC for practitioners.
    Matched MeSH terms: Cloud Computing*
  14. Hussien HM, Yasin SM, Udzir NI, Ninggal MIH
    Sensors (Basel), 2021 Apr 02;21(7).
    PMID: 33918266 DOI: 10.3390/s21072462
    Blockchain technology provides a tremendous opportunity to transform current personal health record (PHR) systems into a decentralised network infrastructure. However, such technology possesses some drawbacks, such as issues in privacy and storage capacity. Given its transparency and decentralised features, medical data are visible to everyone on the network and are inappropriate for certain medical applications. By contrast, storing vast medical data, such as patient medical history, laboratory tests, X-rays, and MRIs, significantly affects the repository storage of the blockchain. This study bridges the gap between PHRs and blockchain technology by offloading the vast medical data into InterPlanetary File System (IPFS) storage and establishing an enforced cryptographic authorisation and access control scheme for outsourced encrypted medical data. The access control scheme is constructed on the basis of a new lightweight cryptographic concept named smart contract-based attribute-based searchable encryption (SC-ABSE). This new cryptographic primitive is developed by extending ciphertext-policy attribute-based encryption (CP-ABE) and searchable symmetric encryption (SSE) and by leveraging the technology of smart contracts to achieve the following: (1) efficient and secure fine-grained access control of outsourced encrypted data, (2) confidentiality of data by eliminating trusted private key generators, and (3) a multikeyword searchable mechanism. Based on the decisional bilinear Diffie-Hellman (DBDH) hardness assumption and the discrete logarithm (DL) problem, a rigorous security indistinguishability analysis indicates that SC-ABSE is secure against the chosen-keyword attack (CKA) and provides keyword secrecy (KS) in the standard model. In addition, user collusion attacks are prevented, and the tamper-proof resistance of data is ensured. Furthermore, security validation is verified by simulating a formal verification scenario using the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool, thereby showing that SC-ABSE is resistant to man-in-the-middle (MIM) and replay attacks. The experimental analysis utilised real-world datasets to demonstrate the efficiency and utility of SC-ABSE in terms of computation overhead, storage cost and communication overhead. The proposed scheme is also designed and developed to evaluate the throughput and latency of transactions using a standard benchmark tool known as Caliper. Lastly, simulation results show that SC-ABSE has high throughput and low latency, with an ultimate increase in network life compared with traditional healthcare systems.
    Matched MeSH terms: Cloud Computing
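The searchable layer of SC-ABSE in entry 14 builds on CP-ABE and searchable symmetric encryption; the sketch below captures only the simplest flavour of keyword search over an encrypted index, using HMAC tags as deterministic trapdoors. It omits attribute-based access control, smart contracts, and IPFS entirely, and is not the authors' construction.

```python
import hashlib
import hmac
import os
from collections import defaultdict

KEY = os.urandom(32)   # shared search key; key management is out of scope for this sketch

def trapdoor(keyword):
    """Deterministic keyed tag so the index server can match keywords without learning them."""
    return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).hexdigest()

def index_document(index, doc_id, keywords):
    """Add a document's keyword tags to the (tag -> document ids) index."""
    for word in keywords:
        index[trapdoor(word)].append(doc_id)

index = defaultdict(list)
index_document(index, "record-001", ["cardiology", "mri", "2021"])
index_document(index, "record-002", ["x-ray", "fracture"])

print(index.get(trapdoor("MRI")))   # ['record-001']; the server only ever sees opaque tags
```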
  15. Albowarab MH, Zakaria NA, Zainal Abidin Z
    Sensors (Basel), 2021 May 12;21(10).
    PMID: 34065920 DOI: 10.3390/s21103356
    Various aspects of task execution load balancing in Internet of Things (IoT) networks can be optimised using intelligent algorithms provided by software-defined networking (SDN). These load balancing aspects include makespan, energy consumption, and execution cost. While past studies have evaluated load balancing from one or two aspects, none has explored the possibility of simultaneously optimising all aspects, namely reliability, energy, cost, and execution time. For the purposes of load balancing, implementing multi-objective optimisation (MOO) based on meta-heuristic search algorithms requires assurances that the solution space will be thoroughly explored. Optimising load balancing provides decision makers not only with optimised solutions but with a rich set of candidate solutions to choose from. Therefore, the purposes of this study were (1) to propose a joint mathematical formulation to solve load balancing challenges in cloud computing and (2) to propose two multi-objective particle swarm optimisation (MP) models: distance-angle multi-objective particle swarm optimisation (DAMP) and angle multi-objective particle swarm optimisation (AMP). Unlike existing models that only use crowding distance as a criterion for solution selection, our MP models probabilistically combine both crowding distance and crowding angle. More specifically, we only selected solutions that had more than a 0.5 probability of higher crowding distance and higher angular distribution. In addition, binary variants of the approaches were generated based on a transfer function, denoted binary DAMP (BDAMP) and binary AMP (BAMP). After using MOO mathematical functions to compare our models, BDAMP and BAMP, with the standard models, BMP, BDMP and BPSO, they were tested using the proposed load balancing model. Both tests proved that our DAMP and AMP models were far superior to the state-of-the-art standard models, MP, crowding-distance multi-objective particle swarm optimisation (DMP), and PSO. Therefore, this study enables the incorporation of meta-heuristics in the management layer of cloud networks.
    Matched MeSH terms: Cloud Computing
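The DAMP/AMP models in entry 15 combine crowding distance with an angular criterion; the sketch below computes only the standard crowding-distance part for a small two-objective archive. The objective values are illustrative and the crowding-angle term is omitted.

```python
# Illustrative two-objective archive (e.g. makespan vs. energy), already non-dominated.
ARCHIVE = [(1.0, 9.0), (2.0, 7.5), (3.5, 5.0), (5.0, 4.0), (7.0, 1.0)]

def crowding_distance(points):
    """NSGA-II-style crowding distance: larger values mean less crowded solutions."""
    n, m = len(points), len(points[0])
    dist = [0.0] * n
    for k in range(m):                                    # one pass per objective
        order = sorted(range(n), key=lambda i: points[i][k])
        lo, hi = points[order[0]][k], points[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")   # keep boundary solutions
        span = hi - lo or 1.0
        for rank in range(1, n - 1):
            i = order[rank]
            dist[i] += (points[order[rank + 1]][k] - points[order[rank - 1]][k]) / span
    return dist

print(crowding_distance(ARCHIVE))
```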
  16. Mutlag AA, Ghani MKA, Mohammed MA, Lakhan A, Mohd O, Abdulkareem KH, et al.
    Sensors (Basel), 2021 Oct 19;21(20).
    PMID: 34696135 DOI: 10.3390/s21206923
    In the last decade, healthcare technologies have been developing progressively in practice. Healthcare applications such as ECG monitoring, heartbeat analysis, and blood pressure control connect with external servers in a manner called cloud computing. The emerging cloud paradigm offers different models, such as fog computing and edge computing, to enhance the performance of healthcare applications with minimum end-to-end delay in the network. However, many research challenges exist in the fog-cloud enabled network for healthcare applications. Therefore, in this paper, a Critical Healthcare Task Management (CHTM) model is proposed and implemented using an ECG dataset. We design a resource scheduling model among fog nodes at the fog level. A multi-agent system is proposed to provide complete management of the network from the edge to the cloud. The proposed model overcomes the limitations of providing interoperability, resource sharing, scheduling, and dynamic task allocation to manage critical tasks significantly. The simulation results show that our model, in comparison with the cloud, significantly reduces network usage by 79%, response time by 90%, network delay by 65%, energy consumption by 81%, and instance cost by 80%.
    Matched MeSH terms: Cloud Computing*
  17. Ahmad Z, Jehangiri AI, Ala'anzy MA, Othman M, Umar AI
    Sensors (Basel), 2021 Oct 30;21(21).
    PMID: 34770545 DOI: 10.3390/s21217238
    Cloud computing is a fully fledged, mature and flexible computing paradigm that provides services to scientific and business applications in a subscription-based environment. Scientific applications such as Montage and CyberShake are organized scientific workflows with data- and compute-intensive tasks that also have some special characteristics. These characteristics include the fact that the tasks of scientific workflows are executed in terms of integration, disintegration, pipeline, and parallelism, and thus require special attention to task management and data-oriented resource scheduling and management. Tasks executed in the pipeline are considered bottleneck executions, the failure of which results in a wholly futile execution, which requires fault-tolerance-aware execution. Tasks executed in parallel require similar instances of cloud resources, and thus cluster-based execution may improve system performance in terms of makespan and execution cost. Therefore, this research work presents a cluster-based, fault-tolerant and data-intensive (CFD) scheduling for scientific applications in cloud environments. The CFD strategy addresses the data intensiveness of the tasks of scientific workflows with cluster-based, fault-tolerant mechanisms. The Montage scientific workflow is considered for simulation, and the results of the CFD strategy were compared with three well-known heuristic scheduling policies: (a) MCT, (b) Max-min, and (c) Min-min. The simulation results showed that the CFD strategy reduced the makespan by 14.28%, 20.37%, and 11.77%, respectively, as compared with the three existing policies. Similarly, CFD reduces the execution cost by 1.27%, 5.3%, and 2.21%, respectively, as compared with the three existing policies. In the case of the CFD strategy, the SLA is not violated with regard to time and cost constraints, whereas it is violated by the existing policies numerous times.
    Matched MeSH terms: Cloud Computing*
  18. Salih S, Hamdan M, Abdelmaboud A, Abdelaziz A, Abdelsalam S, Althobaiti MM, et al.
    Sensors (Basel), 2021 Dec 15;21(24).
    PMID: 34960483 DOI: 10.3390/s21248391
    Cloud ERP is a type of enterprise resource planning (ERP) system that runs on the vendor's cloud platform instead of an on-premises network, enabling companies to connect through the Internet. The goal of this study was to rank and prioritise the factors driving cloud ERP adoption by organisations and to identify the critical issues in terms of security, usability, and vendors that impact adoption of cloud ERP systems. The assessment of critical success factors (CSFs) in on-premises ERP adoption and implementation has been well documented; however, no previous research has been carried out on CSFs in cloud ERP adoption. Therefore, the contribution of this research is to provide research and practice with the identification and analysis of 16 CSFs through a systematic literature review, in which 73 publications on cloud ERP adoption were assessed from a range of different conferences and journals, using inclusion and exclusion criteria. Drawing from the literature, we found security, usability, and vendors to be the top three most widely cited critical issues for the adoption of cloud-based ERP; hence, the second contribution of this study is an integrative model constructed with 12 drivers based on the security, usability, and vendor characteristics that may have the greatest influence on the top critical issues in the adoption of cloud ERP systems. We also identified critical gaps in current research, such as the inconclusiveness of findings related to security, usability, and vendor critical issues, by highlighting the most important drivers influencing those issues in cloud ERP adoption and the lack of discussion on the nature of the criticality of those CSFs. This research will aid in the development of new strategies, or the revision of existing strategies and policies, aimed at effectively integrating cloud ERP into cloud computing infrastructure. It will also allow cloud ERP suppliers to determine organisations' and business owners' expectations and implement appropriate tactics. A better understanding of the CSFs will narrow the field of failure and assist practitioners and managers in increasing their chances of success.
    Matched MeSH terms: Cloud Computing*
  19. Abd Elaziz M, Abualigah L, Ibrahim RA, Attiya I
    Comput Intell Neurosci, 2021;2021:9114113.
    PMID: 34976046 DOI: 10.1155/2021/9114113
    Internet of Things (IoT) activities are offloaded to fog computing, instead of the cloud, to boost the quality of service (QoS) needed by many applications. However, the availability of continuous computing resources on fog computing servers is one of the restrictions for IoT applications, since transmitting the large amount of data generated by IoT devices would create network traffic and cause an increase in computational overhead. Therefore, task scheduling is the main problem that needs to be solved efficiently. This study proposes an energy-aware model using an enhanced arithmetic optimization algorithm (AOA) method, called AOAM, which addresses fog computing's job scheduling problem to maximize users' QoS by minimizing the makespan measure. In the proposed AOAM, we enhance the conventional AOA's search capability using the marine predators algorithm (MPA) search operators to address the diversity of the solutions used and the local optima problem. The proposed AOAM is validated using several parameters, including various clients, data centers, hosts, virtual machines, tasks, and standard evaluation measures, including energy and makespan. The obtained results are compared with other state-of-the-art methods; they show that AOAM is promising and solves task scheduling effectively compared with the other methods.
    Matched MeSH terms: Cloud Computing
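A bare-bones re-creation of the arithmetic optimization algorithm's update rules as commonly described in the literature, applied to a toy continuous objective rather than the fog task-scheduling model of entry 19. The constants, bounds, and objective are illustrative assumptions, and the marine predators (MPA) hybridisation is omitted.

```python
import random

def sphere(x):
    """Toy objective standing in for the energy/makespan model (illustrative)."""
    return sum(v * v for v in x)

def aoa(dim=5, pop=20, iters=200, lb=0.0, ub=10.0, alpha=5.0, mu=0.5, eps=1e-9):
    """Bare-bones Arithmetic Optimization Algorithm with typical illustrative settings."""
    X = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=sphere)[:]
    for t in range(1, iters + 1):
        moa = 0.2 + t * (1.0 - 0.2) / iters       # accelerated function: exploitation grows over time
        mop = 1 - (t / iters) ** (1 / alpha)      # shrinking step-size factor
        term = mop * ((ub - lb) * mu + lb)
        for i in range(pop):
            new = []
            for j in range(dim):
                r1, r2, r3 = random.random(), random.random(), random.random()
                if r1 > moa:                      # exploration: division or multiplication
                    v = best[j] / (mop + eps) * ((ub - lb) * mu + lb) if r2 > 0.5 else best[j] * term
                else:                             # exploitation: subtraction or addition
                    v = best[j] - term if r3 > 0.5 else best[j] + term
                new.append(min(max(v, lb), ub))   # keep candidates inside the bounds
            if sphere(new) < sphere(X[i]):
                X[i] = new
        best = min(X + [best], key=sphere)[:]
    return best, sphere(best)

if __name__ == "__main__":
    solution, value = aoa()
    print("best objective value:", round(value, 6))
```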
  20. Paul A, K S V, Sood A, Bhaumik S, Singh KA, Sethupathi S, et al.
    Bull Environ Contam Toxicol, 2022 Dec 13;110(1):7.
    PMID: 36512073 DOI: 10.1007/s00128-022-03638-9
    The presence of suspended particulate matter (SPM) in a waterbody or a river can be caused by multiple factors, such as other pollutants from the discharge of poorly maintained sewage, siltation, sedimentation, floods and even bacteria. In this study, remote sensing techniques were used to understand the effects of the pandemic-induced lockdown on the SPM concentration in the lower Tapi (Ukai) reservoir. The estimation was done using Landsat-8 OLI (Operational Land Imager) imagery, which has a 12-bit radiometric resolution and a spatial resolution of 30 m. The Google Earth Engine (GEE) cloud computing platform was used in this study to generate the products. GEE is a semi-automated workflow system using a robust approach designed for scientific analysis and visualization of geospatial datasets. An algorithm was deployed, and a time-series (2013-2020) analysis was done for the study area. It was found that the mean SPM value in the Tapi River during 2020 was the lowest of the last seven years for the same period.
    Matched MeSH terms: Cloud Computing
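An illustrative Google Earth Engine (Python API) sketch of the kind of Landsat-8 time-series workflow described in entry 20. The reservoir coordinates, cloud-cover filter, and the use of red-band surface reflectance as a crude SPM proxy are assumptions for the sketch; the study's actual retrieval algorithm is not reproduced.

```python
import ee

ee.Initialize()  # assumes an authenticated Earth Engine account

# Approximate Ukai reservoir location; replace with the actual reservoir geometry.
region = ee.Geometry.Point(73.59, 21.25).buffer(5000)

def add_turbidity(img):
    # Red-band surface reflectance is a common first-order proxy for suspended sediment.
    red = img.select("SR_B4").multiply(0.0000275).add(-0.2)  # Collection 2 L2 scaling
    return img.addBands(red.rename("spm_proxy"))

collection = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
              .filterBounds(region)
              .filterDate("2013-01-01", "2020-12-31")
              .filter(ee.Filter.lt("CLOUD_COVER", 20))
              .map(add_turbidity))

def regional_mean(img):
    mean = img.select("spm_proxy").reduceRegion(
        reducer=ee.Reducer.mean(), geometry=region, scale=30)
    return ee.Feature(None, {"date": img.date().format("YYYY-MM-dd"),
                             "spm_proxy": mean.get("spm_proxy")})

series = ee.FeatureCollection(collection.map(regional_mean))
print(series.limit(5).getInfo())  # per-year aggregation would follow client-side
```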