Secure updating and sharing of large amounts of healthcare information (such as medical data on coronavirus disease 2019 [COVID-19]) over communication channels amongst hospitals is important but challenging. In addressing this challenge, two issues arise: the confidentiality and integrity of the health data, and network failures that may compromise data availability. To the authors' knowledge, no study provides a secure updating and sharing solution for large amounts of healthcare information transmitted over communication channels amongst hospitals. Therefore, this study proposes and discusses a novel steganography-based blockchain method in the spatial domain as a solution. The novelty of the proposed method lies in the removal and addition of new particles in the particle swarm optimisation (PSO) algorithm. In addition, a hash function hides secret medical COVID-19 data in hospital databases whilst providing confidentiality with high embedding capacity and high image quality. Moreover, stego images with hashed data and blockchain technology are used to update and share medical COVID-19 data between hospitals in the network, improving the level of confidentiality, protecting the integrity of medical COVID-19 data in grey-scale images, achieving data availability if a connection failure occurs at any single point of the network and eliminating the central point (third party) in the network during transmission. The proposed method is discussed in three stages. Firstly, the pre-hiding stage estimates the embedding capacity of each host image. Secondly, the secret COVID-19 data hiding stage uses the PSO algorithm and a hash function. Thirdly, the transmission stage transfers the stego images on the basis of blockchain technology and updates all nodes (hospitals) in the network. As proof of concept for the case study, the authors adopted the latest COVID-19 research published in the Computer Methods and Programs in Biomedicine journal, which presents a rescue framework within hospitals for the storage and transfusion of the best convalescent plasma to the most critical patients with COVID-19 on the basis of biological requirements. The validation and evaluation of the proposed method are discussed.
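A minimal sketch of the hiding idea only, not the authors' PSO-optimised embedding: the secret record is hashed with SHA-256 and the digest bits are hidden in the least-significant bits of a grey-scale host image. All names and the toy payload are illustrative.

```python
# Minimal sketch (not the authors' PSO-optimised method): hash the secret record
# with SHA-256, then hide the digest bits in the least-significant bits (LSBs)
# of a grey-scale host image.
import hashlib
import numpy as np

def embed_hashed_record(host: np.ndarray, record: bytes) -> np.ndarray:
    """Embed SHA-256(record) into the LSB plane of a grey-scale image."""
    digest = hashlib.sha256(record).digest()                 # 32 bytes = 256 bits
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    stego = host.flatten().copy()
    if bits.size > stego.size:
        raise ValueError("host image too small for the payload")
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits    # overwrite LSBs
    return stego.reshape(host.shape)

def extract_digest(stego: np.ndarray, n_bits: int = 256) -> bytes:
    bits = stego.flatten()[:n_bits] & 1
    return np.packbits(bits).tobytes()

host = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in host image
stego = embed_hashed_record(host, b"patient-123: plasma record")
assert extract_digest(stego) == hashlib.sha256(b"patient-123: plasma record").digest()
```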
A growing amount of research in digital technologies, together with advances in Artificial Intelligence, Computer Vision and Machine Learning, has led to progressive techniques that aim to detect and process affective information contained in multi-modal evidence. This research intends to bring together theoreticians and practitioners from academia, the professions and industry, and extends to visualising crises such as epidemics, votes and social phenomena in a spherical, interactive representation model, covering a broad range of topics relevant to multi-modal data processing and the development of forensic tools. Furthermore, mapping claims in the present epoch require the capacities of a virtual guide: an understandable geo-visualisation of spatial features that is able to convert quantities of spatial patterns into cartography. A novel approach suited to the visualisation of spatial patterns of constituencies is developed, starting from an exclusive input set of objects O and a set of associated features F, and regenerating the output set C for an interested region I and a specific target C. Even so, as indicated by the construction of the prototype, there is room for improvement: the representation could use Google Earth to provide 3D or 4D interaction with real-life measures from the cartographic viewpoint. In addition, the lack of a tool accessible for disseminating information to the public can be addressed by the use of online mapping, fused with trend visualisation for political circles and electors. This study supports new, multi-dimensional deployments across countries in conjunction with other processed data, and comprehensive, well-interpreted source data such as that of the Malaysian Jabatan Pendaftaran (JPN).
With the rapid advancement in digital technologies, video rises to become one of the most effective communication tools that continues to gain popularity and importance. As a result, various proposals are put forward to manage videos, and one of them is data embedding. Essentially, data embedding inserts data into the video to serve a specific purpose, including proof of ownership via watermark, covert communication in steganography, and authentication via fragile watermark. However, most conventional methods embed data by using only one type of syntax element defined in the video coding standard, which may suffer from large bit rate overhead, quality degradation, or low payload. Therefore, this work aims to explore the combined use of multiple prediction syntax elements in SHVC for the purpose of data embedding. Specifically, the intra prediction mode, motion vector predictor, motion vector difference, merge mode and coding block structure are collectively manipulated to embed data. The experimental results demonstrate that, in comparison to the conventional single-venue data embedding methods, the combined use of prediction syntax elements can achieve higher payload while preserving the perceptual quality with minimal bit rate variation. In the best case scenario, a total of 556.1 kbps is embedded into the video sequence PartyScene with a drop of 0.15 dB in PSNR while experiencing a bit rate overhead of 7.4% when all prediction syntax elements are utilized altogether. A recommendation is then put forward to choose specific types of syntax element for data embedding based on the characteristics of the video.
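A toy illustration of the general idea behind syntax-element data embedding, written outside any real SHVC encoder: where a block offers two equally valid motion-vector-predictor candidates, the candidate whose index parity matches the next payload bit is chosen, and extraction reads the parities back. The block/payload values are hypothetical.

```python
# Toy parity-mapping sketch (not an SHVC implementation): each block with two
# MVP candidates carries one payload bit via the parity of the chosen index.
def embed_bits(candidate_counts, payload_bits):
    """candidate_counts[i] is how many MVP candidates block i offers."""
    chosen, k = [], 0
    for n in candidate_counts:
        if n >= 2 and k < len(payload_bits):      # block can carry one bit
            chosen.append(payload_bits[k] % 2)    # index 0 or 1 encodes the bit
            k += 1
        else:
            chosen.append(0)                      # no capacity: default choice
    return chosen

def extract_bits(candidate_counts, chosen_indices, n_bits):
    bits = [idx % 2 for n, idx in zip(candidate_counts, chosen_indices) if n >= 2]
    return bits[:n_bits]

blocks = [2, 1, 2, 2, 2]                          # per-block MVP candidate counts
payload = [1, 0, 1, 1]
indices = embed_bits(blocks, payload)
assert extract_bits(blocks, indices, len(payload)) == payload
```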
Over the past several years, mobile learning concepts have changed the way people perceive mobile devices and technology in the learning environment. In earlier days, mobile devices were used mainly for communication purposes. Later, with many new advanced features, mobile devices opened the opportunity for individuals to use them as mediating technology in learning. The traditional way of teaching and learning has shifted into a new learning dimension, where an individual can carry out learning and teaching everywhere and anytime. Mobile learning has encouraged lifelong learning, in which everyone can have the opportunity to use mobile learning applications to gain knowledge. However, many of the previous studies on mobile learning have focused on young and older adults, with less attention paid to middle-aged adults. This research targets middle-aged adults, described as those between the ages of 40 and 60. Middle-aged adults typically lead very active lives while at the same time being very engaged in self-development programs aimed at enhancing their spiritual, emotional, and physical well-being. In this paper, we investigate the methodology used by researchers based on the research context, namely, acceptance, adoption, effectiveness, impact, intention of use, readiness, and usability of mobile learning. The research context was coded to the identified methodologies found in the literature. This will help one to understand how mobile learning can be effectively implemented for middle-aged adults in future work. A systematic review was performed using the EBSCO Discovery Service, Science Direct, Google Scholar, Scopus, IEEE and ACM databases to identify articles related to mobile learning adoption. A total of 65 journal articles were selected from the years 2016 to 2021 based on the Kitchenham systematic review methodology. The results show there is a need to strengthen research in the field of mobile learning with middle-aged adults.
Since its inception, YouTube has been a source of entertainment and education. Every day, millions of videos are uploaded to this platform. Researchers have been using YouTube as a source of information in their research. However, there is a lack of bibliometric reports on research carried out on this platform and the patterns in the published works. This study aims at providing a bibliometric analysis of YouTube as a source of information to fill this gap. Specifically, this paper analyzes 1781 articles collected from the Scopus database spanning fifteen years. The analysis revealed that 2006-2007 was the initial stage in YouTube research, followed by 2008-2017, the decade of rapid growth in YouTube research. The period 2017-2021 is considered the stage of consolidation and stabilization of this research topic. We also discovered that most relevant papers were published in a small number of journals such as New Media and Society, Convergence, Journal of Medical Internet Research, Computers in Human Behavior and The Physics Teacher, which conforms to Bradford's law. The USA, Turkey, and the UK are the countries with the highest number of publications. We also present network analyses between countries, sources, and authors. Analyzing the keywords revealed research trends such as "video sharing" (2010-2018), "web-based learning" (2012-2014), and "COVID-19" (2020 onward). Finally, we used Multiple Correspondence Analysis (MCA) to find the conceptual clusters of research on YouTube. The first cluster is related to user-generated content. The second cluster is about health and medical issues, and the final cluster is on the topic of information quality.
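A small sketch of one step of such a bibliometric analysis: counting how often an author keyword appears per publication year in a Scopus CSV export. The column names ("Year", "Author Keywords") and the file name are assumptions about the export format, not details from the study.

```python
# Sketch: per-year frequency of one author keyword in a Scopus CSV export.
from collections import Counter
import csv

def keyword_counts_by_year(csv_path, keyword):
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            kws = [k.strip().lower() for k in row.get("Author Keywords", "").split(";")]
            if keyword.lower() in kws:
                counts[row.get("Year", "unknown")] += 1
    return dict(sorted(counts.items()))

# e.g. keyword_counts_by_year("scopus_youtube.csv", "covid-19")
```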
Modern medical examinations produce a large number of medical images. It is a great challenge to transmit and store them quickly and securely. Existing solutions mainly use medical image encryption algorithms, but these encryption algorithms, which were developed for ordinary images, are time-consuming and suffer from insufficient security considerations when encrypting medical images. Compared with ordinary images, medical images can be divided into the region of interest and the region of background. In this paper, based on this characteristic, a plain-image-correlated semi-selective medical image encryption algorithm using the enhanced two-dimensional Logistic map is proposed. First, the region of interest of a plain medical image is permuted at the pixel level; then, for the whole medical image, substitution is performed pixel by pixel. An ideal compromise between encryption speed and security is achieved by fully encrypting the region of interest and semi-encrypting the region of background. Several main types of medical images and some ordinary images were selected as samples for simulation, and the main image cryptanalysis methods were used to analyze the results. The results showed that the cipher-images have good visual quality, high information entropy, low correlation between adjacent pixels, as well as uniformly distributed histograms. The algorithm is sensitive to the initial key and the plain-image, and has a large key space and low time complexity. The time complexity is lower than that of current medical image full-encryption algorithms, and the security performance is better than that of current medical image selective-encryption algorithms.
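A minimal sketch of the permutation-then-substitution structure described above. The classic 1D Logistic map is used here as a simple bounded stand-in for the paper's enhanced 2D map; the key values, region of interest and image are illustrative.

```python
# Sketch of semi-selective encryption: (a) pixels inside the region of interest
# (ROI) are permuted, (b) every pixel is XOR-substituted with a chaotic keystream.
# The 1D Logistic map stands in for the paper's enhanced 2D map.
import numpy as np

def logistic_sequence(n, x0, r=3.99):
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return xs

def semi_selective_encrypt(img, roi_mask, key=(0.41, 0.23)):
    flat = img.flatten().astype(np.uint8)
    roi_idx = np.flatnonzero(roi_mask.flatten())
    perm = roi_idx[np.argsort(logistic_sequence(roi_idx.size, key[0]))]
    flat[roi_idx] = flat[perm]                                # permute ROI only
    keystream = (logistic_sequence(flat.size, key[1]) * 255).astype(np.uint8)
    return (flat ^ keystream).reshape(img.shape)              # substitute all pixels

img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)     # stand-in medical image
mask = np.zeros_like(img, dtype=bool); mask[8:24, 8:24] = True
cipher = semi_selective_encrypt(img, mask)
```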
In general, preparing evaluations requires a lot of time, especially in devising the questions and answers. Therefore, research on automatic question generation is carried out in the hope that it can serve as a tool to generate question and answer sentences, so as to save the time spent devising questions and answers. This research focuses on automatically generating short-answer questions for the reading comprehension section using Natural Language Processing (NLP) and K-Nearest Neighbors (KNN). The generated questions use article sources from news with reliable grammar. To maintain the quality of the questions produced, machine learning methods are also used, namely by training on existing questions. The stages of this research, in outline, are simple sentence extraction, question classification, generating question sentences, and finally comparing candidate questions with the training data to determine eligibility. The experiments yielded 59.52% for the Grammatical Correctness parameter, 95.24% for the Answer Existence parameter, and 34.92% for the Difficulty Index parameter, giving an average of 63.23%. Thus, this software is suitable as an alternative way to automatically create reading comprehension questions.
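A small sketch of the final eligibility-filtering step only, not the full NLP pipeline: candidate questions are vectorised with TF-IDF and a K-Nearest Neighbors classifier trained on previously judged questions decides whether each candidate is usable. The training questions and labels are toy examples.

```python
# Sketch: KNN-based eligibility check for generated questions (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

train_questions = [
    "What is the capital of France?",
    "Who wrote the novel mentioned in the article?",
    "asdf what when how?",
    "The the the question?",
]
train_labels = [1, 1, 0, 0]                   # 1 = acceptable, 0 = not acceptable

model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3))
model.fit(train_questions, train_labels)

candidates = ["When was the new policy announced?", "policy policy when the?"]
print(model.predict(candidates))              # eligibility decision per candidate
```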
Traditionally, dengue is controlled by fogging, and the prime location for the control measure is the patient's residence. However, when Malaysia was hit by the first wave of coronavirus disease (COVID-19) and the government imposed a movement control order, dengue cases decreased by more than 30% from the previous year. This implies that residential areas may not be the prime locations for dengue-infected mosquitoes. The existing early warning system focused on temporal prediction, lacking consideration of the spatial component at the microlevel and of human mobility. Thus, we developed MozzHub, a web-based application system based on a bipartite network-based dengue model that focuses on identifying the source of dengue infection at a small spatial level (400 m) by integrating human mobility and environmental predictors. The model was developed and validated earlier; therefore, this study presents the design and implementation of the MozzHub system and the results of a preliminary pilot test and user acceptance study of MozzHub in six district health offices in Malaysia. It was found that the MozzHub system is well received by the sample of end-users, as it was demonstrated to be useful (77.4%) and easy to operate (80.6%), and it achieved adequate client satisfaction for its use (74.2%).
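A toy illustration of the bipartite human-mobility idea, not the MozzHub model itself: dengue cases are linked to the locations they visited, and visited locations are ranked by how many cases share them, as a crude proxy for the likely source of infection. The case and location names are hypothetical.

```python
# Toy bipartite case-location network: rank visited locations by shared cases.
import networkx as nx

visits = {                                    # hypothetical case -> visited locations
    "case_1": ["market_A", "office_B"],
    "case_2": ["market_A", "park_C"],
    "case_3": ["market_A", "office_B"],
}
G = nx.Graph()
for case, places in visits.items():
    G.add_node(case, bipartite=0)
    for place in places:
        G.add_node(place, bipartite=1)
        G.add_edge(case, place)

ranking = sorted(((p, G.degree(p)) for p in G if G.nodes[p]["bipartite"] == 1),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)                                # market_A is shared by the most cases
```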
Several waves of COVID-19 have led to a massive loss of human life worldwide owing to its changing variants and explosive spread. Several researchers proposed neural network-based drug discovery techniques to fight the pandemic; however, utilizing neural networks has limitations (exponential time complexity, non-convergence, mode collapse, and diminished gradients). To overcome these difficulties, this paper proposes a hybrid architecture that helps to repurpose the most appropriate medicines for the treatment of COVID-19. A brief investigation of the sequences has been made to discover the gene density and noncoding proportion through next-generation sequencing. The paper tracks the exceptional locales in the virus DNA sequence as a Drug Target Region (DTR). Then a variable DNA neighborhood search is applied to this DTR to obtain the DNA interaction network and show how the genes are correlated. A drug database has been obtained based on the ontological properties of the genomes with advanced D3Similarity, so that all the chemical components of the drug database have been identified. Other methods identified hydroxychloroquine as an effective drug, which was rejected by the WHO. However, the experimental results show that Remdesivir and Dexamethasone are the most effective drugs, with 97.41% and 97.93%, respectively.
This paper presents an online system for recording attendance based on facial recognition incorporating face mask detection. The main objective of this project is to develop an effective attendance system based on face recognition and face mask detection, and to provide this service online through a browser interface. This allows any user to use the system without needing to install special software; they simply need to open the system's interface in a browser on any terminal. Recording attendance information online allows data to be easily stored in a centralized online database. Since faces are used as biometric signatures in this project, all users registered in the system have their profiles loaded with samples of their face images. Initially, before face recognition can be performed, a model training phase based on SVM is carried out to develop a trained model that can perform face recognition. A set of synthetic data is also used to train the same model so that it can identify users wearing face masks. The server application is coded in Python and uses the Open Source Computer Vision (OpenCV) library for image processing. For the web interfaces and the database, PHP and MySQL are used. With the integration of Python and PHP scripting programs, the developed system is able to perform processing on online servers while remaining accessible to users through a browser from any terminal. According to the results and analysis, an accuracy of about 81.8% can be achieved based on a pre-trained model for face recognition and 80% for face mask detection.
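A sketch of the SVM training step only, not the full attendance system: flattened grey-scale face crops are used to fit a scikit-learn SVM. Real images would be loaded and resized with OpenCV (cv2.imread / cv2.resize); random arrays stand in here so the sketch runs without an image folder, and the identities are illustrative.

```python
# Sketch: fit an SVM on flattened face crops (random arrays stand in for
# 64x64 grey-scale crops that would normally come from cv2.imread/cv2.resize).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
def fake_face(identity):                      # stand-in for one flattened face crop
    return rng.normal(loc=identity, scale=0.1, size=64 * 64)

X = np.array([fake_face(i) for i in (0, 0, 1, 1)])
y = ["alice", "alice", "bob", "bob"]          # illustrative registered users

clf = SVC(kernel="linear")                    # trained model used for recognition
clf.fit(X, y)
print(clf.predict([fake_face(1)]))            # -> ['bob']
```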
This work provides an overview of the organization from a management perspective within big health data analytics, especially for elderly employees: organizations can assign elderly employees to the right tasks, reducing costs by increasing employees' job performance and organizational performance. It addresses the important role of big health data analytics (BDHA) in the healthcare system; moreover, BDHA enables a patient's medical records to be searched in a dynamic, interactive manner. One billion records were generated in two hours. Current clinical reporting compares large health data profiles and meta big health data, giving health apps basic interfaces. A combination of Hadoop/MapReduce and HBase was used to generate the necessary hospital-specific large health data. One billion (10 TB) and three billion (30 TB) HBase large health data files could be created in a week or a month using this concept. Apache Hadoop technologies were tested on simulated medical records. Inconsistencies reduced the big health data. An encounter-centered big health database was difficult to set up owing to the complicated medical-system connections between big health data profiles. Attributes associated with job performance, such as gender, current/past job positions and health conditions, are important. Regarding gender, 66.36% of respondents in the experiments are female and 33.64% are male; the majority (66.97%) are healthy, 30.58% have common geriatric diseases, and the remaining 2.45% suffer from occupational diseases. In terms of current/past job positions, 20% of respondents work as accountants, followed by sales and management-level positions. The Diagnostic and Statistical Manual lists 157 distinct illnesses. Individuals may be diagnosed with one or more illnesses as a consequence of medical health professionals observing and analyzing their symptoms. It has been found that mental health issues have a negative impact on employees' job performance. For example, research on individuals with anxiety and depression shows a direct impact on concentration, decision-making and risk-taking behavior, which can be determinants of job performance. Machine learning focuses on approaches that can make accurate predictions about future characteristics based on previous training and post-training. Principles such as the job task and computational learning are crucial for machine learning algorithms that use large amounts of big health data.
The coronavirus disease 2019, or COVID-19, has shifted the medical paradigm from face-to-face care to telehealth. Telehealth has become a vital resource to contain the spread of the virus and ensure the continued care of patients. In terms of preventing cardiovascular diseases, automating electrocardiogram (ECG) classification is a promising telehealth intervention. The healthcare service ensures that patient care is appropriate, comfortable, and accessible. Convolutional neural networks (CNNs) have demonstrated promising results in ECG categorization, which requires high accuracy and short training time to ensure healthcare quality. This study proposes a one-dimensional CNN (1D-CNN) arrhythmia classification based on the differential evolution (DE) algorithm to optimize the accuracy of ECG classification and the training time. The performance of 1D-CNNs with different activation functions is optimized based on the standard DE algorithm. Finally, based on the MIT-BIH and SCDH arrhythmia databases, the performances of the optimized and unoptimized 1D-CNNs are compared and analysed. Results show that the 1D-CNN optimized by DE has higher accuracy in heartbeat classification. The optimized 1D-CNN improves from 97.6% to 99.5% on MIT-BIH and from 80.2% to 88.5% on SCDH. Therefore, the optimized 1D-CNN shows improvements of 1.9% and 8.3% on the two datasets, respectively. In addition, compared with the unoptimized 1D-CNN under the same parameter settings, the optimized 1D-CNN has less training time. Under the conditions of the ReLU function and 10 epochs, training takes 9.22 s on MIT-BIH and 10.35 s on SCDH, reducing training time by 67.2% and 64.2%, respectively.
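A minimal sketch of the optimisation loop only: SciPy's differential_evolution searches a learning rate and a kernel size for a small Keras 1D-CNN. Random signals stand in for the MIT-BIH heartbeat segments, and the tiny search space and epoch count are illustrative rather than the paper's settings.

```python
# Sketch: differential evolution over two 1D-CNN hyperparameters (toy data).
import numpy as np
import tensorflow as tf
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 180, 1)).astype("float32")    # 200 beats, 180 samples each
y = rng.integers(0, 2, size=200)                         # two heartbeat classes

def objective(params):
    lr, kernel = params[0], int(round(params[1]))
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(16, kernel, activation="relu", input_shape=(180, 1)),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=2, verbose=0)
    return 1.0 - model.evaluate(X, y, verbose=0)[1]      # minimise 1 - accuracy

result = differential_evolution(objective, bounds=[(1e-4, 1e-2), (3, 11)],
                                maxiter=3, popsize=4, seed=1)
print(result.x)                                          # best learning rate, kernel size
```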
The spherical fuzzy set (SFS) model is one of the newly developed extensions of fuzzy sets (FS) for the purpose of dealing with uncertainty or vagueness in decision making. The aim of this paper is to define new exponential and Einstein exponential operational laws for spherical fuzzy sets and their corresponding aggregation operators. We introduce the operational laws for exponential and Einstein exponential SFSs in which the base values are crisp numbers and the exponents (weights) are spherical fuzzy numbers. Some of the properties and characteristics of the proposed operations are then discussed. Based on these operational laws, some new aggregation operators for the SFS model, namely Spherical Fuzzy Weighted Exponential Averaging (SFWEA) and Spherical Fuzzy Einstein Weighted Exponential Averaging (SFEWEA) operators are introduced. Finally, a decision-making algorithm based on these newly introduced aggregation operators is proposed and applied to a multi-criteria decision making (MCDM) problem related to ranking different types of psychotherapy.
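For reference, the standard definition of a spherical fuzzy set on which such operational laws are built (the paper's new exponential and Einstein exponential laws themselves are not reproduced here):

```latex
% Standard spherical fuzzy set definition (membership, neutrality, non-membership).
\[
  A = \{\, \langle x,\; \mu_A(x),\; \eta_A(x),\; \nu_A(x) \rangle \mid x \in X \,\},
  \qquad
  0 \le \mu_A^{2}(x) + \eta_A^{2}(x) + \nu_A^{2}(x) \le 1,
\]
\[
  \pi_A(x) = \sqrt{\,1 - \mu_A^{2}(x) - \eta_A^{2}(x) - \nu_A^{2}(x)\,}
  \quad \text{(degree of refusal/hesitancy).}
\]
```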
Because mobile technology and the widespread usage of mobile devices have swiftly and radically evolved, several training centers have started to offer mobile training (m-training) via mobile devices. Thus, designing suitable m-training course content for training employees via mobile device applications has become an important professional development issue to allow employees to obtain knowledge and improve their skills in the rapidly changing mobile environment. Previous studies have identified challenges in this domain. One important challenge is that no solid theoretical framework serves as a foundation to provide instructional design guidelines for interactive m-training course content that motivates and attracts trainees to the training process via mobile devices. This study proposes a framework for designing interactive m-training course content using mobile augmented reality (MAR). A mixed-methods approach was adopted. Key elements were extracted from the literature to create an initial framework. Then, the framework was validated by interviewing experts, and it was tested by trainees. This integration led us to evaluate and prove the validity of the proposed framework. The framework follows a systematic approach guided by six key elements and offers a clear instructional design guideline checklist to ensure the design quality of interactive m-training course content. This study contributes to the knowledge by establishing a framework as a theoretical foundation for designing interactive m-training course content. Additionally, it supports the m-training domain by assisting trainers and designers in creating interactive m-training courses to train employees, thus increasing their engagement in m-training. Recommendations for future studies are proposed.
Waste generation in smart cities is a critical issue, and interim steps towards its management have not been very effective. At present, the challenge of meeting recycling requirements, owing to the practical difficulty involved in waste sorting, slows the smart-city circular economy (CE) vision. In this paper, a digital model that automatically sorts the generated waste and classifies the type of waste as per the recycling requirements, based on an artificial neural network (ANN) and feature fusion techniques, is proposed. In the proposed model, various features extracted using image processing are combined to develop a sophisticated classifier. Based on the different features, different models are built, and each model produces a single decision. The class of waste is then determined using machine learning. The model is validated by extracting relevant information from a dataset containing 2400 images of possible waste types recycled across three categories. Based on the analysis, it is observed that the proposed model achieved an accuracy of 91.7%, proving its ability to sort and classify the waste as per the recycling requirements automatically. Overall, this analysis suggests that a digitally enabled CE vision could improve waste sorting services and recycling decisions across the value chain in smart cities.
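A sketch of the decision-level fusion idea: one small neural network per feature set, with their class probabilities averaged into a single decision. Random features stand in for the colour/texture descriptors extracted from waste images, and the network sizes are illustrative, not the paper's configuration.

```python
# Sketch: one MLP per feature set, fused by averaging class probabilities.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 300
colour_feats = rng.normal(size=(n, 8))        # stand-in for colour histogram features
texture_feats = rng.normal(size=(n, 12))      # stand-in for texture descriptors
labels = rng.integers(0, 3, size=n)           # three recycling categories

colour_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(colour_feats, labels)
texture_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(texture_feats, labels)

def fused_prediction(colour_x, texture_x):
    proba = (colour_net.predict_proba(colour_x) + texture_net.predict_proba(texture_x)) / 2
    return proba.argmax(axis=1)               # fused waste category per sample

print(fused_prediction(colour_feats[:5], texture_feats[:5]))
```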
The COVID-19 virus has caused a worldwide pandemic, affecting numerous individuals and accounting for more than a million deaths. Countries around the world had to declare complete lockdowns when the coronavirus led to community spread. Although the real-time polymerase chain reaction (RT-PCR) test is the gold-standard test for COVID-19 screening, it is not satisfactorily accurate and sensitive. On the other hand, computed tomography (CT) scan images are much more sensitive and can be suitable for COVID-19 detection. To this end, in this paper, we develop a fully automated method for fast COVID-19 screening from chest CT-scan images employing deep learning techniques. For this supervised image classification problem, a bootstrap aggregating (bagging) ensemble of three transfer learning models, namely, Inception v3, ResNet34 and DenseNet201, has been used to boost the performance of the individual models. The proposed framework, called ET-NET, has been evaluated on a publicly available dataset, achieving 97.81 ± 0.53% accuracy, 97.77 ± 0.58% precision, 97.81 ± 0.52% sensitivity and 97.77 ± 0.57% specificity on 5-fold cross-validation, outperforming the state-of-the-art method on the same dataset by 1.56%. The relevant codes for the proposed approach are accessible at: https://github.com/Rohit-Kundu/ET-NET_Covid-Detection.
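A sketch of the ensembling step only, not the authors' full training pipeline (see their repository above): the three torchvision backbones are given two-class heads and their softmax outputs are averaged, the "soft" aggregation commonly used in bagging-style ensembles. weights=None keeps the sketch runnable offline; in practice ImageNet-pretrained weights would be loaded and fine-tuned.

```python
# Sketch: average the softmax outputs of three two-class transfer-learning backbones.
import torch
import torch.nn as nn
from torchvision import models

def two_class(model, attr):
    head = getattr(model, attr)
    setattr(model, attr, nn.Linear(head.in_features, 2))   # replace classifier head
    return model.eval()

nets = [
    two_class(models.inception_v3(weights=None, aux_logits=False), "fc"),
    two_class(models.resnet34(weights=None), "fc"),
    two_class(models.densenet201(weights=None), "classifier"),
]

x = torch.randn(1, 3, 299, 299)               # stand-in for a preprocessed CT slice
with torch.no_grad():
    probs = torch.stack([net(x).softmax(dim=1) for net in nets]).mean(dim=0)
print(probs)                                   # averaged COVID / non-COVID probabilities
```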
The novel coronavirus disease, which originated in Wuhan, developed into a severe public health problem worldwide. Immense stress on society and health departments arose from the multiplying numbers of COVID-19 carriers and deaths. This stress can be lowered by performing a high-speed diagnosis of the disease, which can be a crucial stride in opposing the deadly virus. A large amount of time is consumed in diagnosis. Applications that use medical images such as X-rays or CT scans can speed up diagnosis. Hence, this paper aims to create a computer-aided diagnosis system that takes a chest X-ray as input and classifies it into one of three classes, namely COVID-19, viral pneumonia, and healthy. Since the dataset of COVID-19-positive chest X-rays was small, we exploited four pre-trained deep neural networks (DNNs) to find the best one for this system. The dataset consisted of 2905 images, with 219 COVID-19 cases, 1341 healthy cases, and 1345 viral pneumonia cases. Of these images, the models were evaluated on 30 images of each class for testing, while the rest were used for training. It is observed that AlexNet attained an accuracy of 97.6% with an average precision, recall, and F1 score of 0.98, 0.97, and 0.98, respectively.
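A minimal sketch of the transfer-learning setup for the best-performing backbone: AlexNet's final layer is replaced with a three-class head (COVID-19, viral pneumonia, healthy) and only that head is trained. weights=None keeps the sketch runnable offline and the batch is random; the study fine-tunes ImageNet-pretrained networks on the actual chest X-ray dataset.

```python
# Sketch: replace AlexNet's last layer with a 3-class head and train only the head.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=None)
for p in model.parameters():
    p.requires_grad = False                        # freeze the pretrained backbone
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 3)

optimiser = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)                    # stand-in for an X-ray batch
y = torch.tensor([0, 1, 2, 0])                     # class indices
loss = criterion(model(x), y)
loss.backward()
optimiser.step()
print(float(loss))
```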