Social media is used worldwide for a wide variety of purposes. Engagement activities such as commenting have therefore attracted many scholars, owing to their ability to reveal critical findings such as the role of users' sentiment. However, there is a gap in how to detect crises from users' sentiment expressed in comments, and so this study draws on framing theory to determine users' sentiment for crisis prediction. The generic content frames of framing theory, namely conflict, economic consequences, human interest, morality, and attribution of responsibility, serve as independent variables, with sentiment as the dependent variable. Comments from selected Facebook posts were extracted as case studies and analysed with a sentiment analysis Application Programming Interface (API) web tool. The comments were then further analysed through content analysis using the Positive and Negative Affect Schedule (PANAS) scale and statistically evaluated using SEM-PLS. The model shows that 44.8% of the variance in emotion and reactions towards sensitive-issue postings is explained by the independent variables. Only the economic consequences and attribution of responsibility frames correlated with emotion and reaction at p < 0.05. News reporting oriented towards the economic and responsibility frames sparks negative sentiment, which suggests it can serve as a pre-crisis detection signal to assist the Royal Malaysian Police and other relevant stakeholders in preventing criminal activities on their respective social media.
Recent advances in imaging technologies, such as intra-oral surface scanning, have rapidly generated large datasets of high-resolution three-dimensional (3D) sample reconstructions. These datasets contain a wealth of phenotypic information that can provide an understanding of morphological variation and evolution. The geometric morphometric method (GMM) with landmarks, together with the development of sliding and surface semilandmark techniques, has greatly enhanced the quantification of shape. This study aimed to determine whether there are significant differences in 3D palatal rugae shape between siblings. Digital casts representing 25 pairs of full siblings in each of three groups, male-male (MM), female-female (FF), and female-male (FM), were digitized and transferred to a GMM system. The palatal rugae were determined, quantified, and visualized using GMM computational tools with MorphoJ software (University of Manchester). Principal component analysis (PCA) and canonical variates analysis (CVA) were employed to analyze palatal rugae shape variability and distinguish between sibling groups based on shape. Additionally, regression analysis examined the potential impact of size on palatal rugae shape. The study revealed that the first nine principal components together accounted for 71.3% of palatal rugae shape variation. In addition, the size of the palatal rugae has a negligible impact on its shape. Whilst palatal rugae are known for their individuality, it is noteworthy that three palatal rugae (right first, right second, and left third) can differentiate sibling groups, which may be attributed to genetics. Therefore, it is suggested that palatal rugae morphology can serve as a means of forensic identification for siblings.
The importance of incorporating an agile approach into creating sustainable products has been widely discussed. This approach can enhance innovation integration, improve adaptability to changing development circumstances, and increase the efficiency and quality of the product development process. While many agile methods originated in the software development context and were formulated based on successful software projects, they often fail due to incorrect procedures and a lack of acceptance, preventing deep integration into the process. Additionally, decision-making for market evaluation is often hindered by unclear and subjective information. Therefore, this study introduces an extended TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method for sustainable product development. This method leverages the benefits of cloud model theory to address randomness and uncertainty (intrapersonal uncertainty) and the advantages of rough set theory to flexibly handle market demand uncertainty without requiring extra information. The study proposes an integrated weighting method that considers both subjective and objective weights to determine comprehensive criteria weights. It also presents a new framework, named Sustainable Agility of Product Development (SAPD), to evaluate criteria for assessing sustainable product development. To validate the effectiveness of the proposed method, a case study is conducted on small and medium-sized enterprises in China. The results show that the company needs to conduct product structure research and development to realize new product functions.
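The extended cloud/rough TOPSIS requires machinery beyond what an abstract conveys, but the classical TOPSIS core it builds on can be sketched compactly. The following pure-Python sketch (the function name and example inputs are illustrative, not taken from the study) ranks alternatives by their relative closeness to the ideal solution:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with classical TOPSIS.

    matrix  : rows = alternatives, columns = criteria scores
    weights : criterion weights summing to 1
    benefit : per-criterion flag, True if higher values are better
    """
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # Ideal and anti-ideal solutions per criterion.
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness to the ideal
    return scores
```

The alternative with the highest closeness score is ranked first; the cloud-model and rough-set extensions replace the crisp matrix entries with uncertain representations while keeping this ranking logic.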
Software-Defined Networking (SDN) has alleviated traditional network limitations but faces a significant challenge in Distributed Denial of Service (DDoS) attacks against the SDN controller, and current detection methods lack evaluation on realistic SDN datasets and standard DDoS attacks (i.e., high-rate DDoS attacks). Therefore, a realistic dataset called HLD-DDoSDN is introduced, encompassing prevalent DDoS attacks specifically aimed at an SDN controller over the Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), and User Datagram Protocol (UDP). The dataset also incorporates diverse levels of traffic fluctuation, representing different traffic variation rates (i.e., high and low rates) in DDoS attacks. It is qualitatively compared with existing SDN datasets and quantitatively evaluated across all eight scenarios to demonstrate its superiority. Furthermore, it fulfils the requirements of a benchmark dataset in terms of size and variety of attacks and scenarios, with significant features that contribute strongly to detecting realistic SDN attacks. The features of HLD-DDoSDN are evaluated using a Deep Multilayer Perceptron (D-MLP) based detection approach. Experimental findings indicate that the employed features achieve high accuracy, recall, and precision in detecting both high- and low-rate DDoS flooding attacks.
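The abstract does not enumerate the dataset's features, but one flow-level indicator commonly used in DDoS detection (offered here purely as an illustration, not as the paper's feature set) is the normalized entropy of a traffic attribute within a time window; flooding from a few sources collapses it, while spoofed floods inflate it:

```python
import math
from collections import Counter

def normalized_entropy(values):
    """Shannon entropy of a traffic attribute observed in a time window
    (e.g., packet source addresses), normalized to [0, 1]. Sudden shifts
    in this feature are a common indicator of DDoS flooding."""
    counts = Counter(values)
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    max_h = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h
```

A per-window feature like this would be one column of the input vector fed to an MLP-based detector.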
Evaluating and selecting software packages that meet an organization's requirements is a difficult aspect of the software engineering process. Selecting the wrong open-source EMR software package can be costly and may adversely affect business processes and the functioning of the organization. This study aims to evaluate and select open-source EMR software packages based on multi-criteria decision-making. A hands-on study was performed in which a set of open-source EMR software packages were implemented locally on separate virtual machines to examine the systems more closely. Several measures were specified as the evaluation basis, and the systems were ranked on a set of metric outcomes using an integrated Analytic Hierarchy Process (AHP) and TOPSIS approach. The experimental results showed that GNUmed and OpenEMR achieved better ranking scores than the other open-source EMR software packages.
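In an integrated AHP-TOPSIS pipeline, AHP typically supplies the criteria weights that TOPSIS then consumes. A minimal sketch of the AHP weighting step, using the row geometric mean approximation of the priority vector (the pairwise matrix below is a made-up example, not the study's data):

```python
def ahp_weights(pairwise):
    """Approximate AHP priority weights from a pairwise comparison
    matrix via the row geometric mean method. pairwise[i][j] holds the
    judged importance of criterion i relative to criterion j."""
    n = len(pairwise)
    gmeans = []
    for row in pairwise:
        p = 1.0
        for x in row:
            p *= x
        gmeans.append(p ** (1.0 / n))  # geometric mean of the row
    s = sum(gmeans)
    return [g / s for g in gmeans]     # normalize to sum to 1
```

In practice a consistency ratio check on the pairwise matrix would follow before the weights are passed on to the TOPSIS ranking stage.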
This study has two objectives: first, to develop a highly secure approach to transmitting electronic medical records (EMRs), and second, to identify entities that transmit private patient information without permission. The NTRU and Advanced Encryption Standard (AES) cryptosystems are secure encryption methods. AES is a proven technology already used in several systems to secure sensitive data; the United States government has used AES since June 2003 to protect sensitive and essential information. NTRU, meanwhile, protects sensitive data against attacks by quantum computers, which can break the RSA cryptosystem and elliptic curve cryptography algorithms. A hybrid of AES and NTRU is developed in this work to improve EMR security, and the proposed hybrid cryptography technique is implemented to secure the EMR transmission process. The proposed security solution can provide protection for over 40 years and is resistant to quantum computers. Moreover, the technique provides the evidence required by law to identify disclosure or misuse of patient records. The proposed solution can effectively secure EMR transmission, protect patient rights, and identify the source responsible for disclosing confidential patient records. Future work should improve the proposed hybrid technique for securing data managed by institutional websites.
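The hybrid pattern described (a symmetric cipher for the bulk EMR payload, an asymmetric cipher to wrap the session key) can be sketched structurally. Python's standard library ships neither AES nor NTRU, so this sketch uses a SHA-256 counter keystream as an explicit *placeholder* for AES and a caller-supplied callback as a stand-in for NTRU key wrapping; it illustrates the envelope structure only, not a secure implementation:

```python
import hashlib
import secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """PLACEHOLDER symmetric cipher: XOR with a SHA-256 counter
    keystream. In the described system this role is played by AES."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def hybrid_encrypt(record: bytes, wrap_key):
    """Hybrid envelope: a fresh session key encrypts the EMR payload;
    the session key itself is wrapped by the asymmetric cipher
    (NTRU in the paper; `wrap_key` is a caller-supplied stand-in)."""
    session_key = secrets.token_bytes(32)
    ciphertext = _keystream_xor(session_key, record)
    return wrap_key(session_key), ciphertext

def hybrid_decrypt(wrapped_key, ciphertext, unwrap_key):
    """Unwrap the session key, then decrypt the payload (XOR keystream
    decryption is identical to encryption)."""
    return _keystream_xor(unwrap_key(wrapped_key), ciphertext)
```

The design point is that only the short session key pays the cost of the quantum-resistant asymmetric operation, while the bulk record uses the fast symmetric cipher.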
Deformation of quay walls is one of the main sources of damage to port facilities, and liquefaction of the backfill and base soil of the wall is the main cause of quay wall failure. During earthquakes, the material most susceptible to liquefaction in seashore regions is loose saturated sand. In this study, the effects of increasing the wall width and of soil improvement on the behavior of gravity quay walls are examined in order to determine the optimum improved region. The FLAC 2D software was used to analyze and model soil and loading under different conditions, and the behavior of liquefiable soil was simulated using the "Finn" constitutive model, which is specifically designed to capture liquefaction phenomena and excess pore pressure generation.
Test collections are used to evaluate information retrieval (IR) systems in laboratory-based evaluation experiments. In the classic setting, generating relevance judgments involves human assessors and is a costly and time-consuming task. Researchers and practitioners still face the challenge of performing reliable, low-cost evaluation of retrieval systems. Crowdsourcing, a novel method of data acquisition, is widely used across many research fields, and it has been shown to be an inexpensive, quick, and reliable alternative for creating relevance judgments. One application of crowdsourcing in IR is judging the relevance of query-document pairs. For a successful crowdsourcing experiment, the relevance judgment tasks should be designed precisely, with an emphasis on quality control. This paper explores the factors that influence the accuracy of relevance judgments produced by workers and how to improve the reliability of judgments in crowdsourcing experiments.
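Two quality-control devices recur in crowdsourced relevance judging: aggregating redundant worker labels by majority vote, and gating workers on "honeypot" items with known gold labels. A minimal sketch of both (the label names are illustrative):

```python
from collections import Counter

def majority_vote(judgments):
    """Aggregate redundant worker labels for one query-document pair
    into a single consensus judgment."""
    return Counter(judgments).most_common(1)[0][0]

def gold_accuracy(worker_labels, gold_labels):
    """Fraction of honeypot (gold) items a worker labelled correctly,
    a standard gate for filtering out careless or spamming workers."""
    hits = sum(1 for w, g in zip(worker_labels, gold_labels) if w == g)
    return hits / len(gold_labels)
```

A typical pipeline rejects workers below a gold-accuracy threshold and only then takes the majority vote over the remaining judgments.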
With the advent of high-throughput sequencing technologies, many staphylococcal genomes have been sequenced. Comparative analysis of these strains will provide a better understanding of their biology, phylogeny, virulence and taxonomy, which may contribute to better management of diseases caused by staphylococcal pathogens. We developed StaphyloBase with the goal of providing a one-stop genomic resource platform for the scientific community to access, retrieve, download, browse, search, visualize and analyse staphylococcal genomic data and annotations. We anticipate that this resource platform will facilitate the analysis of staphylococcal genomic data, particularly in comparative analyses. StaphyloBase currently has a collection of 754 032 protein-coding sequences (CDSs), 19 258 rRNAs and 15 965 tRNAs from 292 genomes of different staphylococcal species. Information about these features is also included, such as putative functions, subcellular localizations and gene/protein sequences. Our web implementation supports diverse query types and the exploration of CDS- and RNA-type information in detail using an AJAX-based real-time search system. JBrowse has also been incorporated to allow rapid and seamless browsing of staphylococcal genomes. The Pairwise Genome Comparison tool is designed for comparative genomic analysis, for example, to reveal the relationships between two user-defined staphylococcal genomes. A newly designed Pathogenomics Profiling Tool (PathoProT) is also included in this platform to facilitate comparative pathogenomics analysis of staphylococcal strains. In conclusion, StaphyloBase offers access to a range of staphylococcal genomic resources as well as analysis tools for comparative analyses. Database URL: http://staphylococcus.um.edu.my/.
Automatic speech recognition (ASR) is currently used in many assistive technologies, such as helping individuals with speech impairment in their communication ability. One challenge in ASR for speech-impaired individuals is the difficulty in obtaining a good speech database of impaired speakers for building an effective speech acoustic model. Because there are very few existing databases of impaired speech, and these are also limited in size, the obvious solution for building an acoustic model of impaired speech is to employ adaptation techniques. However, two issues have not been addressed in existing studies on adaptation for speech impairment: (1) identifying the most effective adaptation technique for impaired speech; and (2) the use of suitable source models to build an effective impaired-speech acoustic model. This research investigates these two issues for dysarthria, a type of speech impairment affecting millions of people. We applied both unimpaired and impaired speech as the source model with well-known adaptation techniques, namely maximum likelihood linear regression (MLLR) and constrained MLLR (C-MLLR). The recognition accuracy of each impaired-speech acoustic model is measured in terms of word error rate (WER), with further assessments including phoneme insertion, substitution and deletion rates. Unimpaired speech, when combined with limited high-quality impaired-speech data, improved the performance of ASR systems in recognising severely impaired dysarthric speech. The C-MLLR adaptation technique was also found to be better than MLLR in recognising mildly and moderately impaired speech, based on statistical analysis of the WER. Phoneme substitution was found to be the largest contributor to WER in dysarthric speech at all levels of severity. The results show that speech acoustic models derived from suitable adaptation techniques improve the performance of ASR systems in recognising impaired speech with limited adaptation data.
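WER and the insertion/substitution/deletion breakdown reported above come from a Levenshtein alignment of the reference and hypothesis word sequences. A self-contained sketch of that computation:

```python
def wer_breakdown(ref, hyp):
    """Word error rate with substitution/insertion/deletion counts, via
    Levenshtein alignment of reference and hypothesis word lists."""
    n, m = len(ref), len(hyp)
    # d[i][j] = (cost, subs, ins, dels) for aligning ref[:i] with hyp[:j]
    d = [[None] * (m + 1) for _ in range(n + 1)]
    d[0][0] = (0, 0, 0, 0)
    for i in range(1, n + 1):                       # ref word dropped: deletion
        c = d[i - 1][0]
        d[i][0] = (c[0] + 1, c[1], c[2], c[3] + 1)
    for j in range(1, m + 1):                       # extra hyp word: insertion
        c = d[0][j - 1]
        d[0][j] = (c[0] + 1, c[1], c[2] + 1, c[3])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = d[i - 1][j - 1]
            if ref[i - 1] == hyp[j - 1]:
                cand = [s]                          # match, no cost
            else:
                cand = [(s[0] + 1, s[1] + 1, s[2], s[3])]   # substitution
            dl = d[i - 1][j]
            cand.append((dl[0] + 1, dl[1], dl[2], dl[3] + 1))  # deletion
            ins = d[i][j - 1]
            cand.append((ins[0] + 1, ins[1], ins[2] + 1, ins[3]))  # insertion
            d[i][j] = min(cand)
    cost, subs, ins, dels = d[n][m]
    return {"wer": cost / n, "sub": subs, "ins": ins, "del": dels}
```

The per-type counts are what allow the observation above, that substitutions dominate dysarthric-speech errors, to be made quantitatively.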
In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead of sending a large coefficient vector in each header has been the most important challenge of RNC. Moreover, because the Gauss-Jordan elimination method is used to decode the encoded blocks and check linear dependency among the coefficient vectors, considerable computational complexity can be imposed on peers. To address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficient matrix generation method that guarantees no linear dependency in the generated coefficient matrix. Using the proposed framework, each peer encapsulates a single coefficient entry, instead of n entries, into each encoded packet, which results in very low transmission overhead. The inverted coefficient matrix can also be obtained with a small number of simple arithmetic operations, so peers sustain very low computational complexity. As a result, MATIN makes random network coding more efficient in P2P video streaming systems. Simulation results obtained with OMNET++ show that it substantially outperforms RNC with Gauss-Jordan elimination, providing better video quality on peers in terms of four key performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay.
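The costly step MATIN avoids can be made concrete: a classic RNC peer must run Gauss-Jordan elimination over a finite field on each arriving coefficient vector to decide whether it is innovative (increases the rank). A minimal sketch of that dependency check, over GF(p) with an illustrative small prime:

```python
def rank_gf(vectors, p=257):
    """Rank of coefficient vectors over GF(p), p prime, via Gauss-Jordan
    elimination -- the linear-dependency check a classic RNC peer runs
    on received coefficient vectors. A new vector is innovative iff
    adding it increases this rank."""
    rows = [[x % p for x in v] for v in vectors]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)   # modular inverse (Fermat)
        rows[rank] = [x * inv % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(x - f * y) % p for x, y in
                           zip(rows[r], rows[rank])]
        rank += 1
    return rank
```

This is O(n^3) per generation; MATIN's contribution is a coefficient construction that removes the need for both this check and the full n-entry header.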
We describe a server that allows the interrogation of the Protein Data Bank for hypothetical 3D side chain patterns that are not limited to known patterns from existing 3D structures. A minimal side chain description allows a variety of side chain orientations to exist within the pattern, and generic side chain types such as acid, base and hydroxyl-containing can be additionally deployed in the search query. Moreover, only a subset of distances between the side chains need be specified. We illustrate these capabilities in case studies involving arginine stacks, serine-acid group arrangements and multiple catalytic triad-like configurations. The IMAAAGINE server can be accessed at http://mfrlab.org/grafss/imaaagine/.
Similarities in the 3D patterns of RNA base interactions or arrangements can provide insights into their functions and roles in stabilization of the RNA 3D structure. Nucleic Acids Search for Substructures and Motifs (NASSAM) is a graph theoretical program that can search for 3D patterns of base arrangements by representing the bases as pseudo-atoms. The geometric relationship of the pseudo-atoms to each other as a pattern can be represented as a labeled graph in which the pseudo-atoms are the graph's nodes and the edges are the inter-pseudo-atomic distances. The input files for NASSAM are PDB-formatted 3D coordinates. This web server can be used to identify matches of base arrangement patterns in a query structure to annotated patterns that have been reported in the literature or that have possible functional and structural stabilization implications. The NASSAM program is freely accessible without any login requirement at http://mfrlab.org/grafss/nassam/.
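The labeled-graph idea behind this kind of search can be illustrated with a toy matcher: query nodes carry labels, query edges carry target inter-node distances, and a hit is any tuple of structure pseudo-atoms whose labels and pairwise distances satisfy the pattern within a tolerance. This brute-force sketch is a didactic stand-in, not NASSAM's actual algorithm:

```python
from itertools import permutations
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_pattern(atoms, labels, dists, tol=0.5):
    """Brute-force matcher for a labeled distance pattern.

    atoms  : list of (label, (x, y, z)) pseudo-atoms from one structure
    labels : label required for each query node, by query-node index
    dists  : {(i, j): distance} constraints between query nodes i and j
    tol    : allowed deviation (same units as the coordinates)
    """
    hits = []
    for combo in permutations(range(len(atoms)), len(labels)):
        if any(atoms[c][0] != lab for c, lab in zip(combo, labels)):
            continue  # node labels must match the pattern
        ok = all(abs(_dist(atoms[combo[i]][1], atoms[combo[j]][1]) - d) <= tol
                 for (i, j), d in dists.items())
        if ok:
            hits.append(combo)
    return hits
```

Real motif-search programs replace this factorial enumeration with subgraph-isomorphism algorithms, but the graph encoding (labeled nodes, distance-labeled edges) is the same.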
The secondary structure of RNA pseudoknots has been extensively inferred and scrutinized by computational approaches. Experimental methods for determining RNA structure are time consuming and tedious; therefore, predictive computational approaches are required. Predicting the most accurate and energy-stable pseudoknot RNA secondary structure has been proven to be an NP-hard problem. In this paper, a new RNA folding approach, termed MSeeker, is presented; it includes KnotSeeker (a heuristic method) and Mfold (a thermodynamic algorithm). The global optimization of this thermodynamic heuristic approach was further enhanced by using a case-based reasoning technique as a local optimization method. MSeeker is a proposed algorithm for predicting RNA pseudoknot structure from individual sequences, especially long ones. This research demonstrates that MSeeker improves the sensitivity and specificity of existing RNA pseudoknot structure predictions. The performance and structural results from this proposed method were evaluated against seven other state-of-the-art pseudoknot prediction methods. The MSeeker method had better sensitivity than the DotKnot, FlexStem, HotKnots, pknotsRG, ILM, NUPACK and pknotsRE methods, with 79% of the predicted pseudoknot base-pairs being correct.
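The sensitivity figure quoted above is computed at the level of predicted base pairs. A short sketch of the standard evaluation (note that RNA structure-prediction papers often report positive predictive value under the name "specificity"):

```python
def pair_metrics(predicted, reference):
    """Base-pair level evaluation of a predicted secondary structure.

    predicted, reference : sets of (i, j) base pairs with i < j
    Returns (sensitivity, ppv): the fraction of reference pairs
    recovered, and the fraction of predicted pairs that are correct.
    """
    tp = len(predicted & reference)           # correctly predicted pairs
    sensitivity = tp / len(reference) if reference else 0.0
    ppv = tp / len(predicted) if predicted else 0.0
    return sensitivity, ppv
```

A claim such as "79% of predicted pseudoknot base-pairs were correct" corresponds to the `ppv` value here.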
A combined Support Vector Machine (SVM) and Principal Component Analysis (PCA) approach was used to recognize infant cries with asphyxia. An SVM classifier based on features selected by PCA was trained to differentiate between pathological and healthy cries, with PCA applied to reduce the dimensionality of the vectors that serve as inputs to the SVM. The performance of the SVM with linear and RBF kernels was examined. Experimental results showed that the SVM with the RBF kernel yields good performance: the SVM-PCA combination classifies infant cries with asphyxia with an accuracy of 95.86%.
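The PCA reduction step can be sketched without any numerical library: center the data, form the covariance matrix, and extract the dominant eigenvector by power iteration. This illustrates only the dimensionality-reduction half of the pipeline (the SVM stage is omitted), and the toy data below is illustrative:

```python
def first_pc(data, iters=200):
    """First principal component via power iteration on the covariance
    matrix -- the dimensionality-reduction step that feeds the SVM."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(x[k][i] * x[k][j] for k in range(n)) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]          # converges to dominant eigvec
    return v, means

def project(row, v, means):
    """Score of one sample along the principal component."""
    return sum((row[j] - means[j]) * v[j] for j in range(len(v)))
```

Repeating with deflation yields further components; the projected scores become the low-dimensional feature vectors the classifier is trained on.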
A new filter is developed for the enhancement of scanning electron microscope (SEM) images. A mixed Lagrange time delay estimation auto-regression (MLTDEAR)-based interpolator provides an estimate of the noise variance to a standard Wiener filter. A variety of images were captured, and the performance of the filter is shown to surpass that of conventional noise filters. Because all the information required for processing is extracted from a single image, the method is not constrained by image registration requirements and can therefore be applied in real time in cases where specimen drift is present in the SEM image.
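Given a noise-variance estimate (supplied by the MLTDEAR interpolator in the described system; here simply a parameter), the adaptive Wiener filtering step itself is compact. A pure-Python sketch over a 2D intensity array:

```python
def wiener_denoise(img, noise_var, radius=1):
    """Pixel-wise adaptive Wiener filter. Each output pixel is
        out = mu + max(0, s2 - noise_var) / max(s2, noise_var) * (x - mu)
    where mu and s2 are the mean and variance of the local window:
    flat regions collapse to the local mean, strong edges pass through."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[a][b]
                    for a in range(max(0, i - radius), min(h, i + radius + 1))
                    for b in range(max(0, j - radius), min(w, j + radius + 1))]
            mu = sum(vals) / len(vals)
            s2 = sum((v - mu) ** 2 for v in vals) / len(vals)
            gain = max(0.0, s2 - noise_var) / max(s2, noise_var)
            out[i][j] = mu + gain * (img[i][j] - mu)
    return out
```

The quality of the result hinges entirely on `noise_var`, which is why an accurate single-image noise estimator is the paper's contribution.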
Little is known about the spatial and temporal distribution of blast fishing, which hampers enforcement against this activity. We have demonstrated that a triangular array of hydrophones 1 m apart is capable of detecting blast events whilst effectively rejecting other sources of underwater noise such as snapping shrimp and nearby boat propellers. A total of 13 blasts were recorded in Sepangor Bay, north of Kota Kinabalu, Sabah, Malaysia, from 7 to 15 July 2002, at distances estimated to be up to 20 km, with a directional uncertainty of 0.2 degrees. With such precision, a network of similar hydrophone arrays has the potential to locate individual blast events by triangulation to within 30 m at a range of 10 km.
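The bearing estimate from a small array rests on time-difference-of-arrival geometry: for a distant source the wavefront is effectively planar, so the arrival-time delays across the baselines determine the direction. A minimal 2D sketch for a three-hydrophone array (the geometry and sound speed below are illustrative, not the deployed system's parameters):

```python
import math

def bearing_from_delays(positions, delays, c=1500.0):
    """Plane-wave bearing (degrees, CCW from +x) from arrival delays.

    positions : [(x, y)] coordinates of three hydrophones, metres
    delays    : arrival time of each hydrophone minus hydrophone 0, s
    c         : nominal sound speed in seawater, m/s

    With u the unit vector pointing from the array toward the source,
    delay_i = -((p_i - p_0) . u) / c, giving a 2x2 linear system in
    (ux, uy) over the two independent baselines.
    """
    x0, y0 = positions[0]
    a = [[positions[i][0] - x0, positions[i][1] - y0] for i in (1, 2)]
    b = [-c * delays[i] for i in (1, 2)]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]   # baselines must not be collinear
    ux = (b[0] * a[1][1] - b[1] * a[0][1]) / det
    uy = (a[0][0] * b[1] - a[1][0] * b[0]) / det
    return math.degrees(math.atan2(uy, ux)) % 360.0
```

Crossing the bearings from two or more widely separated arrays then fixes the blast position, which is the triangulation scheme the abstract envisages.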
Stroke is a cerebrovascular disease with high mortality and long-term disability worldwide. Normal brain function depends on an adequate supply of oxygen and nutrients to the brain's complex network through the blood vessels. During a cerebrovascular incident, patients may suffer a hemorrhagic stroke, ischemia, or other blood vessel dysfunction. Structurally, the left and right carotid arteries and the left and right vertebral arteries supply blood to the brain, scalp, and face. A decrease in blood flow through one of the internal carotid arteries, however, can impair the function of the frontal lobes, commonly resulting in numbness, weakness, or paralysis. Recently, the concept of a representation of the brain's wiring, the connectome, was introduced. However, constructing and visualizing such a brain network requires tremendous computation, and previously proposed approaches suffer from high memory consumption and slow execution; interactivity in previously proposed frameworks for brain networks is also an outstanding issue. This study proposes an accelerated approach to brain connectomic visualization based on the graph theory paradigm using the compute unified device architecture (CUDA), extending the previously proposed SurLens Visualization and computer-aided hepatocellular carcinoma frameworks. The accelerated brain structural connectivity framework was evaluated with stripped brain datasets from the Department of Surgery, University of North Carolina, Chapel Hill, USA. Significantly, the proposed framework is able to generate and extract the points and edges of the datasets, display the nodes and edges of the datasets as a network, and clearly map data volumes to the corresponding brain surface.
Moreover, with the framework, surfaces of the dataset can be displayed simultaneously with the nodes and edges. The framework provides a high degree of interactivity, representing the nodes and edges intuitively and mapping dataset features at interactive speeds. Notably, the connectomic algorithm performs remarkably fast on standard hardware.
The healthcare enterprise requires a great deal of knowledge to maintain premium efficiency in the delivery of quality healthcare. We employ Knowledge Management-based knowledge acquisition strategies to procure 'tacit' healthcare knowledge from experienced healthcare practitioners. Situational, problem-specific scenarios are proposed as viable knowledge acquisition and representation constructs. We present a healthcare Tacit Knowledge Acquisition Info-structure (TKAI) that allows remote healthcare practitioners to record their tacit knowledge. TKAI employs (a) ontologies for the standardisation of tacit knowledge and (b) XML to represent scenario instances, both for their transfer over the Internet to the server-side Scenario-Base and for the global sharing of acquired tacit healthcare knowledge.
Health Level 7 (HL7) message semantics allows the effective implementation of Electronic Medical Record (EMR) interchange systems, encompassing both clinical and administrative (i.e. demographic and financial) information, at the expense of complexity with respect to the Protocol Data Unit (PDU) structure and the client-side application architecture. In this paper we describe the use of Extensible Markup Language (XML) document-object modelling and Java client-server connectivity to implement a Web-based system for EMR transaction processing. Our solution features an XML-based description of EMR templates, which are transcribed into Hypertext Markup Language (HTML)-Javascript forms. This allows the client-side user interface and the server-side functionality, i.e. message validation, authentication and database connectivity, to be handled through standard Web client-server mechanisms, the primary assumption being the availability of a browser capable of rendering XML documents and the associated stylesheets. We assume the Internet as the interchange medium, hence the necessity for authentication and data-privacy mechanisms, both of which can be constructed from standard Java-based building blocks.
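The template-transcription step can be sketched with the standard library. The toy template schema below is illustrative only (it is not an HL7-defined format, and the field names and form action are invented for the example); the point is the mechanical XML-to-HTML-form mapping:

```python
import xml.etree.ElementTree as ET

# Illustrative EMR template; element and attribute names are invented.
TEMPLATE = """\
<emr-template name="outpatient-visit">
  <field id="patient-id" label="Patient ID" type="text"/>
  <field id="diagnosis" label="Diagnosis" type="text"/>
  <field id="followup" label="Follow-up required" type="checkbox"/>
</emr-template>"""

def template_to_form(xml_text):
    """Transcribe an XML EMR template into an HTML form, mirroring the
    XML-to-HTML/Javascript step described above."""
    root = ET.fromstring(xml_text)
    rows = []
    for f in root.findall("field"):
        kind = "checkbox" if f.get("type") == "checkbox" else "text"
        rows.append('<label>%s <input name="%s" type="%s"/></label>'
                    % (f.get("label"), f.get("id"), kind))
    # The action URL is a placeholder for the server-side endpoint that
    # would perform validation, authentication and database connectivity.
    return ('<form action="/emr/submit" method="post">\n%s\n</form>'
            % "\n".join(rows))
```

Validation logic (the Javascript in the described system) would be emitted alongside each field in the same pass.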