Ubiquitous personal health records, which can accompany a person everywhere, are a prerequisite for ubiquitous healthcare. Contextual information related to health events is important for the diagnosis and treatment of disease and for the maintenance of good health, yet it is seldom recorded in a health record. We describe a dual cellphone-and-Web-based personal health record system that can include 'external' contextual information. Much of this contextual information is available on the Internet, and ontologies can help identify the relevant sites and information. Retrieving information from the Web, however, requires a search engine, and developing a customized search engine is beyond our scope, so we use the Google Custom Search API Web service to obtain contextual data. In this paper we describe a framework that combines a health-and-environment 'knowledge base', or ontology, with the Google Custom Search API to retrieve contextual information relevant to entries in a ubiquitous personal health record.
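As a minimal sketch of how such a framework might call the search service, the snippet below queries the Google Custom Search JSON API with terms drawn from a health-record entry plus related concepts supplied by an ontology. The `contextual_search` helper, the query-expansion step, and the example terms are illustrative assumptions, not the paper's implementation.

```python
import requests

API_KEY = "YOUR_API_KEY"    # Google API key (placeholder)
ENGINE_ID = "YOUR_CX_ID"    # Custom Search Engine ID (placeholder)

def contextual_search(record_terms, ontology_terms, num_results=5):
    """Query the Google Custom Search JSON API with a health-record
    entry expanded by related concepts from the ontology."""
    query = " ".join(record_terms + ontology_terms)
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": ENGINE_ID,
                "q": query, "num": num_results},
        timeout=10,
    )
    resp.raise_for_status()
    # Each result item carries a title, URL, and snippet that could be
    # attached to the health-record entry as contextual information.
    return [(it["title"], it["link"], it["snippet"])
            for it in resp.json().get("items", [])]

# e.g. an asthma event recorded on the phone, expanded with ontology concepts
hits = contextual_search(["asthma attack"], ["air quality", "pollen count"])
```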
We present a novel online signature verification system that exploits the multiple degrees of freedom a data glove offers for each finger and for the hand, using the singular value decomposition (SVD) as the numerical tool for signature classification and verification. The proposed technique uses the SVD to find the r singular vectors that capture the maximal energy of the glove data matrix A, the so-called principal subspace, so that the effective dimensionality of A can be reduced. Having modeled a data glove signature by its r-principal subspace, we authenticate signatures by computing the angles between the different subspaces. We demonstrate that the data glove is an effective high-bandwidth data-entry device for signature verification. The SVD-based verification technique is tested and shown to recognize forged signatures with a false acceptance rate of less than 1.2%.
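A minimal sketch of this subspace comparison is given below, assuming the glove data matrix A holds one sensor channel per row and one time sample per column. The channel count, the choice of r, and the acceptance threshold are illustrative, and `scipy.linalg.subspace_angles` stands in for whatever angle computation the paper uses.

```python
import numpy as np
from scipy.linalg import subspace_angles

def principal_subspace(A, r):
    """Return the r left singular vectors of A that capture its
    maximal energy (the r-principal subspace)."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :r]

def subspace_distance(A_ref, A_claim, r=3):
    """Smallest principal angle between the reference and claimed
    signature subspaces; a small angle suggests a genuine signature."""
    return subspace_angles(principal_subspace(A_ref, r),
                           principal_subspace(A_claim, r)).min()

# Toy check with hypothetical shapes: 22 glove channels x 200 time samples
rng = np.random.default_rng(0)
reference = rng.standard_normal((22, 200))
claimed = reference + 0.05 * rng.standard_normal((22, 200))
accept = subspace_distance(reference, claimed) < 0.2  # illustrative threshold
```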
The Internet of Things (IoT) is driving the convergence of the physical and digital worlds of technology. Real-time, massive-scale connections produce large amounts of versatile data, which is where Big Data comes into the picture. Big Data refers to large, diverse sets of information whose size and variety exceed the capabilities of widely used database management systems and standard data-processing software tools. Almost every big dataset is dirty: it may contain missing values, typing errors, inaccuracies, and many other issues that degrade the performance of Big Data analytics. One of the biggest challenges in Big Data analytics is therefore discovering and repairing dirty data; failing to do so can lead to inaccurate analytics and unreliable conclusions. We experimented with different missing-value imputation techniques and compared machine learning (ML) model performance under each imputation method. We propose a hybrid model for missing-value imputation that combines ML and sample-based statistical techniques. Furthermore, we took the best-imputed dataset, chosen on the basis of ML model performance, through feature engineering and hyperparameter tuning, using k-means clustering and principal component analysis. Model performance improved dramatically: the XGBoost model achieved a root mean squared logarithmic error (RMSLE) of around 0.125. To mitigate overfitting, we used K-fold cross-validation.
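The sketch below illustrates the shape of such a pipeline under stated assumptions: a sample-based statistical fill for missing values (the ML branch of the hybrid is only noted in a comment), an RMSLE metric, and K-fold cross-validated XGBoost. The inputs `X` and `y`, the hyperparameters, and the helper names are hypothetical and not taken from the paper.

```python
import numpy as np
from sklearn.model_selection import KFold
from xgboost import XGBRegressor

def sample_impute(df, rng=np.random.default_rng(0)):
    """Sample-based statistical fill: replace each missing numeric value
    with a draw from the column's observed values. (A hybrid scheme would
    route heavily missing columns to an ML predictor instead.)"""
    out = df.copy()
    for col in out.select_dtypes("number"):
        miss = out[col].isna()
        if miss.any():
            out.loc[miss, col] = rng.choice(out.loc[~miss, col], miss.sum())
    return out

def rmsle(y_true, y_pred):
    """Root mean squared logarithmic error, clipping predictions at zero."""
    return np.sqrt(np.mean((np.log1p(y_true)
                            - np.log1p(np.clip(y_pred, 0, None))) ** 2))

def cv_rmsle(X, y, k=5):
    """K-fold cross-validated RMSLE for an XGBoost regressor."""
    scores = []
    for tr, va in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        model = XGBRegressor(n_estimators=300, learning_rate=0.05)
        model.fit(X.iloc[tr], y.iloc[tr])
        scores.append(rmsle(y.iloc[va], model.predict(X.iloc[va])))
    return float(np.mean(scores))
```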
Background: Gait recognition is perceived as one of the most promising biometric approaches for the coming decades, especially because of its applicability in surveillance systems. With the recent growth in the use of gait biometrics across surveillance systems, the ability to search such data rapidly has become an emerging need. We therefore address the gait retrieval problem: retrieving, from a large-scale dataset, people whose gaits are similar to that of a query subject. Methods: This paper presents the deep gait retrieval hashing (DGRH) model to address the gait retrieval problem for large-scale datasets. Our proposed method is a supervised hashing method built on a deep convolutional network. We exploit the ability of a convolutional neural network (CNN) to capture semantic gait features for feature representation and learn compact hash codes with a compatible hash function; the DGRH model thus combines gait feature learning with binary hash coding. In addition, the learning objective pairs a classification loss function, which learns to preserve similarity, with a quantization loss function, which controls the quality of the hash codes. Results: The proposed method was evaluated against the CASIA-B, OU-ISIR LP, and OU-ISIR MVLP benchmark datasets and achieved promising results on gait retrieval tasks. Conclusions: The end-to-end deep supervised hashing model learns discriminative gait features and is efficient in terms of storage and speed for gait retrieval.
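As a hedged sketch of how such a loss might be composed in PyTorch: a hash layer maps CNN features to relaxed binary codes, a classification head preserves identity similarity, and a quantization term pushes the codes toward +/-1. The layer sizes, the tanh relaxation, and the weighting `lam` are assumptions for illustration, not the published DGRH architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaitHashHead(nn.Module):
    """Maps backbone gait features to n_bits relaxed hash codes
    plus class logits (illustrative sizes)."""
    def __init__(self, feat_dim=512, n_bits=64, n_classes=124):
        super().__init__()
        self.hash_fc = nn.Linear(feat_dim, n_bits)   # hash-code layer
        self.cls_fc = nn.Linear(n_bits, n_classes)   # classification layer

    def forward(self, feat):
        h = torch.tanh(self.hash_fc(feat))   # relaxed codes in (-1, 1)
        return h, self.cls_fc(h)

def dgrh_style_loss(h, logits, labels, lam=0.1):
    """Classification loss preserves identity similarity; the quantization
    term pulls relaxed codes toward +/-1 so sign(h) loses little info."""
    return F.cross_entropy(logits, labels) + lam * ((h.abs() - 1) ** 2).mean()

# At retrieval time the compact code is simply torch.sign(h),
# stored as binary and compared by Hamming distance.
```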