Continuous gesture recognition can enhance human-computer interaction by capturing human movement with the inertial measurement units (IMUs) in smartphones and using machine learning algorithms to predict the intended gestures. Echo State Networks (ESNs) consist of a fixed internal reservoir that generates rich, diverse nonlinear dynamics in response to input signals, capturing temporal dependencies within the data. This makes ESNs well suited to time series prediction tasks such as continuous gesture recognition; however, their application to gesture recognition has not been rigorously explored. In this study, we sought to enhance the efficacy of ESN models in continuous gesture recognition by exploring diverse model structures, fine-tuning hyperparameters, and experimenting with various training approaches. We evaluated three training schemes, all based on the leave-one-out cross-validation (LOOCV) protocol, to investigate performance in real-world scenarios with different levels of data availability: leaving out one user's data for testing (F1-score: 0.89), leaving out a fraction of every user's data for testing (F1-score: 0.96), and training and testing with LOOCV on a single user's data (F1-score: 0.99). These results outperform the Long Short-Term Memory (LSTM) performance reported in past research (F1-score: 0.87) while keeping training time low, at approximately 13 seconds versus 63 seconds for the LSTM model. Additionally, we further explored the performance of the ESN models through behaviour space analysis using memory capacity, kernel rank, and generalization rank. Our results demonstrate that ESNs can be optimized to achieve high gesture recognition performance on mobile devices across multiple levels of data availability. These findings highlight the practical potential of ESNs to enhance human-computer interaction.
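For concreteness, the sketch below shows the core ESN computation the abstract refers to: a fixed random reservoir driven by the input sequence, with only a linear readout trained by ridge regression. All dimensions and hyperparameters (reservoir size, spectral radius, leak rate, ridge penalty) and the stand-in IMU data are illustrative assumptions, not the settings or data used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the paper's settings): 6-axis IMU input,
# 200 reservoir units, 5 gesture classes.
n_in, n_res, n_out = 6, 200, 5
spectral_radius, leak_rate, ridge = 0.9, 0.3, 1e-6

# Fixed random input and reservoir weights; the reservoir is rescaled
# to the target spectral radius and never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

def run_reservoir(U):
    """Collect leaky-integrator reservoir states for an input sequence U of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = np.empty((len(U), n_res))
    for t, u in enumerate(U):
        x = (1 - leak_rate) * x + leak_rate * np.tanh(W_in @ u + W @ x)
        states[t] = x
    return states

# Only the linear readout is fit, via ridge regression on the collected
# states; U and Y here are random stand-ins for IMU frames and one-hot
# frame-level gesture labels.
U = rng.standard_normal((1000, n_in))
Y = np.eye(n_out)[rng.integers(0, n_out, 1000)]
X = run_reservoir(U)
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

pred = np.argmax(X @ W_out, axis=1)  # per-timestep gesture class prediction
```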
In light of the ongoing COVID-19 pandemic, predicting its trend would significantly impact decision-making. However, this is not a straightforward task due to three main difficulties: temporal autocorrelation, spatial dependency, and concept drift caused by virus mutations and lockdown policies. Although machine learning has been used extensively in related work, no previous research has addressed all three challenges simultaneously. To overcome them, we developed a novel online multi-task regression algorithm that incorporates a chain structure to capture spatial dependency, the ADWIN drift detector to adapt to concept drift, and lagged time-series features to capture temporal autocorrelation. We conducted several comparative experiments on daily confirmed case counts from 20 areas in California and affiliated cities. The results demonstrate that our proposed model is superior at adapting to concept drift in COVID-19 data and at capturing spatial dependencies across regions, leading to a significant improvement in prediction accuracy over existing state-of-the-art batch machine learning methods such as N-BEATS, DeepAR, TCN, and LSTM.
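As a rough illustration of how the three ingredients can fit together, the hypothetical sketch below combines lagged features, a chained per-region prediction feature, and the ADWIN detector from the river online-learning library (assuming its recent `update`/`drift_detected` API). The region identifiers, lag count, error signal fed to ADWIN, and reset-on-drift policy are placeholders for exposition, not the authors' algorithm.

```python
import numpy as np
from river import drift, linear_model, preprocessing

# Hypothetical per-region online pipeline: lag features capture temporal
# autocorrelation, a chain passes each region's prediction to the next to
# model spatial dependency, and ADWIN triggers a model reset on drift.
N_LAGS = 7
regions = ["region_a", "region_b", "region_c"]  # placeholder region ids

models = {r: preprocessing.StandardScaler() | linear_model.LinearRegression()
          for r in regions}
detectors = {r: drift.ADWIN() for r in regions}
history = {r: [] for r in regions}

def step(day_counts):
    """One online predict-then-learn update given today's confirmed counts per region."""
    preds, chain_feature = {}, 0.0
    for r in regions:
        lags = history[r][-N_LAGS:]
        if len(lags) < N_LAGS:          # warm-up: not enough lags yet
            history[r].append(day_counts[r])
            continue
        x = {f"lag_{i}": v for i, v in enumerate(lags)}
        x["chain"] = chain_feature      # prediction of the previous region in the chain
        y_hat = models[r].predict_one(x)
        preds[r] = y_hat
        y = day_counts[r]
        detectors[r].update(abs(y - y_hat))   # monitor the absolute prediction error
        if detectors[r].drift_detected:       # concept drift: restart this region's model
            models[r] = (preprocessing.StandardScaler()
                         | linear_model.LinearRegression())
        models[r].learn_one(x, y)
        history[r].append(y)
        chain_feature = y_hat
    return preds
```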