A low-cost, low-power, and low data-rate solution is proposed to fulfill the information-monitoring requirements of actual large-scale agricultural farms. A small-scale farm can be easily managed; by contrast, a large farm requires automated equipment that contributes to crop production. Sensor-based measurement of soil properties plays an integral role in designing a fully automated agricultural farm and provides more satisfactory results than any manual method. Existing information-monitoring solutions are inefficient for large-scale agricultural farms because of their high deployment cost and limited communication range. A serial-based low-power, long-range, and low-cost communication module is proposed to confront the challenges of monitoring information over long distances. In the proposed system, a tree-based communication mechanism extends the communication range by adding intermediate nodes. Each sensor node consists of a solar panel, a rechargeable cell, a microcontroller, a moisture sensor, and a communication unit, and each node is capable of working as both a sensor node and a router for network traffic. Minimized data logs from the central node are sent daily to the cloud for future analytics. In a detailed open-sight experiment, the communication distance between two points measured 250 m and increased to 750 m when two intermediate nodes were added. The minimum working current of each node was 2 mA, and the packet loss rate across the entire network was approximately 2-5% for different packet sizes. The results show that the proposed approach can serve as a reference model for soil measurement, transmission, and storage in a large-scale agricultural farm.
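The tree-based relay mechanism can be sketched in a few lines; the `Node` class, node IDs, and payload below are illustrative assumptions, not the system's actual firmware.

```python
class Node:
    """A sensor node that can also act as a router in a tree topology."""

    def __init__(self, node_id, parent=None):
        self.node_id = node_id
        self.parent = parent   # next hop toward the central (root) node
        self.log = []          # minimized data log, flushed daily to the cloud

    def send(self, reading):
        """Forward a moisture reading hop by hop to the central node."""
        hops = [self.node_id]
        node = self
        while node.parent is not None:   # relay via intermediate nodes
            node = node.parent
            hops.append(node.node_id)
        node.log.append(reading)         # root accumulates the minimized log
        return hops


# ~250 m per hop in open sight: two intermediate nodes extend range to ~750 m
root = Node("central")
relay1 = Node("relay-1", parent=root)
relay2 = Node("relay-2", parent=relay1)
leaf = Node("field-sensor", parent=relay2)

path = leaf.send({"moisture": 0.31})
```

Because every node owns both a sensing role and a `parent` pointer, extending the network is just a matter of inserting another relay on the path to the root.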
Classification of human emotions based on electroencephalography (EEG) is currently a very popular topic in the provision of human health care and well-being. Fast and effective emotion recognition can play an important role in understanding a patient's emotions and in monitoring stress levels in real time. Because the EEG signal is noisy and non-linear, emotions remain difficult to interpret, and feature extraction can generate large feature vectors. In this article, we propose an efficient spatial feature extraction and feature selection method with a short processing time. The raw EEG signal is first decomposed into a smaller set of intrinsic mode functions (IMFs) using the empirical-mode-based decomposition proposed in our work, known as intensive multivariate empirical mode decomposition (iMEMD). Spatio-temporal analysis is then performed with the Complex Continuous Wavelet Transform (CCWT) to collect all the information in the time and frequency domains. The multi-model feature extraction method uses three deep neural networks (DNNs) to extract features and fuses them into a combined feature vector. To overcome the curse of dimensionality, we propose a differential entropy and mutual information method, which further reduces the feature size by selecting high-quality features, and we pool the k-means results to produce lower-dimensional qualitative feature vectors. The system appears complex, but once the network is trained with this model, real-time testing and validation are fast and achieve good classification performance. The proposed feature selection method is validated on two publicly available benchmark datasets, SEED and DEAP. It is computationally cheaper than more recent emotion recognition methods, supports real-time emotion analysis, and offers good classification accuracy.
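The differential-entropy-based selection step can be illustrated with a minimal sketch; the Gaussian entropy formula and the synthetic four-channel signal are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np


def differential_entropy(x):
    """Differential entropy of a channel under a Gaussian assumption:
    h = 0.5 * log(2 * pi * e * var)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))


def select_channels(eeg, k):
    """Keep the k channels with the highest differential entropy.
    eeg: array of shape (channels, samples)."""
    scores = np.array([differential_entropy(ch) for ch in eeg])
    return np.argsort(scores)[::-1][:k]


# Synthetic 4-channel "EEG": per-channel standard deviations 1.0, 3.0, 0.5, 2.0
rng = np.random.default_rng(0)
eeg = rng.normal(scale=[[1.0], [3.0], [0.5], [2.0]], size=(4, 1024))

top2 = select_channels(eeg, 2)  # highest-variance channels score highest
```

Under the Gaussian assumption, entropy grows monotonically with variance, so the selection here reduces to ranking channels by signal variance.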
Electrocardiogram (ECG) signals play a vital role in diagnosing and monitoring patients suffering from various cardiovascular diseases (CVDs). This research aims to develop a robust algorithm that can accurately classify the electrocardiogram signal even in the presence of environmental noise. A one-dimensional convolutional neural network (CNN) with two convolutional layers, two down-sampling layers, and a fully connected layer is proposed in this work. The same 1D data was then transformed into two-dimensional (2D) images to improve the model's classification accuracy. We then applied a 2D CNN model consisting of input and output layers, three 2D convolutional layers, three down-sampling layers, and a fully connected layer. Classification accuracies of 97.38% and 99.02% are achieved with the proposed 1D and 2D models, respectively, when tested on the publicly available Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. Both proposed 1D and 2D CNN models outperformed the corresponding state-of-the-art classification algorithms on the same data, which validates the proposed models' effectiveness.
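As a shape-level sketch of the 1D stack (two convolutions, two down-sampling layers, one fully connected layer), the following pure-NumPy forward pass is illustrative only; the filter counts, kernel size, 360-sample beat length, and five output classes are assumptions, not the paper's hyperparameters.

```python
import numpy as np


def conv1d(x, n_filters, kernel, rng):
    """Valid 1D convolution with random weights:
    (channels, length) -> (n_filters, length - kernel + 1), then ReLU."""
    w = rng.normal(size=(n_filters, x.shape[0], kernel))
    out = np.empty((n_filters, x.shape[1] - kernel + 1))
    for f in range(n_filters):
        for t in range(out.shape[1]):
            out[f, t] = np.sum(w[f] * x[:, t:t + kernel])
    return np.maximum(out, 0.0)


def maxpool1d(x, k=2):
    """Down-sample by taking the max over non-overlapping windows of size k."""
    length = x.shape[1] // k
    return x[:, :length * k].reshape(x.shape[0], length, k).max(axis=2)


rng = np.random.default_rng(0)
beat = rng.normal(size=(1, 360))         # one ECG beat segment (illustrative length)

h = maxpool1d(conv1d(beat, 8, 5, rng))   # conv layer 1 -> down-sampling layer 1
h = maxpool1d(conv1d(h, 16, 5, rng))     # conv layer 2 -> down-sampling layer 2
logits = h.reshape(-1) @ rng.normal(size=(h.size, 5))  # fully connected, 5 classes
```

Tracing the shapes makes the down-sampling explicit: 360 samples shrink to 356 after the first valid convolution, 178 after pooling, 174 after the second convolution, and 87 after the final pooling, before the flattened map feeds the dense layer.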
Emotional awareness perception is a rapidly growing field that allows for more natural interactions between people and machines. Electroencephalography (EEG) has emerged as a convenient way to measure and track a user's emotional state. The non-linear characteristics of the EEG signal produce a high-dimensional feature vector, resulting in high computational cost. In this paper, the characteristics of multiple neural networks are combined using Deep Feature Clustering (DFC) to select high-quality attributes, as opposed to traditional feature selection methods. The DFC method shortens network training time by omitting unusable attributes. First, Empirical Mode Decomposition (EMD) is applied to decompose the raw EEG signal into a series of frequency components. The spatiotemporal component of the decomposed EEG signal is expressed as a two-dimensional spectrogram before feature extraction using the Analytic Wavelet Transform (AWT). Four pre-trained Deep Neural Networks (DNNs) are used to extract deep features. Dimensionality reduction and feature selection are achieved using differential-entropy-based EEG channel selection and the DFC technique, which computes a set of visual vocabularies using k-means clustering. A histogram feature is then computed from the series of visual vocabulary items. The classification performance on the SEED, DEAP, and MAHNOB datasets, combined with the capabilities of DFC, shows that the proposed method improves emotion recognition performance in a short processing time and is more competitive than the latest emotion recognition methods.
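The vocabulary-and-histogram step can be sketched as follows; the plain k-means implementation, the feature dimensions, and the vocabulary size `k=8` are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np


def kmeans(feats, k, iters=20, seed=0):
    """Plain k-means: build a visual vocabulary (cluster centers)
    from a set of deep feature vectors."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((feats[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):          # skip empty clusters
                centers[j] = feats[labels == j].mean(axis=0)
    return centers


def histogram_feature(feats, centers):
    """Assign each feature vector to its nearest vocabulary word and
    return the normalized histogram of assignments."""
    labels = np.argmin(((feats[:, None] - centers) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()


rng = np.random.default_rng(1)
deep_feats = rng.normal(size=(200, 16))   # stand-in for DNN-extracted features
vocab = kmeans(deep_feats, k=8)
hvec = histogram_feature(deep_feats, vocab)
```

Whatever the dimensionality of the deep features, the resulting descriptor is a fixed-length histogram of vocabulary assignments, which is what keeps the final feature vector small.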
Affective Computing is one of the central studies for achieving advanced human-computer interaction and is a popular research direction in the field of artificial intelligence for smart healthcare frameworks. In recent years, the use of electroencephalograms (EEGs) to analyze human emotional states has become a hot topic in the field of emotion recognition. However, the EEG is a non-stationary, non-linear signal that is sensitive to interference from other physiological signals and external factors. Traditional emotion recognition methods are limited by complex algorithm structures and low recognition precision. In this article, based on an in-depth analysis of EEG signals, we study emotion recognition methods in the following respects. First, the DEAP dataset and the arousal model were used, and the original signal was band-pass filtered. The frequency band was selected using a Butterworth filter, and the data was then scaled to a common range using min-max normalization. In addition, we performed hybrid experiments on sliding windows and overlaps to obtain an optimal combination for feature computation. We also applied the Discrete Wavelet Transform (DWT) to extract features from the preprocessed EEG data. Finally, a k-Nearest Neighbor (kNN) machine learning model was used for recognition and classification, and different combinations of DWT and kNN parameters were tested and fitted. After 10-fold cross-validation, the precision reached 86.4%. Compared with state-of-the-art research, this method achieves higher recognition accuracy than conventional methods while maintaining a simple structure and high operating speed.
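The windowing and wavelet steps can be sketched as follows; the window length, the 50% overlap, and the single-level Haar DWT are illustrative assumptions (the study tests several DWT and window combinations).

```python
import numpy as np


def sliding_windows(signal, win, step):
    """Segment a 1D signal into overlapping windows
    (win = window length, step = win - overlap)."""
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])


def haar_dwt(x):
    """One level of the Haar discrete wavelet transform:
    approximation (low-pass) and detail (high-pass) coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d


rng = np.random.default_rng(0)
eeg = rng.normal(size=1024)                    # one pre-processed EEG channel

wins = sliding_windows(eeg, win=256, step=128)  # 50% overlap between windows
feats = []
for w in wins:
    a, d = haar_dwt(w)
    feats.append([np.sum(a ** 2), np.sum(d ** 2)])  # sub-band energies per window
feats = np.array(feats)
```

Each window yields one small feature row (here, two sub-band energies), and because the Haar transform is orthonormal, the two energies together exactly preserve the window's total energy; the rows would then be fed to the kNN classifier.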