OBJECTIVES: This paper discusses activity detection and analysis (ADA) using security robots in workplaces. The application scenario relies on processing image and sensor data for event and activity detection. Detected events are classified by their abnormality based on analysis of the sensor and image data performed with a convolutional neural network. The method aims to improve detection accuracy by mitigating the deviations classified at different levels of the convolution process.
RESULTS: The differences are identified based on independent data correlation and information processing. The performance of the proposed method is verified for three human activities (standing, walking, and running) detected using the image and sensor datasets.
CONCLUSION: The results are compared with those of an existing method using the metrics of accuracy, classification time, and recall.
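The image-and-sensor fusion classification described above can be illustrated with a minimal sketch. This is not the authors' implementation: the edge kernel, the crude global pooling, the linear scoring head, and all function names (`conv2d`, `classify_activity`) are illustrative assumptions standing in for the full convolutional network.

```python
import numpy as np

# Illustrative sketch only; names, kernel, and pooling are assumptions,
# not the paper's actual CNN architecture.
ACTIVITIES = ["standing", "walking", "running"]

def conv2d(frame, kernel):
    """Valid-mode 2-D cross-correlation (the 'convolution' used in CNN layers)."""
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out

def classify_activity(frame, sensor, weights):
    """Fuse a pooled convolutional feature with sensor readings and
    pick the highest-scoring activity class."""
    kernel = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], float)  # edge filter
    feat = conv2d(frame, kernel)
    pooled = np.array([feat.mean(), feat.std()])  # crude global pooling
    fused = np.concatenate([pooled, sensor])      # image + sensor fusion
    scores = weights @ fused                      # linear classification head
    return ACTIVITIES[int(np.argmax(scores))]
```

A real system would stack several convolutional layers and learn the weights; here the point is only the fusion of image-derived and sensor-derived features into one score vector.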
OBJECTIVES: The main objective of this research is to develop robust, high-performance human action recognition techniques. A combination of local and holistic feature extraction methods is used, analyzing which features are most effective to extract, followed by simple, high-performance machine learning algorithms for classification.
METHODS: This paper presents three robust action recognition techniques based on a series of image analysis methods to detect activities in different scenes. The general scheme architecture consists of shot boundary detection, shot frame-rate re-sampling, and compact feature vector extraction. This process is achieved by emphasizing variations and extracting strong patterns in the feature vectors before classification.
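The three pipeline stages named above can be sketched as follows. This is a hypothetical outline, not the paper's method: the difference threshold, the uniform re-sampling strategy, and the mean-plus-delta descriptor are all illustrative assumptions (frames are represented as flat lists of pixel intensities for simplicity).

```python
# Illustrative sketch of the three-stage scheme: shot boundary detection,
# frame-rate re-sampling, and compact feature-vector extraction.
# All names and thresholds are assumptions, not the paper's parameters.

def detect_shot_boundaries(frames, threshold=0.3):
    """Mark a boundary wherever consecutive frames differ strongly
    (mean absolute intensity difference above a threshold)."""
    boundaries = [0]
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i - 1], frames[i])) / len(frames[i])
        if diff > threshold:
            boundaries.append(i)
    return boundaries

def resample(frames, target_len):
    """Re-sample a shot to a fixed number of frames by uniform index selection."""
    if len(frames) <= target_len:
        return list(frames)
    step = len(frames) / target_len
    return [frames[int(i * step)] for i in range(target_len)]

def compact_feature_vector(frames):
    """Concatenate per-frame means with their successive differences,
    emphasizing temporal variation between frames."""
    means = [sum(f) / len(f) for f in frames]
    deltas = [means[i + 1] - means[i] for i in range(len(means) - 1)]
    return means + deltas
```

Appending the frame-to-frame deltas to the per-frame statistics mirrors the idea of "emphasizing variations" before the descriptor is passed to a classifier.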
RESULTS: The proposed schemes are tested on datasets with cluttered backgrounds, low- and high-resolution videos, different viewpoints, and different camera motion conditions, namely the Hollywood-2, KTH, UCF11 (YouTube actions), and Weizmann datasets. The proposed schemes yield highly accurate video analysis compared with other works on these four widely used datasets. The First, Second, and Third Schemes provide recognition accuracies of 57.8%, 73.6%, and 52.0% on Hollywood-2; 94.5%, 97.0%, and 59.3% on KTH; 94.5%, 95.6%, and 94.2% on UCF11; and 98.9%, 97.8%, and 100% on Weizmann, respectively.
CONCLUSION: Each of the proposed schemes provides high recognition accuracy compared with other state-of-the-art methods; the Second Scheme in particular gives results comparable to those of other benchmarked approaches.