Action Recognition in Video using a Spatial-Temporal Graph-based Feature Representation
Publication Type:
Refereed Conference Meeting Proceeding
Abstract:
We propose a video-graph-based human action recognition framework. Given an input video sequence, we extract spatio-temporal local features and construct a video graph that incorporates appearance and motion constraints to reflect the spatio-temporal dependencies among the features. In particular, we extend the popular DBSCAN density-based clustering algorithm to form an intuitive video graph. During training, we estimate a linear SVM classifier using the standard Bag-of-Words method. During classification, we apply Graph-Cut optimization to find the most frequent action label in the constructed graph and assign this label to the test video sequence. The proposed approach achieves state-of-the-art performance on standard human action recognition benchmarks, namely the KTH and UCF Sports datasets, and competitive results on the Hollywood (HOHA) dataset.
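The sketch below is a minimal, hedged illustration of the pipeline the abstract describes (density-based clustering of spatio-temporal features, Bag-of-Words histograms, and a linear SVM); it is not the authors' code, and the feature arrays, vocabulary size, and parameter values are placeholder assumptions. The video-graph construction and Graph-Cut labelling steps are omitted.

# Illustrative sketch only: cluster spatio-temporal feature locations with
# DBSCAN, build Bag-of-Words histograms, and train a linear SVM.
# Feature extraction and Graph-Cut inference are out of scope here.
import numpy as np
from sklearn.cluster import DBSCAN, KMeans
from sklearn.svm import LinearSVC

# Placeholder local descriptors: each training video yields a variable
# number of 64-dimensional spatio-temporal features (assumed dimensions).
rng = np.random.default_rng(0)
train_videos = [rng.normal(size=(rng.integers(50, 100), 64)) for _ in range(10)]
train_labels = rng.integers(0, 3, size=10)

# Visual vocabulary for the Bag-of-Words step (k-means is a common choice;
# the paper's vocabulary construction may differ).
all_feats = np.vstack(train_videos)
vocab = KMeans(n_clusters=32, n_init=10, random_state=0).fit(all_feats)

def bow_histogram(features, vocab):
    """Quantise local features against the vocabulary; return a normalised histogram."""
    words = vocab.predict(features)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

X_train = np.array([bow_histogram(v, vocab) for v in train_videos])
clf = LinearSVC(C=1.0).fit(X_train, train_labels)

# Standard DBSCAN over (x, y, t) feature locations of one test video shows the
# kind of density-based grouping the paper extends to build its video graph.
test_points = rng.uniform(size=(80, 3))   # placeholder (x, y, t) coordinates
clusters = DBSCAN(eps=0.15, min_samples=3).fit_predict(test_points)
print("cluster labels:", np.unique(clusters))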
Conference Name:
International Conference on Advanced Video and Signal Based Surveillance (AVSS 2015)
Digital Object Identifier (DOI):
10.1109/AVSS.2015.7301760
Publication Date:
25/08/2015
Conference Location:
Germany
Research Group:
Institution:
Dublin City University (DCU)
Open access repository:
Yes