VidCEP: Complex Event Processing Framework to Detect Spatiotemporal Patterns in Video Streams

Publication Type: 
Refereed Conference Meeting Proceeding
Abstract: 
Video data is highly expressive and has traditionally been very difficult for a machine to interpret. Querying event patterns from video streams is challenging due to their unstructured representation. Middleware systems such as Complex Event Processing (CEP) mine patterns from data streams and send notifications to users in a timely fashion. Current CEP systems have inherent limitations in querying video streams due to their unstructured data model and lack of an expressive query language. In this work, we focus on a CEP framework where users can define high-level expressive queries over videos to detect a range of spatiotemporal event patterns. In this context, we propose: (i) VidCEP, an in-memory, on-the-fly, near real-time complex event matching framework for video streams. The system uses a graph-based event representation for video streams, which enables the detection of high-level semantic concepts from video using cascades of Deep Neural Network models; (ii) a Video Event Query Language (VEQL) to express high-level user queries for video streams in CEP; (iii) a complex event matcher to detect spatiotemporal video event patterns by matching expressive user queries over video data. The proposed approach detects spatiotemporal video event patterns with an F-score ranging from 0.66 to 0.89. VidCEP maintains near real-time performance with an average throughput of 70 frames per second for 5 parallel videos with sub-second matching latency.
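The abstract's core pipeline (a graph-based event representation per frame, plus a matcher that checks a spatiotemporal pattern over a window of frames) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the class names (`Obj`, `FrameGraph`), the spatial relation `left_of`, and the example pattern are all assumed for illustration, and real VEQL syntax is not shown.

```python
# Hypothetical sketch of graph-based spatiotemporal event matching
# in the spirit of VidCEP. All names and the relation vocabulary
# are illustrative assumptions, not the paper's API or VEQL syntax.
from dataclasses import dataclass


@dataclass
class Obj:
    """An object detected in a frame (e.g. by a DNN detector)."""
    label: str
    x: float  # bounding-box center x
    y: float  # bounding-box center y


@dataclass
class FrameGraph:
    """Graph-based event representation of one frame: detected
    objects are nodes; pairwise spatial relations are edges."""
    objects: list

    def relations(self):
        # Derive spatial edges; here only a simple "left_of" relation.
        rels = []
        for a in self.objects:
            for b in self.objects:
                if a is not b and a.x < b.x:
                    rels.append((a.label, "left_of", b.label))
        return rels


def match_sequence(frames, pattern, window):
    """Return True if `pattern` (a relation triple) holds in `window`
    consecutive frame graphs, i.e. a simple temporal SEQ operator
    applied over spatial edges."""
    run = 0
    for fg in frames:
        if pattern in fg.relations():
            run += 1
            if run >= window:
                return True
        else:
            run = 0
    return False


# Three consecutive frames in which a person stays left of a car.
frames = [
    FrameGraph([Obj("person", 10, 0), Obj("car", 50, 0)]),
    FrameGraph([Obj("person", 12, 0), Obj("car", 48, 0)]),
    FrameGraph([Obj("person", 15, 0), Obj("car", 45, 0)]),
]
print(match_sequence(frames, ("person", "left_of", "car"), window=3))  # True
```

A real system would populate the frame graphs from DNN detector output on a live stream and support a far richer set of spatial and temporal operators, but the node/edge representation and windowed matching shown here capture the basic idea.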
Conference Name: 
IEEE International Conference on Big Data (IEEE Big Data)
Digital Object Identifier (DOI): 
10.1109/BigData47090.2019.9006018
Publication Date: 
24/02/2020
Conference Location: 
United States of America
Institution: 
National University of Ireland, Galway (NUIG)
Open access repository: 
No