On guiding video object segmentation


Diego Ortego, Kevin McGuinness, Juan C. SanMiguel, Eric Arazo Sanchez, Jose M. Martinez, Noel O’Connor

Publication Type: 
Refereed Conference Meeting Proceeding
This paper presents a novel approach for segmenting moving objects in unconstrained environments using guided convolutional neural networks. The guiding process relies on foreground masks from independent (i.e. state-of-the-art) algorithms to implement an attention mechanism that incorporates the spatial locations of foreground and background and computes separate representations for each. Our approach first extracts two kinds of features for each frame, from colour and optical flow information. These features are combined following a multiplicative scheme to benefit from their complementarity. The unified colour and motion features are then processed to obtain the separated foreground and background representations. Finally, both independent representations are concatenated and decoded to perform foreground segmentation. Experiments conducted on the challenging DAVIS 2016 dataset demonstrate that our guided representations outperform not only non-guided representations, but also recent top-performing video object segmentation algorithms.
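The pipeline described in the abstract (multiplicative fusion of colour and flow features, mask-guided separation into foreground and background representations, then concatenation for decoding) can be sketched as follows. This is a minimal illustrative sketch using NumPy, assuming H x W x C feature maps and an H x W foreground mask in [0, 1]; all function names, shapes, and the absence of the learned decoder are assumptions, not the paper's implementation.

```python
import numpy as np

def guided_representation(colour_feats, flow_feats, fg_mask):
    """Illustrative sketch of the guided representation step.

    colour_feats, flow_feats: H x W x C feature maps (hypothetical shapes).
    fg_mask: H x W foreground mask in [0, 1] from an independent algorithm.
    Returns an H x W x 2C map ready for a (not shown) segmentation decoder.
    """
    # Multiplicative fusion exploits the complementarity of colour and motion.
    fused = colour_feats * flow_feats
    # Mask-guided attention: separate foreground and background representations.
    fg_repr = fused * fg_mask[..., None]
    bg_repr = fused * (1.0 - fg_mask)[..., None]
    # Concatenate the independent representations along the channel axis.
    return np.concatenate([fg_repr, bg_repr], axis=-1)

# Toy usage with random features and a binary mask.
H, W, C = 4, 4, 3
rng = np.random.default_rng(0)
colour = rng.random((H, W, C))
flow = rng.random((H, W, C))
mask = (rng.random((H, W)) > 0.5).astype(float)
out = guided_representation(colour, flow, mask)
```

Note that the foreground and background halves of the output partition the fused features: summing them recovers the fused map, which is what lets the decoder weigh the two regions separately.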
Conference Name: 
Content Based Multimedia Indexing (CBMI) 2019
Proceedings of Content Based Multimedia Indexing (CBMI) 2019
Research Group: 
Dublin City University (DCU)