3DSAL: an efficient 3D-CNN architecture for video saliency prediction


Yasser Djilali, Dahou Abdelaziz, Mohamed Sayah, Kevin McGuinness, Noel O'Connor

Publication Type: 
Refereed Conference Meeting Proceeding
Abstract: 
In this paper, we propose a novel 3D CNN architecture for training an effective video saliency prediction model. The model is designed to capture important motion information from multiple adjacent frames: it performs a cubic convolution on a set of consecutive frames to extract spatio-temporal features, enabling it to predict the saliency map for any given frame using only past frames. We comprehensively compare the performance of our model against state-of-the-art video saliency models. Experimental results on three large-scale datasets, DHF1K, UCF-SPORTS, and DAVIS, demonstrate the competitiveness of our approach.
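The cubic (3D) convolution described above can be illustrated with a minimal sketch. This is an assumption for illustration, not the authors' implementation: a single 3D kernel slides over a stack of consecutive grayscale frames, mixing spatial and temporal neighbourhoods into one feature volume. The frame count, frame size, and kernel size below are illustrative only.

```python
def conv3d(frames, kernel):
    """Valid 3D convolution: frames is a T x H x W volume, kernel is t x h x w."""
    T, H, W = len(frames), len(frames[0]), len(frames[0][0])
    t, h, w = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for i in range(T - t + 1):          # slide over time (past frames)
        plane = []
        for j in range(H - h + 1):      # slide over rows
            row = []
            for k in range(W - w + 1):  # slide over columns
                s = 0.0
                for di in range(t):
                    for dj in range(h):
                        for dk in range(w):
                            s += frames[i + di][j + dj][k + dk] * kernel[di][dj][dk]
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out

# Four 4x4 frames and a 3x3x3 averaging kernel yield a 2x2x2 feature volume,
# each output value summarising a spatio-temporal neighbourhood.
frames = [[[float(f + y + x) for x in range(4)] for y in range(4)] for f in range(4)]
kernel = [[[1.0 / 27] * 3 for _ in range(3)] for _ in range(3)]
feat = conv3d(frames, kernel)
```

In a real saliency network this operation is stacked with learned kernels and nonlinearities, but the sliding cubic window is the core mechanism that couples motion across adjacent frames with spatial structure within each frame.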
Conference Name: 
15th International Conference on Computer Vision Theory and Applications
Digital Object Identifier (DOI): 
Publication Date: 
Conference Location: 
Research Group: 
Dublin City University (DCU)
Open access repository: 
Publication document: