A Flexible Model for Training Action Localization with Varying Levels of Supervision

People

Guilhem Chéron*, Jean-Baptiste Alayrac*, Ivan Laptev, Cordelia Schmid

*Both authors contributed equally.

Abstract

Spatio-temporal action detection in videos is typically addressed in a fully-supervised setup, with manual annotation of training videos required at every frame. Since such annotation is extremely tedious and prohibits scalability, there is a clear need to minimize the amount of manual supervision. In this work we propose a unifying framework that can handle and combine varying types of less demanding weak supervision. Our model is based on discriminative clustering and integrates different types of supervision as constraints on the optimization. We investigate applications of such a model to training setups with alternative supervisory signals, ranging from video-level class labels, through temporal points or sparse action bounding boxes, to full per-frame annotation of action bounding boxes. Experiments on the challenging UCF101-24 and DALY datasets demonstrate competitive performance of our method at a fraction of the supervision used by previous methods. The flexibility of our model enables joint learning from data with different levels of annotation. Experimental results demonstrate a significant gain from adding a few fully supervised examples to otherwise weakly labeled videos.
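To give an intuition for how weak supervision can act as "constraints on the optimization", the sketch below shows a minimal discriminative-clustering setup of the kind the abstract describes: a DIFFRAC-style quadratic cost minimized with Frank-Wolfe, where video-level labels restrict the set of admissible assignment matrices. This is an illustration under our own assumptions (function names, a background class at index 0, the specific constraint encoding), not the released implementation; see the GitHub link below for the authors' code.

    import numpy as np

    def diffrac_cost_matrix(X, lam=1e-4):
        # Discriminative clustering: min_{Y,W} ||Y - XW||^2/n + lam*||W||^2.
        # Minimizing over the linear classifier W in closed form leaves a
        # quadratic cost Tr(Y^T A Y) in the assignment matrix Y, with:
        n, d = X.shape
        G = X.T @ X + n * lam * np.eye(d)
        return (np.eye(n) - X @ np.linalg.solve(G, X.T)) / n

    def linear_oracle(grad, video_ids, video_labels):
        # Minimize <grad, Y> over assignments satisfying a video-level
        # label constraint: each sample is either background (class 0,
        # an assumption of this sketch) or its video's label, and each
        # video contains at least one sample of its label.
        Y = np.zeros_like(grad)
        for v, c in video_labels.items():  # video id -> class index (>= 1)
            idx = np.where(video_ids == v)[0]
            pick = grad[idx, c] <= grad[idx, 0]
            Y[idx, 0] = ~pick
            Y[idx, c] = pick
            if not pick.any():  # enforce "at least one positive sample"
                j = idx[np.argmin(grad[idx, c] - grad[idx, 0])]
                Y[j, 0], Y[j, c] = 0, 1
        return Y

    def solve(X, video_ids, video_labels, n_classes, n_iter=100):
        # Frank-Wolfe over the convex hull of feasible assignments.
        A = diffrac_cost_matrix(X)
        Y = linear_oracle(np.random.randn(X.shape[0], n_classes),
                          video_ids, video_labels)
        for t in range(n_iter):
            S = linear_oracle(2 * A @ Y, video_ids, video_labels)
            Y += 2.0 / (t + 2) * (S - Y)  # standard FW step size
        return Y  # soft assignments; round and use as detector labels

Stronger supervision (temporal points, sparse or full bounding boxes) would simply tighten the feasible set handled by the linear oracle, while the solver and cost stay unchanged; this is what makes it natural to mix videos with different annotation levels in one optimization.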

Paper

[arXiv]

BibTeX

@InProceedings{actoraction18,
  author    = {Ch\'eron, Guilhem and Alayrac, Jean-Baptiste and Laptev, Ivan and Schmid, Cordelia},
  title     = {A Flexible Model for Training Action Localization with Varying Levels of Supervision},
  booktitle = {Neural Information Processing Systems (NeurIPS)},
  year      = {2018},
}

Code

[GitHub]

Acknowledgements

This work was supported in part by ERC grants ACTIVIA and ALLEGRO, the MSR-Inria joint lab, the Louis Vuitton ENS Chair on Artificial Intelligence, an Amazon academic research award, and an Intel gift.