Learning Actionness via Long-range Temporal Order Verification

People

Abstract

Current methods for action recognition typically rely on supervision provided by manual labeling. Such methods, however, do not scale well given the high burden of manual video annotation and the very large number of possible actions. Annotation is particularly difficult for temporal action localization, where large parts of the video contain no action, i.e., background. To address these challenges, we propose a self-supervised and generic method to isolate actions from their background. We build on the observation that actions often follow a particular temporal order and, hence, can be predicted by other actions in the same video. As consecutive actions might be separated by minutes, and unlike prior work on the arrow of time, we exploit long-range temporal relations in videos that are 10-20 minutes long. To this end, we propose a new model that learns actionness via a self-supervised proxy task of order verification. The model assigns high actionness scores to clips whose order is easy to predict from other clips in the video. To obtain a powerful and action-agnostic model, we train it on the large-scale unlabeled HowTo100M dataset with highly diverse actions from instructional videos. We validate our method on the task of action localization and demonstrate consistent improvements when combined with other recent weakly-supervised methods.
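The intuition behind the proxy task can be illustrated with a small toy sketch (not the authors' implementation; the features, the linear predictor, and all names are hypothetical stand-ins for the paper's order-verification network): action clips carry a signal correlated with their temporal position, background clips do not, so a clip's actionness can be scored by how easily its position is predicted from its features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 200 "clips" with 8-D features. Action clips carry
# a signal correlated with their temporal position; background clips are
# pure noise, so their temporal order is unpredictable.
T, D = 200, 8
is_action = np.arange(T) % 2 == 0          # alternate action / background clips
t = np.linspace(-1.0, 1.0, T)              # normalized temporal position
X = rng.normal(0.0, 1.0, size=(T, D))
X[is_action, 0] += 3.0 * t[is_action]      # order signal in the first dimension

# Proxy task: least-squares regression of temporal position from features
# (a linear stand-in for the paper's order-verification model).
W, *_ = np.linalg.lstsq(X, t, rcond=None)
t_hat = X @ W

# Actionness: high when a clip's temporal position is easy to predict.
actionness = -np.abs(t_hat - t)

print(actionness[is_action].mean(), actionness[~is_action].mean())
```

Under these assumptions, action clips receive higher actionness scores on average than background clips, mirroring the paper's observation that predictable temporal order separates actions from background.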

Paper

D. Zhukov, J.-B. Alayrac, I. Laptev and J. Sivic
Learning Actionness via Long-range Temporal Order Verification
In Proceedings of the 16th European Conference on Computer Vision (ECCV), 2020
[pdf]

BibTeX

@InProceedings{zhukov20,
        author       = "Zhukov, D. and Alayrac, J.-B. and Laptev, I. and Sivic, J.",
        title        = "Learning Actionness via Long-range Temporal Order Verification",
        booktitle    = "ECCV",
        year         = "2020",
}

Presentation

Demonstration

Acknowledgements

This work was partially supported by the European Regional Development Fund under project IMPACT (reg. no. CZ.02.1.01/0.0/0.0/15 003/0000468), Louis Vuitton ENS Chair on Artificial Intelligence, the MSR-Inria joint lab, and the French government under management of Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute).