Object recognition and computer vision 2021/2022
Jean Ponce, Ivan Laptev, Cordelia Schmid and Josef Sivic
Lecture time: Tuesday 16:15-19:15
Lecture room: Salle Dussane, ENS Ulm, 45 rue d’Ulm, Paris (except Nov 16 and 23)
Teaching assistants: Robin Strudel and Yann Labbé
Class Moodle: enroll in the course, get news, submit assignments, access discussion forum.
Python tutorial materials: Install anaconda, jupyter, run a notebook in colab.
Automated object recognition -- and more generally scene analysis -- from photographs and videos is the grand challenge of computer vision. This course presents the image, object, and scene models, as well as the methods and algorithms, used today to address this challenge.
There will be three programming assignments representing 50% (10% + 20% + 20%) of the grade. The supporting materials for the programming assignments and final projects will be in Python and make use of Jupyter notebooks. For additional technical instructions on the assignments please follow this link.
The final project will represent 50% of the grade.
You can discuss the assignments and final projects with other students in the class. Discussion is encouraged and is an essential component of the academic environment. However, each student has to work out their assignments alone (including any coding, experiments or derivations) and submit their own report.

For the final project, you may work alone or in a group of at most two people. If working in a group, we expect a more substantial project and an equal contribution from each student; the final project report needs to explicitly specify the contribution of each student. Both students are expected to present the project at the oral presentation and to contribute equally to writing the report.

The assignments and final projects will be checked for originality. Any uncredited reuse of material (text, code, results) will be considered plagiarism and will result in zero points for the assignment / final project. If plagiarism is detected, the student will be reported to MVA.
Computer vision and machine learning talks
You are welcome to attend seminars in the Willow group. Please see the current seminar schedule. Typically, these are one-hour research talks given by visiting speakers. The talks are at 2 Rue Simone IFF. When you enter the building, tell the receptionist you are attending a seminar.
Topics and reading materials.
Introduction; Class logistics, assignments, final projects
Instance-level recognition I. - Local invariant features (C. Schmid);
Mikolajczyk & Schmid, Scale and affine invariant interest point detectors, IJCV 2004; D. Lowe, Distinctive image features from scale-invariant keypoints, IJCV 2004;
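As a concrete entry point into the interest point detectors covered in these readings, here is a minimal NumPy sketch of the Harris corner response, the classical starting point of the Mikolajczyk & Schmid detector. The function names and the synthetic test image are illustrative only; real detectors use Gaussian windowing and scale selection rather than the plain box window below.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2,
    with M the structure tensor summed over a 3x3 window."""
    Iy, Ix = np.gradient(img.astype(float))

    def window_sum(a, r=1):
        # Sum each pixel's (2r+1) x (2r+1) neighborhood (box window).
        p = np.pad(a, r)
        out = np.zeros_like(a)
        for dy in range(2 * r + 1):
            for dx in range(2 * r + 1):
                out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    Sxx = window_sum(Ix * Ix)
    Syy = window_sum(Iy * Iy)
    Sxy = window_sum(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Synthetic test image: a bright square on a dark background.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
# The corner at (5, 5) scores positive; the edge midpoint (5, 10) scores negative.
```

Both gradient directions vary at a corner, so det(M) is large there; along an edge only one direction varies and the response drops below zero.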
Camera geometry (J. Ponce)
Instance-level recognition II. - Correspondence, image matching (I. Laptev);
History: J. Mundy, Object recognition in the geometric era: A retrospective. Camera geometry: Forsyth & Ponce, Ch. 1-2; Hartley & Zisserman, Ch. 6.
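To accompany the camera geometry readings, here is a minimal sketch of pinhole projection in homogeneous coordinates; the calibration values below are made up for illustration.

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D point X (world frame) to pixel coordinates
    using intrinsics K and extrinsics (R, t): x ~ K (R X + t)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]   # perspective division

# Illustrative intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)   # camera frame = world frame

p0 = project(K, R, t, np.array([0.0, 0.0, 2.0]))  # on the optical axis
p1 = project(K, R, t, np.array([1.0, 0.0, 2.0]))  # 1 m off-axis at depth 2 m
```

A point on the optical axis lands on the principal point; the off-axis point is displaced by f * X/Z = 800 * 1/2 = 400 pixels.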
Instance-level recognition III. - Efficient visual search (J. Sivic)
Muja & Lowe, Fast approximate nearest neighbors with automatic algorithm configuration, VISAPP'09; Sivic & Zisserman, Video Google: Efficient visual search of videos (chapter from this book), Philbin et al., Object retrieval with large vocabularies and fast spatial matching, CVPR'07; Jegou et al., Improving bag-of-features for large scale image search, IJCV 2010; Jegou et al., Aggregating local image descriptors into compact codes, PAMI 2011; Iscen et al., Efficient Diffusion on Region Manifolds, CVPR 2017; Arandjelovic et al., NetVLAD: CNN architecture for weakly-supervised place recognition, PAMI 2018.
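The Video Google approach above scores images by tf-idf-weighted bag-of-visual-words histograms; a minimal NumPy sketch of that scoring step (the toy histograms and vocabulary size are made up):

```python
import numpy as np

def tfidf_normalize(counts, idf):
    """Weight bag-of-visual-words histograms by idf and L2-normalize,
    so that a plain dot product computes cosine similarity."""
    w = counts * idf
    return w / np.maximum(np.linalg.norm(w, axis=1, keepdims=True), 1e-12)

# Toy database: 3 images over a vocabulary of 3 visual words.
db_counts = np.array([[5.0, 0.0, 1.0],
                      [0.0, 4.0, 1.0],
                      [2.0, 2.0, 2.0]])
df = (db_counts > 0).sum(axis=0)      # document frequency per visual word
idf = np.log(len(db_counts) / df)     # rare words weigh more
db = tfidf_normalize(db_counts, idf)

# Query resembling the first database image.
query = tfidf_normalize(np.array([[4.0, 0.0, 1.0]]), idf)[0]
ranking = np.argsort(-(db @ query))   # best match first
```

Word 2 occurs in every image, so its idf is zero and it contributes nothing to the ranking; the query correctly retrieves image 0 first. At scale, this dot product is evaluated through an inverted file rather than a dense matrix.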
Supervised learning and deep learning; optimization and regularization for neural networks (A. Joulin)
1. Python examples
2. For more details on neural networks you can watch the video lectures by Hugo Larochelle. The website also includes links to useful reading materials such as “Practical Recommendations for Gradient-Based Training of Deep Architectures” by Y. Bengio.
3. The Deep Learning book by I. Goodfellow, Y. Bengio and A. Courville
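The gradient-based training these materials discuss can be sketched end to end for a tiny two-layer network in plain NumPy; the architecture, toy data and learning rate below are illustrative choices, not recommendations.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 2))
y = X[:, 0] - X[:, 1]                  # toy regression target

# Two-layer network: tanh hidden layer, linear output.
W1 = 0.5 * rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.normal(size=8);      b2 = 0.0

lr, losses = 0.1, []
for _ in range(300):
    h = np.tanh(X @ W1 + b1)           # forward pass
    pred = h @ W2 + b2
    err = pred - y
    losses.append((err ** 2).mean())   # mean squared error

    g_pred = 2 * err / len(y)          # backward pass (chain rule)
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum()
    g_h = np.outer(g_pred, W2)
    g_z = g_h * (1 - h ** 2)           # tanh'(z) = 1 - tanh(z)^2
    g_W1 = X.T @ g_z
    g_b1 = g_z.sum(axis=0)

    W1 -= lr * g_W1; b1 -= lr * g_b1   # gradient descent step
    W2 -= lr * g_W2; b2 -= lr * g_b2
```

The loss decreases over training; frameworks automate exactly this backward pass via automatic differentiation.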
Convolutional neural networks for visual recognition I. (3hrs, G. Varol)
Y. LeCun et al., Gradient-based learning applied to document recognition, Proc. of the IEEE 86(11): 2278–2324, 1998; M.D. Zeiler, R. Fergus, Visualizing and Understanding Convolutional Networks, ECCV 2014; M. Oquab et al., Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks, CVPR 2014.
K. Simonyan and A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014; K. He et al., Deep Residual Learning for Image Recognition, CVPR 2016.
Basics of CNNs by A. Vedaldi.
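The basic building block of the networks above reduces to a sliding dot product; a direct NumPy sketch follows. Note that deep learning libraries implement the cross-correlation below (no kernel flipping) under the name "convolution".

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2D cross-correlation, as used in CNN layers."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with the patch under it.
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

img = np.arange(16.0).reshape(4, 4)
delta = np.zeros((3, 3)); delta[1, 1] = 1.0   # identity kernel
edge = np.array([[-1.0, 0.0, 1.0]])           # horizontal gradient kernel
```

The identity kernel reproduces the (valid) interior of the image, and the gradient kernel responds with the constant horizontal slope of the ramp image.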
Convolutional neural networks for visual recognition II. (3hrs, I. Laptev)
Dalal and Triggs, Histograms of oriented gradients for human detection, CVPR 2005; Felzenszwalb et al., A Discriminatively Trained, Multiscale, Deformable Part Model, CVPR’08; Pascal VOC Challenge; Girshick et al., Rich feature hierarchies for accurate object detection and semantic segmentation, CVPR 2014; Girshick, Fast R-CNN, CVPR 2015; Ren et al., Faster R-CNN: Towards real-time object detection with region proposal networks, NIPS 2015; Redmon et al., You only look once: Unified, real-time object detection, CVPR 2016; Zhou et al., Objects as points, 2019; Long et al., Fully convolutional networks for semantic segmentation, CVPR 2015; Chen et al., DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs, PAMI 2017; He et al., Mask R-CNN, ICCV 2017.
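A post-processing step shared by most detectors in this list, from R-CNN through YOLO, is non-maximum suppression over scored boxes. A minimal sketch with made-up boxes and scores:

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the best-scoring box,
    drop boxes overlapping it by more than `thresh`, repeat."""
    order = np.argsort(-np.asarray(scores))
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        overlaps = np.array([iou(boxes[i], boxes[j]) for j in rest])
        order = rest[overlaps <= thresh]
    return keep

# Two heavily overlapping detections plus one isolated one.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
kept = nms(boxes, scores=[0.9, 0.8, 0.7])
```

The second box overlaps the first with IoU 81/119 ≈ 0.68, so only the higher-scoring one survives, together with the isolated box.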
Sparse coding and dictionary learning for image analysis (3hrs, J. Ponce)
Materials: Bach, Mairal, Ponce, Sapiro, Tutorial on sparse coding and dictionary learning for image analysis, at CVPR'10.
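The core problem in this tutorial, approximating a signal by a few dictionary atoms, can be illustrated with greedy orthogonal matching pursuit. The dictionary below is random orthonormal so that recovery is exact; in the dictionary-learning setting, D itself would also be optimized.

```python
import numpy as np

def omp(D, x, n_atoms):
    """Orthogonal matching pursuit: greedily pick the atom most
    correlated with the residual, then refit by least squares."""
    residual, support = x.copy(), []
    code = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code[support] = coef
    return code

rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.normal(size=(20, 12)))   # 12 orthonormal atoms in R^20
x = 1.5 * D[:, 2] - 0.8 * D[:, 7]                # exactly 2-sparse signal
code = omp(D, x, n_atoms=2)
```

Because the atoms are orthonormal, the correlations D.T @ x equal the true coefficients, so OMP recovers the support {2, 7} and reconstructs x exactly.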
Structured models for visual recognition; Weakly-supervised learning (3hrs, I. Laptev)
Yang and Ramanan, Articulated Human Detection with Flexible Mixtures-of-Parts, PAMI 2013; Toshev and Szegedy, DeepPose: Human Pose Estimation via Deep Neural Networks, CVPR 2014; Wei et al., Convolutional Pose Machines, CVPR 2016; Cao et al., Realtime multi-person 2D pose estimation using part affinity fields, CVPR 2017; Newell et al., Stacked Hourglass Networks for Human Pose Estimation, ECCV 2016; Oquab et al., Is object localization for free? - Weakly-supervised learning with convolutional neural networks, CVPR 2015; Alayrac et al., Unsupervised learning from narrated instruction videos, CVPR 2016; Varol et al., Learning from Synthetic Humans, CVPR 2017; Hasson et al., Learning joint reconstruction of hands and manipulated objects, CVPR 2019; Miech et al., End-to-End Learning of Visual Representations from Uncurated Instructional Videos, CVPR 2020.
Human action recognition (3hrs, C. Schmid)
Brox and Malik, Large Displacement Optical Flow: Descriptor Matching in Variational Motion Estimation, PAMI 2011; Weinzaepfel et al. Deepflow: Large displacement optical flow with deep matching, CVPR 2013; Laptev et al., Learning realistic human actions from movies, CVPR 2008; Wang et al., Dense trajectories and motion boundary descriptors for action recognition, CVPR 2011; Simonyan and Zisserman, Two-stream convolutional networks for action recognition in videos, NIPS 2014; Tran et al. Learning spatiotemporal features with 3D convolutional networks, ICCV 2015.
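The optical flow entries above build on the classical brightness-constancy formulation; here is a single-patch Lucas-Kanade solver in NumPy. The bilinear test images are chosen so the least-squares solution is exact; real pipelines iterate this step in a coarse-to-fine pyramid.

```python
import numpy as np

def lucas_kanade_patch(I1, I2):
    """Estimate one translation (u, v) for a whole patch by solving the
    brightness-constancy equations Ix*u + Iy*v = -It in least squares."""
    Iy, Ix = np.gradient(I1)
    It = I2 - I1
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    uv, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return uv

ys, xs = np.mgrid[0:10, 0:12].astype(float)
I1 = xs * ys                 # bilinear intensity pattern
I2 = (xs - 2.0) * ys         # same pattern shifted by 2 pixels in x
u, v = lucas_kanade_patch(I1, I2)
```

The normal equations are well conditioned only where the gradient direction varies across the patch, which is exactly the aperture problem: on a pure edge the matrix A has rank 1 and the flow along the edge is unobservable.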
Deep Learning and 3D data (3hrs, M. Aubry)
1. Training with synthetic data: Tobin et al., Domain randomization for transferring deep neural networks from simulation to the real world, IROS 2017; Torralba and Efros, Unbiased look at dataset bias, CVPR 2011; Ganin et al., Domain-adversarial training of neural networks, JMLR 2016; Li et al., DeepIM: Deep iterative matching for 6D pose estimation, ECCV 2018
2. 3D analysis: Qi et al., Volumetric and multi-view CNNs for object classification on 3D data, CVPR 2016; Qi et al., PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation, CVPR 2017; Groueix et al., 3D-CODED: 3D correspondences by deep deformation, ECCV 2018
3. 3D generation: Fan et al., A point set generation network for 3D object reconstruction from a single image, CVPR 2017; Groueix et al., AtlasNet: A papier-mâché approach to learning 3D surface generation, CVPR 2018; Park et al., DeepSDF: Learning continuous signed distance functions for shape representation, CVPR 2019; Mildenhall et al., NeRF: Representing scenes as neural radiance fields for view synthesis, ECCV 2020; Kendall et al., End-to-end learning of geometry and context for deep stereo regression, ICCV 2017
4. 3D reconstruction: Zbontar et al., Computing the stereo matching cost with a convolutional neural network, CVPR 2015; Huang et al., DeepMVS: Learning multi-view stereopsis, CVPR 2018; Yariv et al., Volume rendering of neural implicit surfaces, NeurIPS 2021
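PointNet's key idea in item 2 above — a shared per-point MLP followed by symmetric max pooling, which makes the descriptor invariant to point ordering — fits in a few lines of NumPy. The weights below are random stand-ins for learned parameters.

```python
import numpy as np

def pointnet_feature(points, W1, W2):
    """Shared per-point MLP (two ReLU layers) followed by max pooling
    over points: a permutation-invariant global shape descriptor."""
    h = np.maximum(points @ W1, 0.0)   # same weights applied to every point
    h = np.maximum(h @ W2, 0.0)
    return h.max(axis=0)               # symmetric aggregation over points

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 16))          # random stand-in weights
W2 = rng.normal(size=(16, 32))
cloud = rng.normal(size=(100, 3))      # a random 3D point cloud
shuffled = cloud[rng.permutation(100)]

f1 = pointnet_feature(cloud, W1, W2)
f2 = pointnet_feature(shuffled, W1, W2)
```

Because max pooling ignores the order of its inputs, reshuffling the points leaves the descriptor unchanged, whereas feeding the coordinates to an ordinary MLP as one flat vector would not.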
Learning visual representations for robotics (1.5hrs I. Laptev)
Recurrent neural networks (RNNs); Generative adversarial networks (GANs) (1.5hrs A. Joulin)
Final project presentations and evaluation (I. Laptev, J. Sivic, G. Varol)
Jan 17: 10:30-12:00; 13:00-16:00
Jan 18: 10:30-12:00; 13:00-16:00
The presentations will be virtual. Links will be provided.
Final project reports due on 28/01
D.A. Forsyth and J. Ponce, "Computer Vision: A Modern Approach", Prentice-Hall, 2nd edition, 2011
J. Ponce, M. Hebert, C. Schmid and A. Zisserman, "Toward Category-Level Object Recognition", Lecture Notes in Computer Science 4170, Springer-Verlag, 2007
O. Faugeras, Q.T. Luong, and T. Papadopoulo, "Geometry of Multiple Images", MIT Press, 2001.
R. Hartley and A. Zisserman, "Multiple View Geometry in Computer Vision", Cambridge University Press, 2004.
J. Koenderink, "Solid Shape", MIT Press, 1990
R. Szeliski, "Computer Vision: Algorithms and Applications", 2010. Online book.