Monday, November 17, 4 PM, Room S16

Shape from Depth Discontinuities and Parallax-Free Registration of Aerial Videos

Gabriel Taubin
http://mesh.brown.edu/taubin/
Brown University

In this talk I will describe recent results on two different topics.

We propose a new primal-dual framework for the representation, capture, processing, and display of piecewise smooth surfaces, in which the dual space is the space of oriented 3D lines, or rays, rather than the traditional dual space of planes. An image capture process detects points on a depth discontinuity sweep, either from a camera moving with respect to an object or from a static camera and a moving object. A depth discontinuity sweep is a surface in dual space composed of the time-dependent family of depth discontinuity curves spanned as the camera pose describes a curved path in 3D space. Only part of this surface, which includes the silhouettes, is visible and measurable from the camera. Locally convex points inside concavities can be estimated from the visible non-silhouette depth discontinuity points. Locally concave points lying at the bottom of concavities, which do not correspond to visible depth discontinuities, cannot be estimated, resulting in holes in the reconstructed surface. We describe several approaches to filling these holes, including a variational approach that produces watertight models. I will describe our first system for acquiring models of shape and appearance, which uses a multi-flash camera to capture depth discontinuities, as well as some work in progress.

Aerial video registration is traditionally performed using 2D transforms in image space. For scenes with large 3D relief, this approach causes parallax motion that may be detrimental to image processing and vision algorithms further down the pipeline. We propose a novel, automatic, and online video registration system that renders the scene from a fixed viewpoint, eliminating motion parallax from the registered video.
The 3D scene is represented with a probabilistic volumetric model, and the camera pose at each frame is estimated using an Extended Kalman Filter together with a refinement procedure based on a popular visual servoing technique. A continuous formulation, which leads to an efficient implementation with non-uniform volume sampling on an octree, allows us to process large and complex scenes with fine detail.
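To give a feel for the non-uniform sampling idea, the following is a minimal sketch (not the authors' implementation) of adaptive octree subdivision for a probabilistic occupancy volume: cells whose occupancy probability is already near 0 or 1 stay coarse, while uncertain cells are refined, concentrating resolution near surfaces. All class names, parameters, and thresholds here are hypothetical.

```python
# Hedged sketch: adaptive octree refinement for a probabilistic
# occupancy volume. Uncertain cells (probability far from 0 and 1)
# are subdivided; confident cells are left coarse. Names and
# thresholds are illustrative, not taken from the talk.

class OctreeCell:
    def __init__(self, center, size, p_occupied=0.5):
        self.center = center      # (x, y, z) of the cell center
        self.size = size          # edge length of the cubic cell
        self.p = p_occupied       # occupancy probability in [0, 1]
        self.children = []        # empty list marks a leaf

    def subdivide(self):
        """Split this cell into its 8 equal octants."""
        half, quarter = self.size / 2.0, self.size / 4.0
        cx, cy, cz = self.center
        for dx in (-quarter, quarter):
            for dy in (-quarter, quarter):
                for dz in (-quarter, quarter):
                    self.children.append(
                        OctreeCell((cx + dx, cy + dy, cz + dz), half, self.p))

def refine(cell, min_size, lo=0.1, hi=0.9):
    """Recursively refine only uncertain cells (lo < p < hi)."""
    if cell.size <= min_size or not (lo < cell.p < hi):
        return
    if not cell.children:
        cell.subdivide()
    for child in cell.children:
        refine(child, min_size, lo, hi)

def leaves(cell):
    """Yield all leaf cells of the (possibly refined) octree."""
    if not cell.children:
        yield cell
    else:
        for child in cell.children:
            yield from leaves(child)
```

For example, a unit root cell with uncertain occupancy (p = 0.5) refined down to a minimum cell size of 0.25 yields two levels of subdivision (64 leaves), whereas a confident cell (p = 0.95) is never split.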