Static and Dynamic Texture Mixing
Sira Ferradans, Gui-Song Xia, Gabriel Peyré, and Jean-François Aujol.
This paper tackles static and dynamic texture mixing by combining the statistical properties of an input set of images or videos. We focus on spot noise textures that follow a stationary Gaussian model which can be learned from the given exemplars. Using optimal transport, we define the distance between texture models, derive the geodesic path between them, and define the barycenter of several texture models. These derivations allow the user to navigate inside the set of texture models, interpolating a new texture model at each point of the set. From these interpolated models, new textures of arbitrary size in space and time can be synthesized. Numerical results obtained from a library of exemplars show the ability of our method to generate new, complex, realistic static and dynamic textures.
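To illustrate the interpolation idea, the sketch below works out the closed-form one-dimensional case: for univariate Gaussians, the 2-Wasserstein distance and barycenter have explicit formulas (weighted averages of means and standard deviations), and the geodesic between two models is the barycenter with weights (1-t, t). This is a minimal sketch of the scalar case only, not the authors' multivariate spot-noise implementation; the function names are illustrative.

```python
import numpy as np

def w2_gaussian_1d(m0, s0, m1, s1):
    # Closed-form 2-Wasserstein distance between univariate Gaussians
    # N(m0, s0^2) and N(m1, s1^2).
    return np.sqrt((m0 - m1) ** 2 + (s0 - s1) ** 2)

def barycenter_gaussian_1d(means, stds, weights):
    # OT barycenter of 1-D Gaussians: the weighted average of the
    # means and of the standard deviations.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(w @ np.asarray(means)), float(w @ np.asarray(stds))

# Geodesic point at t = 0.5 between two texture models, i.e. the
# barycenter with weights (1 - t, t) = (0.5, 0.5).
m, s = barycenter_gaussian_1d([0.0, 4.0], [1.0, 3.0], [0.5, 0.5])
print(m, s)  # midpoint model: mean 2.0, std 2.0
```

In the paper's setting the models are multivariate (stationary Gaussian fields), where the barycenter covariance requires a fixed-point iteration rather than a simple average, but the navigation principle is the same: pick barycentric weights, compute the interpolated model, then synthesize a texture from it.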
Given three input textures, f1, f2, f3, and the path defined along the triangle by the red numbers in increasing order, we generated the Gaussian model associated to each point. Note how, as we approach an input model (positions 0, 3, and 7), its features tend to predominate in the synthesized texture.