Results of Stationary Dynamic Texture Synthesis

This project addresses the problem of modeling stationary color dynamic textures with Gaussian processes. We detail two particular classes of such processes that are parameterized by a small number of compactly supported linear filters, so-called dynamical textons (\emph{dynTextons}). The first class extends previous work on the spot noise texture model to the dynamical setting: it directly estimates the dynTexton to fit a translation-invariant covariance from the exemplar. The second class is a specialization of the auto-regressive (AR) dynamic texture method to the setting of space- and time-stationary textures, which allows one to parameterize the process covariance using only a few linear filters. Numerical experiments on a database of stationary textures show that the methods, despite their extreme simplicity, provide state-of-the-art results for synthesizing spatially stationary dynamic textures, as illustrated by the database presented on this page. This stationary dynamic texture database contains 27 different color video sequences, each of size 64x64x100.
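The second class described above can be illustrated with a minimal sketch of a spatially stationary AR(1) dynamic texture. The sketch below is an assumption-laden toy, not the project's actual estimation procedure: it works on a single grayscale channel rather than color, the filter values are made up for illustration, and stationarity is obtained by applying the filters as circular convolutions (computed as products in the Fourier domain).

```python
import numpy as np

def synthesize_ar_texture(a_filter, b_filter, size=64, n_frames=100, seed=0):
    """Toy stationary AR(1) dynamic texture (illustrative, not the paper's method):
        x_{t+1} = a * x_t + b * w_t,
    where '*' is 2-D circular convolution (which makes the process spatially
    stationary), w_t is white Gaussian noise, and a_filter / b_filter play the
    role of small compactly supported filters (the "dynTextons")."""
    rng = np.random.default_rng(seed)
    # Circular convolution with a compact kernel = pointwise product in Fourier.
    A = np.fft.fft2(a_filter, s=(size, size))
    B = np.fft.fft2(b_filter, s=(size, size))
    x = np.zeros((size, size))
    frames = []
    for _ in range(n_frames):
        w = rng.standard_normal((size, size))
        # One AR(1) step, computed entirely in the Fourier domain.
        x = np.real(np.fft.ifft2(A * np.fft.fft2(x) + B * np.fft.fft2(w)))
        frames.append(x)
    return np.stack(frames)

# Hypothetical filters: a 3x3 averaging dynamics kernel, scaled so that the
# transfer function stays below 1 in modulus (stability of the recursion).
a = 0.8 * np.ones((3, 3)) / 9.0
b = 0.1 * np.ones((3, 3)) / 9.0
video = synthesize_ar_texture(a, b)  # shape (100, 64, 64)
```

The 64x64x100 dimensions mirror the sequences in the database; in the actual methods, the filters are estimated from an exemplar rather than fixed by hand.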

To browse the synthesized results, please click on a sample.

boilingwater

ocean

clouds1

clouds

pond1

smoke1

fire_smoke

fire

fire1

fire2

fire3

fire4

fire5

snow1

steam

water_ball

water_wave1

water_wave2

fountain

waterfall1

waterfall2

waterfall3

waterfall4

waterfall5

waterfall_De

frog3

goldenlines
