Sira Ferradans

Post-doc at the Data group of the ENS Paris.



An analysis of visual adaptation and contrast perception for tone mapping


Sira Ferradans, Marcelo Bertalmío, Edoardo Provenzi, and Vicent Caselles.

Abstract.- Tone mapping is the problem of compressing the range of a High Dynamic Range (HDR) image so that it can be displayed on a Low Dynamic Range (LDR) screen without losing details or introducing new ones: the final image should produce in the observer a sensation as close as possible to the perception produced by the real-world scene. We propose a tone mapping operator with two stages. The first stage is a global method that implements visual adaptation, based on experiments on human perception; in particular, we point out the importance of cone saturation. The second stage performs local contrast enhancement, based on a variational model inspired by color vision phenomenology. We evaluate this method with a metric validated by psychophysical experiments, and in terms of this metric our method compares very well with the state of the art.

C code and PDF


Numerical experiments

Aydin et al. proposed a quantitative measure of tone mapping quality based on psychophysical experiments. Given an HDR reference and its LDR tone-mapped version, the method first estimates the luminance values produced by the LDR image when shown on a specific display. It then compares, relying on psychophysical data, the perception a viewer would have of both scenes. The output is an image of the same size as the original, called a distortion map, which uses color coding to represent the contrast distortion error at each pixel:

- gray: no error;
- green: loss of visible contrast;
- blue: amplification of contrast (i.e. detail is visible in the LDR image that should not be visible);
- red: contrast reversal (contrast visible in both the HDR and LDR images, but with opposite polarity).

The saturation of each color indicates the magnitude of the error. From the distortion maps provided by the method of Aydin et al. we extracted the error value at each pixel and computed the following averages over our image set: total error percentage, reversal error percentage (red), loss-of-contrast error percentage (green), and contrast amplification error percentage (blue). Note that the measure of Aydin et al. assigns only one type of error to each pixel: the type that is predominant at that location. Below we present the results obtained for each image of our data set.
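The per-image averages described above can be sketched as follows. This is a minimal illustration, not the code released with the paper: it assumes the distortion map is an 8-bit RGB array, classifies a pixel as gray when its three channels are (nearly) equal, and otherwise assigns it to the error type of its dominant channel. The tolerance `tol` and the function name are choices of this sketch, not part of the metric of Aydin et al.

```python
import numpy as np

def error_percentages(dmap, tol=8):
    """Summarize a distortion map as error percentages per type.

    dmap : (H, W, 3) uint8 RGB distortion map, assuming the color
           coding described above (gray = no error, red = reversal,
           green = loss of contrast, blue = amplification).
    tol  : channels within `tol` of each other count as gray.
    """
    d = dmap.astype(np.int32)
    r, g, b = d[..., 0], d[..., 1], d[..., 2]
    # A pixel is "gray" (no error) when its channels are nearly equal.
    gray = (np.abs(r - g) <= tol) & (np.abs(g - b) <= tol) & (np.abs(r - b) <= tol)
    dominant = np.argmax(d, axis=-1)  # 0 = red, 1 = green, 2 = blue
    # Fraction of pixels of each error type, as a percentage.
    red = np.mean(~gray & (dominant == 0)) * 100
    green = np.mean(~gray & (dominant == 1)) * 100
    blue = np.mean(~gray & (dominant == 2)) * 100
    return {"total": red + green + blue,
            "reversal": red, "loss": green, "amplification": blue}
```

Averaging these dictionaries over all images of the data set gives the summary percentages reported below.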