Abstract

Despite advances in display technology, many existing applications rely on psychophysical datasets of human perception gathered using older, sometimes outdated displays. As a result, there exists an underlying assumption that such measurements can be carried over to the new viewing conditions of more modern technology. We have conducted a series of psychophysical experiments to explore contrast sensitivity using a state-of-the-art HDR display, taking into account not only the spatial frequency and luminance of the stimuli but also their surrounding luminance levels. From our data, we have derived a novel surround-aware contrast sensitivity function (CSF), which predicts human contrast sensitivity more accurately than existing models. We additionally provide a practical version that retains the benefits of our full model while enabling easy backward compatibility and consistently producing good results across the many existing applications that make use of CSF models. We show example applications: effective HDR video compression using a transfer function derived from our CSF, tone mapping, and improved accuracy in visual difference prediction.
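As a brief note on the quantity being modeled: contrast sensitivity is conventionally the reciprocal of the threshold Michelson contrast at which a sinusoidal grating becomes just detectable. Below is a minimal Python sketch of this standard convention only; the surround-aware S-CSF model itself is defined in the paper, and the function names here are illustrative.

def michelson_contrast(l_max, l_min):
    # Michelson contrast of a grating: (Lmax - Lmin) / (Lmax + Lmin).
    return (l_max - l_min) / (l_max + l_min)

def threshold_contrast(sensitivity):
    # Contrast sensitivity is the reciprocal of the threshold contrast,
    # so a sensitivity of 200 means a 0.5% contrast grating is just visible.
    return 1.0 / sensitivity

print(threshold_contrast(200.0))  # 0.005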
BibTeX

@InProceedings{Yi_2021_EGSR,
  author    = {Shinyoung Yi and Daniel S. Jeon and Ana Serrano and
               Se-Yoon Jeong and Hui-Yong Kim and Diego Gutierrez and Min H. Kim},
  title     = {Modeling Surround-aware Contrast Sensitivity},
  booktitle = {Proc. Eurographics Symposium on Rendering (EGSR) 2021},
  month     = {June},
  year      = {2021},
}
Erratum

We report that the published paper contains typos in the first row of Table 1 (constants for the full S-CSF model only); the corrected values are:

Model | a       | p1      | p2     | q1   | q2   | q3     | σ0     | η      | k
R     | 0.07935 | -0.6363 | 0.2157 | 2246 | 0.65 | -15.56 | 0.0103 | 0.0148 | 10.1826

The PDF version on our website includes this correction.
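For convenience, the corrected constants can be transcribed directly into code. A minimal Python sketch; the dictionary name and keys are illustrative (they mirror the column names above), and the functional form of the full S-CSF model is given in the paper.

# Corrected Table 1 constants (first row: full S-CSF model, R model).
# This only stores the corrected parameter values; the S-CSF formula
# itself is defined in the paper.
S_CSF_R_CONSTANTS = {
    "a": 0.07935,
    "p1": -0.6363,
    "p2": 0.2157,
    "q1": 2246,
    "q2": 0.65,
    "q3": -15.56,
    "sigma0": 0.0103,  # σ0
    "eta": 0.0148,     # η
    "k": 10.1826,
}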