Adaptive Texture Space Shading for Stochastic Rendering
By Magnus Andersson1,2, Jon Hasselgren1, Robert Toth1, Tomas Akenine-Möller1,2
1Intel Corporation, 2Lund University
When rendering effects such as motion blur and defocus blur, shading can become very expensive if done naïvely, i.e., by shading each visibility sample. To improve performance, previous work often decouples shading from visibility sampling using shader caching algorithms. We present a novel technique for reusing shading in a stochastic rasterizer. Shading is computed hierarchically and sparsely in an object-space texture, and by selecting an appropriate mipmap level for each triangle, we ensure that the shading rate is high enough that no noticeable blurring is introduced in the rendered image. Furthermore, with a two-pass algorithm, we separate shading from reuse and thus avoid GPU thread synchronization. Our method runs at real-time frame rates and is up to 3× faster than previous methods. This is an important step forward for real-time stochastic rasterization.
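To illustrate the idea of picking a per-triangle mipmap level so the texture-space shading rate roughly matches the screen-space sampling rate, here is a minimal sketch. It is not the paper's exact criterion; the function name, the area-ratio heuristic, and the parameters are illustrative assumptions.

```cpp
// Sketch (assumption, not the paper's exact method): pick a mip level of a
// baseResolution^2 object-space shading texture so that a triangle is shaded
// at a rate of at least ~1 texel per covered screen pixel.
#include <algorithm>
#include <cmath>
#include <cstdio>

int selectShadingMipLevel(float screenAreaPixels, float uvArea, int baseResolution)
{
    // Texels covered by the triangle at the finest mip level
    // (uvArea is the triangle's area in the [0,1]^2 texture domain).
    float texelsAtMip0 = uvArea * float(baseResolution) * float(baseResolution);

    // Each mip level has 4x fewer texels. Taking floor(log4(ratio)) yields the
    // coarsest level that still keeps at least one texel per screen pixel,
    // so shading is reused without introducing noticeable blur.
    float ratio = std::max(texelsAtMip0 / std::max(screenAreaPixels, 1.0f), 1.0f);
    int level = int(std::floor(0.5f * std::log2(ratio)));

    // Clamp to the available mip chain.
    int maxLevel = int(std::log2(float(baseResolution)));
    return std::min(std::max(level, 0), maxLevel);
}

int main()
{
    // Example: a triangle covering ~400 screen pixels and 2% of a 1024^2
    // shading texture is shaded at a coarser mip level (here, level 2).
    int level = selectShadingMipLevel(400.0f, 0.02f, 1024);
    std::printf("selected shading mip level: %d\n", level);
    return 0;
}
```

In this sketch the rounding is deliberately conservative (toward finer levels), mirroring the paper's goal that the shading rate stays high enough to avoid visible blurring; motion- or defocus-blurred triangles with little screen coverage naturally land on coarser levels and thus reuse more shading.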
Read the preprint paper: Adaptive Texture Space Shading for Stochastic Rendering [PDF 11.4MB]
BibTex [ZIP 331B]
Citation: Magnus Andersson, Jon Hasselgren, Robert Toth, Tomas Akenine-Möller, Adaptive Texture Space Shading for Stochastic Rendering, Computer Graphics Forum (Proceedings of Eurographics 2014), vol. 33, no. 2, 2014
Available online: https://diglib.eg.org/EG/DL/CGF/volume33/issue2