Sunday, December 19, 2010

From Deferred to Inferred, part three

The final chapter in this dramatic trilogy. We saw Deferred Rendering having relationship problems with his transparent cousins. Inferred Lighting suddenly made its appearance, trying to steal hearts. But Ridge Forrester pointed out that the charming Dr. Inferred has a dark side as well, and accused him of being a fake... Still, translucent Barbara has feelings for the Diffuse and Specular skills of this mysterious Inferred Renderer. How will this end?

Again, a shot with input textures for the Deferred / Inferred pipeline.


Right. As said, I'm not really amused by the stippling and the detail loss that come with Inferred Rendering. But please don't take my word for it. Judge for yourself, as it all depends on the scenery you had in mind. Yet I'm pleased with the separate pass for doing Diffuse and Specular lighting. For the 63rd time: Lighting is one of the (technical) key ingredients for “good” graphics. Aside from having proper art resources, of course. However, engines have so many effects these days that it's hard to trace problems once the result isn’t quite what you expected. What you see on the screen is not just “albedo x sum(lights)”. We have (see the sketch after this list):

- Specular lighting, the shiny gloss on metal, pottery, plastic, wet bricks or polished floors
- Reflections (cubeMaps, mirrors)
- Ambient light
- SSAO, DoF, Fog, noise
- Emissive textures
- And worse, HDR & Tonemapping messing around with the colors to bring them into a certain range
- And so on...
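
To give an idea of how all those ingredients pile up in one pixel, here's a minimal sketch. Single-channel floats for brevity (a real engine composes RGB in shaders), and every name below is made up for illustration:

```cpp
// Hypothetical sketch of everything that ends up in one pixel.
float composePixel(float albedo, float diffuse, float ambient,
                   float specular, float reflection, float emissive,
                   float ssao, float fogAmount, float fogColor)
{
    // Not just "albedo x sum(lights)":
    float lit = albedo * (diffuse + ambient) * ssao   // SSAO darkens the diffuse/ambient terms
              + specular + reflection                 // view-dependent shiny stuff
              + emissive;                             // neon lights, TV screens, ...
    // Fog blends toward a fog color; HDR & tone mapping still mess with it afterwards.
    return lit + fogAmount * (fogColor - lit);
}
```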

If the graphics suck, then what went wrong? Bad shaders? That's an easy answer, but in fact it's often a combination of overdone HDR, wrong reflection quantities, bad color contrast/saturation in the texture maps, or not-so-good light colors. The only problem is, it's hard to find the cause.

Obviously, with separate Diffuse and Specular textures, it's easy to test whether at least the basic lighting went properly.
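
Something like this hypothetical debug switch does the trick: dump one intermediate buffer straight to the screen instead of the final composition (single-channel sketch again, names are my own invention):

```cpp
// Show one intermediate buffer to judge the basic lighting in isolation.
enum DebugView { VIEW_FINAL, VIEW_DIFFUSE, VIEW_SPECULAR };

float debugOutput(DebugView view, float albedo, float diffuse, float specular)
{
    switch (view) {
        case VIEW_DIFFUSE:  return diffuse;    // raw diffuse light buffer
        case VIEW_SPECULAR: return specular;   // raw specular light buffer
        default:            return albedo * diffuse + specular;
    }
}
```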

See? Really useful. But maybe not a reason to change your rendering pipeline (again)... Ok then, maybe I have a few other reasons that might convince you:

- Improved HDR Bloom (blur on bright spots)
- Diffuse blurring for special surfaces (human skin for example)
- Easy to enhance the light contrast / maximum intensity or limit the overall light result
- Can be used as input for other specific shaders

The HDR Bloom in my case is often too bright, and the overall screen is too dark because the Tone Mapper adjusted the eye to bright walls. Let me explain how it works (in my case); a small code sketch follows after these steps:
1- Scene is rendered completely (including transparent crap and everything)
2- Average screen luminance is measured
3- Everything brighter than X, depending on the current eye adaptation, results in a blur
4- Tone mapper scales the color range from the input texture (step 1) into the 8-bit screen range, again depending on the current eye adaptation level
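
Roughly, steps 2 and 3 could look like the sketch below. The constants, names and the simple linear adaptation are made-up assumptions; in reality the average luminance is measured on the GPU by downsampling the scene until a 1x1 texture remains:

```cpp
#include <algorithm>

// Step 2: average luminance of the rendered scene.
float averageLuminance(const float* pixelLuminance, int count)
{
    float sum = 0.0f;
    for (int i = 0; i < count; ++i) sum += pixelLuminance[i];
    return sum / count;
}

// The "eye" doesn't jump instantly; it crawls toward the measured value.
float updateEyeAdaptation(float eye, float avgLum, float dt)
{
    const float ADAPTATION_SPEED = 1.5f;   // made-up speed
    return eye + (avgLum - eye) * std::min(1.0f, ADAPTATION_SPEED * dt);
}

// Step 3: brightpass. Everything brighter than X (relative to the current
// adaptation level) survives; the survivors get blurred afterwards.
float brightPass(float hdrColor, float eyeAdaptation)
{
    const float THRESHOLD = 2.0f;          // the "X" from the list above
    return std::max(0.0f, hdrColor - THRESHOLD * eyeAdaptation);
}
```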

Say what? If you look outside, you can be blinded for a while, since the sky and sun are much more intense than that dull light in your stinky office. Computers can fake this effect with HDR rendering. Back in the old days, an 8-bit RGB color of 255,255,255 ("1.0" in shader terms) would mean "white". But what is white? Paper is white, yet far less intense than the sun, I guess. One trick to make a sun look brighter than a piece of paper is to render a “bloom” / “blur” around the sun. But again, what exactly is bright? It depends on how many lights are shining on a piece of surface, how bright those lights are, how much the material reflects, and eventually how much light the material produces itself (neon lights, TV screens, …).

The only problem with 8-bit buffers is that your values stay clamped: 1 + 1 = 1. What? Yes, because 8 bit can’t hold higher values than that. With high-range buffers (16 bit or more), you won’t be limited anymore. High-Dynamic-Range lighting is just a way to make use of that advantage. You can sum many lights, or use bigger variances in the intensity values. Yet in the end the results still have to be rendered on an 8-bit target: your monitor.
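
A tiny illustration of that clamp:

```cpp
#include <algorithm>
#include <cstdio>

int main()
{
    // 8-bit: two full-intensity lights still clamp to plain "white".
    int ldr = std::min(255, 255 + 255);   // stays 255: 1 + 1 = 1
    // Float (16/32-bit) buffer: the extra intensity survives for later passes.
    float hdr = 1.0f + 1.0f;              // 2.0, "brighter than white"
    std::printf("ldr = %d, hdr = %.1f\n", ldr, hdr);
    return 0;
}
```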

Scale the full-range colored scene into a lower-range target texture. The same happens with your eyes in reality. As you can’t take in the full intensity range at once, your eyes adjust to a certain level. In games that would be the average luminance of the scene you are seeing.
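
A simple Reinhard-style curve is one way to do that scaling. Just an example operator with made-up constants, not necessarily the exact one I use:

```cpp
#include <algorithm>

// Scale an HDR color into the [0,1) screen range, with the exposure driven
// by the current eye adaptation level.
float toneMap(float hdrColor, float eyeAdaptation)
{
    // The brighter the level the eye adapted to, the more we dim the input.
    float exposed = hdrColor * (0.5f / std::max(eyeAdaptation, 0.001f));
    return exposed / (1.0f + exposed);
}
```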


The problem with my HDR approach is that bright surfaces (such as paper or white wallpaper) quickly result in bright blurs and darkened scenes when a few lights shine on them. This is because that “average luminance” was simply an average of all the result colors in the scene. So a bunch of white papers lying on a darker wooden table would quickly be considered “very bright”. But… did you ever get blinded by a paper? I was, but that's a whole different story.

The "Blur" / "Bloom" should only occur at highly emissive sources (lights), the sky, reflective surfaces (car metal, water), or extremely litten surfaces. Now that we have a diffuse and specular buffer as well, we can focus more on the specular (light reflected straight to your eye) quantities, instead of the color as a whole. The diffuse portion is ignored more by giving it a lower weight. That prevents blurs on white chalk walls or tax-papers. It also stabilizes the tone-mapper. When measuring the average luminance, I’ll ignore the specular light more. As specular lighting is very dependent on the eye position/direction, it can change every step you take, while diffuse light remains the same.

Abstract art by an upside-down Chinese master? No, the input texture for the Bloom, using a brightpass filter.

Sure, it’s still as fake as Hulk Hogan defeating The Undertaker, but at least the overdone blooming and weird luminance peaks are reduced. I can’t show you proper final results yet, as I’m still struggling with the new pipeline. Not only was the inferred approach applied; I also had to fix bugs and clean up code. On top of that I added a new method for storing shadowMaps, and a new framebuffer switching mechanism (I suddenly had a big performance drop, more about that another time). But ok, in the shot below, the left wall and those pretty heads would have been blurred in the previous pipeline. Now the blur is only applied to the tiled floor.

Another trick you can do is processing the light buffers before using them. Blur, contrast, saturate, maxOf, minOf, you name it. Do you really need that then? Uhm, maybe not. Though blurring the diffuse light can be interesting for special types of surfaces such as human skin. I'm not really into that yet, but techniques like sub-surface scattering / skin rendering seem to blur the diffuse light in some cases for a soft appearance. Makes sense. Take a magnifying glass and have a look at a girl. Plenty of bumps to do normalMapping, even through all that make-up. But still soft as silk, so you can't use standard normalMapping. It would make the head look like plastic or 300-year-old stone. Well, I’m sure we’ll be looking into skin rendering sooner or later.
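
A very naive sketch of that idea: blur only the diffuse light buffer for such materials, before combining it with the albedo. One 5-tap horizontal pass shown; a real version blurs both axes and should respect depth edges:

```cpp
// Blur one row of the diffuse light buffer with a tiny 5-tap kernel.
void blurDiffuseRow(const float* diffuseIn, float* diffuseOut, int width)
{
    for (int x = 0; x < width; ++x) {
        float sum = 0.0f, totalWeight = 0.0f;
        for (int k = -2; k <= 2; ++k) {
            int xi = x + k;
            if (xi < 0 || xi >= width) continue;     // skip edges of the buffer
            float w = 1.0f / (1 + k * k);            // crude falloff per tap
            sum += diffuseIn[xi] * w;
            totalWeight += w;
        }
        diffuseOut[x] = sum / totalWeight;           // softened diffuse light
    }
}
```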


All in all, there are probably other workarounds, but these 3 features convinced me. After all, without the stippling and DSF filtering stuff, this is only a small change in the rendering pipeline, and the additional overhead isn’t that scary. And it becomes more attractive to give the transparent filtering techniques from the Inferred Lighting paper a try… Nothing ventured, nothing gained.


Ow, I heard Santa might bring me a new video card (after telling him my credit card number).
Merry Christmas, bastards!
