Monday, December 27, 2010

Need for Speed

Right, how many kilograms of beef, potatoes, ice cream, chocolate sauce and bread did you eat last weekend? Enough to promise losing some weight in 2011? Ah, Christmas... All those magical cozy lights, Wham! music, and not to forget: Home Alone, Critters and Gremlins 1..18. But the best memories are probably those of unwrapping Command & Conquer, GoldenEye (N64), Zelda OOT / Majora's Mask. Each year my little brother and I would nervously wait for our next game. Inspecting all the packages beneath the tree and knowing the exact dimensions / weight of an N64 game, we already knew which box to keep an eye on weeks before December 24.

The age of getting presents has transformed into giving them. Hence, I wouldn't even know what to ask for anymore. The downside of getting older is also getting more spoiled. At least where I live. What do I need anyway? A working computer, a chair, a bike to get to work. Clothes maybe... There is more joy in buying Shrek for our daughter, or giving a Blu-ray player to grandpa.

However... I realized my videocard was pretty old again. Bought it end 2007, so that is ~25 in dog years, and 2435 BC in hardware years. In other words, extremely old in hardware-land, where videocards age as fast as they render pixels. So, after donating Santa some money, a shiny box with an EVGA GeForce 4700 GTS came in. And damn, it even worked right after swapping out the older card. Our family has a long history of fooling around with computer parts. Dad never bought a complete (working) system. 4 MB RAM here, a 60 MHz processor there, a 0 Kb modem elsewhere, etcetera. And of course, it NEVER worked. I had to travel the entire country with dad to get computer parts in the summer of 1994, waiting weeks and weeks before I could finally play Doom2 (with PC speaker).

Was it worth the money? Hmmm, I can imagine there are more useful things in life, but:

- Tower22 on GeForce 8800 GTS (640 MB) : ~30 FPS
- Tower22 on EVGA GeForce 4700 GTS : ~56 FPS

Almost doubled; that pulled the T22 engine out of the mud. But don't worry, we'll bring that card down to its knees again in no time, begging for mercy. More light, realtime volumetric lightshafts/fog and updated ambient lighting are on the menu.

Work in progress: improved volumetric light. Not blurring a bright spot, but raytracing through space to see "how many particles" were lit. The lower-left corner shows the lightshaft buffer.

FBO Sandwich
Talking about speed. As a programmer, I'm sure you've wondered several times:
"How the hell can Crysis/Half-Life/... run that fast on my machine, while my game runs like a crippled grandma?"
If so, here's a last piece of programmer's advice for 2010.

After the transformation into the Inferred rendering pipeline we discussed earlier was completed, the speed dropped from ~30 to ~22 FPS (on the old card). Inferred Rendering has slightly more overhead, but that drop was ridiculous. Where did we go wrong?! Bad shaders? Maybe the new shadowMap storage technique (I'll discuss that another time)?

Then an old fiend flashed by: Captain Framebuffer. In OpenGL terms, an FBO is a collection of target buffers you can render to. Well, with all those (background) techniques, we change that FBO plenty of times. But as I discovered years ago, when playing around with shadowMaps for the first time, mistakes are easily made. One wrong switch or MRT setting, and your engine's neck snaps like a matchstick. Here are a few important pieces of advice:

Try to prevent switching resolutions
Switching targets always takes time, especially when hopping from one resolution to another. For example, a pipeline might do this:

- render to four 1024 x 768 textures for Deferred input
- render to a 256 x 256 texture for a light shadowMap
- render to a 512 x 512 SSAO buffer
- render to another 1024 x 768 texture for depth
- render to a 512 x 512 DOF input buffer

Five switches. With all those techniques, switching is inevitable. But at least you can order things better:

- render to four 1024 x 768 textures for Deferred input
- render to another 1024 x 768 texture for depth
- render to a 512 x 512 SSAO buffer
- render to a 512 x 512 DOF input buffer
- render to a 256 x 256 texture for a light shadowMap

See that? Only 3 switches instead of 5.
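To make that concrete, here's a minimal Delphi-flavored sketch of the reordered frame. BindFBO and the Render... procedures are hypothetical engine helpers (BindFBO would wrap glBindFramebufferEXT), not real API calls or the actual T22 code:

procedure RenderFrame;
begin
  // all 1024 x 768 targets first (switch 1)
  BindFBO( fbo1024 );
  RenderDeferredInput;    // the 4 MRT textures
  RenderDepthTexture;     // the extra 1024 x 768 depth texture
  // then everything at 512 x 512 (switch 2)
  BindFBO( fbo512 );
  RenderSSAO;
  RenderDOFInput;
  // and finally the 256 x 256 shadowMap (switch 3)
  BindFBO( fbo256 );
  RenderLightShadowMap;
end;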


Make a FBO for each resolution
I'm not 100% sure about this, but people say it's best to make an FBO for each possible resolution, instead of changing the rendertargets of a single FBO all the time. In the example above, we would need 3 FBO's: 1024 x 768, 512 x 512 and 256 x 256. Each one can have its own depth buffer.
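If you want to try that, the setup could look like the sketch below. I'm assuming a Delphi OpenGL header that exposes the EXT_framebuffer_object entry points (dglOpenGL for example); treat it as an untested illustration, not the T22 code:

// creates an FBO with its own fixed-size depth renderbuffer;
// color textures get attached later, per pass
function CreateFBO( w, h : Integer ) : GLuint;
var
  depthRB : GLuint;
begin
  glGenFramebuffersEXT( 1, @Result );
  glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, Result );
  glGenRenderbuffersEXT( 1, @depthRB );
  glBindRenderbufferEXT( GL_RENDERBUFFER_EXT, depthRB );
  glRenderbufferStorageEXT( GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, w, h );
  glFramebufferRenderbufferEXT( GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                GL_RENDERBUFFER_EXT, depthRB );
  glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, 0 );
end;

// one FBO per resolution, made once at startup
fbo1024 := CreateFBO( 1024, 768 );
fbo512  := CreateFBO( 512, 512 );
fbo256  := CreateFBO( 256, 256 );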

Atlas renderTargets
Use bigger atlas textures to perform multiple passes in a single buffer texture. When having to blur or downscale, you quickly end up with a large number of different resolutions. For example, the HDR technique requires downscaling the screen contents to measure the average luminance. At first, my engine would do this:
1. Render luminance values to a 128 x 128 texture
2. Downscale the step 1 texture to 64 x 64
3. Downscale the step 2 texture to 16 x 16
4. Downscale the step 3 texture to 3 x 3

4 switches. But you could also perform everything in a single, larger buffer.

Only 1 switch, oh hurray! By the way, I also render all shadowMaps into a single large atlas texture, but I'll give details about that another time.
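The trick is glViewport: keep one big FBO bound and render each downscale step into its own sub-rectangle. A rough sketch (the helper names and rectangle positions are made up). One caveat: strictly speaking OpenGL leaves sampling from a texture attached to the currently active FBO undefined, so ping-ponging between two atlas textures is the safer variant:

glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, luminanceFBO );  // the only switch
glViewport( 0, 0, 128, 128 );        // step 1: luminance values
RenderLuminancePass;
glViewport( 128, 0, 64, 64 );        // step 2: downscale step 1
RenderDownscale( 0, 0, 128, 128 );   // args = source sub-rectangle
glViewport( 128, 64, 16, 16 );       // step 3
RenderDownscale( 128, 0, 64, 64 );
glViewport( 128, 80, 3, 3 );         // step 4
RenderDownscale( 128, 64, 16, 16 );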


After simply reordering the passes to reduce the number of FBO switches, the framerate was restored.

Don't worry. New content will come. In 2011!

Sunday, December 19, 2010

From Deferred to Inferred, part three

The final chapter in this dramatic trilogy. We saw Deferred Rendering having relationship problems with his transparent cousins. Inferred Lighting suddenly made its appearance, trying to steal hearts. But Ridge Forrester pointed out that the charming Dr. Inferred has a dark side as well, and accused him of being a fake... Still, translucent Barbara has feelings for the Diffuse and Specular skills of this mysterious Inferred Renderer. How will this end?

Again, a shot with input textures for the Deferred / Inferred pipeline.


Right. As said, I'm not really amused by the stippling and the detail loss that come with Inferred Rendering. But please don't take my word on that; judge for yourself, as it all depends on the scenery you had in mind. Yet I'm pleased with the separate pass for doing Diffuse and Specular lighting. For the 63rd time: lighting is one of the (technical) key ingredients for "good" graphics, aside from having proper art resources of course. However, engines have so many effects these days that it's hard to trace problems once the result isn't quite what you expected. What you see on the screen is not just "albedo x sum(lights)". We have:

- Specular lighting, the shiny gloss on metal, pottery, plastic, wet bricks or polished floors
- Reflections (cubeMaps, mirrors)
- Ambient light
- SSAO, DoF, Fog, noise
- Emissive textures
- And worse, HDR & Tonemapping messing around with the colors to bring them into a certain range
- And so on...

If the graphics suck, then what went wrong? Bad shaders? That's an easy answer, but in fact it's often a combination of overdone HDR, wrong reflection quantities, bad color contrast/saturation in the texture maps, or not-so-good light colors. The problem is, it's hard to find the cause.

Obviously, with separate Diffuse and Specular textures, it's easy to test whether at least the basic lighting went properly.

See? Really useful. But maybe not a reason to change your rendering pipeline (again) though... Ok, then maybe I have a few other reasons that may convince you:

- Improved HDR Bloom (blur on bright spots)
- Diffuse blurring for special surfaces (human skin for example)
- Easy to enhance the light contrast / maximum intensity or limit the overall light result
- Can be used as input for other specific shaders

The HDR Bloom in my case is often too bright, and the overall screen too dark, because the Tone Mapper adjusted the eye to bright walls. Let me explain how it works (in my case):
1. The scene is rendered completely (including transparent crap and everything)
2. The average screen luminance is measured
3. Everything brighter than X, depending on the current eye adaptation, results in a blur
4. The tone mapper scales the color range from the input texture (step 1) into the 8-bit screen range, again depending on the current eye adaptation level
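Steps 3 and 4 hinge on that "current eye adaptation". A common way to model it (not necessarily the exact T22 curve; tau and X are made-up tuning knobs here) is to let the adapted luminance drift towards the measured average each frame:

// exponential adaptation: a sudden bright scene blinds you for a moment,
// then the eye slowly settles on the new average luminance
adaptedLum := adaptedLum + ( measuredLum - adaptedLum ) *
              ( 1.0 - Exp( -frameTime * tau ) );
blurThreshold := adaptedLum * X;   // one way to couple "X" to the adaptation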

Say what? If you look outside, you can be blinded for a while, since the sky and sun are much more intense than that dull light in your stinky office. Computers can fake this effect with HDR rendering. Back in the old days, an 8-bit RGB color of 255;255;255 ("1.0" in shader terms) would mean "white". But what is white? Paper is white, yet far less intense than the sun, I guess. One trick to make a sun look brighter than a piece of paper is to render a "bloom" / "blur" around the sun. But again, what exactly is bright? That depends on how many lights are shining on a piece of surface, how bright these lights are, how much the material reflects, and eventually how much light the material produces itself (neon lights, TV screens, ...).

The problem with 8-bit buffers is that your values stay clamped: 1 + 1 = 1. Yes, because 8 bits can't hold higher values than that. With high-range buffers (16 bit or more), you won't be limited anymore. High-Dynamic-Range lighting is just a way to make use of that advantage. You can sum many lights, or use bigger variances in the intensity values. Yet in the end the results still have to be rendered on an 8-bit target: your monitor.
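In OpenGL terms, that simply means giving the scene FBO a floating-point color texture instead of the usual 8-bit RGBA one. A sketch, assuming the ARB_texture_float tokens are available in your header:

glGenTextures( 1, @sceneTex );
glBindTexture( GL_TEXTURE_2D, sceneTex );
// 16-bit float per channel: values above 1.0 survive
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 1024, 768, 0,
              GL_RGBA, GL_FLOAT, nil );
// attach as the color target of the 1024 x 768 scene FBO
glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                           GL_TEXTURE_2D, sceneTex, 0 );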

Scale the full-range colored scene into a lower-range target texture. The same happens with your eyes in reality. As you can't see the full intensity range at once, your eyes adjust to a certain level. In games that would be the average luminance of the scene you are seeing.
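A popular curve for that scaling is the Reinhard one, sketched below with a hypothetical exposure factor derived from the eye-adaptation level ("key" is an invented brightness knob; T22's actual tone mapper may use a different curve):

// maps [0 .. infinity) smoothly into [0 .. 1) for the 8-bit target
exposure := key / adaptedLum;
mapped   := ( hdrColor * exposure ) / ( 1.0 + hdrColor * exposure );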


The problem with my HDR approach is that bright surfaces (such as paper or white wallpaper) quickly result in bright blurs and darkened scenes when a few lights shine on them. This is because that "average luminance" was simply an average of all the result colors in the scene. So a bunch of white papers would quickly be considered "very bright" when lying on a darker wooden table. But... did you ever get blinded by a paper? I was, but that's a whole different story.

The "Blur" / "Bloom" should only occur at highly emissive sources (lights), the sky, reflective surfaces (car metal, water), or extremely litten surfaces. Now that we have a diffuse and specular buffer as well, we can focus more on the specular (light reflected straight to your eye) quantities, instead of the color as a whole. The diffuse portion is ignored more by giving it a lower weight. That prevents blurs on white chalk walls or tax-papers. It also stabilizes the tone-mapper. When measuring the average luminance, I’ll ignore the specular light more. As specular lighting is very dependent on the eye position/direction, it can change every step you take, while diffuse light remains the same.

Abstract art by an upside down Chinese master? No, the input texture for the Bloom, using a brightpass filter.

Sure, it's still as fake as Hulk Hogan defeating The Undertaker, but at least the overdone blooming and weird luminance peaks are reduced. I can't show you proper final results yet, as I'm still struggling with the new pipeline. Not only was the inferred approach applied; I also had to fix bugs and clean up code, added a new method for storing shadowMaps, and a new framebuffer switching mechanism (had a big performance drop suddenly, more about that another time). But OK, in the shot below, the left wall and those pretty heads would have been blurred in the previous pipeline. Now the blur is only applied to the tiled floor.

Another trick is processing the light buffers before using them. Blur, contrast, saturate, maxOf, minOf, you name it. Do you really need that, then? Uhm, maybe not. Though blurring diffuse can be interesting for special types of surfaces such as human skin. I'm not really into that yet, but techniques like sub-surface scattering / skin rendering seem to blur the diffuse light in some cases for a soft appearance. Makes sense. Take a magnifying glass and have a look at a girl. Plenty of bumps to do normalMapping, even through all that make-up. But still soft as silk, so you can't use standard normalMapping; it would make the head look like plastic or 300-year-old stone. Well, I'm sure we'll be looking into skin rendering sooner or later.


All in all, there are probably other workarounds, but these 3 features convinced me. After all, without the stippling and DSF filtering stuff, this is only a small change in the rendering pipeline and the additional overhead isn’t that scary. And it becomes more attractive to have a try with the transparent filtering techniques from the Inferred Lighting paper… Nothing ventured, nothing gained.


Oh, I heard Santa might bring me a new videocard (after telling him my credit card number).
Merry Christmas bastards!

Monday, December 13, 2010

I love it when a plan comes together

Where is part three of the Deferred/Inferred story? Well, last weekend I had beer-drinking duty on a little vacation with friends :p So, next week hopefully. Adjusting the rendering pipeline correctly (including some other aspects) is quite a struggle.

As for the game & next demo movie, there are some interesting developments going on that I'd like to share. The chaos in the mailbox with people offering their help was over, but last week there were about 10 replies again. Varying from students who would like to build their skills with this project, artists, web designers, and, also interesting: Pascal community members who asked if this project could become "Open Source"... With limited time and my hands full on programming and keeping the three other team members busy, I have to pick carefully though. Don't carry more than you can hold!


As said before, replying to all these mails is difficult for me. Not that I don't like answering mails, but I just don't want to sound like a jerk when refusing someone's help. As the whole thing is basically based on charity... As we say in Dutch: "never look a gift horse in the mouth".

In the ideal situation, one or two experts reply right away for either sound, modeling, mapping or concept drawing, and two days later a team is formed... Right. In reality, a mixture of qualities, styles and experience levels drops a mail once in a while, and you have to pick really carefully. Don't rush (difficult!!!), and don't put four men on the same task. If I had let everyone in so far, we would already have had about 5 character artists/modelers, for example, while one or two is more than enough. Hey, we're not making World of Warcraft here! I'm not an expert, but I guess mixing too many different styles and ideas is not a good idea anyway. Steering a ship with 8 captains at the same time...

Work from Jesse. Nothing to do with Tower22, but nice to show nevertheless. Uhm... I still don't have new game pictures anyway.

Nevertheless, I was pleased with the offer from a concept artist who mainly showed environment art. Exactly one of the missing keys in our team so far. Hey, that whole skyscraper has to get filled, right? From macro level (global overviews like the Zelda or Metroid maps) to micro level (corridor atmosphere, bizarre environment ideas). Say hello to Jesse Maccabe as our environment artist:
http://www.jessemaccabe.com/

Now that I'm thinking about it, I guess it would be fun to let the modelers/artists/writers and sound composers write "how they do stuff" here on this blog once in a while. A little view into the Tower22 kitchen, aside from spicy programming with Gordon Fuck! Ramsay.

And… we're getting help from another person on the main character of the game. He worked on a couple of games, including F.E.A.R. 3, and currently teaches at Full Sail University. Hands up for Robert Brown:
http://www.robertakbrown.com/
Right now we are brainstorming about what the player character should look like. No, I'm not telling anything yet, but that rusty robot obviously has to be replaced. It started to malfunction anyway. Well, with six people in total now, it's time to silence the cry for help, as we are pretty much complete. For now. In the future we'll probably need a map & asset modeler, but... let's first make a second demo movie, OK? All in all, I can't complain! Seriously, I really didn't expect to get so many reactions, and certainly not from people with this kind of experience!

Robert's golden handshake

About Open Source… This game is made with ancient alien techniques so far: Delphi 7, OpenGL 1.x and Windows XP Paint of course. Anyway, the Pascal (programming language) community got interested because of that (thank you!). All those C++ boys keep telling me that Delphi sucks; here, eat that, suckers! But seriously, some were interested in the code. How to reply to that? Since I'm using quite a few free tools, and learned most of my skills from free tutorials/demos/projects, it would be a little bit selfish to keep the cake for myself, right? Yet I refused for now. Why?
- No time (to do it properly)
- This blog gives some instructive information, hopefully. Using that to repay my debts :)
- I worked hard on it for many years. Sorry, but giving it all away right away...
- What if... this project would actually get a chance to become something more serious?

Basically I have no problems with Open Source. In fact, if T22 were released tomorrow, you'd be free to look into the code the week after. I kinda sympathized with id Software for opening up their Quake2 code (years later though), for example. But the main focus is to have fun and to create a game/movies for now. I simply don't have time for side projects, teaching students (I would like to though) or very detailed tutorials on this blog. Sorry!

And wouldn't it be stupid to give it all away if this project might get a "commercial chance" in the future (after making more movies)? I'm not much of a materialistic guy (give me a chair and a computer, that's enough), but making a living doing what you like most is something everyone would like. I can't look into the future, but watching your steps is always wise.


Last but not least, we have a second movie idea. Quite a lot more complicated than the first one; not in size, but it requires more interaction, showing some actual action-puzzle elements the game should have. And a far more bizarre creature than the Meathook guy… Since there are artists now, I'm hoping to post a couple of teasers in the next few months!

Sunday, December 5, 2010

From Deferred to Inferred, part deux

Rest in peace Frank Drebin!

Bleh, still no new pictures to post. Believe it or not, but our little girl thought it was an excellent idea to throw black ink all over the keyboard. So, my dev machine is out of order until I have a working keyboard again, and I was too lazy to copy the whole project to my laptop. Pics or no pics, let's continue our Deferred / Inferred rendering story, OK?

Stop smiling you evil dwarf!

One week ago I explained Deferred Rendering / Lighting, ending with a couple of issues. There is a solution for everything in this world, but translucent rendering in a Deferred pipeline… With translucent stuff I mean glass, fences, Doom2 monster sprites, tree leaves, grass, and tablecloths with a complex pattern, stitched by grandma.

Allow me to explain the problem. In a Deferred Rendering solution, each pixel in the (screen-view) buffers contains information about one, ONE, surface you can see: info such as position, color, and its facing direction (normal). But what happens if you have 2 layers behind each other? Let's say you are looking through a gelatin pudding… The gelatin has to be lit, but also the object behind it. But due to the very limited storage space per pixel (16 scalars in 4 texture buffers), we can only store info for one piece of surface per pixel. And no, blending is not an option: colors can be blended, but not positions or normals.
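Just to illustrate how full those 16 scalars already are with a single surface, here's one possible G-buffer layout (an example layout plus the MRT binding, not necessarily how T22 fills its buffers):

//   target 0 : albedo.rgb           + emissive factor
//   target 1 : normal.xyz           + specular power
//   target 2 : position.xyz         + depth
//   target 3 : material params (gloss, reflectivity, ...)
// a second (transparent) layer would need 16 more scalars -> no room
const
  mrt : array[0..3] of GLenum = ( GL_COLOR_ATTACHMENT0_EXT,
                                  GL_COLOR_ATTACHMENT1_EXT,
                                  GL_COLOR_ATTACHMENT2_EXT,
                                  GL_COLOR_ATTACHMENT3_EXT );

glDrawBuffers( 4, @mrt[0] );   // render to all 4 info textures at once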

Dude, then simply create two sets of buffers! One for the opaque surfaces, another set for the transparent portion! Hmm, not that simple. It would work if there were exactly two layers behind each other, but in complex scenes, such as your typical Crysis jungle, it could just as well be 10 layers. Think about grass billboards. So… can we throw away Deferred Rendering already?

The common solution is to render all the opaque stuff first, then to switch over to traditional "Forward Rendering" for the remaining transparent geometry. The good news is that the transparent part usually isn't that much, as grass, foliage or fences are merely simple quad shapes. The bad news is that you have to implement two different types of rendering, making a messy whole. Plus you either have to sort out geometry again, and/or suffer lighting-quality loss on the transparent parts. Multi-pass rendering on transparent surfaces can be tricky as well, as the type of blending can differ: some use additive blending, others multiply or perform a simple alpha-test only. My engine "fixes" the problem by activating the most important lights per sector, and then rendering the transparent geometry in a single pass with all lights applied at once.

Inferred Rendering to the rescue!? … The transparency issue was one of the motivations for an adjusted variant called Inferred Rendering. But does it fix the problem? In my opinion: not really, unfortunately. But I still have to try it out further (and I need a working keyboard). It probably depends on what you are trying to do. Anyhow, it has some other interesting features. But first, let's compare the pipelines:

For extended info about DSF and such, see the links at the bottom
The main differences are the separate lighting pass, and rendering the transparent surfaces into the "info buffers" by stippling them between the opaque pixels. That means there is actually less info available for each pixel (resulting in a somewhat lower resolution, unless you render into up-scaled buffers). The DSF edge-filter technique smooths the edges and fills the gaps again, though. But still, all forms of interpolation mean quality loss in the end.

The good thing is that transparent geometry can be done in exactly the same way. No different shaders, no light-sorting crap, and potentially a lot faster when having many lights + many transparent surfaces. Another small bonus for somewhat limited or older hardware is that we can possibly do with one less info buffer in the first pass, as the albedo color can be rendered later on. Don't be fooled though, the lighting and DSF passes still require additional energy and extra buffers. Last but not least, the edge correction gives you some sort of Anti-Aliasing, which means less pixelated edges. By nature, Deferred Rendering doesn't have AA, another nasty little issue.

But it still doesn't really work when having, let's say, 10 grass billboards behind each other. As you can guess, that buffer still has a limited set of pixels. Depending on your stipple pattern, you could make 2 or 4 layers for the transparent geometry, then sort all transparent entities and tell each one which layer (stipple pattern) to look at when rendering. YES, you need to perform Z-sorting to do this, but in case you have many transparent surfaces, you should be doing that anyway. But having 2, 4, or 6 layers for that matter is still not much. Either you have to skip surfaces (which ones?), or accept the rendering bugs. Plus, as mentioned before, you will miss small details (problematic for detail normalMapping), as pixels get sacrificed when multiple layers share the same buffer.
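For the record, a 2 x 2 stipple assignment could look like the sketch below: each layer owns one pixel of every 2 x 2 block, and the shader discards the rest. This is a Pascal stand-in for the fragment-shader test, with an invented pattern, not the layout from the paper:

// layer 0 = opaque, layers 1..3 = the (Z-sorted) transparent layers
function PixelBelongsToLayer( x, y, layer : Integer ) : Boolean;
begin
  // pattern index 0..3 inside each 2 x 2 pixel block
  Result := ( ( y mod 2 ) * 2 + ( x mod 2 ) ) = layer;
end;
// the DSF filter afterwards interpolates the stippled gaps away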

Why bother then? Well…

- Unless you are rendering jungles or glass villas, how big is the chance you have more than 4 transparent pixels behind each other? Particles, by the way, can still use a simplified lighting method in a pass afterwards, if they need lighting at all.
- Having a separate light-pass got my attention.

Inferred Rendering produces one or two light textures (depending on whether you want colors or intensity only for specular): Diffuse Light & Specular Light. The good thing is that these buffers do NOT contain dozens of other tricks such as emissive light, reflections, ambient or the surface material colors (albedo texture). That allows a couple of useful tricks, including improving HDR Bloom and debugging your lights. But boys and girls, that is for next week. Whether you plan to use Inferred Rendering or not, this not-too-difficult and not-too-long paper is a comfortable read:
Inferred Lighting paper by Kircher & Lawrance
And some more details + DEMO/SOURCE by Matt Pettineo:
Dangerzone


And if you wondered why there was a pot of black ink on the computer desk? Well, I had to draw the new Dutch prime minister, Mark Rutte, as a birthday present for my little brother. Not that he is a Markie fan in particular, but I gave him a poster of our prime minister Balkenende 3 years ago as well... ;)