Monday, December 27, 2010

Need for Speed

Right, how many kilograms of beef, potatoes, ice-cream, chocolate sauce and bread did you eat last weekend? Enough to promise losing some weight in 2011? Ah Christmas... All those magical cozy lights, Wham! music, and not to forget: Home Alone, Critters and Gremlins 1..18. But the best memories are probably those of unwrapping Command & Conquer, Goldeneye (N64), Zelda OOT / Majora's Mask. Each year my little brother and I would nervously wait for our next game. Inspecting all the packages beneath the tree and knowing the exact dimensions / weight of an N64 game, we already knew which box to keep an eye on weeks before 24 December.

The time of getting presents has transformed into giving them. Hence, I wouldn't even know what to ask for anymore. The downside of getting older is also getting more spoiled, at least where I live. What do I need anyway? A working computer, a chair, a bike to get to work. Clothes maybe... There is more joy in buying Shrek for our daughter, or giving a Blu-ray player to grandpa.

However... I realized my videocard was pretty old again. Bought it end of 2007, so that is ~25 in dog years, and 2435 BC in hardware years. In other words, extremely old in hardware-land, where videocards age as fast as they render pixels. So, after donating Santa some money, a shiny box with an EVGA GeForce GTX 470 came in. And damn, it even worked right after swapping out the older card. Our family has a long history of fooling around with computer parts. Dad never bought a complete (working) system. 4 MB RAM here, a 60 MHz processor there, a 0Kb modem elsewhere, etcetera. And of course, it NEVER worked. I had to travel the entire country with dad to get computer parts in the summer of 1994, waiting weeks and weeks before I could finally play Doom 2 (with PC speaker).

Was it worth the money? Hmmm, I can imagine there are more useful things in life, but:

- Tower22 on GeForce 8800 GTS (640 MB): ~30 FPS
- Tower22 on EVGA GeForce GTX 470: ~56 FPS

Almost doubled; that pulled the T22 engine out of the mud. But don't worry, we'll bring that card to its knees again in no time, begging for mercy. More lights, realtime volumetric lightshafts/fog and updated ambient lighting are on the menu.

Work in progress: improved volumetric light. Not blurring a bright spot, but raytracing through space to see "how many particles" were lit. The lower-left corner shows the lightshaft buffer.

FBO Sandwich
Talking about speed. As a programmer, I'm sure you've wondered several times:
"How the hell can Crysis/Half-Life/... run that fast on my machine, while my game runs like a crippled grandma?"
If so, here's one last piece of programmer's advice for 2010.

After the transformation into the Inferred rendering pipeline we discussed earlier was completed, the speed dropped from ~30 to ~22 FPS (on the old card). Inferred Rendering has slightly more overhead, but that drop was ridiculous. Where did we go wrong?! Bad shaders? Maybe the new shadowMap storage technique (I'll discuss that another time)?

Then an old fiend flashed by: Captain Framebuffer. In OpenGL terms, an FBO is a collection of target buffers you can render on. Well, with all those (background) techniques, we change that FBO plenty of times. But as I discovered years ago, when playing around with shadowMaps for the first time, mistakes are easily made. A wrong switch or MRT setting, and your engine's neck snaps like a matchstick. Here are a few important pieces of advice:

Try to prevent switching resolutions
Switching targets always takes time, but especially when hopping from one resolution to another. For example, a pipeline might do this:

- render to 4 1024 x 768 textures for Deferred input
- render to a 256 x 256 texture for a light shadowMap
- render to a 512 x 512 SSAO buffer
- render to another 1024 x 768 texture for depth
- render to a 512 x 512 DOF input buffer

Five switches. With all those techniques, switching is inevitable. But at least you can order things better:

- render to 4 1024 x 768 textures for Deferred input
- render to another 1024 x 768 texture for depth
- render to a 512 x 512 SSAO buffer
- render to a 512 x 512 DOF input buffer
- render to a 256 x 256 texture for a light shadowMap

See that? Only 3 switches instead of 5.
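To make the idea concrete, here's a small Python sketch that counts resolution switches and groups the passes by resolution. The pass names mirror the example above, but the sort-by-area heuristic is my own illustration; a real engine must also respect pass dependencies, so treat the reorder as a sketch, not a recipe.

```python
# Hedged sketch: count how often the target resolution changes, and
# group passes by resolution (largest first) to reduce those changes.

def count_switches(passes):
    """Count resolution changes, including the first bind."""
    switches, current = 0, None
    for _name, resolution in passes:
        if resolution != current:
            switches += 1
            current = resolution
    return switches

def order_by_resolution(passes):
    """Group passes by resolution, largest area first (stable sort)."""
    return sorted(passes, key=lambda p: -(p[1][0] * p[1][1]))

passes = [
    ("deferred input", (1024, 768)),
    ("shadowMap",      (256, 256)),
    ("SSAO",           (512, 512)),
    ("depth",          (1024, 768)),
    ("DOF input",      (512, 512)),
]

print(count_switches(passes))                       # 5
print(count_switches(order_by_resolution(passes)))  # 3
```

Same five passes, two fewer switches, exactly as in the reordered list above.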


Make a FBO for each resolution
Not 100% sure about this, but people say it's best to make an FBO for each possible resolution, instead of only changing the rendertarget of a single FBO. In the example above, we would need three FBOs: 1024 x 768, 512 x 512 and 256 x 256. Each one can have its own depth buffer.

Atlas renderTargets
Use bigger atlas textures to perform multiple passes in a single buffer texture. When having to blur or downscale, you quickly end up with a large number of different resolutions. For example, the HDR technique requires downsampling the average luminance of the screen contents. At first, my engine would do this:
1.- Render luminance values to a 128 x 128 texture
2.- Downscale step 1 texture to 64 x 64
3.- Downscale step 2 texture to 16 x 16
4.- Downscale step 3 texture to 3 x 3

4 switches. But you could also perform everything in a single, larger buffer.

Only 1 switch, oh hurray! By the way, I also render all shadowMaps into a single large atlas texture, but I'll give details about that another time.
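As a sketch of how such an atlas could be laid out, here's a hypothetical Python helper (not the engine's actual code) that places each downscale target side by side in one texture, so the FBO is bound once and only the viewport moves between steps. The 2-pixel padding against sampling bleed is an illustrative guess.

```python
# Hedged sketch: lay the luminance downscale chain (128 -> 64 -> 16 -> 3)
# out left to right inside one atlas texture.

def atlas_viewports(sizes, padding=2):
    """Return an (x, y, width, height) viewport rect for each step."""
    rects, x = [], 0
    for size in sizes:
        rects.append((x, 0, size, size))
        x += size + padding
    return rects

steps = [128, 64, 16, 3]            # the luminance chain from above
for rect in atlas_viewports(steps):
    print(rect)
# (0, 0, 128, 128)
# (130, 0, 64, 64)
# (196, 0, 16, 16)
# (214, 0, 3, 3)
```

The whole chain ends at x = 217, so a single 256 x 128 atlas texture would fit all four steps.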


After simply re-ordering the passes to reduce the number of FBO switches, the framerate was restored.

Don't worry. New content will come. In 2011!

Sunday, December 19, 2010

From Deferred to Inferred, part drie

The final chapter in this dramatic trilogy. We saw Deferred Rendering having relationship problems with his transparent cousins. Inferred Lighting suddenly made its appearance, trying to steal hearts. But Ridge Forrester pointed out that the charming Dr. Inferred has a dark side as well, and accused him of being a fake... Still, translucent Barbara has feelings for the Diffuse and Specular skills of this mysterious Inferred Renderer. How will this end?

Again, a shot with input textures for the Deferred / Inferred pipeline.


Right. As said, I'm not really amused by the stippling and the detail loss that come with Inferred Rendering. But please don't take my word for it. Judge for yourself, as it all depends on the scenery you had in mind. Yet I'm pleased with the separate pass for doing Diffuse and Specular lighting. For the 63rd time: lighting is one of the (technical) key ingredients for "good" graphics. Aside from having proper art resources, of course. However, engines have so many effects these days that it's hard to trace problems once the result isn't quite what you expected. What you see on the screen is not just "albedo x sum(lights)". We have:

- Specular lighting, the shiny gloss on metal, pottery, plastic, wet bricks or polished floors
- Reflections (cubeMaps, mirrors)
- Ambient light
- SSAO, DoF, Fog, noise
- Emissive textures
- And worse, HDR & tone mapping messing around with the colors to bring them into a certain range
- And so on...

If the graphics suck, then what went wrong? Bad shaders? That's an easy answer, but in fact it's often a combination of overdone HDR, wrong reflection quantities, bad color contrast/saturation in the texture maps, or not-so-good light colors. The problem is, it's hard to find the cause.

Obviously, with a separated Diffuse and Specular texture, it's easy to test if at least the basic lighting went properly.

See? Really useful. But maybe not a reason to change your rendering pipeline (again) though... Ok, then maybe I have a few other reasons that may convince you:

- Improved HDR Bloom (blur on bright spots)
- Diffuse blurring for special surfaces (human skin for example)
- Easy to enhance the light contrast / maximum intensity or limit the overall light result
- Can be used as input for other specific shaders

The HDR bloom in my case is often too bright, and the overall screen is too dark because the tone mapper adjusted the eye to bright walls. Let me explain how it works (in my case):
1- Scene is rendered completely (including transparent crap and everything)
2- Average screen luminance is measured
3- Everything brighter than X, depending on the current eye adaptation, will result in a blur
4- The tone mapper scales the color range from the input texture (step 1) into 8-bit screen range, again depending on the current eye adaptation level.
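The measuring, adapting and scaling steps can be sketched in a few lines of Python. The luminance weights are the standard BT.709 ones; the adaptation speed and the Reinhard-style scale function are illustrative guesses, not the engine's actual formulas.

```python
# Hedged sketch of steps 2-4: measure average luminance, move the "eye"
# slowly towards it, then squash HDR colors into 8-bit range.

def luminance(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b   # ITU-R BT.709 weights

def adapt(eye, scene_lum, speed=0.1):
    """Steps 2-3: nudge the eye adaptation towards the measured average."""
    return eye + (scene_lum - eye) * speed

def tonemap(rgb, eye):
    """Step 4: scale an HDR color into [0,1] relative to the adaptation."""
    return tuple(c / (c + eye) for c in rgb)

hdr_pixels = [(4.0, 4.0, 4.0), (0.2, 0.1, 0.1)]   # sun-lit wall, dark corner
avg_lum = sum(luminance(p) for p in hdr_pixels) / len(hdr_pixels)
eye = adapt(1.0, avg_lum)
ldr = [tonemap(p, eye) for p in hdr_pixels]       # every channel now in [0,1]
```

The `c / (c + eye)` curve never reaches 1.0, which is exactly the point: even a value of 4.0 ends up displayable, and the brighter the scene the more everything gets pushed down.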

Say what? If you look outside, you can be blinded for a while, since the sky and sun are much more intense than that dull light in your stinky office. Computers can fake this effect with HDR rendering. Back in the old days, an 8-bit RGB color of 255;255;255 ("1.0" in shader terms) would mean "white". But what is white? Paper is white, yet far less intense than the sun, I guess. One trick to make a sun look brighter than a piece of paper is to render a "bloom" / "blur" around the sun. But again, what exactly is bright? It depends on how many lights are shining on a piece of surface, how bright these lights are, how much the material reflects, and eventually how much light the material produces itself (neon lights, TV screens, ...).

The only problem with 8-bit buffers is that your values stay clamped: 1 + 1 = 1. What? Yes, because 8 bits can't hold higher values than that. With high-range buffers (16 bit or more), you won't be limited anymore. High-Dynamic-Range lighting is just a way to make use of that advantage. You can sum many lights, or use bigger variances in the intensity values. Yet in the end the results still have to be rendered on an 8-bit target: your monitor.

Scale the full-range colored scene into a lower-range target texture. The same happens with your eyes in reality. As you can't see the full intensity range at once, your eyes adjust to a certain level. In games, that would be the average luminance of the scene you are seeing.


The problem with my HDR approach is that bright surfaces (such as paper or white wallpaper) quickly result in bright blurs and darkened scenes if there are a few lights shining on them. This is because that "average luminance" was simply an average of all the result colors in the scene. So a bunch of white papers would quickly be considered "very bright" when lying on a darker wooden table. But... did you ever get blinded by a paper? I was once, but that's a whole different story.

The "blur" / "bloom" should only occur at highly emissive sources (lights), the sky, reflective surfaces (car metal, water), or extremely brightly lit surfaces. Now that we have a diffuse and specular buffer as well, we can focus more on the specular (light reflected straight to your eye) quantities, instead of the color as a whole. The diffuse portion is given a lower weight, which prevents blurs on white chalk walls or tax papers. It also stabilizes the tone mapper: when measuring the average luminance, I largely ignore the specular light. As specular lighting is very dependent on the eye position/direction, it can change with every step you take, while diffuse light remains the same.
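A hedged sketch of that weighted bright-pass: specular light counts fully towards bloom, diffuse only partially, so a white wall under a few lights no longer blooms. The weights and threshold here are illustrative guesses, not the engine's actual values.

```python
# Hedged sketch: weighted bright-pass. Diffuse light is deliberately
# under-weighted so papers and chalk walls stay bloom-free.

DIFFUSE_WEIGHT  = 0.3
SPECULAR_WEIGHT = 1.0
THRESHOLD       = 1.0   # relative to the current eye adaptation

def bloom_amount(diffuse, specular):
    brightness = diffuse * DIFFUSE_WEIGHT + specular * SPECULAR_WEIGHT
    return max(0.0, brightness - THRESHOLD)   # only the excess gets blurred

# A bright white wall (lots of diffuse, hardly any specular) no longer blooms:
print(bloom_amount(diffuse=2.5, specular=0.1))   # 0.0
# A wet, polished floor reflecting a light straight at the camera does:
print(bloom_amount(diffuse=0.5, specular=1.5))   # ~0.65
```

The same weighted sum, averaged over the screen, could feed the tone mapper, so walking past a shiny floor doesn't yank the eye adaptation around.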

Abstract art by an upside down Chinese master? No, the input texture for the Bloom, using a brightpass filter.

Sure, it's still as fake as Hulk Hogan defeating The Undertaker, but at least the overdone blooming and weird luminance peaks are reduced. I can't show you proper final results yet, as I'm still struggling with the new pipeline. Not only was the inferred approach applied; I also had to fix bugs and clean up code. I also added a new method for storing shadowMaps, and a new framebuffer switching mechanism (there suddenly was a big performance drop, more about that another time). But OK, in the shot below, the left wall and those pretty heads would have been blurred in the previous pipeline. Now the blur is only applied to the tiled floor.

Another trick you can do is processing the light buffers before using them. Blur, contrast, saturate, maxOf, minOf, you name it. Do you really need that then? Uhm, maybe not. Though blurring diffuse can be interesting for special types of surfaces such as human skin. I'm not really into that yet, but techniques like sub-surface scattering / skin rendering seem to blur the diffuse light in some cases for a soft appearance. Makes sense. Take a magnifying glass and have a look at a girl. Plenty of bumps to do normalMapping, even through all that make-up. But still soft as silk, so you can't use standard normalMapping. It would make the head look like plastic or 300-year-old stone. Well, I'm sure we'll be looking into skin rendering sooner or later.


All in all, there are probably other workarounds, but these three features convinced me. After all, without the stippling and DSF filtering stuff, this is only a small change in the rendering pipeline, and the additional overhead isn't that scary. And it becomes more attractive to have a try with the transparent filtering techniques from the Inferred Lighting paper... Nothing ventured, nothing gained.


Ow, I heard Santa might bring me a new videocard (after telling him my creditcard number).
Merry Christmas bastards!

Monday, December 13, 2010

I love it when a plan comes together

Where is part three of the Deferred/Inferred story? Well, last weekend I had beer-drinking duty on a little vacation with friends :p So, next week hopefully. Adjusting the rendering pipeline correctly (including some other aspects) is quite a struggle.

As for the game & next demo movie, there are some interesting developments going on that I'd like to share. The chaos in the mailbox with people offering their help was over, but last week there were about 10 replies again. Varying from students who would like to build their skills with this project, artists, web designers, and, also interesting, Pascal community members who asked if this project could become "Open Source"... With limited time and my hands full on programming and keeping the three other team members busy, I have to pick carefully though. Don't carry more than you can hold!


As said before, replying to all these mails is difficult for me. Not that I don't like answering mails, but I just don't want to sound like a jerk when refusing someone's help. As the whole thing is basically based on charity... As the Dutch saying goes: "Never look a gift horse in the mouth."

In the ideal situation, one or two experts reply right away for either sound, modeling, mapping or concept drawing, and two days later a team is formed... Right. In reality, a mixture of qualities, styles and experience levels drop a mail once in a while, and you have to pick really carefully. Don't rush (difficult!!!), and don't put four men on the same task. If I had let everyone in so far, we would already have about five character artists/modelers, for example, while one or two is more than enough. Hey, we're not making World of Warcraft here! I'm not an expert, but I guess mixing different styles and ideas is not a good idea anyway. Steering a ship with 8 captains at the same time...

Work from Jesse. Nothing to do with Tower22, but nice to show nevertheless. Uhm... I still don't have new game pictures anyway.

Nevertheless, I was pleased with the offer from a concept artist who mainly showed environment art. Exactly one of the missing keys in our team setting so far. Hey, that whole skyscraper has to get filled, right? From macro level (global overviews like the Zelda or Metroid maps) to micro level (corridor atmosphere, bizarre environment ideas). Say hello to Jesse Maccabe as our environment artist:
http://www.jessemaccabe.com/

Now that I'm thinking about it, I guess it would be fun to let the modelers/artists/writers and sound composers write "how they do stuff" here on this blog once in a while. A little view into the Tower22 kitchen, aside from spicy programming with Gordon Fuck! Ramsay.

And... we're getting help from another person on the main character of the game. He has worked on a couple of games, including F.E.A.R. 3, and currently teaches at Full Sail University. Hands up for Robert Brown:
http://www.robertakbrown.com/
Right now we are brainstorming about what the player character should look like. No, I'm not telling anything yet, but that rusty robot obviously has to be replaced. It started to malfunction anyway. Well, with six people in total now, it's time to call off the cry for help, as we are pretty much complete. For now. In the future we'll probably need a map & asset modeler, but... let's first make a second demo movie, OK? All in all, I can't complain! Seriously, I really didn't expect to get so many reactions, and certainly not from people with this kind of experience!

Robert's golden handshake

About Open Source... This game is made with ancient alien techniques so far. Delphi 7, OpenGL 1.x and Windows XP Paint, of course. Anyway, the Pascal (programming language) community got interested because of that, of course (thank you!). All those C++ boys keep telling me that Delphi sucks; here, eat that, sucker! But seriously, some were interested in the code. How to reply to that? Since I'm using quite a few free tools, and learned most of my skills from free tutorials/demos/projects, it would be a little bit selfish to keep the cake for myself, right? Yet I refused for now. Why?
- No time (to do it properly)
- This blog gives some instructive information, hopefully. Using that to repay my debts :)
- I worked hard on it for many years. Sorry, but giving it all away right away...
- What if... this project would actually get a chance to become something more serious?

Basically I have no problems with Open Source. In fact, if T22 were released tomorrow, you'd be free to look into the code the next week. I kinda sympathized with id Software for opening up their Quake 2 code (years later, though), for example. But the main focus is to have fun and to create a game/movies for now. I simply don't have time for side projects, teaching students (I would like to though) or very detailed tutorials on this blog. Sorry!

And wouldn't it be stupid to give it all away if this project might get a "commercial chance" in the future (after making more movies)? I'm not much of a materialistic guy (give me a chair and a computer, that's enough), but making a living doing what you like most is something everyone would like. I can't look into the future, but watching your steps is always wise.


Last but not least, we have a second movie idea. Quite a lot more complicated than the first one. Not in size, but it requires more interaction, showing some actual action-puzzle elements the game should have. And a far more bizarre creature than the Meathook guy... Since there are artists now, I'm hoping to post a couple of teasers in the next few months!

Sunday, December 5, 2010

From Deferred to Inferred, part deux

Rest in peace Frank Drebin!

Bleh, still no new pictures to post. Believe it or not, but our little girl thought it was an excellent idea to throw black ink all over the keyboard. So, my dev machine is out of order until I have a working keyboard again, and I was too lazy to copy the whole project to my laptop. Pics or no pics, let's continue our Deferred / Inferred rendering story, OK?

Stop smiling you evil dwarf!

One week ago I explained Deferred Rendering / Lighting, ending with a couple of issues. There is a solution for everything in this world, except maybe translucent rendering in a Deferred pipeline... With translucent stuff I mean glass, fences, Doom 2 monster sprites, tree leaves, grass, and table cloths with a complex pattern, stitched by grandma.

Allow me to explain the problem. In a Deferred Rendering solution, each pixel in the (screen-view) buffers contains information about one, ONE, pixel you can see. Info such as position, color, and its facing direction (normal). But what happens if you have two layers behind each other? Let's say you are looking through a gelatin pudding... The gelatin has to be lit, but also the object behind it. But due to the very limited storage space per pixel (16 scalars in 4 texture buffers), we can only store info for one piece of surface (pixel). And no, blending is not an option. Colors can be blended, but not positions or normals.

"Dude, then simply create two sets of buffers! One for the opaque surfaces, another set for the transparent portion!" Hmm, not that simple. That would work if there were exactly two layers behind each other, but in complex scenes such as your typical Crysis jungle, it could just as well be 10 layers. Think about grass billboards. So... can we throw away Deferred Rendering already?

The common solution is to render all the opaque stuff first, then switch over to traditional "Forward Rendering" to render the remaining transparent geometry. The good news is that the transparent part usually isn't that much, as grass, foliage or fences are mostly simple quad shapes. The bad news is that you have to implement two different types of rendering, making a messy whole. Plus you either have to sort out geometry again, and/or suffer lighting quality loss on the transparent parts. Multi-pass rendering on transparent surfaces can be tricky as well, as the type of blending can differ. Some use additive blending, others multiply or perform a simple alpha test only. My engine "fixes" the problem by activating the most important lights per sector, and then rendering the transparent geometry in a single pass with all lights applied at once.

Inferred Rendering to the rescue!? ... The transparency issue was one of the motivations for an adjusted variant called Inferred Rendering. But does it fix the problem? In my opinion: not really, unfortunately. But I still have to try it out further (and I need a working keyboard). It probably depends on what you are trying to do. Anyhow, it has some other interesting features. But first, let's compare the pipelines:

For extended info about DSF and such, see the links at the bottom
The main differences are the separate lighting pass, and rendering the transparent surfaces into the "info buffers" by stippling them between the opaque pixels. That means there is actually less info available for each pixel (resulting in a somewhat lower resolution, unless you render on up-scaled buffers). The DSF edge filter technique smooths the edges and fills the gaps again though. Yet all forms of interpolation mean quality loss in the end.

The good thing is that transparent geometry can be done in exactly the same way. No different shaders, no light sorting crap, and potentially a lot faster when having many lights + many transparent surfaces. Another small bonus for somewhat limited or older hardware is that we can possibly do with one less info buffer in the first pass, as the albedo color can be rendered later on. Don't be fooled though, the lighting and DSF passes still require additional energy and extra buffers. Last but not least, the edge correction gives you some sort of anti-aliasing, which means less pixelated edges. By nature, Deferred Rendering doesn't have AA, another nasty little issue.

But it still doesn't really work when having, let's say, 10 grass billboards behind each other. As you can guess, that buffer still has a limited set of pixels. Depending on your stipple pattern, you could make 2 or 4 layers for the transparent geometry. Then sort all transparent entities and tell them which layer (stipple pattern) to look at when rendering. YES, you need to perform Z-sorting to do this, but in case you have many transparent surfaces, you should be doing that anyway. But having 2, 4, or 6 layers for that matter, is still not much. Either you have to skip surfaces (which ones?), or accept the rendering bugs. Plus, as mentioned before, you will miss small details (problematic for detail normalMapping), as pixels get sacrificed when sharing the same buffer between multiple layers.
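To picture such a stipple pattern, here's a tiny Python sketch (my own illustration, not the paper's exact layout): with a 2x2 pattern, each transparent layer claims one pixel of every 2x2 block in the G-buffer, so up to four layers can coexist, each at a quarter of the resolution. The DSF pass later filters the gaps back together.

```python
# Hedged sketch: a 2x2 stipple pattern assigning G-buffer pixels to
# transparency layers 0..3.

def stipple_layer(x, y):
    """Which transparency layer (0..3) owns G-buffer pixel (x, y)."""
    return (y % 2) * 2 + (x % 2)

# Print the ownership pattern of a 4x4 corner of the buffer:
for y in range(4):
    print(" ".join(str(stipple_layer(x, y)) for x in range(4)))
# 0 1 0 1
# 2 3 2 3
# 0 1 0 1
# 2 3 2 3
```

When rendering transparent layer N, the shader would discard every fragment where `stipple_layer(x, y) != N`; opaque geometry keeps the pixels no layer claims, or shares the pattern, depending on the variant.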

Why bother then? Well...

- Unless you are rendering jungles or glass villas, how big is the chance you have more than 4 transparent pixels behind each other? Particles, by the way, can still use a simplified lighting method in a pass afterwards, if they need lighting at all.
- Having a separated light-pass got my attention.

Inferred Rendering produces one or two light textures (depending on whether you want colors or intensity only for specular): Diffuse Light & Specular Light. The good thing is that these buffers do NOT contain dozens of other tricks such as emissive light, reflections, ambient or the surface material colors (albedo texture). That allows a couple of useful tricks, including improving HDR bloom and debugging your lights. But boys and girls, that is for next week. Whether you plan to use Inferred Rendering or not, this not-too-difficult and not-too-long paper is a comfortable read:
Inferred Lighting paper by Kircher & Lawrance
And some more details + DEMO/SOURCE by Matt Pettineo:
Dangerzone


And if you wondered why there was a pot of black ink on the computer desk? Well, I had to draw the new Dutch prime minister, Mark Rutte, as a birthday present for my little brother. Not that he is a Markie fan in particular, but I gave him a poster of our prime minister Balkenende 3 years ago as well... ;)

Sunday, November 28, 2010

From Deferred to Inferred, part uno

How are things going with the A-Team? Well, still no mapper, but I'm happy to introduce you to these three men:
Writer, Brian
Our Hank Moody from California. He has many years of experience on several projects, studied Anthropology (very useful for background research), had a career as a musician, and is currently writing a novel. Find out more at his blog:
brianlinvillewriter

Sound & Music, Zack
DJ Zack found some free hours to put into this project. From threatening music to flushing-toilet sound effects. He has already done the audio for some (horror) movies!
Soundcloud

3D (Character) modeler, Julio
His surname and Spanish background make him sound like a hot singer, but the man is actually stuffed with horrific rotten ideas + the skills to work them out. Mothers, watch your daughters when they come home with a Spaniard!


Now that there is a small team, we can focus on the next targets… Such as fishing, paintball, playing board games, and other cozy group events. Ow, and maybe we’ll make a second demo movie as well…

The couple of votes on the poll show that there is an interest in some more technical details. Hmmm... not completely surprising given the Gamedev.com background. Now, I won't turn this blog into something like Humus3D or NeHe. No time, and there are far better resources on the internet for learning the theory or implementations. However, I can of course write more details about the upcoming techniques that are implemented in the engine.


To start with: some adjustments to the Rendering Pipeline. As you may have read before, the engine currently uses Deferred Rendering. I'll keep it that way, but with some tricks borrowed from another variant: "Inferred Rendering". In case you already know what Deferred Rendering is, you can skip this post; next week I'll be writing about Inferred Rendering. Otherwise, here's another quick course. I'll try to write it in such a way that even a non-programmer may understand some bits. Ready? Set? Go.


==========================================================

Good old Forward Rendering:
Since Doom, we all know that lighting the environment is an important, if not the most important, way to enhance realism. Raytracers try to simulate this accurately by firing a whole lot of photons through the scene to see which ones bounce into our eyes. Too bad this still isn't fast enough for really practical usage (although it's coming closer and closer), so lighting in a game works pretty differently from reality. Though with shaders, we can come pretty far. In short:
For all entities that need to be rendered:
- Apply shader that belongs to this entity
- Update shader parameters (textures, material settings, light positions…)
- Draw entity geometry (a polygon, a box, a cathedral, whatever) with that shader.

The magic happens on the videocard's GPU, inside that shader program. Basically, a shader computes where to place the geometry vertices, and how to color its pixels. When doing traditional lighting, a basic shader generally looks like this:

lightVector  = normalize( lightPosition.xyz - pixelPosition.xyz );
diffuseLight = saturate( dot( pixelNormal, lightVector ) );
Output.pixelColor.rgb = texture.rgb * lightDiffuseColor.rgb * diffuseLight;

There are plenty of other tricks you can add in that shader, such as shadows, attenuation, ambient, or specular light. Anyway, as you can see, you'll need to know information about the light, such as its position, color and eventually the falloff distance. So... that means you'll need to know which lights affect the entity before rendering it... In complex scenes with dozens of lights, you'll need to assign a list of lights to each surface / entity / wall or whatever you are trying to render. The final result is then computed as follows:
    pixelColor = ( light1 + light2 + ... + lightN ) * pixelColor + ... other tricks

- Enable additive blending to sum up the light results
- Apply the lighting shader
- For each entity:
    - Pass entity parameters to the shader, such as its texture
    - For each light that affects the entity:
        - Pass light parameters to the shader (position, range, color, shadowMap, ...)
        - Render the entity geometry
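In plain Python, that forward loop (and the three-line shader above) boils down to something like this hedged sketch. Tuples stand in for GPU vectors, one diffuse light type only; nothing here is the engine's actual code.

```python
# Hedged sketch: forward rendering as additive passes. Each light is
# one "render pass" whose result is summed into the final pixel color.

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def saturate(x):
    return min(1.0, max(0.0, x))

def shade(pixel_pos, pixel_normal, albedo, light):
    """One additive pass: the diffuse shader from the snippet above."""
    light_vec = normalize(tuple(l - p for l, p in zip(light["pos"], pixel_pos)))
    n_dot_l = saturate(dot(pixel_normal, light_vec))
    return tuple(a * c * n_dot_l for a, c in zip(albedo, light["color"]))

pixel_pos, normal, albedo = (0, 0, 0), (0, 1, 0), (0.8, 0.6, 0.4)
lights = [
    {"pos": (0, 5, 0), "color": (1.0, 1.0, 1.0)},   # straight above: full diffuse
    {"pos": (5, 0, 0), "color": (0.5, 0.0, 0.0)},   # grazing: contributes nothing
]
result = (0.0, 0.0, 0.0)
for light in lights:                                # one "render pass" per light
    contribution = shade(pixel_pos, normal, albedo, light)
    result = tuple(r + c for r, c in zip(result, contribution))
```

Note how the pixel gets shaded once per light; that repeated work per pixel, per light, per overlapping surface is exactly the cost the next paragraphs complain about.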

This can be called “Forward Rendering”. Has been used countless times, but there are some serious drawbacks:
- Sorting out affected geometry
- Having to render the geometry multiple times
- Performance loss when overdrawing the same pixels

First of all, sorting out which light(s) affect, let's say, a wall can be tricky. Especially when the lights move around or can be switched on and off. Still, it is a necessity, because rendering that wall with ALL lights enabled would be a huge waste of energy and would probably kill the performance straight away as soon as you have 10+ lights, while surfaces are usually only directly affected by a few lights.

In the past I started with Forward Rendering. Each light would sort out its affected geometry by collecting the objects and map geometry inside a certain sphere. With the help of an octree this could be done fairly fast. After that, I would render the contents per light.



Another drawback is that we have to render the entities multiple times. If there are 6 lights shining on that wall, we'll have to render it six times as well to sum up all the individual light results... Wait... shaders can do looping these days, right? True, you can program a shader that does multiple lights in a single pass by walking through an array. BUT... you are still somewhat limited due to the maximum number of registers, texture units, and so on. Not really a problem unless you have a huge number of lights though. But what really kills the vibe is the fact that each entity, wall, or triangle could have a different set of lights affecting it. You could eventually render per triangle, but this will make things even worse. Whatever you do, always try to batch; splitting up the geometry into triangles is a bad, bad idea.

Third problem. Computers are stupid, so if there is a wall behind another one, it will actually draw both walls. However, the one in the back will be (partially) overwritten by pixels from the wall in front. Fine, but all those nasty light calculations for the back wall were a waste of time. Bob Ross paints in layers, but try to prevent that when doing a 3D game.


There is a fix for everything, and that's how Deferred Rendering became popular over the last, let's say, five years. The three problems mentioned are pretty much fixed with this technique. Well, tell us, grandpa!


Deferred Rendering / Lighting:
The Deferred pipeline is somewhat different from the good old Forward one. Before doing lighting, we first fill a set of buffers with information for each pixel that appears on the screen. Later on, we render the lights as simple (invisible) volumes such as spheres, cones or screen filling quads. These primitive shapes then look at those “info-buffers” to perform lighting.


Step 1, filling the buffers
As for filling those "info buffers"... With "Render Targets" / FBOs, you can draw in the background to a texture buffer instead of the screen. In fact, you can render onto multiple targets at the same time. Current hardware can render to 4 or even 8 textures without having to render the geometry 4 times as well. Since textures usually hold 4 components per pixel (red, green, blue, alpha), you can write out 4 x 4 = 16 info scalars. It's up to you how you use them, but my engine does this:

There are actually more sets of buffers that hold motion vectors, DoF / SSAO info, fog settings, and, very importantly, depth.
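A small side note on filling those buffers: normals range from -1 to 1, while a color channel only stores 0..1, so normals usually get remapped when written and unpacked again when read. A quick Python sketch of the idea (helper names are made up, not necessarily what my engine does):

```python
def encode_normal(n):
    # remap each component from [-1, 1] to the [0, 1] color range
    return tuple(0.5 * c + 0.5 for c in n)

def decode_normal(c):
    # inverse remap, back to a signed vector
    return tuple(2.0 * v - 1.0 for v in c)

up = (0.0, 1.0, 0.0)
assert decode_normal(encode_normal(up)) == up
```

In a real shader this is just a `* 0.5 + 0.5` on write and a `* 2 - 1` on read.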
How to render to multiple targets?
- Enable drawing to a render target / FBO
- Enable MRT (Multiple Render Targets). The number of buffers depends on your hardware.
- Attach 2, 3 or more textures to the FBO to write to.
- Just render the geometry as usual, only once
- Inside the fragment shader, you can define multiple output colors. In Cg for example:
out float4 result_Albedo : COLOR0,
out float4 result_Normal : COLOR1,



Sorry for this outdated shot again; since I can't run the engine at the moment because of the pipeline changes, I wasn't able to make new shots. By the way, the buffer contents have changed since my last post about Deferred Rendering, as I’m adding support for new techniques. Anyhow, here is an idea of what happens in the background.


Step 2, Rendering lights
That was step 1. Notice that these buffers are not only useful for (direct) lighting; especially the depth info and normals can be used for many other (post) effects. But we’ll keep it to lighting for now. Instead of sorting out who did what and where, we focus purely on the lights themselves. In fact, we don’t render any geometry at all! That’s right, even with 5 billion lights you only have to render the geometry once, in the first step. Not entirely true if you want translucent surfaces later on as well, but forget about that for now…

For each light, render its volume as a primitive shape. For example, if you have a pointlight with a range of 4 meters at coordinates {x3, y70, z-4}, render a simple sphere with a radius of 4 meters on that spot. Possibly slightly bigger, to prevent artifacts at the edges.
* Pointlights --> spheres
* Spotlights --> cones
* Directional lights --> cubes
* Huge lights(sun) --> Screen quad
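To give an idea, picking the proxy shape and padding the range could look like this (a hypothetical Python sketch; the names and the 10% margin are just examples):

```python
# render the volume ~10% bigger than the light range, to avoid edge artifacts
MARGIN = 1.1

def proxy_for(light_type, light_range):
    # map each light type to the primitive shape that bounds its influence
    shapes = {"point": "sphere", "spot": "cone",
              "directional": "cube", "sun": "fullscreen_quad"}
    return shapes[light_type], light_range * MARGIN

# the 4 meter pointlight from the example above becomes a ~4.4 m sphere
shape, radius = proxy_for("point", 4.0)
```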

Aside from the sun, you only render pixels where the light can affect the geometry. Everything projected behind those shapes *could* be lit. In case you have rendered the geometry into this buffer as well, you can tweak the depth-test so that the volumes only render where they intersect the geometry.

Now you don’t actually see a sphere or cone in your scene. What these shapes do is light up the background. Since we have those (4) buffers with info, we can grab their pixels inside the shader with some projective texture mapping. From that point on, lighting is the same as in a Forward renderer. You can also apply shadowMaps, or whatever else.

< vertex shader >
Out.Pos = mul( modelViewProjectionMatrix, in.vertexPosition );

// Projection texture coordinates
projTexcoords.xyz = ( Out.Pos.xyz + Out.Pos.www)*0.5;
projTexcoords.w = Out.Pos.w;

< fragment shader >
backgroundAlbedo = tex2Dproj( albedoBuffer, projTexcoords.xyzw );
backgroundNormal = tex2Dproj( normalBuffer, projTexcoords.xyzw );
result = dot(backgroundNormal.xyz, lightVector ) * backgroundAlbedo;
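The projective coordinate trick above is just the usual clip-space to [0,1] remap; tex2Dproj does the divide by w. The same math in Python, for reference:

```python
def proj_texcoord(clip_pos):
    # clip_pos: (x, y, z, w) after the model-view-projection transform.
    # Storing (xyz + w) * 0.5 and letting tex2Dproj divide by w is the
    # same as remapping NDC coordinates (xyz / w, in [-1, 1]) to [0, 1].
    x, y, z, w = clip_pos
    return ((x + w) * 0.5 / w, (y + w) * 0.5 / w, (z + w) * 0.5 / w)

# a vertex in the center of the screen maps to uv (0.5, 0.5)
center_uv = proj_texcoord((0.0, 0.0, 0.0, 1.0))[:2]
```

So every pixel of the light volume fetches exactly the screen pixel it covers from the info-buffers.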

Since every pixel of a light shape only gets rendered once, you also fixed that “overdrawing” problem. No matter how many walls there are behind each other, only the front pixel will be used.

Thus: no overdraw, only necessary pixels getting drawn, no need to render the geometry again for each light… One hell of a lighting performance boost! And it probably results in even simpler code, as you can get rid of the sorting mechanisms. You don’t have to know which triangles lightX affected; just draw its shape and the rasterizer will do the magic.



All in all, Deferred Rendering is a cleaner and faster way to do lighting. But as always, everything comes at a price…
- Does not work for translucent objects. I’ll tell you why next time.
- Filling the buffers takes some energy. Not really a problem on modern hardware though.
- Dealing with multiple ways of lighting (BRDF, anisotropic, …) requires some extra tricks. Not impossible though. In my engine each pixel has a lighting-technique index, which is later used to fetch the proper lighting characteristics from an atlas texture in the lighting shader.


Does that “Inferred Rendering” fix all those things? Nah, not really. But I have other reasons to copy some of its tricks. That will be next time, Gadget.

Sunday, November 21, 2010

Revenge of the nerds

What can I say? The movie was received very positively pretty much everywhere. Of course a few improvement tips here and there, but overall I couldn't wish for better reactions! Especially the "creepy", "goose bumps" and "peed my pants" comments are heart-warming! After all, it is a horror game, so for me delivering goose bumps is like delivering pizza for a pizza courier. Core business. So… a BIG thanks! Really, feedback (including criticism) from you is the (red) diesel for projects like these. Oh, and while browsing around, I saw the project was even mentioned in an online game magazine, http://www.thegamingvault.com. Pretty cool huh?

* Ow, and maybe funny to ask. Did you find the tiny “Easter Egg” in the movie? Tip: “I am a liar!”


But enough ego trippin’. This is just the beginning. I'll have to resume code-stamping, and try to assemble a team to lift this project to the next level. Speaking of which… Holy crap. One day you have nothing, wondering if anyone will ever contact you at all. The next day the mailbox is filled. Making a movie and asking on gamedev wasn't a bad idea (so if you ever want to start up a project, you could follow this route).

I was kind of nervous before posting the movie and a "Please help me, I'm a little puppy with big brown eyes" post. I'm a naive Goofy who can't say "No.". That's very sweet, but that attitude can get you into serious trouble when it comes to “doing business”. I knew that IF people started applying for the project, I would have to:
- Watch out for those who want to ride on your work, making profit from it or even stealing ideas
- Kindly reject help if it doesn't fit in the project
- Decide who's in and who's not

The whole game concept is a fragile thing, a Ming vase. As it depends largely on the story and a few not-seen-before game mechanics, I really have to watch it. If I told all my ideas, EA Games or another big guy who can produce games on the fly would have it on the store shelves tomorrow. Or the story just doesn't work anymore, because anyone with interest in the project already knows the twist. Are you going to watch a movie if you already know the end? If it was Predator, Rambo or Robocop… YES, of course. But this genre is different. You don't play Silent Hill for entertainment. For some it's even so fuck'n scary that they don't even want to play it. But the story has to be found out; it's your duty.

The beautiful thing about the internet is that people get in touch easily. Just look at any blog or forum. Americans discuss games with Iranians. Vikings share their jokes with South Africans, and an Asian gets compliments from a Martian. But on the other hand… you don't really know who's on the other side. You all think I'm Rick from Holland, but in fact I am prince Mambassa al Bedala III from the African federation.



As for rejecting people that offer their help: that also hurts. You simply can't invite everyone to the party. But having to say "Sorry, no" is difficult. I never want to hurt someone's feelings by telling them "you're not good enough". Certainly not when they are offering their precious time, for FREE! I don't want to sound cocky. Nevertheless, Microsoft won't put everyone on the Windows 8 core code either, just out of compassion. Now, Microsoft pays, while I can't. But that doesn't mean I don't have serious plans.

From my own experience, amateur/hobby teams often die early from taking on too much ballast at once. The whole development has to run slick, like a well-oiled machine. But you can't expect to manage a 15-man army with different skills, personalities and working paces when you are busy yourself. I have work, extra work, girl + kid, friends, and my own programming tasks on this project. That's why I would like to start with a small team. And if possible, keep it small. So basically, I have to make decisions. Our little boat only has a few seats!



So, what’s the status, doc? I’m still talking with a couple of guys, and if everything goes well the project will be extended with:
- Story & biography writer(s) / Game concept document
- Sound & music composers
- 2D texture artist
- 3D (character) artist (for making the next ugly bastard in a demo movie)
We’re not complete yet, as I would also like a (2D) concept artist, mapper and 3D asset modeler. But there is still a thread running on Polycount, so I have to fish patiently. Yeah, patiently… If I could do it over again, I would send each replier an automated “sign up” form generated by C3PO, asking for their portfolio and some other common questions. Then wait X weeks to sort out all the forms, and reply to the winning tickets… Ehm, sounds so impersonal. But I also hate to act like an “America’s Next Top Model(er)” jury member. Difficult, difficult.

All in all, this is a whole new learning trajectory. Now that there are people kind enough to offer their (free!) services, I must grab the chance with both hands. The good thing is that I can focus more on the programming part again, and hopefully these guys will create far better work than I could. But don’t forget: you now also have to lead these people. You don’t have to be a babysitter, but each individual wants to be taken seriously. Which means you’ll have to provide enough fun & interesting tasks, and give constructive feedback (including being honest when the delivered work isn’t that good).

Am I ready for it? Well you can’t prepare yourself perfectly for everything in life. Just like becoming a daddy for the first time, you will never have that feeling “Think I’m ready to make one now”. Sometimes you have to stop thinking and just do it. Set sail mateys!

Saturday, November 13, 2010

Movie!

Grab a 5 liter cola, put on your 3D goggles, and get yourselves some ear protectors to survive the noisy sound. Yes, the quality is not superb (I didn’t try the DVD recording thing yet), but at least there is finally something to look at. Now the verdict is up to you, shoot me! But please don’t kick me in the balls ;)

Well, you can’t expect too much from a half-finished engine + some programmer art. But since this is supposed to be a horror game, I hope this little movie gives you a few sweaty thrills. Just a little bit. The problem with being the “producer” of such a thing is that you really don’t know anymore whether your product is good, bad, scary, or just plain stupid. After programming, playing, watching and thinking about it at least 7 million times, I really don’t have a clue what others may think. Yes, of course I tested it first on a big television with my girlfriend, but she is also afraid of tiny spiders, thunderstorms 200 kilometers away, me, cheese, and everything else. So that’s not a really good reference.

For that matter, I’m quite relieved it’s finished now. Not only because it is a first milestone, but also because I can finally move on to something else. I haven’t really been programming bigger techniques the last 5 months, such as AI or a new ambient lighting system.

Don't know if it matters, but it was encoded with H.264, MP4 format. In case you can't play it or something, please let me know. I have zero experience with online movies and stuff.

In the meanwhile, let’s see what this movie does. As I have said a few times before, one of the goals of this blog and demonstration movie is to hopefully attract a few talented artists. So I can focus more on the technical part, while the creative gurus make dreams come true. Just for a start, a mapper/modeler and a concept/texture artist could make life so much easier. I’d love to see what a next movie would do with the touch of some true artistic talent.

Maybe that is wishful thinking, but you’ll never know if you don’t give it a try. One way or another, if I ever want this project to grow into something more serious, I can’t do without them. The days of programming Pac-Man in the attic are over.

In case you are reading this as a programmer: yes, I’ll probably need a few experts on specific matters (physics, IK, sound for example) as well, but not yet. Maybe nothing will change in the next months, but if, IF one or two artists get in touch, I first want to put my energy there. From other amateur hobby projects I’ve learned not to choke yourself in your own enthusiasm! Take it easy, don’t rush! First things first.


Finally, is there something to say about this movie? Well, those who have read this blog the last months have probably seen most of the visuals and techniques already. However, I should add that some content was borrowed from other games:
- Half-Life 2: some of the wall and floor textures, a few footstep sounds
- Doom 3: many of the sounds, a few blood decals
- Crysis Warhead: a concrete texture

To be very honest, I have no idea whether this is illegal or not. Yet another reason to search for some artists quickly, so we can replace that content (which means we’ll also need a sound engineer in the future)! How about the TO DO’s? Alright, here’s the 2011 plan:
• Hopefully find one or two talented artists, and create a second movie
• AI module (improving navigation mesh, task system, …)
• Realtime Ambient 2.0
• Volumetric fog & lightshafts
• Adjusting the rendering pipeline to something like Inferred Lighting
• FMOD sound, maybe
• Switching over to Newton 2.0 (still using the older one)
• And tweak about anything else

Ok, I’m going to rest for a moment. All coding and no play makes Jack a dull boy.

Sunday, November 7, 2010

Leaked sex-tapes!

With that title, I'll probably have 10 times as much hits. But no, let's talk about recording games.

Well... what a drama. I should have known it. Anti-climax. Done with programming (almost), 3D models ready, everything works... except the recording part. There are quite some tools on the market that will capture video and audio while playing a game: FRAPS, KKapture, ZDSoft recorder, Wegame, xFire... and probably I forgot a dozen more. From what I read, FRAPS is one of the best options, although it costs a few bucks ($37, fair enough). My first attempts to record something were awful though.

KKapture didn't work right from the start. Maybe because the game isn't actually full screen yet (embedded view in the Editor). Wegame works pretty well… except that the keyboard input is suddenly missing. Now that is kind of a problem. Then there was FRAPS. Three issues:

- Sound crackles. A bombarded World War I gramophone player sounds better.
- The file quickly grows to 4 gigs... which halts the recording, because my old FAT32 file system doesn't allow bigger files (a similar limit as the maximum 4 gigs of RAM on a 32-bit system). FRAPS doesn't compress much, to preserve speed and quality, so that means massive files.
- Performance... Scientists calculated that
"heavy duty game + recording = 300 kg lady rolling in syrup".
The framerate dropped to 18, maybe 20, at its best.

Now what? Record it from the screen with the handycam? Maybe I'll do that if this game ever reaches E3 or something, to create a hype with so-called “illegally recorded content!” :). Nah, that's not going to work. So, if you are in trouble, who are you going to call? Ghostbusters? No, the A-Team of course, fool. Here's some advice:

- If you are missing audio, make sure none of the related Windows volume bars are muted.
- Reduce the volume bar(s) used for recording. This can take away quite a lot of crackles. Just avoid very loud noise.
- Stream the video to a fast hard drive. If possible, a different drive from the one holding the game, Windows, virtual memory, etc.
- 20 or 25 frames per second is enough to create a movie. It won't be HD or anything, but better something than nothing at all.
- If the 4 gig limit is a real problem, you may want to switch to an NTFS file system instead of FAT32 (if you aren't already using it).
- And my true savior: run the game at a lower screen resolution, 1024 x 768 for example. Fewer pixels = smaller video files, lower recording requirements, and maybe even a somewhat faster game, making it easier to control the whole thing. Someone still has to ride the camera, right? Yes, the quality will be lower, but then again the pixels get smeared/blurred by the compression anyway.
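To get a feel for that last tip, here's a rough back-of-the-envelope of the raw (uncompressed) data rate. FRAPS does compress a little, so real files are smaller, but the ratio between resolutions holds:

```python
def raw_mb_per_second(width, height, fps, bytes_per_pixel=3):
    # uncompressed RGB video data rate, in megabytes per second
    return width * height * bytes_per_pixel * fps / (1024 * 1024)

full_res = raw_mb_per_second(1600, 1200, 30)  # e.g. a high-res capture
low_res  = raw_mb_per_second(1024, 768, 25)   # the suggested fallback: 56.25 MB/s
```

At the lower resolution and framerate, that 4 gig FAT32 limit buys you roughly twice as many seconds of footage.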

With these tricks, I can let FRAPS record "good enough". Still a little bit jerky and with low-quality sound (especially for the loud stuff), but it is somewhat acceptable. I'm no expert at this stuff, but tools like VirtualDub may help you filter out some of the remaining crap afterwards.

Part of the movie is a proper flow of events of course. At 00:32 Jean-Claude van Damme walks in, etcetera. The debug text is there to see which triggers went off. Oh, if you wonder who "Johny" is: I had to give that poor guy a name. Sounds more comfortable than "monster1" or "beast_on_hook".


A really good solution would be recording via an (external) device, of course. Unfortunately my "studio", erm, little corner in the living room, doesn't have such professional hardware. Except for the 50-year-old tape recorder I once got from my father. Mom and dad DO have a DVD recorder though. So I was thinking...

My videocard, a GeForce 8800 GTS 640 MB, has a DVI output. The recorder, which normally picks up its signal from a television, has SCART. YouTube & Google "dvi to scart" give all kinds of ideas to try out. Maybe I can directly connect the recorder to the videocard & soundcard outputs. Or otherwise indirectly, via a (modern digital) TV. That is certainly worth a try. If it succeeds, I'll tell you how I did it.

Now, let's hope I can put a movie here next week. It’s about time, don’t you think? I’m already practicing the script (the route I’ll walk, and where to look).

Monday, November 1, 2010

A long time ago


Hey, soft particles. Where did they come from?

Thanks to the profiles of the followers here, I noticed there are quite a few blogs that tell a fictional (game) story. Like a book, adding a new chapter each week (or month). Or like Half-Life 2, adding a new episode each... hundred years. Come on man, hurry up already. Anyway, how about this? Put on your pyjamas, grab some hot chocolate milk, and gather around the fireplace. Grandpa is going to tell a story.

---------------------------------------------------------------------
Day 7, Sunday 05:01 AM
---------------------------------------------------------------------
Didn't forget to turn off the alarm, but I woke up anyway. Normally I would feel relieved and turn around with a little smile. “Just a mistake, it's weekend", that wonderful feeling. But I stood up and left bed. Sleeping here doesn't feel comfortable. There is no light in the bedroom, it's warm, moist, oppressive. Even after a week the whole apartment still feels strange. Not hostile, but certainly not like home either.

Weekend, Sunday, my day off. Maybe some relaxation will break the tension. Not that my new job has been hard on me though. Mopping some floors, painting a wall, replacing a light bulb, locking the hall doors at 23:00. Or delivering post. As the caretaker of this building I work long days, but there is no pressure. So far I haven't seen a chief or boss anywhere. Only a few instruction letters and a couple of calls from the... boss. I don't really know his name.


Thinking about it, I haven't seen a single soul the entire week. No co-workers, no residents. No elevators going up or down. No one looking in his mailbox, no one going to work. Nobody. The apartment to the right of me seems to be uninhabited. And the apartment on the left... well, there is none. There is no door where it's supposed to be.

Now maybe my floor just isn't really occupied. I haven't been to the lower floors yet. There should be a few shops and small restaurants there. Yet, strangely enough, I smell, see and even hear traces of life. No matter how many times I clean up, there is new litter every day. In my own hall, the "old woman" painting has been replaced with a windmill. And even my own mailbox gets filled by... someone. But more than that, I can hear them. Television shows behind locked doors, gramophone music, whistling kettles, creaking pipes, footsteps. Ironically, all the silence makes you hear the smallest details, opening up a whole new world of noise.


Yes, I could use some distraction before I start seeing things. A walk in the park maybe. If there is one nearby. I haven't had a proper look from my balcony yet, but I can't see the streets. Too dark and too much fog this autumn. To be honest, I have no idea how high up I really am. The stairway shows a number '1' at my floor, but two floors lower it's number '12' again. All I can see from here are the upper floors and rooftops of other apartment blocks. And come to think of it, I haven't seen much sign of life there either. As I write, I'm looking through the kitchen window. The only lights that shine in the building across are the cheap corridor chandeliers...
Then again, it's Sunday 05:12 right now.
------------------------------------------------------------------------------

Excuse me for the writing style, English is hard enough already, let alone writing a story. But it should give a few details about the game story / setting. Hope you like it.



See that? This cloud of crap is just a bunch of quad sprites. Nothing fancy. However, see the difference? In the left picture the quads clearly intersect with the environment (floor, walls, ceiling). In the right one they still do, but a little trick hides this ugly artifact.

Now the technical portion for this week. Although this isn't really urgent at the moment, it has been added anyway: Soft Particles. All the cool kids in town already have them. Unreal5 has Soft Particles?! No way! See? Can't do without.

Luckily, implementing this technique is as easy as smearing peanut butter on your head. For a change, you won't need 6 different passes, new types of geometry data, or complicated shaders. The only thing you need is a depth map. If you are using DoF, SSAO or other (post) effects, chances are good you already have such a texture. If not, you should really start making one. A depthMap rendered from the viewer's perspective is like potatoes or rice: a basic ingredient for many dishes.

How does it work? Just render your sprite as usual, but include the depthMap. At the end of the pixel shader, compare the scene depth with the particle depth. If the difference is small, the particle pixel is about to intersect the environment. If so, fade out that pixel:

< vertex shader >
Out.vertPos = mul( modelViewProjMatrix, in.vertPos );
Out.projCoords.xyz = (Out.vertPos.xyz + Out.vertPos.www) * 0.5f;
Out.projCoords.w = Out.vertPos.w; // tex2Dproj divides by w later

< fragment shader >
zScene = tex2Dproj( depthMap, in.projCoords );
zParticle = in.projCoords.z / in.projCoords.w; // depth of this particle pixel
// scene behind the particle gives a positive difference
float difference = saturate( (zScene - zParticle) * factor );
// The smaller the factor, the bigger the fadeout distance.

// Now the nVidia paper I read about this adds a little more juice with this
// smoothing function:
float softF = 0.5f * pow( saturate(2*(( difference > 0.5) ? 1-difference : difference)), curveStrengthFactor );
softF = (difference > 0.5) ? 1-softF : softF;
result.alpha *= softF;
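For reference, here is that fade curve as plain Python (I compute the difference as scene-minus-particle, so it grows as the particle moves in front of the geometry; `curve_strength` stands in for the curveStrengthFactor):

```python
def saturate(x):
    return max(0.0, min(1.0, x))

def soft_fade(z_scene, z_particle, factor, curve_strength=2.0):
    # 0 where the particle touches the geometry, 1 once it is far in front
    difference = saturate((z_scene - z_particle) * factor)
    # nVidia-style smoothing: mirror around 0.5, apply the curve, mirror back
    t = difference if difference <= 0.5 else 1.0 - difference
    soft = 0.5 * pow(saturate(2.0 * t), curve_strength)
    return soft if difference <= 0.5 else 1.0 - soft
```

Multiply the particle's alpha by this factor and the hard intersection line disappears.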

And that's pretty much it. Not a next-gen graphics blaster technique, but it comes at a cheap price, and obviously intersecting particles are so 2006...


Movie? I need 3 more sounds, one more shader, and to tweak the camera. But more difficult is getting the damn thing recorded. FRAPS is a little bit slow, and the recorded sound is even crunchier than your grandma’s unshaved legs. You'll be updated soon.

America's funniest home videos

Monday, October 25, 2010

Ghettoblasters

Currently I’m finishing the cinematic/scripted events. That means tweaking the timing of triggers, putting the camera in the right positions, AND gathering sound effects. After all, where is the thrill without any sound?

I don't have to tell you that movies and games need audio effects and music to create a dramatic scene. Horror and thriller movies can't do without an angry synthesizer while that clumsy stupid woman explores that dark house with a kitchen knife, in her night-dress of course. When talking about game programming, you probably think about graphics, but having a groovy sound is at least as important. In fact, I dare say it might be even more important for some game genres. Cool graphics still fall flat when your monster moans like a little girl. Some shooters completely fail just because the weaponry sounds like an air pellet rifle. And even without much visuals you can still create a bone-chilling climax with the right eerie, psychedelic sound. Yet, strangely enough, games are rarely praised for having supreme audio. Probably because it’s less of a technical challenge. However, composing the right tunes is far from easy.


I'm afraid I won't be able to ship this movie with all the proper sound effects I would like to have, simply because I can't produce them myself. I can draw some textures, do models or even make a few (stiff) animations, but audio is a black hole. Except that we once made some house music by recording farts and pasting them into a bouncing beat with Fruity Loops. Years ago. But that is not really going to help, I guess. Another option is to browse the Internet. You can find quite a lot when searching the web for free sounds, or even buy complete CDs of SFX for a reasonable price:
- http://www.flashkit.com/soundfx/
- http://www.findsounds.com/

But this scenery is so specific, and based on such dreamy unreal situations, that the default sets of M16 gunfire, yelling grandma (CrazyRingtones®), Ferrari engine, sweet puppy barking, baby burps, or jackhammer & sweaty construction worker are not the stuff I need. I'm not much of a sound composer, but the things I have in mind are... how to explain it?

The environment has to play mind tricks on you, so crazy ambient loops are key ingredients. The environment is silent and desolate, yet you hear lots of unexplainable, weird things. As if the building "talks". Going mad from all those voices and the chaos in your head. As for the monsters, I'd like them to shut up for a change. No catchy phrases such as "GRRR!!!", "AARH!", "See you in Hell!" or "Die!". The monsters aren't really mad at you to start with, so they just chatter unintelligible gibberish. If they make any sound at all. Maybe the best comparison is the Silent Hill series. At least, that's what I would love to see, uhm, hear. But so far I'm completely dependent on whatever sound I can find.

A little bit difficult to post sound screenshots. So you'll have to make do with this... kitchen.

So, my best chance was having a peek into the Doom 3 sound library. Hundreds of horrific ambient samples such as metal bending, demonic computers, buzzing lights, monster moans, smooshy fleshy jerky sounds, evil praying monks, and so on. Now, Doom 3 is a lot more noisy and satanic than I’m aiming for, but there certainly are some cool effects to be found here. The entire game is stuffed with threatening ambient loops, crazy machinery and other hellish sound. So I had a look in their *.PAK file. I know that is stealing, but I'm just an innocent boy, only playing, fingers crossed. Let's hope I can find a good sound engineer one day. But then again, how the hell can I attract a qualified boy/girl with a dull soundless movie? Precisely. Forgive me, John Carmack & co. By the way, it was Doom 2 that got me thinking about making games in the first place, 15 years ago, so don't blame me :)


All in all, that was quite a different ride. After gathering a collection of (creepy) sounds, I started writing a storyboard. A sound storyboard, I mean. Close your eyes and imagine walking your way through those hallways, then paste in the right ambient loops or “shock” effects… Not easy! It's not just grabbing whatever sounds cool; the whole composition has to align. You can't mix Coldplay with Rammstein and Jive Jones either (please God, no). Where is Mozart when you need him?

Serving cold beer right here. Or maybe better not. The reason I'm posting on Monday is, as usual, an epic hangover from last weekend. Burp, I think I feel it coming again... -running to the sink- -clattering sound effects, with meatballs-

Talking about sound & games... Don't you guys miss some kick-ass quality soundtracks in (action) games? Most games these days merely use vague ambient background loops. That would include what I have in mind. Quite different from the catchy Super Mario tunes, the heavy metal tracks in Quake 2 (Sonic Mayhem) or Carmageddon (Fear Factory). Spacy house music in Conquest Earth (Eat Static), or… Quite a pity. What is the first thing you think of with such games? Exactly, the music. Often annoying, but some of it is really good. In fact, I have quite a lot of game MP3's in my WinAmp list. My absolute favourite game music?
- Command & Conquer / Red Alert (Frank Klepacki)
- Castlevania IV (snes). Really, for a cheesy 1-bit sound card, it is awesome classical music
- Crusader No Remorse (techno-house-rock made on a Commodore by Andrew Sega)
- Almost forgot, Little Big Planet. Makes me as happy as a baby

How about yours?

Sunday, October 17, 2010

Another take on lightshafts

The original two-month planning for that movie was a little bit optimistic. But if you ever did some programming you’ll know that is a common mistake, especially when working on something new. Someone asks how many weeks you need to create some shitty importer tool. “Weeks?! Minutes!”, you shout proudly. But before you know it, you've already wasted 24 hours on some stupid pointer bug, or on the ever-changing list of demands. Unforeseen consequences.

But I can pretty safely say now that the movie will be finished somewhere around mid-November, and that even includes some margin. Unless... another pointer error strangles me. Or recording with FRAPS becomes a pain in the ass (I'm kind of worried about the framerate, full-screen & sound). I won't have each and every point accomplished as I would have liked, but as a programmer you have to learn to make decisions and finish things, otherwise you'll stay in the early-alpha-development stage forever. There is always a bug, something that could have been done better, or a new feature you would like to add last minute... such as improved fog & lightshafts…

I stumbled over this nVidia paper, and with my love for the graphical side of game programming, I couldn't resist. Improved volumetric light/fog is definitely one of the most important effects I'd like to realise soon. It can't hurt to have a quick peek, right?

An older shot with a crunchy tobacco filter. The volumetric light here is just an additive cone. Still, this effect can dramatically improve your scenery. Don't mind the pink picture btw, that is the dynamic ambient lightmap (one of the things to replace in 2011)

For your information: volumetric light is the effect you typically see when light falls into a dusty room. You can't see light rays of course, but when the air is filled with tiny particles such as dust or smoke, the light collides and scatters a little bit. Fog is basically the same effect: light colliding with (local) clouds of condensation. However, graphical applications do not have this effect by default, as lighting is usually calculated on the receiving surfaces (walls, floors, objects --> polygons), not on tiny invisible particles floating in the air. We could fill the space with millions of little dots and apply lighting to them, but that would be tedious.

So far I considered two ways of doing volumetric lighting (apart from simply modeling a lightshaft trapezoid with a dust shader into the scene). First there is the traditional pile-of-quads method. Wherever you expect lightshafts, you create a "volume" of receiving geometry. Not tiny particles, but a (dense) stack of transparent quads that light up where the lightrays intersect them. By mapping the shadowmap of a lightsource onto those quads, you can get quite accurate results. However, the downside is that you need to put damn quads everywhere, which is especially nasty if the light can move around. Think about a day-night cycle where the sun shines into your building from many different angles. On top of that, having lots of transparent quads also takes its toll on performance. Using fewer quads can fix that, but that gives jaggy results again.

Another method is a post effect, using blur streaks. This is what my engine currently uses.
1- Render a darkened variant of the scene. Only the lightsource (flares) and the skybox are colored, the rest is black.
2- For each screen pixel, step over the screen towards the lightsource's (2D) position. When passing bright pixels, add them to the result pixel (based on distance and strength settings). This creates streaks around the bright spots you rendered in step 1.
3- Blur the results to smoothen.
4- Render the result on top of the normal scene (additive blend).
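The streak pass (step 2) can be sketched in plain Python, assuming a tiny grayscale "bright pass" image stored as a 2D list of floats. Names like num_samples and decay are just illustrative settings, not engine code:

```python
def streaks(bright, light_x, light_y, num_samples=8, decay=0.8):
    """For each pixel, step towards the light's 2D screen position and
    accumulate the bright pixels passed along the way (step 2 above)."""
    h, w = len(bright), len(bright[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # direction from this pixel towards the light, split into steps
            dx = (light_x - x) / num_samples
            dy = (light_y - y) / num_samples
            sx, sy = float(x), float(y)
            weight = 1.0
            total = bright[y][x]
            for _ in range(num_samples):
                sx += dx
                sy += dy
                # clamp so off-screen lights don't sample outside the image
                ix = min(max(int(sx), 0), w - 1)
                iy = min(max(int(sy), 0), h - 1)
                total += bright[iy][ix] * weight
                weight *= decay  # samples further away count less
            out[y][x] = total
    return out
```

A real implementation would do this in a pixelshader on a downscaled buffer; the blur (step 3) and the additive blend on top of the scene (step 4) follow afterwards.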

This can give quite good results for a cheap price. There is one serious drawback though: you'll have to actually see bright pixels (skybox or the lightsource), otherwise there is nothing to smear out. It works for open areas, such as skylight falling in through the treetops. But for hallways with windows on your left or right side, it won't work unless you can actually look outside through the window.



Here's a third method. Not really new either, but maybe not used that much yet due to hardware performance limitations. It's also a post-effect, one that "only" requires a depth buffer of the scene (from the view perspective) and a shadow-depth map from one (or more) lights. I already had these ingredients, so that shouldn't be a problem. The next step is to fire a ray into space for each screen pixel. You know how far it will travel thanks to the scene-depth buffer. Now take a (big) number X of samples along the ray path towards the ray collision point. For each sample, check if it intersects a light volume. Or other types, such as local fog volumes. The more samples the ray hits, the denser the shaft/fog or whatever kind of volumetric effect for that screen pixel.

Don't mind the crappy picture. At least I can understand it.
Intersections can be checked in a similar way as shadowmapping. For each ray step, calculate the world position and check if it can be “seen” by the light by comparing depths. Or check if the ray position is inside a fog volume. The theory is pretty simple; however, turning this into a practical realtime effect is another story. I can't tell you all the implementation details, as I simply haven't coded it yet. Besides making a fast raycasting pixelshader (plus some additional shaders to blur and upscale from a smaller input buffer), I'm facing another problem here... environment awareness.
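Just to make the idea concrete, here's a rough Python sketch of the march along one pixel's ray. The visible_to_light predicate is a stand-in for the actual shadow-map depth compare; all names here are made up for illustration:

```python
def shaft_density(cam_pos, ray_dir, scene_depth, visible_to_light, num_steps=32):
    """March from the camera to the surface hit (known from the scene
    depth buffer) and count how many samples the light can 'see'.
    Returns a 0..1 density for the shaft at this screen pixel."""
    step = scene_depth / num_steps
    hits = 0
    for i in range(num_steps):
        t = (i + 0.5) * step  # sample halfway along each segment
        # world position of this sample along the (normalized) ray
        p = tuple(c + d * t for c, d in zip(cam_pos, ray_dir))
        if visible_to_light(p):  # would be a shadow-map depth compare
            hits += 1
    return hits / num_steps
```

The same loop could also test the sample position against local fog volumes instead of (or in addition to) the shadow map.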

Demo programs are usually fixed on a simple, small static location. But in a large complex environment, the settings can differ for each square meter. Some hallways are filled with dust while others are clean. One sector has slight orange fog while its neighbor needs thick blue smoky fog.

I was thinking about encoding “particle info” into a 3D texture, but unless you’re making dynamic cascaded textures like Crytek does for its new ambient technique, you’ll run into problems as well. 3D textures can’t be too big, and as you move through a big roaming world, that texture needs to move with you as well. I think I have some easier / cheaper tricks to inform the shaders how to render shafts & fog, but I need some more time to work it out. How about… two months?

Finally some quality TV. In case you wondered what I did last week: the curtains somewhere in the background. Blurred out by a lot of Depth of Field.

Monday, October 11, 2010

Art: The Curled-MonkeyPoo-in-metalfoil, symbolizing The great Misunderstanding

Although this is mainly a "technical" blog, it's also an attempt to get in touch with the creative side. No idea how; God works in mysterious ways, and so does the artist. Hopefully some concepts here can make us friends one day. But let's talk about some recent developments in the (Dutch) art scene.

As we all know, the Western world is currently living in a crisis. I always react a little giggly when hearing that word. Only one car instead of two, maybe being forced to find a new job, no vacation in 2010, less steak & chocolate pudding... Crisis, don't make me laugh. For people who live in a pool of mud or somewhere in a thirsty war-torn desert, this is still pretty much a paradise I guess. At least here in the Netherlands.

But seriously, to prevent worse in the future, we'll need to take measures. No doubt about that, and the same goes for Holland. Finally, after endless annoying discussions about whether or not to form a cabinet with our blond (anti-Islam) Wilders, there is a formation again. My personal opinion about this? Don't talk, work already dammit. If mister Wilders is really that evil, the cabinet will explode (or implode) soon enough anyway. We're not living in 1936 anymore. I do think the way Wilders leads his debates about Islam and failing integration is not really constructive. Whether you are right or not, it takes charm and tact to convince. Not to convince his sympathizers (Henk & Ingrid, as we like to call them), but to convince the Muslim. Then again, I'm not a fan of his opposition either. They accuse him of creating division, but their continuous bashing of Wilders (and all his voters), the comparisons with Hitler & the 1930s, and marking every piece of criticism as xenophobia or fascism is just as stupid. I think I'll just emigrate to Jupiter.

Off topic, my work for this week was animating this ugly guy. And preventing you from breaking your legs when taking the stairs down. Due to gravity & speed, the player would get launched off the stairs instead of nicely, carefully, taking the steps. Stupid physics.

Anyway, we'll have to scrimp. Crisis and stuff. Holland also has to deal with a large group of 65+ people that has to be taken care of by a far smaller younger generation: me & friends. You can spend each euro only once, so decisions have to be made. One of the departments that has to hit the brakes under this new cabinet is the art & culture sector. I have no idea how much exactly, but currently quite some tax money is used to stimulate art, so they are definitely going to feel the whip. As always, the reactions I read on the web are either 100% pro or 100% anti.
"Without culture we will be bombarded back to the dark age again, SHAME!!!"
versus
"Why the hell do we need a metal statue of a... pipe-horse-birdman-thing in our park anyway?!!"

Art. Or artists. What can I say about them... I'm not talking about your nephew doing Flash animations, but dreadlocked hippies having sex on a white canvas and (trying to) sell it for $7,500 under the name "Les aventures d’Emanualle". Throw a bucket of blue crap at a wall and call it “green”. Make it controversial, make it weird, make it art. Make it in such a way that 9 out of 10 people don't (want to) understand it, so it stays preserved for the "intellectual"… Is that a correct description of the artist? Please don’t get offended yet!

Wrong prejudices or not, fact is that most people don't understand art. And when looking at some artists, I'm not surprised there is this image of lazy, weed-smoking anarchists hiding their lack of talent/work ethic by producing incomprehensible "art" once in a while. When not sleeping. When not too stoned. When not demonstrating against “The System”. Then again, I also know normal, hard-working, down-to-earth artists. And although I'm not really familiar with the masterpieces or their creators, I can recognize and respect good art. Some of the greatest ideas just require a twisted, not so common mindset. Often carried by, let's say, unique creatures of God. If it takes a joint, dreadlocks, waking up at 11:30 and yellow striped pants, then so be it. The world would be boring if everyone looked and acted the same as me anyway.

The challenge with this particular animation was to keep the monster inside the hallway, not intersecting the walls too much. I'm reading a little bit about Inverse Kinematics right now, but that's future stuff.

So, economizing, good or not? The prejudices about lazy artists are certainly not always true. But that raises some questions. Does a talented person who works hard really need all that subsidy anyway? A good friend, who studied at the art academy, has his own business in logo design. Without getting a single tax penny. Same goes for his girlfriend. Another friend is studying jewelry craftwork and plans her own little business. Again, without getting help; she pays everything herself (of course). And my own project, which is sort of an artistic -attempt- as well, receives jackshit too. I'm not saying my project actually is Art with a capital A, but honestly the same can be said about lots of (subsidized) artwork as well.

I dare to say about 20% of the artwork out there is made by talented hands/minds; the rest is not that good, or simply not worthy of the name art. From desperate-housewife art to haters of The System who completely lost it. The scope is wide, but the artistic scene as a whole has to accept that most people do not recognize Art in a pile of bent metal pipes (symbolizing the creator's mind), a canvas with poop, or puppeteers doing a 48-hour jerk-off session. Call us stupid or narrow-minded, but we are also your clients. If Philips or Sony comes with a brilliant new product, we still decide whether it sucks or not by buying it or leaving it. We are afraid artists can't make a living anymore when the subsidy stops, but HEY, your local Mexican restaurant stops having an income as well if no one digs his shitty tacos. If 95% of the city residents do not like or understand the new "Love-with-a-twist-in-Bronze" monument, you can ask yourself why we should want such a project in the first place...

But before I make enemies with each and every artist: hold on. Sony Blu-Ray players or tacos are a different branch of product than art is. Of course most people won't understand art, because most people don't know what it takes to create it, or how to look at it. From my own experience at school, only about 3 out of 25 children were creative. They spent their time drawing; the others went off playing football and fighting each other on the schoolyard. No wonder the work of an artist is underestimated and often not appreciated. Struggling with the simple minds, getting flamed by (religious) people who got hurt once again, trying to explain to the average Joe what you just made... That must be tiring.

Me personally, I don't think I should pay for the household of lazy, untalented artists. And when your community buys a new artwork again, then at least let the citizens decide a little as well before throwing another hundred thousand dollars at a piece of work that might not be seen or appreciated. But the problem is that this subsidy reduction also blocks potential talent. People often forget what kind of influence art has on this world. The street view, architecture, music, logos, clothing, posters, cabaret, literature, opinion makers, critical questions or cartoons covering political dilemmas... We also forget that becoming an artist requires a long, bumpy road. Would the talent of jazz band X ever be discovered if they didn't get a proper chance to develop themselves? Would famous painter Z ever have seriously started this career if the chance of making a living from it were 0.01%? Artists require podia and exposure, a kick-start. Many probably won't come much further either, but some... As my unsubsidized friend said, the risk of stopping the money flow is that "being an artist" becomes something for rich people who don't have to care about making an income in the first place. But above all, we shouldn't want to lose the creative part of our society.


If we have to economize, I'd say create some sort of probation. If your productivity equals ZERO, if you still don't sell paintings after X years, if your music still doesn't come any further than the local pub, playing for 6 drunken truck drivers, a horse, and 14-year-old groupie girls... then maybe it's time for a reality check, and to stop the investment. Sorry, but you just don't have it. Next. Life & business is hard, but that also counts for Lady Gaga wannabes, Sony, the Mexican restaurant, and Old McDonald's farm after a bad year.

Other than that, I would advise artists not to rely on subsidy in the first place. If you can have it, good for you. If not, tough luck, but why not use that creative right brain half of yours to think about a solution instead of complaining? Like... having a normal job for 3 or 4 days a week, and doing art as a side job? And then just see how it goes? I would love to spend my 40 working hours on this project as well, but I'm afraid some people would get hungry then. It doesn't matter though. If you have REAL ambitions, your drive and heart are what count in the end. Nothing can stop that, not even your wallet.

Sunday, October 3, 2010

Stitching Frankenstein

This week I had to use my creative "skills" instead of programming. Well, except for programming “chest/head rotation” instead of rotating the belly. When the player bends over (looking down with the mouse) it looked as if… he was ready to get jumped on, if you know what I mean. But worse, your head would also stick through a wall when bending over while standing close to it. Anyway, the monster model needed a few adjustments, and a skin. Most of the work wasn't drawing or sculpting though. Bringing the high-poly object back to a low-poly object gave me a big fight once again.

Besides bringing down the polycount alone, I also had to fix the UV seams. For those who are unfamiliar with UV coordinates: to get that nice texture on your model, you'll have to make a flat 2D "unwrap" of it. Each polygon gets a place somewhere on the 2D canvas, in many cases without overlapping the space of other polygons. In practice that means each vertex gets a 2D "UV" or "texture" coordinate besides its 3D position. These coordinates tell where the polygon's pixels can be found on an image.

There are several methods for making such an unwrap, including doing it fully automatically. The trick is to use the 2D canvas optimally: the bigger the area the polygons spread over, the more pixels they will use in the resulting render. Blabla. One of the difficulties with making unwraps is the seams. Imagine you want to texture a cylinder model. You can roll it out like a carpet over the 2D canvas, but somewhere you'll have to make a cut. The green line on the fantastic model below shows an example.

Two difficulties arise here. First, when drawing at the right side of the Cola texture, you'll have to make sure the left side of the texture connects well. If the right side is slightly more red, for example, you'll see an ugly, obvious color transition on the 3D model. Luckily this is automatically solved for you when painting in 3D. Programs like Sculptris or ZBrush have some really good tools that let you draw directly onto the 3D model, so you don't have to care about where to draw on a 2D canvas. Afterwards you can do the fine detail stuff with a painting program. But stay careful near the seams.

Another issue with seams is, well, the seams themselves. A typical 3D mesh contains groups, (triangulated) polygons and vertices. Each group has a bunch of polygons, each polygon refers to (3) vertices, and finally there is the vertex itself, which is usually a structure of vector data: 3D position, 2D UV coordinate, 3D normal vector, and maybe some other stuff such as weights or tangents. Here lies a problem... if a vertex has only one UV coordinate, then how the hell can it be in 2 places at the same time (see picture)? Vertex #23 (just a number) is both on the left AND the right side. When importing this model, big chance the renderer will choose one of them, causing a big texturing error. The boob doctor doesn't stitch the wound with white tissue either when operating on a black woman.

With the new upcoming winter fashion for 2010, we got him a new skin.

The answer is to duplicate the vertices at the seam. Make a copy with exactly the same 3D data, except for the UV coordinate. Voila. But how are you going to do that on a high-poly model that has a half million vertices aye? Luckily I found this handy plugin for Lightwave that does this automatically for you:
www.mikegreen.name/ <-- check plugins, "Split by Seams", thank you Mike!
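The core of what such a seam split does can be sketched like this (a toy Python version, not Mike's actual plugin): every unique (vertex, UV) pair becomes its own vertex, so a vertex sitting on a seam gets duplicated.

```python
def split_by_seams(positions, triangles):
    """positions: list of (x, y, z). triangles: per triangle, 3 corners
    of (vertex_index, (u, v)). Returns new positions/uvs/index triples
    where each final vertex carries exactly one UV coordinate."""
    remap = {}  # (original index, uv) -> new vertex index
    new_pos, new_uvs, new_tris = [], [], []
    for tri in triangles:
        idx = []
        for vi, uv in tri:
            key = (vi, uv)
            if key not in remap:
                remap[key] = len(new_pos)
                new_pos.append(positions[vi])  # duplicate the 3D data
                new_uvs.append(uv)
            idx.append(remap[key])
        new_tris.append(tuple(idx))
    return new_pos, new_uvs, new_tris
```

So vertex #23 from the picture would end up twice in the output: once with the left-side UV and once with the right-side UV, both sharing the same 3D position.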

The only bad news was that it took the plugin about 3 hours to fix the high-poly model. And then you find out you forgot something. Oops, just a moment. All in all, bringing the high-poly model to the engine was quite a ride:
1.- Model mega high-poly.
2.- Reduce to ~200.000 polys or something. I had to do this because my 32-bit machine would run out of memory with some of the plugin operations on the mega model.
3.- Make an UV unwrap.
4.- Draw a skin on the 3D model. Generate a high-res texture (2048 x 2048 or something). You can always downscale that texture later on.
5.- Export the high poly model to something Lightwave can read.
--------- part deux -------------
6.- Import the high-poly model.
7.- Make a low-poly mesh (3.000 polys for example) with a plugin like SimplifyMesh. DO NOT FIX THE SEAMS YET! Or you will get gaps and holes at the seams in the low-poly mesh.
8.- Fix the seams on the high AND the low poly meshes with a tool like Mike's UV seam fixer ("Split by Seams").
8.B- Wash your car, brush your teeth, do something nice with your family, and be patient.
9.- Generate a normalMap by looking at the high poly mesh, with a plugin like generateNormals.
10.- Send the high-poly model to your grandma on a postcard; dump it.
11.- Now you have a textured low poly model, with a cool normalMap.

Hey, I'm not a really good modeler, but knowing your tools can bring you quite far!

Funny. After quickly putting this head together in about an hour, I didn't really know what to think of it... sick, or just trash? Then my girlfriend suddenly woke up around 02:00 and was shocked to death by my laptop screen. Haha, she was really worried about the crap going on in my head, especially at that time of night. But what better compliment can you get when you're trying to make something, boo, scary?