Saturday, December 29, 2012

Overwatch: 2012

Heh, that last post wasn't scheduled, but I had a real urge to write that weird dream down that morning. What I really wanted to write before 2012 ends (on the normal calendar, not that stupid Mayan one) was sort of an overview.


Bummer, no new demo was released this year, nor any other spectacular news. Symptoms of yet another overambitious, slowly dying project? No, and those demos sure will come in 2013. Yet I can't say there was a lot of progress this year. And if we as a team really want to make a game, or even a playable demo prototype, we need to step on it. More manpower, more commitment, and more sweat and direction from my side. Instead of just saying "when it's done", I think we owe readers of this blog, and gamers who just like to see this horror game happen one day, insight into the development process. Because waiting sucks.

To be Busy, or not to be
---------------------------------------------
All right, what happened? The year started well with a new demo movie and unexpected attention from several game websites at the end of 2011. New people joined, and the plans for 2012 were made: doing more (conceptual) design & making yet another demo. But this time the demo should also be part of the actual game itself, so it can be used to make a start on the maps & gameplay implementation as well. As for the technical progress, I usually just make whatever is needed for a demo, varying from new graphical techniques to sound support or a UI. I'm the kind of guy that needs visual input to work with; I won't just go coding an AI system without having a cool animated monster. An "event-driven" work style.


Sounds like the right ingredients. But even good plans, skills and personnel are still no guarantee you'll accomplish your goals. The magic curse word this year, probably recognizable for any non-paid job: "(very) busy". If you can't help fixing the neighbor's car because you are busy, you say you're busy. If you can't help because you want to stay in bed, you text that you are busy. If you didn't do shit because you spent the last week beating FIFA, you mail that you were busy. Or you don't mail / call / reply at all, because you were too busy to do even that. "Busy" is like a Star Trek deflector shield to bounce off work. And obviously, when working with people at the other side of the world, it's hard to verify whether the busy-argument is valid or not.

Most people I know aren't as busy as they say or think. And yes, with multiple jobs, a family, friends and a house, I know a little bit what I'm talking about. And no, I don't consider myself very busy. "Priority management" is what's really going on. Many people just prefer to spend their free time watching TV, playing games, doing sports, getting drunk, sleeping or whatsoever, rather than doing difficult stuff such as making game content. And of course, there is nothing wrong with that. Every person should do whatever he damn pleases in his free time. It would be different if I was paying salaries, but I'm not, so whether I like it or not, I'll have to accept it. At the same time though, I wonder why people offer help if they're really that busy.
Just a sketch for one of the rooms being made (and not finished yet) this year.

Sure, making time for a project is harder than it may sound. First, the quality bar is set pretty high, so usually the artists who join are talented… which means they also make their living with their talent. And since artists often work on a freelance basis where the client wants his product ASAP, T22 drops to second place as soon as things get stressful. And maybe… maybe the artist wants to touch something else than a Wacom tablet when there finally are some free hours. That brings T22 down the priority ladder.

Another thing I have to understand is that T22 is not their baby-girl project. In my case, T22 gets priority over TV, gaming, shaving, eating and sleeping, because it's my favorite waste of time. But an outsider doesn't have this bond with the project, of course. Most people who offer their services would like to sharpen their 2D/3D skills, or just like horror games in general and found the T22 movie cool. But in order to work like a horse on something, you need to get triggered. A horse you can spank, but with humans it works a bit differently. Any idea why your boss keeps asking the same "when is it done?" questions over and over again? To remind you, to trigger you. By nature, humans are sort of lazy. If nobody guides your work, you probably do the fun tasks first and delay the boring stuff for later. Or you just play Minesweeper until 5 o'clock. In the case of T22, I can't spank, nor threaten to cut salaries or fire people. Triggering should be based on giving fun, satisfying tasks. Something the artist can learn from or be proud of.

But that's not so easy either. Like any game, much of the T22 content consists of boring barrels, furniture, wallpapers, junk and corridors. It's not that we're making monsters, intestines and never-seen-before scenery all the time. Besides, even then, making a game is not exactly satisfying, as it requires a LOT of energy and patience. Even relatively simple assets can still take hours before the mesh and textures are done well. And then you may still have to wait for others before your work can shine in a polished screenshot or movie. The audio guys made a bunch of awesome soundtracks recently, but it will take a while before they can hear them in a finished room. Not so motivating to keep doing tracks. This is why smaller (indie) games are far more realistic to accomplish. Satisfying results are delivered faster, which is the fuel behind any charity project.

Then last but not least, I think newcomers are often disappointed after a while. Quite some people joined T22 this year, but more than half of them also left. You're not stepping into a spectacular horror ride. No, you got to help me push the car first. An ugly little car, stuck in the mud on a hill. Because the team is small and "busy", there is little momentum. Nor are there 100%-coolness development kits such as UDK to start working with. And neither will you get access to the game story, or more interesting positions such as becoming a lead artist. You will get access eventually, but first you have to prove yourself; we work with a "quarantine" system. Sadly, many people who join just aren't motivated, aren't skilled, or change their minds soon. So of course, I won't reveal all the secrets until I can "trust" someone. Which is hard enough already in a virtual relationship. It would help a lot if we could meet and see each other. And I'm not talking about Skype, but about working at the same physical location. All these missing factors and reasons given above degrade T22 further on a person's priority ladder. Below visiting grandma on Sunday.
Though this is a UDK shot, entropy & vertex painting and a bunch of textures like these were a good addition to the engine. They allow us to vary the surfaces, such as the worn brick spots here.


2013 Battle goals
---------------------------------------------
Right, that's an explanation for the slow progress. But more interesting: what are we gonna do about it? Well, I have a bunch of plans, although they still require cooperation. You can plan all you want, but without help it's still worth nothing. Therefore, I try to make a few simple-to-understand goals that should be doable within the next 6 months. Goals like "finish the game" are too vague, especially for newcomers who have no clue what the size and plans of your project really are. Instead, try to make goals the artist sees and thinks "Shit, I can do that!". And also, focus on priorities. Don't plan too much. As said, a year is shorter than you think, and due to the fact that T22 will be relatively low on people's priority ladders, it's just not possible to finish as much as you would like. Changing plans all the time is a sign of bad management as well. Keep it real.

Below you will see the 2013 goals, divided over sub-teams. Yet one more reason for the slow progress is that we all have to wait for each other. So this year, I want to do things more in parallel. Put a few persons on task A, a few others on B, et cetera. Anyhow:
3D Team A - Detail (Julio, Federico, Diego, Colin):
• Finish Demo4
• Continue with props & textures for Demo3 & more

3D Team B - Mapping (Jonathan, Rick, ...? ):
• Find 1 or 2 mappers to create empty maps on a more global level
• Make (global) floor1

3D Team C - Characters (Robert, Antonio, Julio, ...? )
• Find 1 3D character modeler
• Finish Monster(s)
• Finish Player
• Animate them

2D / Design Team (Borja, Pablo, Federico, Diego, Rick, "UI" Pablo):
• Continue game design (drawing & discussion)
• Make mood drawings for each game section
• Floor 1 drawings
• Make a UI

Audio Team (David, Carsten, Cesar):
• Gun sounds / impacts
• Demo4 audio
• Monster sounds
• Player / UI sounds

Coding team (Rick):
• Upgrade Global Illumination lighting
• Upgrade (skeleton) animations system
• Enhance player controls
• Implement AI monster basics

Publishing ( Brian, Rick, others ):
• Publish Demo4
• Pascal Game Development magazine interview
• Open a WIP thread on Polycount
• Update website


2013 Prime Tactics
---------------------------------------------
That's about 2 to 4 goals per person. Of course, the goals above can be split further into subtasks, but I won't give you the exact details as it would spoil things. But let me explain some of them. Most important probably is to fix the "chicken & egg" issue. We need talented & devoted boys/girls to get the 3D tasks done. It's not that our current guys can't make a room, player model or UI, but if they stay as busy as last year, it just won't happen. So, looking for extra people makes sense. However, if you pay peanuts, you get monkeys. I don't even have peanuts. The only way to attract the right people is by making something really nice and publishing it wisely… but it requires talented people in the first place to create that "nice thing".

What I'm saying is, the current team should try to finish Demo4 (which isn't that big, but contains a few complicated assets). When it's done, we publish it. Sure, we did that twice before as well. It delivered some extra manpower, but not as much as I hoped for. I think we need a more aggressive strategy this time if we really want to push forward. So:
• Pitch the demo to a popular games magazine in the Benelux (Netherlands / Belgium)
• Possibly do the same thing for Spain
• Article in the Pascal Gamer Magazine
• Start a WIP thread on typical 3D gathering sites such as Polycount
• Let the artists earn a bit of money by making some of their assets sellable on 3D Webshops. Meaning they can put some of their work on Unity for example.
• Spread flyers above North Korea

Why focus on Spain and the Netherlands? Well, I'm Dutch, and 80% of the team are Spaniards. Not that we want to discriminate against the rest of the world, but I think it just works better if we have "clusters". A few of the Spaniards live in or near Madrid and actually meet each other. That surely helps to motivate, as they can talk to and help each other. A team isn't just based on people doing the same thing. There must also be an emotional bond with your colleagues. But how to create one with people you never saw, speaking a different language? Most of us can write English pretty decently, but making a joke or having an interesting conversation about something other than T22 is hard. This isolates members from the team, as they mostly only communicate with me. Not exactly how a team should work.

So, creating sub-teams with geography in mind might work better to start with. Other than that, I just feel the Netherlands might welcome a project like this, as we don't have a booming gamedev scene here. The other points mentioned above are also meant to get more attention. But again, it requires work from the artists first before we can paste interesting screenshots or drawings on forums or in a magazine.

Let's see, what else was on the list? Making a UI. Requires a UI artist, obviously. We actually found a guy who would like to help, but so far I didn't hear much. You know… busy. I'll give it some more time, and otherwise we may have to look for someone "less busy". Then we had "game design" and "making maps on a more global level". What does that mean?


Further, "global mapping". What the hell does that mean? We can keep making demos forever, but at some point we want something playable too. Not just for you, but also for ourselves, internally. It would surely boost morale if artists could see their stuff while playing the game, instead of receiving screenshots from me. But to get something playable, you need a bunch of maps, at least 1 or 2 monsters, a player, and game rules as well. Not too long ago, I started "Wiki tasks". We have a private Wiki where I write about pretty much everything in the game: story elements, what a certain section or room should look like, whether the player can jump or not, and so on. Writing this is a dynamic process. Some elements have to be playtested first before you can confirm them, and in other situations you may want concept art first in order to decide what looks hot or not. It would be cool if artists would throw drawings and ideas in the mailbox on their own initiative, but as said, you need to trigger them. So, every one or two weeks I point a few of the artists to a specific Wiki page. Then we discuss that thing, and eventually generate tasks out of it, such as making conceptual drawings. When done, I update the page with pictures and a final description. By basically forcing a task every 1 or 2 weeks, it keeps the discussion forum a bit alive. Although… you know, busy.
Global mapping. It looks like shit, but the goal is not to make something good-looking (yet). The goal is to have a room you can walk through, use for the game, and decorate later.

Making maps on a more global level means one or more guys should start making empty versions of the rooms, corridors, stairs, or whatever. In my experience, making the room itself isn't that much work (even I can do it, as long as no super architectural tricks are used). What really eats time is making the textures, decals, and props (furniture, devices, boxes, decorations, …). Even a simple room quickly generates dozens of props to make. So far I made a bunch of empty (demo) rooms, and then we tried to fill them. But filling takes so damn long, and the playground that is needed to actually play a game (= explore, get chased by a monster, solve a puzzle) stays very small. What if we focused a bit more on making this "playground" without beautifying it right away? It won't produce nice screenshots for this blog, demo movies or magazines, but it does provide one of the most important ingredients for a playable game.

A nice side note I should make about that is that making these maps isn't so difficult… meaning less experienced people could be "hired" for it. So far I filtered out quite a lot of offers, as I found the quality not good enough for T22. Sounds arrogant maybe, but I just don't want ugly graphics, and many (beginning) artists fail at producing good textures. But obviously, that filters out most of the help I can get as well. Talented people have paid jobs, so they don't notice T22, or only have very little time for it (you know… busy). Students on the other hand might be more motivated, as they want to learn a lot and have more time. Maybe their texturing or hi-poly skills aren't good enough yet, but those skills aren't needed that much for making the raw maps.


2012 achievements
---------------------------------------------
I feel this post is getting too long again. And maybe too negative. Let’s finish with the good stuff from 2012.
• Found several 3D guys. Unfortunately, most have left or are inactive (you know… busy), but guys like Federico and Diego are doing their best.
• Promoted Federico to lead artist
He has skills, teaches 3D students in his daily life, and keeps in touch with me. That’s what we need.
• Found an animator, Antonio
Now I have to make an FBX file importer (mainly to get the rig & animations). If you are interested in making an FBX importer DLL or writing export scripts for Maya / Blender, please contact me. I hate writing those.
• Found a UI artist, Pablo
• Cesar offered help on the audio
• David and Cesar produced several horror tracks recently
• Borja and Pablo joined and are making 2D artwork
• Started weekly Skype meetings
• Started the Wiki design discussions with Federico, Diego, Borja and Pablo
• Which delivered some nice plans for the T22 exterior, to name one thing
• Made most of the Demo3 and Demo4 environments, as well as some surrounding areas. Still got to beautify and fill them though.
• Made assets such as floor and carpet textures, a (movable) elevator, some furniture and closets, and a gun.
• Implemented an API and entity system, which allows us to freely program interactive objects such as devices, guns, monsters, doors, or whatever you can operate.
• Improved SSAO, added RLR reflections, improved HDR coloring & eye adaptation, and fixed a really annoying bug that had been making the graphics slightly blurry for the past years.
• Implemented AVI (video) file support to make animated textures
• Implemented Compute Shader (OpenCL) support
• Implemented a sort of Voxel Cone Tracing for GI lighting
• A whole lot of other little optimizations and bug fixes


All in all, it's not that we slept a whole year. But for the show, only finished products matter. Hopefully we can finish what we started soon in 2013, and get the extra nitro injection to lift the project further to where it should be.
My personal biggest achievement for this year (and to finish in 2013): realtime GI. Spent hundreds of fucking hours on it. The GI topic has come by several times already, but so far it never resulted in a technique I really liked. Too slow, too ugly, too blehh. But this time, I think we're finally getting somewhere. To be continued...

Saturday, December 22, 2012

Waiting room of Death

Inside a medium-sized hospital room with large windows at its front, we were observing them from a small distance. A few rows of old couples were sitting next to each other in armchairs, behind some sort of wooden cabinet with a monitor in it. Embracing each other, calmly waiting for Eternal Sleep to gently lift them nearly simultaneously out of their bodies. Nurses, young girls with a life still ahead of them, were quietly walking between the chairs. Hands on their backs, sometimes stopping for a second to look at the monitors or to make sure their patients were comfortable. Sweet whispers between couples would fade out as they peacefully ended in the arms of the ones they loved. A strange serenity in this cold, greyish-looking room.

It was our turn. We took place in one of the chairs. No idea what the monitor in front of us was really doing; it didn't matter. I embraced my girl and closed my eyes. Realizing these would be my last words, I told her I loved her, and thanked her for all those years of being with me. She didn't respond, but firmly closed her arms around me. There was no need to talk, really; we both knew what we meant to each other.


I tried to just let it come. With my eyes closed and mind sort of cleared, I only felt the warmth of my girl. My friend, mother of my daughter, my better half. Who would go first? I still heard the soft footsteps of the nurses a bit, meaning we were still awake. As I waited and tried to get into a tranquil, numb state, I got scared. Without saying anything, I'm trying to say farewell to my girl. As well as trying to take leave of my own "me". Everything I remember, everything I learned this life, everything I am. It would vanish. But I don't want to...

It's getting completely silent and hollow in my head. Is Agnes still there? Not sure if I still feel her. This is it. They say dying is a peaceful process. Getting released from this world, entering a new reality. But I don't feel it that way. It's dark, and I'm scared. Where am I going to? Is there even a place to go to? Would I ever see Agnes again? Would I ever become as happy as I was in this life again? ... It's quiet and dark, my thoughts and concerns are stopping ... is this ... being dead?



I feel, with tears in my eyes, that I'm lying with Agnes, my girl, in my arms. In my bed. It still takes some seconds before I finally realise that I can open my eyes. Maybe not dead, but I'm in paradise.

Boys and girls, be happy with what you have, love the ones around you, have peace with yourself. I wish you a warm Christmas.

Thursday, December 6, 2012

3D survival guide for starters #4, Light/AO/HeightMaps

Argh! A bit later than it should be. And I don't have a good excuse, really. Busy as usual, and forgot the clock. Or something. Doesn't matter, let's rush towards the last "Starters Guide" post and smack your ears with a few more texturing techniques: Ambient Occlusion mapping and two beautiful old ladies, miss HeightMap and miss LightMap. The good news is that these are easier (to understand) than normalMapping. So lower your guard and relax.
Tears. One of my first lightMap generator results, including support for normalMapping.


HeightMaps
=====================================================
This is one of those ancient, multi-purpose techniques that didn't quite die yet. If I were a 3D-programming teacher, I would first assign my students to make a terrain using heightMaps. And slap them with a ruler. A heightMap image contains... height, per pixel. White pixels indicate high regions, dark pixels stand for a low altitude. Basically, the image below can be seen as a 3D image: the width and height stand for the X and Z axes, the pixel intensity for the Y (height) coordinate.

An old trick to render terrains was/is to create a grid of vertices, like a lumberjack shirt. Throw this flat piece of lumberjack cloth over a heightMap, and it folds into a 3D terrain with hills and valleys. That's pretty much how games like Delta Force, Battlefield 1942 or Far Cry shaped their terrains. And aside from rendering, this image can be used to compute the height at any given point for physics & collision detection as well. Pretty handy if you don't want your player to fall through the floor. Another cool thing about heightMaps is that any idiot can easily alter the terrain. Just use a soft brush in a painting program and draw some smooth bumps. Load the image in your game, and voilà, The Breasts of Sheba. In older games it was often possible to screw up the terrain by drawing in their map data files.
On the eighth day, God created height.
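To make that "physics & collision" use a bit more concrete, here's a little sketch of how an engine could read height back from such an image, say to keep the player glued to the floor. Purely illustrative Python (the T22 engine is not written in Python, and all names here are made up); the heightMap is just a grid of 0..255 gray values, and bilinear interpolation smooths the height between grid points.

```python
# Illustrative heightmap sampler (invented names, not engine code).
# 'heightmap' is a grid of 0..255 gray values; bilinear interpolation
# gives a smooth height between grid points, e.g. for collision.

def sample_height(heightmap, x, z, world_size, max_height):
    rows, cols = len(heightmap), len(heightmap[0])
    # Map world (x, z) into grid space
    gx = x / world_size * (cols - 1)
    gz = z / world_size * (rows - 1)
    x0, z0 = int(gx), int(gz)
    x1, z1 = min(x0 + 1, cols - 1), min(z0 + 1, rows - 1)
    fx, fz = gx - x0, gz - z0
    # Blend the four surrounding grid heights
    top = heightmap[z0][x0] * (1 - fx) + heightmap[z0][x1] * fx
    bottom = heightmap[z1][x0] * (1 - fx) + heightmap[z1][x1] * fx
    return (top * (1 - fz) + bottom * fz) / 255.0 * max_height

# A tiny 2x2 "image": dark (low) left column, white (high) right column
hmap = [[0, 255],
        [0, 255]]
print(sample_height(hmap, 5.0, 0.0, 10.0, 100.0))  # halfway up the slope: 50.0
```

A real engine would of course sample the image on the GPU or cache it in a float array, but the math stays the same.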

Yet heightMaps aren't as popular anymore, as they have several issues. First, heightMaps provide only 1 height coordinate per grid point, so you can't make multi-story or overlapping geometry, such as a tunnel or a cliff hanging over your terrain. Second, your image resolution is limited as always. Let's say you draw a 1024 x 1024 image for a 4096 x 4096 meter (~16 km2) terrain. You can make your terrain grid as detailed as you want, but there is only one height sample per 4096 / 1024 = 4 meters, so each pixel covers a 4 x 4 meter patch. Certainly not detailed enough to make a little crater or a manmade ditch. Games solved this problem a bit by chopping their terrains into smaller chunks, each with its own heightMap to allow more detail. More complex shapes such as steep cliffs, rocks, tunnels or manmade structures would be added on top as extra 3D models.
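If you want to check that resolution math for your own terrain, it's just one division:

```python
# How much terrain does one heightmap pixel cover?
def meters_per_texel(terrain_size_m, heightmap_size_px):
    return terrain_size_m / heightmap_size_px

mpt = meters_per_texel(4096, 1024)
print(mpt)        # 4.0 -> one height sample every 4 meters
print(mpt * mpt)  # 16.0 -> each pixel covers 16 square meters of terrain
```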

So with the ever increasing detail and complexity, games switched to other methods. But don't throw away your heightMaps just yet! More advanced shader technologies made The Return of the HeightMap possible. To name a few:
- normalMap generators often use heightMaps to convert to a normalMap
- Entropy
- Parallax Mapping
Entropy? Isn't that a tropical drink? Not really, but it is a cocktail for making your surfaces more realistic with rust, dirt, moss or other layers caused by influences such as nature. I can't really explain it in one word, but the way a surface looks often depends on environmental factors. If you take a look at that old brick shed in your garden, you may notice that moss likes to settle in darker, moist places. If it snows, snow will lie on the upward-facing parts of the bricks, not on the underside. Damaged parts are more common at the edges of a wall. The overall status and coloring of the wall may depend on how much sun and rain it caught through the years. All these features are related to geometric attributes: their location. By mixing textures with some entropy logic, you can make your surfaces look more realistic by putting extra layers of rust, moss, worn paint, cracks, frost, or whatsoever in LOGICAL places.

If we want to blend in moss, we would like to know the darker parts of our geometry. And here, a heightMap can help a lot. For a brick wall or pavement, the lower parts (dark pixels) usually indicate the gaps between the stones. So start blending in moss at the "low heights" first. The same trick can be used for fluids. If it rains, you first make the lowest parts look wet & reflective. This allows you to gradually increase the water level without needing true 3D geometry. Pretty cool, huh?
Like cheese grows between your toes first, this shader should blend in liquids at the lowest heights first.
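In shader terms, such an entropy blend boils down to a few height comparisons per pixel. A tiny Python mock-up of the idea (names like `moss_blend` are invented for this sketch; a real version would be a couple of lines in a fragment shader):

```python
# Invented names, purely to show the idea: blend extra layers based on
# the heightMap value (0 = deep gap, 1 = brick top).

def smoothstep(edge0, edge1, x):
    # GLSL-style smooth blend between two edges
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def moss_blend(height, moss_amount):
    # Moss settles in the low gaps (dark heightmap pixels), not on top
    return moss_amount * (1.0 - smoothstep(0.2, 0.6, height))

def wetness(height, water_level):
    # Raising water_level gradually "floods" higher parts of the surface
    return 1.0 if height <= water_level else 0.0

print(moss_blend(0.1, 1.0))  # deep gap: fully mossy -> 1.0
print(moss_blend(0.9, 1.0))  # brick top: no moss -> 0.0
```

The thresholds (0.2 and 0.6 here) are exactly the kind of knobs an artist would tweak per material.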

Another technique you may have heard of is Parallax Mapping. I won't show you shader code or anything, but let me just tell you what it does. As you can read in the third post, normalMapping is a way to "fake" more detail and relief on your otherwise flat polygon surfaces. NormalMapping gives this illusion by altering the incoming light flux on a surface depending on the surface normals. However, if you look from a steep camera angle, you will notice the walls are still flat really. Parallax Mapping does some boob implants to fake things a bit further.

With parallax mapping, the surface still remains flat. But by using a heightMap, pixels can be shifted over the surface, depending on the viewing angle. This is cleverly done in such a way that it seems as if the surface has depth. Enhanced techniques even occlude pixels that are behind each other, or cast shadows internally. All thanks to heightMaps, which help your shader trace whether the eye or lamp can see/shine on a pixel or not.
The lower part shows how the shifting works. Where you would normally see pixel X on a flat surface, now pixel Y gets shifted in front of it, because it's intersecting the view-ray.
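For the curious, the ray-marching idea behind that shifting can be shown in a few lines. This is a flattened 1D Python sketch (invented names, not real shader code); actual implementations march in 2D texture space with the view vector transformed into tangent space:

```python
# 1D sketch of (steep) parallax mapping: step along the view ray until
# it dips below the surface stored in the heightmap, and use that texel
# instead of the one the flat surface would have given you.

def parallax_sample(heights, start_u, view_slope, steps=32):
    """start_u: texture coord where the flat surface was hit (0..1).
    view_slope: how far u shifts per unit of depth; a steeper camera
    angle means a bigger slope and thus a bigger shift."""
    depth_step = 1.0 / steps
    depth = 0.0
    u = start_u
    for _ in range(steps):
        idx = min(int(u * len(heights)), len(heights) - 1)
        if depth >= 1.0 - heights[idx]:  # ray went below the surface
            return idx
        depth += depth_step
        u += view_slope * depth_step
    return min(int(u * len(heights)), len(heights) - 1)

# Flat low area (texels 0-1), then a tall bump (texels 2-3).
# Looking at an angle, the bump at texel 2 occludes the flat texel 0:
print(parallax_sample([0.0, 0.0, 1.0, 1.0], 0.0, 2.0))  # -> 2
# Looking straight down (slope 0), nothing shifts:
print(parallax_sample([0.0, 0.0, 1.0, 1.0], 0.0, 0.0))  # -> 0
```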

As for the implementation, it's just another image, using only 1 color channel. I usually stuff my height values into the normalMap. Although for expensive parallax mapping, it might be better in some cases to use a smaller separate image. Got to confirm that with the GPU performance gurus though.



LightMaps
=====================================================
Yes, lightMaps again. I just like to write about them for some reason, and if you're not quite sure what they are, you should at least know about their existence. In post #2, I wrote how lighting works. Eh, basic lighting in a game, I mean. To refresh your mind:
- For each light, find out which polygons, objects or pixels it *may* affect
- For each light, apply it to its polygons, objects, pixels, whatever
- Nowadays, shaders are mainly used to apply a light per pixel:
.....- Check the distance and direction between the lamp & the pixel
.....- Eventually check the directions between lamp, pixel & camera (for specular lighting)
.....- Eventually check, with the help of shadowMaps, if the pixel can be lit ("seen") by the lamp.
.....- Depending on these factors, compute the light result, multiplied with the light color/intensity and the pixel's albedo/diffuse and specular properties.
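Those per-pixel steps, written out as a toy function. Plain Python instead of shader code, with a simple linear falloff I picked for the example, the shadowMap test left out, and all names invented:

```python
import math

# Toy per-pixel diffuse lighting: attenuate by distance, take the N.L
# term, multiply with light color and albedo. Specular and shadow tests
# are omitted; the linear falloff is just one possible choice.

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]
def length(v): return math.sqrt(dot(v, v))
def normalize(v):
    l = length(v)
    return [x / l for x in v]

def shade_pixel(pixel_pos, normal, lamp_pos, lamp_color, lamp_range, albedo):
    to_lamp = sub(lamp_pos, pixel_pos)
    dist = length(to_lamp)
    if dist > lamp_range:
        return [0.0, 0.0, 0.0]            # lamp can't affect this pixel
    atten = 1.0 - dist / lamp_range       # simple linear falloff
    n_dot_l = max(0.0, dot(normal, normalize(to_lamp)))
    return [albedo[i] * lamp_color[i] * n_dot_l * atten for i in range(3)]

# Lamp 1 unit above a floor pixel, range 2: half attenuation, full N.L
print(shade_pixel([0, 0, 0], [0, 1, 0], [0, 1, 0],
                  [1.0, 1.0, 1.0], 2.0, [1.0, 0.5, 0.0]))
# -> [0.5, 0.25, 0.0]
```

Multiple lamps simply sum their results per pixel, which is exactly why this got expensive back then.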

That doesn't sound like too much work, but back in the old days, computers used leprechauns to calculate and carry bits from the processor to the monitor. If you were lucky, your computer had a "turbo!" button to increase the clock speed from 1 Hertz to 2 Hertz, whipping the little guys to run faster with those bits. Ok, ok. The point is, computers weren't fast enough to perform the tasks above. Maybe for a few lights, but certainly not for a complex (indoor) scene, let alone realtime shadows. Oh, and neither did shaders exist, btw. Although shaders aren't 100% necessary to compute lighting, they sure made life easier, and the results 100 times better. Really.

Platform games like Super Mario World didn't use "real" lighting. At most some additive sprites, or tricks to adjust the color palette to smoothly change the screen colors from bright to dark or something. Doom & Duke didn't give a crap about "real" lights either. How did they make a flat building cast a shadow? Just by manually darkening polygons. When true 3D games like Quake or Half-Life came, things got different though. id Software decided it was about time to make things a bit more realistic. Back then, gamers and reviewers were mostly impressed by the polygon techniques that allowed more complex environment architecture and replaced the flat monster sprites and guns with real 3D models. But probably most of us didn't notice the complexity behind the lighting (me neither, I was still playing with barbies then). The lightMap was born, and it still is an essential technique for game engines now.

As said, computing light at a larger scale in 1996 was a no-go zone, let alone dealing with shadows or "GI", Global Illumination. Heck, GI is still a pain in the ass 16 years later, but I'll get back to that. Instead of blowing up the computer with an impossible task, they simply pre-computed the light in an "offline" process. In other words, while making their maps, a special tool would calculate the light affecting all (static) geometry. And since this happened offline, it didn't matter whether this calculation took 1 second or 1 day. Sorry boss, the computer is processing a lightMap, can't do anything now. Anyway, the outcome of this process was an image (or maybe multiple) that contained the lighting radiance. Typically, 1 pixel of that image represents a piece of surface (wall, floor, ceiling, ...) of your level geometry. The actual color was the sum of all lights affecting that piece of surface. Notice that "light" is sort of abstract here. You can receive light from a lamp or the sun, but also indirect light that bounced off another wall, or scatters from the sky/atmosphere. The huge advantage of this was -and still is!- that you could make the lighting as advanced as you would like. Time was no factor.
A "LightMap Generator" basically does the following things (for you):
1- Make 1 or more relatively big empty images (not too big back then, though!)
2- Figure out how to map (unwrap) the static geometry of a level (static means non-moving, thus monsters, healthkits and decoration stuff excluded) onto those image(s).
3- Each wall, floor, stair, or whatsoever gets its own unique little place on that image. Depending on the image resolution, the size of the surface, and the available space, a wall would get 10x3 pixels for example.
4- For each surface "patch" (pixel), check which light-emitting things it can see. Lamps, sky, sun, emissive surfaces, radioactive goop. Anything that shines or bounces light.
5- Sum all the incoming light and average it. Store the color in the image.
6- Eventually repeat steps 4 and 5 a few times, applying the lightMap from the previous iteration to simulate indirect lighting.
7- Store the image on disk
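As a toy version of steps 4 to 6 above, the baking loop could look like this. Visibility is faked with a callback here; a real generator would ray-trace the scene, and all names are invented for the sketch:

```python
# Toy lightmap baker: direct pass (steps 4+5), then bounce passes that
# pick up light from the previous iteration (step 6). Patches stand in
# for lightmap texels; 'can_see' fakes the visibility ray-tracing.

def bake_lightmap(patches, lamps, can_see, bounces=1, bounce_factor=0.25):
    """patches: list of patch positions. lamps: list of (pos, intensity).
    can_see(a, b): True if nothing blocks a from b."""
    # Direct lighting: sum every lamp each patch can see
    lightmap = []
    for p in patches:
        direct = sum(inten for (lpos, inten) in lamps if can_see(p, lpos))
        lightmap.append(direct)
    # Indirect bounces: the previous pass acts as the light source
    for _ in range(bounces):
        prev = lightmap[:]
        for i, p in enumerate(patches):
            bounced = sum(prev[j] for j, q in enumerate(patches)
                          if j != i and can_see(p, q))
            lightmap[i] = prev[i] + bounce_factor * bounced / max(1, len(patches) - 1)
    return lightmap

# 3 patches that all see one lamp of intensity 1.0, one bounce pass
print(bake_lightmap([0, 1, 2], [(0, 1.0)], lambda a, b: True))
# -> [1.25, 1.25, 1.25]
```

The point of the sketch: each bounce pass is just step 4+5 again, with the previous lightmap as the light source. That's why the offline cost grows with every extra bounce, and why nobody cared, since it ran overnight anyway.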

The hard parts are unwrapping the scene geometry efficiently onto a 2D canvas, and of course the lighting itself (step 4). I won't go into it, and I should note that many 3D programs and game engines have lightMap generators (or "light baking" tools), so you don't have to care about the internals. The point is that you understand the benefits of using a pre-generated image. Using a lightMap is as cheap as a crack-ho, while achieving realistic results (if the generator did a good job). In your game, all you have to do is:
* Store secondary texture coordinates in your level geometry. Those coordinates were generated by the lightMap tool in step 2/3.
* Render your geometry as usual, using the primary texture coordinates to map a diffuseTexture on your walls, floors, ceilings, ...
* Read a pixel from the LightMap using your secondary texture coordinates. Multiply this color with your diffuseMap color. Done.
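The three runtime steps above, sketched in Python. Nearest-neighbor "texture reads" on plain 2D lists stand in for the GPU here, and the function names are made up:

```python
# Minimal sketch of the runtime side: multiply the diffuse texel by the
# baked lightmap texel, fetched with the secondary UV set.
def sample(texture, u, v):
    # nearest-neighbor "texture read"; texture is rows of (r,g,b) tuples
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

def shade_pixel(diffuse_map, light_map, uv1, uv2):
    albedo = sample(diffuse_map, *uv1)   # primary UVs -> diffuse texture
    light  = sample(light_map, *uv2)     # secondary UVs -> lightmap
    return tuple(a * l for a, l in zip(albedo, light))
```

That multiply is the whole trick: the expensive lighting happened offline, and the game only pays for one extra texture fetch.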

It isn't all flowers and sunshine though. If you read about the capabilities of a modern game engine, they are probably ashamed to admit they are still using LightMaps for some parts. Why is that? Well, the first and major problem is that you can't just rebuild your lightMap in case something changed in the scene. A wall collapsed, a door opened, the sun went down, you shot a lamp. Just some daily cases that would alter the light in your scene. If each change triggered a 2-minute lightMap rebuild, your game would obviously become unplayable. And there is more to complain about. I wrote the words "static geometry" several times. Ok, but how about the dynamic objects? Monsters, barrels, boxes, guns, vehicles? Simple: you can't use a lightMap on them. In games, these objects fall back on a simplified "overall" light color used inside a room or certain area. Yet another reason to dislike lightMaps is that you can't naturally use normalMapping. Why? Because normalMapping needs to know from which direction the light came. We store an averaged incoming light flux in our lightMaps, but we don't know where it came from. Yet Valve (Halflife2) found a trick to achieve normalMapping, sort of: they made 3 lightMaps instead of 1. Each map contained light coming from a global direction, so the surface pixels could blend between those 3 maps based on their (per-pixel) normal. That technique is called "Radiosity NormalMapping" btw.

Wait, I have one more reason: the sharpness. As said, the generator needs to unwrap the entire scene onto a 2D image canvas. Even with a huge texture, your surfaces still won't get that many pixels. That explains blocky-edged shadows, or light leaks. Even in more modern engines like UDK3.
No sir, I don't like it.

Well, 16 years ago we could choose between either nothing or a lightMap with its shortcomings. People had never heard of normalMapping, and the blocky shadows weren't spotted between all the other blocky low-poly pixelated shit. In other words, good enough. But in the 21st century, LightMaps were pushed away. Faster hardware and shaders allowed computing (direct) lighting in realtime. So hybrids appeared. Partially lightMapped, partially realtime lights. Farcry is a good example of that. Doom3 even discarded lightMaps entirely...

Only to get punished remorselessly. Yes, games like Halflife2 were technically less advanced, but still looked more photo-realistic. Thanks to their pre-baked cheap-ass lightMaps. Despite a lot of great things, Doom3 (and Quake4) didn't exactly look realistic due to their pitch-black areas. Everything not directly lit by a lamp simply appeared black. Why did id do that?! Well, let's bring some nuance. Black areas won't quickly appear here on Earth, because light always gets reflected by surfaces. See here, "Global Illumination". But the id guys simply couldn't compute GI back then. Not in realtime. Hell, we still can't properly do that, though promising techniques are closing in. So, they just forgot about the unlit areas. In the sci-fi Mars setting, it was sort of acceptable. But their way of rendering things certainly wasn't useful for more realistic "earth" or outdoor scenes. It explains the lineup using idTech4 (Doom3 engine): Doom3, Quake4, Prey. All sci-fi space games.
Due to the complexity, most games store GI in lightMaps. But Doom3 didn't use GI at all…

What they could have done, and should have done, is use a good old LightMap (which they practically invented themselves with Quake!) for the indirect and/or static lights. Of course, that also has shortcomings, but hybrid engines like UDK (and I believe Crysis1 as well), taking the best of both worlds, are still the way to go as we speak. Eventually we will kill and bury LightMaps once and for all, as soon as we get a realtime GI technique that is fast, accurate, scalable and beautiful at the same time. Implementing realtime GI is sort of nerd-sadomasochism. Telling people you got realtime GI is cool, but the good old LightMap still looks better and is a billion times faster for sure. Although... Crassin's Voxel Cone Tracing comes close (but not without issues)...



Ambient Occlusion (AO) Maps
=====================================================
Now that you know everything about lightMaps, let's give you one more modern technique that spied on this old lightMap grandmother: "Ambient Occlusion", or "AO" maps. Just another texture that can be mapped on your world geometry like a lightMap, OR on a model like a gun or monster, in the same way you would map any other texture on them. AO tells how much your pixel is occluded from global, overall ambient light. It doesn't really look at if & where there are lightsources. It just checks how many surrounding surfaces are blocking *potentially* incoming light from any direction. It's pretty simple: put a pile of blocks on your table, and you will notice the inner blocks get darker once you place more blocks around them. Because the outer blocks occlude light coming from a lamp, floor, ceiling, sky, flames, or whatever source.
Since GI or ambient lighting is already a giant fake in most game engines, the unlit parts of a scene (thus not directly affected by any light) often look flat. As if someone peed a single color all over your surfaces. NormalMapping doesn't work here either, since we don't have information about incoming light fluxes (unless you do something like Valve's Radiosity NormalMapping). Ambient Occlusion eases the pain a bit by darkening corners and gaps further, giving a less "flat" look, making things a bit more realistic.

Yes, AO is one of those half-fake lighting tricks. What you see in reality is not based on "occlusion factors", but on how light photons manage to reach a particular surface, directly or by bouncing off other surfaces. AO skips the complex math and only approximates the incoming light for a given piece of surface. Doing it correctly requires the surface to send out a bunch of rays, and to see if & where they collide. The more rays collide, and the shorter their travel distances, the more occlusion. This is stored as a single, simple value. Typically AO maps are grayscale, whitish images, with darker parts in the gaps, corners and folds of your model. Since a single color channel is enough, you can store the Ambient Occlusion factor inside a channel of another image, such as the diffuseMap.
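The ray description above, as a hedged Python sketch. The "scene" is just a list of axis-aligned boxes and the ray test is a crude ray march; `bake_ao`, `ray_hit_distance` and all the parameters are invented for this example:

```python
import math, random

def ray_hit_distance(origin, direction, boxes, max_dist, step=0.05):
    # crude ray march: walk along the ray, test point-in-box at each step
    t = step
    while t < max_dist:
        p = [origin[k] + direction[k] * t for k in range(3)]
        for lo, hi in boxes:
            if all(lo[k] <= p[k] <= hi[k] for k in range(3)):
                return t
        t += step
    return None

def bake_ao(point, normal, boxes, rays=32, max_dist=2.0):
    occlusion = 0.0
    for _ in range(rays):
        # random direction, flipped into the hemisphere above the surface
        d = [random.gauss(0.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in d)) or 1e-6
        d = [c / length for c in d]
        if sum(d[k] * normal[k] for k in range(3)) < 0.0:
            d = [-c for c in d]
        hit = ray_hit_distance(point, d, boxes, max_dist)
        if hit is not None:
            occlusion += 1.0 - hit / max_dist  # closer hit = more occlusion
    return 1.0 - occlusion / rays              # 1 = fully open, 0 = buried
```

A point under the open sky comes out at 1.0; put a big slab right above it and the factor drops. That single number per texel is all an AO map stores.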


AO construction
There are 3 ways of computing AO. We can do it realtime with SSAO (Screen Space Ambient Occlusion). The idea is the same, although SSAO is even faker, as it relies on crude approximations, randomly sampling nearby pixels that may be occluding you. Testing correctly how much each pixel on your screen would be occluded -also by objects further away- would be too expensive. SSAO has the advantage that it updates dynamically on moving or animated objects; you don't have to pre-process anything. The disadvantage is that it tends to create artifacts, such as white or black "halos" around objects, and it only works at short distances. Bad implementations of SSAO look more like an Edge Detector effect than "AO".
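As a toy illustration of that neighbor sampling (real SSAO works in view space with the depth and normal buffers; this Python version just compares raw depth values, and every name here is made up):

```python
import random

# Toy screen-space AO over a depth buffer: for each pixel, sample a few
# random neighbors and count how many are closer to the camera, i.e.
# potential occluders. Deliberately simplified.
def ssao(depth, x, y, radius=2, samples=8, bias=0.02):
    h, w = len(depth), len(depth[0])
    center = depth[y][x]
    occluded = 0
    rng = random.Random(42)  # fixed seed so the result is repeatable
    for _ in range(samples):
        sx = min(max(x + rng.randint(-radius, radius), 0), w - 1)
        sy = min(max(y + rng.randint(-radius, radius), 0), h - 1)
        if depth[sy][sx] < center - bias:   # neighbor is in front of us
            occluded += 1
    return 1.0 - occluded / samples         # 1 = open, lower = occluded
```

You can already see the flaws in miniature: the result depends on the random pattern (noise), and anything outside the sampling radius is invisible to it.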

Another way is to compute it mathe-magically. This works if you only have 1 or 2 dominant lightsources (sun). You can "guess" how much light a pixel catches by looking at its surface normal (facing downwards = catch light from the floor, upwards = catch light from the sky/sun) and some environment information. Fake, but dynamic & cheap.
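A minimal sketch of that guess, assuming the classic "hemisphere lighting" trick: blend a ground color and a sky color based on how much the normal points up. The function name and colors are made up:

```python
# The "mathe-magical" variant: derive an ambient term purely from the
# surface normal, blending a ground and a sky color. Completely fake,
# but dynamic and nearly free.
def hemisphere_ambient(normal_y, ground=(0.2, 0.15, 0.1), sky=(0.4, 0.5, 0.7)):
    # normal_y = +1 faces the sky, -1 faces the floor
    t = 0.5 + 0.5 * normal_y   # remap (-1..+1) to (0..1)
    return tuple(g + (s - g) * t for g, s in zip(ground, sky))
```

An upward-facing pixel gets the sky color, a downward-facing one the ground color, and everything in between gets a blend.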

The third method, as described above, is baking AO into a texture, like you would bake light into a lightMap. Again, programs such as Maya or Max have tools to compute these images for you. Once made, you can apply AO on your world or objects at an extremely low cost. The downside, as with lightMaps, is that the AO doesn't adjust itself if the environment changes. Yet AO-maps are a bit more flexible than lightMaps (so don't confuse them!). Occlusion factors are light-independent; directions or colors don't matter. So you can also bake a map for a model like a gun, and transport it through many rooms with different lighting setups. The AO in this gun image only contains its internal occlusion, like gaps, screws or concave parts being a bit darker. Also for static worlds, you can change the overall ambient color and multiply it with a pre-generated AO map. Not sure, but I can imagine a world like the Grand Theft Auto cities contains pre-computed AO-like maps (or stored per vertex maybe) that tell how much a piece of pavement gets occluded by surrounding buildings. Then multiply this occlusion factor with an overall ambient color that depends on the daytime & weather.
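At runtime that boils down to almost nothing (a sketch; `apply_ambient` is a made-up name):

```python
# Runtime use of a baked AO map: the AO factor just scales whatever
# ambient color the room currently has, so the same map keeps working
# under any lighting setup (unlike a lightmap).
def apply_ambient(albedo, ambient_color, ao_factor):
    # ao_factor: 1.0 = fully open, 0.0 = fully occluded (from the baked map)
    return tuple(a * c * ao_factor for a, c in zip(albedo, ambient_color))
```

Change `ambient_color` with the daytime or weather and the baked occlusion still fits, which is exactly the flexibility lightMaps lack.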



Oops, the post got a bit long again. Well, I didn't want to split it up again and have yet another post. I want to write about Compute Shaders, so I had to finish this one. I hope it was readable and that you learned something!

Saturday, November 17, 2012

captain's log, 16 november, 1724

Before posting the last "starters guide", let's write about something else for a moment. How about the project progress, for example? As you know... when people talk a lot about everything EXCEPT the actual subject, it means they're either hiding something, or just didn't do shit. Unfortunately, we fall in the second category I'm afraid. Not that we didn't do shit, we produced all kinds of shit (literally, last week, after some belly aches). But it's not completed yet, or moving at too low a tempo to report here. To give you an idea:

Living in an elevator
Julio made an elevator cabin object. But not just a cabin hanging somewhere; one that actually goes up... and down. With sounds, damping, shakes, and WITHOUT letting the player fall through the floor (or ceiling). That doesn't sound too exciting, but the mechanics implemented to define a "path" that lets an object move from A to B, including start/move/stop sounds, are quite a useful addition to the Editor.


Paranormal activity 5: Squeaking doors
The doors you saw in several shots in the last posts can be opened, closed, slammed, locked, powered, unlocked, slid, rotated, et cetera. Not a surprise for a door, but we still needed some sounds accompanying these events. Sound composer Cesar offered his services two weeks ago, and quickly made sounds for the footsteps, these doors, and the elevator mentioned above. Things look better when they sound good.


Guns of Navarone
Diego is almost done modeling & texturing an old weapon. Not yet one you could hold and fire, but that is mainly because A: the weapon isn't animated yet, and B: we don't even have a player character that can hold it. You may remember the weird creature made of Minecraft brownies holding the flashlight in the very first movie. So yes, we already had animated players, doors and guns back then. Nevertheless, a lot of that has been discarded to be rebuilt *properly*. That means good models, good animations, and good extendable code.
Got to soften some normals and add a bit of love to the top part. And make it shoot, of course.


Bones & Joints ballet
As said, that gun requires warm hands to hold it, and thus also animations. A month ago Antonio joined to help us with the animations, so hopefully we'll have some better stuff than my Milkshape attempts from 2 years ago! That also means I'm making a new animation importer, so we can import files exported from Maya or Blender. So far I made a start with Collada files coming out of Blender, but it seems a few details are missing in those files... I hate writing file parsers.


A bit of this and that
Federico made some textures & objects, Julio is busy with a monster(!), Diego is cooking some pans and counter objects for a kitchen map, Colin wove a carpet and is now doing a shower map, and I myself made some extra corridor maps around the demo areas we're currently doing. Besides making demo movies, we also need a "playground" to test game mechanics (running from monsters, solving a puzzle, killing something) sooner or later.


Fake SSS
SSS is not the Super-SchutzStaffel; it stands for Sub-Surface Scattering. A technique to make surfaces semi-translucent (think of shining a flashlight behind your hand to see the edges of your fingers glow red). The Radar demo already implemented a sort of scattering for the ice effect, but this time we're looking for a more blurry, wax-like effect. Doing it "physically correct" is hard to achieve in graphics, as 3D rendering is typically not aware of material thickness and substructures. Luckily smart boys and girls have some tricks for that...


Improved SSAO
It’s just one of those things that has to happen once in a while. Removed some artifacts (made some other new ones) and made it work a bit better for normalMapped surfaces. Instead of just randomly picking neighbors around a pixel, it now actually shoots a bunch of rays into the scene, depending on the pixel normal. But instead of raymarching, the rays take a single step, so there is a chance they skip thin surfaces.
Don't expect awesome Ambient Occlusion, SSAO will always remain a hack, and hopefully we can throw it away one day. But until then...


UI
One of the upcoming demos contains interactivity with the environment, and also picking up stuff. It would be nice to show an inventory with that, or at least the typical "open door", "pick up spoon" or "kick the cat" symbols. We never really had someone working on the UI though, so I asked around a week ago. Luckily Federico knows a friend who did plenty of websites and (2D) design work. So, let's see where that leads to.

As often, the code to produce a UI has already been partially written, but I’m just waiting for it to be used by something good-looking. No, I don’t work with test dummies. I can’t put my scarce free time into coding “angry cubes” for monster A.I. or MS Paint backgrounds for a UI. I’d rather wait until an artist gives me company and we produce something good together.


Realtime GI
I almost don't dare to write anything about this, because each time you think you're close, either the performance or the visual end results (or both) are… “meh…. Paperboy pre-baked lighting was better” quality. A super complex 3D algorithm doesn’t automatically mean super cool graphics. Yet quite some time has been spent on my GI quest, because it's basically the most important missing link in the T22 graphics pipeline. Most of the shots you see use a simplified, "faked" ambient lighting setup, and I want to get rid of that. Either we make it realtime, or at least we implement a proper pre-baked solution (like most games do). At this point I'm learning OpenCL / Compute Shaders (I'll write about them soon), to see if I can implement something like Crassin & co did in their awesome Voxel Cone Tracing demo.


Sharpened graphic-knives
The shots lately may not show it, as the rooms are far from finished. But I did make some small but important changes. All those years, the DoF (Depth of Field) effect had a little bug, causing a small blur on *everything*. Well, see below. Aside from that, the HDR / tone-mapping process has been improved. Eyes adapt more to the environment, bloom-blur flickers less, and, also important, the colors are less washed out (closer to the original image colors) after tone-mapping.
It's a bit hard to see since JPEG compression already smudges stuff here, but those numbers on the telephone dial, for example, couldn't be read anymore from a meter away or more.


Prostitution
Hur? Seems we can't finish a demo movie by the end of this year. I could try to promise, but time flies even when you're not having fun. The main problem is, as usual, most people being busy. In order to get something playable, we need rooms, textures, monsters, items, objects, sounds, and so on. Especially making the 3D content just takes too long, mainly because the 3D guys are busy with other (freelance) work as well. Got to make some money in these harsh times!

The goal is to make more demo movies (which are also part of the actual game btw, so we fight on 2 fronts) to attract more 3D people. But... we still have to finish that movie first, right? Better quality movies attract more/better artists. Chicken / egg story. So we came up with another little thing that may boost productivity a bit: prostitution. I mean, selling some of the assets we make.

Here's the idea: some of the common stuff (furniture, plank textures, barrels, decorations, junk) that isn't directly an eye-catcher for T22 scenes can be sold by its creators at 3D webshops (such as the Unity3D asset store). The profit goes to the author, which hopefully brings T22 tasks higher on his/her priority ladder. In return, I get the object/texture (of course), and the sellable item will refer to this project on whatever website it's sold.



Got it all written down? Now you're up-to-date again ;) Wait. One more thing... just a typical computer / techworld thingie... Why oh why does EVERY soft/hardware manufacturer place a photo of a pretty smiling helpdesk girl? 95% of the people I spoke/wrote/saw related to technical products were older men, nerds, schoolboys, humorless cyborgs or transgender. The few remaining women usually didn't know what I was talking about and connected me through to a smart boy locked somewhere in the basement between oscilloscopes and short-circuited devices. Just wanted to say that. Stupid marketing tricks.
Fuck that, they never look that way when they visit us to promote their camera systems, hydraulic valves, software or touch-screens. I once worked at a company that used a pretty helpdesk girl on their website as well. I can assure you, I never saw her (or any other woman for that matter, except someone's mother bringing lunch).

Wednesday, October 31, 2012

3D survival guide for starters #3, NormalMapping

Hey, fatman! Just heard a funny fact that makes you want to buy a horror game like T22, if you're too heavy that is. Seems watching scary movies or playing horror games is pretty healthy: watching The Shining burned more than 180 kilocalories! Because of fear/stress, you eat less and burn more. So go figure... playing T22 on a home-trainer screen should be even more intense than Tony Little target training or Zumba dancing.


Normals
-----------------------
Enough crap, let's continue these starters-guide posts by explaining the world-famous "bumpMap". Or I should say "normalMap", because that is what the texture really contains: normals. But what exactly is this weird purple-blue looking texture? So far we showed several techniques to affect surface properties per pixel:
* albedo (diffuse) for diffuse lighting
* specularity for specular lighting
* shininess for specular spreading
* emissive for making pixels self-emitting

Another crucial attribute is the surface "normal". It’s a mathematical term you may remember from school. Or not. “A line or vector is called a normal if it's perpendicular to another line/vector/surface.” Basically it tells in which direction a piece of surface is heading. In 2D space we have 2 axes, X (horizontal) and Y (vertical). In 3D space we have -surprise- 3 axes: X, Y and Z. Your floor is facing upwards (I hope so at least), your ceiling normal is pointing downwards. If the Y axis means "vertical", the floor normal would be {x:0, y:+1, z:0}, the ceiling normal {x:0, y:-1, z:0}. And your walls would point somewhere in the -X, +X, -Z or +Z direction. In a 3D model, each triangle has a certain direction. Or to be more accurate, each vertex stores a normal, eventually bent a bit towards its neighbors to get a smoother transition. That is called "smoothing" by the way.

As explained in part 1, older engines/games did all the lighting math per vertex. I also showed some shader math to calculate lighting in part 2, but let's clarify a bit. If you shine a flashlight on a wall, the backside of that wall won't be affected (unless you have a Death Star laser beam). That's because the normal of the backside isn't facing towards your flashlight. If you shine light on a cylinder shape, the front will light up the most, and the sides of the cylinder will gradually fade away as their normals face further and further away from the flashlight. This makes sense, as the surface there catches fewer light photons. Lambertian (cosine) lighting is often used in (game) shaders to simulate this behavior:
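In shader code that term is typically `max(0, dot(normal, lightDir))`; here it is as a tiny Python sketch so you can poke at the numbers yourself:

```python
# Lambertian (cosine) term: the dot product between the surface normal
# and the direction towards the light, clamped to zero so back-facing
# surfaces receive nothing.
def lambert(normal, light_dir):
    # both vectors assumed normalized (length 1)
    ndotl = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, ndotl)
```

A surface facing the light gets the full amount, a surface at 90 degrees or turned away gets zero, exactly like the cylinder example above.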

NormalMapping
-----------------------
Since we have relatively few vertices, per-vertex lighting is a rapid way to compute the scene lighting, but it doesn't allow a lot of geometrical detail/variation on your surfaces, unless you tessellate them (meaning: dividing them into a LOT of tiny triangles). So a football field made of a single quad (= 2 triangles, 4 corner vertices) would only calculate the lighting at 4 points and interpolate the lighting between them to get a smooth gradient. In this case, the entire soccer field would be more or less equally lit.

However, surfaces are rarely perfectly flat. A football field contains bumps, grass, sand chunks, holes, et cetera. Same thing with brick walls, or wood planks. Even a vinyl floor may still have some bumps or damaged spots. We could tessellate the 3D geometry, but we would need millions of triangles, even for a small room, to get sufficient detail. C'est pas possible. That's French for "screw it".

This is why old games drew shading relief into the (diffuse) textures. However, this is not really correct of course, as shadows depend on the lightsource locations. If you move a light from up to down, the shades on a wall should change as well. Nope, we needed something else... Hey! If we can vary diffuse, specularity and emissive attributes per pixel, then why not vary the normals as well?! Excellent thinking, chief. "BumpMapping" was born, and it has been implemented in various ways. The winning solution was "Dot3 normalMapping". As usual, we make yet another image, where each pixel contains a normal. Which is why the correct name is "normalMap" (not "bumpMap"). So instead of having a normal per vertex only, we now have a normal for each pixel (well, that depends a bit on the image resolution of course). So for a brick wall, the parts that face downwards will encode a matching normal value into this image, causing those pixels to catch less light coming from above.

Obviously, you can't draw multiple lighting situations into a diffuseMap. With normalMapping, this problem is fixed though. Below is a part of the brick normalMap texture:


Image colors
Now let's explain the weird colors you see in a typical normalMap. We start with a little lesson on how images are built up. Common image formats such as BMP, PNG or TGA have 3 or 4 color channels: Red, Green, Blue, and sometimes "Alpha", which is often used for transparency or masking. Each color channel is made of a byte (= 8 bits = 256 different variations possible), so an RGB image stores each pixel with 8 + 8 + 8 = 24 bits, meaning you have 256*256*256 = 16.777.216 different colors.

Notice that some image formats support higher color depths. For example, if each color channel gets 2 bytes (16 bit) instead of 1, the image size would be twice as big, and the color palette would give 281.474.976.710.656 possibilities. Having so many colors won't be useful (yet), as our current monitors only support 16 million colors anyway. Although "HDR" monitors may not be that far away anymore. Anyway, you may think images are only used to store colors, but you can also use them to store vectors, heights, or basically any other numeric data. Those 24 bits could just as well represent a number. Or 3 numbers, in the case of normalMapping. We "abuse" the color channels:
Red color = X axis value
Green color = Y axis value
Blue color = Z axis value
About directional vectors such as these normals: they are stored as "normalized" vectors (also called "unit vectors"). That means each axis value is somewhere between -1 and +1, and the length of the vector must be exactly 1. If the length is shorter or longer, the vector isn't normalized (a common newbie mistake when writing light shaders).

You can forget about "normalized" vectors for now, but it’s important you understand how such a vector value is converted to an RGB color value. We need to convert each axis value (-1..+1) to a color channel (0..255) value. This is because we have 1 byte (8 bits) per channel, meaning the value can be 0 to 255. Well, that is not so difficult:
((axisValue + 1) / 2) * 255
((-1 +1) / 2) * 255 = 0 // if axis value was -1
(( 0 +1) / 2) * 255 = 128 // if axis value was 0
((+1 +1) / 2) * 255 = 255 // if axis value was +1
Do the same trick for all 3 channels (XYZ to RGB):
color.rgb = ((axisValue.xyz + {1,1,1}) / {2,2,2}) * {255,255,255}
Let's take some examples. Your floor, pointing upwards would have {x:0, y:+1, z:0} as a normal. When converting that to a color, it becomes
red = ((x:0 + 1) / 2) * 255 = 128
green = ((y:+1 + 1) / 2) * 255 = 255
blue = ((z:0 + 1) / 2) * 255 = 128
A bright greenish value (note that red and blue are half-gray values, not 0). If your surface faces in the +X direction, the value would be bright red. If it faces in the -X direction, it would be a dark green-blue value (no red at all).
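The packing and its inverse in a few lines of Python, matching the formulas above. Note the round trip is only accurate to about 1/255, which is why 8-bit normalMaps look slightly quantized:

```python
# Pack a (-1..+1) axis value into a (0..255) byte, and unpack it again.
def encode(axis_value):            # -1..+1  ->  0..255
    return round(((axis_value + 1) / 2) * 255)

def decode(byte_value):            # 0..255  ->  -1..+1
    return (byte_value / 255) * 2 - 1
```

Feeding the floor normal {0,+1,0} through `encode` gives exactly that bright green (128, 255, 128) texel.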



Obviously, normalMaps aren't hand-drawn with MS Paint. Not that it's impossible, but it would mean you'd have to calculate the normal for each pixel by hand. That's why we have normalMap generators: software that generates (approximates) normals out of a height image, or by comparing a super detailed (high-poly) model with a lower detailed (game) model. The high-poly model contains all the details, such as scratches, skin bumps or wood relief, as real 3D geometry. Usually programs like ZBrush or Mudbox are used to create those super high polygon models. Since we can't use those models in the game, we make a low-poly variant. When we compare the shapes, a normalMap can be extracted from the difference. Pretty awesome, right?

Either way, once you have a normalMap, shaders can read these normals. Of course, they have to convert the "encoded" color back to a directional vector:
pixel.rgb  = textureRead( normalMap, texcoords );
// ! note that texture reads in a shader return colors
//   in the (0..1) range instead of (0..255),
//   so we only have to convert from (0..1) to (-1..+1)
normal.xyz = {2,2,2} * pixel.rgb - {1,1,1}

You are not ill; you're looking at a 3D scene drawing its normals. This is often used for debugging, to check if the normals are right. If you paid attention, you should know by now that greenish pixels face upwards, reddish ones in the +X, and blueish ones in the +Z direction. You can also see the normals vary a lot per pixel, leading to more detailed lighting, thanks to normalMap textures.


Why is the ocean blue, and why are normalMaps purple-blue? > Tangent Space
--------------------
All clear so far? Good job. If not, have some coffee, play a game, eat a steak, and read it again. Or just skip this if you don’t give a damn about further implementations of normalMapping. With the knowledge so far, a wood floor normalMap would be a mainly greenish texture, as most normals point upwards. Only the edges or big wood nerves would be reddish or blueish / darker. However, it seems all those normalMaps appear purple/blueish (see the bricks above). Why is that? The problem is that we don't always know the absolute “world” normals beforehand. What if we want to apply the same floor texture on the ceiling? We would have to invert the Y value in a different texture. If we wouldn't do that, the lighting becomes incorrect, as the shader still thinks the ceiling is facing upwards instead of downwards. And how about animated objects? They can rotate, tumble and animate in all kinds of ways. There is an infinite number of possible directions a polygon can take.

Instead of drawing a million different normalMaps, we make a "tangent-space normalMap". What you see in these images is the deviation compared to the overall triangle normal. Huh? How to explain this easily… All pixels with a value of {128,128,255} -yes, that blue-purple color- indicate a normal of {0,0,+1}. This means the normal is exactly the same as its parent triangle (vertex) normal. As soon as the color starts “bending” a bit (less blue, more or less green/red), the normal bends away from its parent triangle. If you look at the bricks, you’ll see the parts facing forward (+Z direction), along with the wall, are blue-purple. The edges of the bricks and the rough parts start showing other colors.


Ok, I know there are better explanations that also explain the math. What’s important is that we can write shaders that do tangent normalMapping. In combination with these textures, we can paste our normalMaps on all surfaces, no matter what direction they face. You could rotate a barrel or put your floor upside down; the differences between the per-pixel normals and their carrier triangle normals will remain the same.

It’s also important to understand that these normals aren’t in “world space”. In absolute coordinates, “downwards” would be towards the core of the earth. But “down” (-Y = less green color) in a tangent-space normalMap doesn’t have to mean the pixel will actually look down. Just flip or rotate the texture on your wall, or put it on a ceiling: -Y would point in a different direction. A common mistake is to forget this, and try to compare the “lightVector” (see the pics above, or the previous post) calculated in world space with this tangent-space normal value. To fix this problem, you either have to convert the tangent-space normal to a world normal first (that’s what I did in the monster scene image above), or you convert your lightVector / cameraVector to tangent space before comparing them with the normal to compute diffuse/specular lighting.
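The "convert to world space" route from the paragraph above, sketched with plain tuples. A real shader builds this TBN basis from the pre-calculated tangents in the geometry; the function name is made up for illustration:

```python
# Move a tangent-space normal into world space with a TBN basis
# (Tangent, BiTangent, Normal), so it can be compared against a
# world-space light vector.
def tangent_to_world(n_tangent, tangent, bitangent, normal):
    # world = T * n.x + B * n.y + N * n.z  (the columns of the TBN matrix)
    return tuple(
        tangent[i] * n_tangent[0] +
        bitangent[i] * n_tangent[1] +
        normal[i] * n_tangent[2]
        for i in range(3)
    )
```

Run the "flat" texel {0,0,+1} through a floor's basis and out comes the floor normal "up"; run it through a ceiling's basis and out comes "down". Same texture, correct lighting on both.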

All in all, tangent NormalMapping requires 3 things:
• A tangentSpace normalMap texture
• A shader that converts vectors to tangentspace OR normals to World space
• Pre-calculated Tangents and eventually BiTangents in your 3D geometry, required to convert from one space to another.

Explaining how this is done is a bit out of the scope of this post; there are plenty of code demos out there. Just remember: if your lighting is wrong, you are very likely comparing apples with oranges. Or world space with tangent space.
When putting it all together, things start to make sense. Here you can see the different attributes of pixels: diffuse (albedo) colors, specularity, normals. And of course what kind of lighting results they lead to. All that data is stuffed in several textures and applied to our objects, walls, monsters, and so on.


NormalMaps conclusion
-----------------------------------
NormalMaps are widely used, and they will stick around for a while I suppose. They won't be needed anymore once we can achieve real 3D geometry accurate enough to contain all those little details. Hardware tessellation is promising, but still too expensive to use on a wide scale. I should also mention that normalMaps have their limitations. First of all, the actual 3D shape still remains flat. So when shining a light on your brick wall, or whatever surface, you'll see the shading nicely adapt. But when looking from the side, the wall is still as flat as a breastless woman. Also, the pixels that face away from the lightsources will shade themselves, but won't cast shadows on their neighbors. So normalMapping only affects the lighting partially. Internal shadow casting is possible, but requires some more techniques. See heightMaps.

So what I'm saying is: normalMaps certainly aren't perfect. But with the lack of better (realtime) techniques, you'd better get yourself familiar with them for now.


I thought this would be the last post of these "series", but I still have a few more techniques up my sleeve. This post is already big and difficult enough though, so let's stop here before heads start exploding. Don't worry, I'll finish next time with a shorter, and FAR easier post ;)

Tuesday, October 16, 2012

3D survival guide for starters #2

* Note. If any of you tried to contact us via the T22 website "Contact page", those mails never arrived due to some security glitch. It should work again. Apologies!!

Let's continue this "beginner graphics tour". Again, if you made shaders or a lot of game-textures before, you can skip. But keep hanging if you want to know more about how (basic) lighting works in games, and what kind of textures can be used to improve detail.

Last time we discussed how the classic "texture" is applied on objects. Basically it tells which colors appear on an object, wall, or whatsoever. To add a bit of realism, that color can be multiplied with the color from lightsources (sun, flashlight, lamps, ...) that affect this pixel. Being affected means the lightrays can "see"/reach this pixel directly or indirectly (via a bounce on another surface). A very crude lighting formula would go:
...resultColor = texturePixelColor * (light1Color + light2Color + ... + lightNcolor)

Thus we sum up all incoming light, and multiply it with the surface color (usually fetched from a texture). If there are no lights affecting the surface, the result appears black. Or we can use an ambient color:
...resultColor = texturePixelColor * (light1Color + light2Color + .... + lightNcolor + ambientColor)
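In plain Python, that crude formula could look like this (a hypothetical sketch with RGB tuples in the 0..1 range, not actual engine code):

```python
def shade(texture_color, light_colors, ambient=(0.0, 0.0, 0.0)):
    """Crude lighting: sum all incoming light, multiply with the surface color."""
    total = list(ambient)
    for light in light_colors:
        for i in range(3):
            total[i] += light[i]
    return tuple(texture_color[i] * total[i] for i in range(3))

# No lights and no ambient -> the surface appears black:
print(shade((1.0, 0.2, 0.2), []))  # (0.0, 0.0, 0.0)
# A dim white lamp plus a dim red lamp on a reddish surface:
print(shade((1.0, 0.2, 0.2), [(0.4, 0.4, 0.4), (0.3, 0.0, 0.0)]))
```

Notice how the second lamp only brightens the red channel; summing first and multiplying afterwards is exactly what the one-line formula above says.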

This formula is simple, but checking if & how much a light affects a surface is a whole different story. It depends on distance, angles, surroundings, obstacles casting shadows, and so on. That's why old games usually had all this complex stuff pre-calculated in a so-called lightMap: just another texture being wrapped over the scene and multiplied with the surface textures. The big problem of having it all pre-computed is that we can't change a light afterwards. So, old games had static lights (= not moving or changing colors), no shadows on moving objects, and/or only a very few physically incorrect lights that would be updated realtime.

Shadows? Anyone? The little lighting around the lamps is pre-baked in low resolution lightMaps. Halflife 1 was running on an enhanced Quake1 engine btw.

When shaders introduced themselves in the early 21st century, we programmers (and artists) suddenly had to approach lighting in a more physically correct way. Shaders, by the way, are small programs running on the videocard that calculate where to place a vertex, or how a certain pixel on the screen should look (color/value/transparency). With more memory available and videocards being able to combine multiple textures in a flexible, programmable way, a lot of new techniques popped up. Well, not always new really. The famous "bumpMapping" for example was already invented in 1978 or so. But computers simply didn't have the power to do anything with it (realtime), although special 3D rendering software may have used it already. Think about Toy Story (1995).

Anyway, it caused several new textures to be made by the artists, and also forced us to name our textures in a bit more physically-correct way. The good old "texture" on objects/walls/floors suddenly became the "diffuseTexture", or "albedoTexture". Or "diffuseMap" / "albedoMap". Why those difficult words? Because the pixels inside those textures actually represent how the diffuse light should reflect: the "albedo property" of your material. Don't worry, I'll keep it simple. Just try to remember your physics teacher telling about light scattering. No?


The art of lighting

When lightrays (photons) fall on a surface, a couple of things happen:
- Absorption:
A part of the photons gets absorbed (black materials absorb most).
- Specular reflection:
Another portion reflects on the surface "normal". In the pic, the normal is pointing upwards. This is called "specular lighting". The "specularity property" tells how much a material reflects. Typically hard polished surfaces like metals, plastics or mirrors have a relatively high specularity.
- Diffuse scattering:
The rest gets scattered in all directions over the surface hemisphere. This is called "diffuse lighting". The "albedo property" of a material tells how much gets scattered. So high absorbing (dark) surfaces have a low albedo. The reason why it scatters in all directions is the roughness of the material (on a microscopic level).
In other words, photons bounce off a surface, and if they reach your eye, you can see it. The amount of photons you receive depends on your position, the position and intensity of the lightsource(s), and the albedo + specular properties of the material you are looking at. The cones in your eyes convert that to an image, or audio waves if you are tripping on something. So, your texture -now called albedo or diffuseMap- tells the shader how much diffuse light (and in which color) it should bounce to the camera. Based on that information, the shader calculates the resulting pixel color.
diffuseLight = sum( diffuseFromAllLights ) * material.albedoProperty;
specularLight = sum( specularFromAllLights ) * material.specularProperty; 
pixelColor = diffuseLight + specularLight + material.Emissive

// Notice that indirect light (GI) is also part of the diffuse- and specular light
// However, since computing all these components correctly is expensive, GI is usually added
// afterwards. Could be a simple color, could be an approximation, could be anything…
pixelColor += areaAmbientColor * material.albedoProperty * pixelOcclusion;
This stuff already happened in old (90ies era) games, although on a less accurate level (and via fixed OpenGL/DirectX functions instead of shaders). One of the main differences was that old graphics-engines didn't calculate the diffuse/specular lighting per screen pixel, but per vertex. Why? Because it's less work, that's why. Back then, the screen resolution was maybe 640 x 480 = 307,200 pixels. That means the videocard (or still the CPU back then) had to calculate the diffuse/specular for at least 307,200 pixels. Likely more: where objects overlap each other, the same pixel gets overwritten and requires multiple calculations. But what if we do it per vertex? A monster model had maybe 700 vertices, and the room around you even less. So a room filled with a few monsters and the gun in your hands had a few thousand vertices in total. That is way less than 307,200 pixels. So, the render-engine would calculate the lighting per vertex, and interpolate the results over each triangle between its 3 corner-vertices. Interpolation is a lot cheaper than doing light math.
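Putting rough numbers on that trade-off (using the estimates from the paragraph above, nothing measured):

```python
# Per-pixel vs per-vertex lighting workload, back-of-the-envelope style.
pixels   = 640 * 480          # every screen pixel needs a lighting computation
vertices = 3 * 700 + 1500     # say 3 monsters of ~700 vertices, plus room & gun
print(pixels, vertices, pixels // vertices)  # 307200 3600 85
```

Roughly 85 times fewer lighting computations per frame; that is why vertex-lighting survived as long as it did.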


DiffuseLighting & DiffuseMaps
--------------------------------
I already pointed it out more or less, but let's see if we get it (you can skip the code if you are not interested in the programming part btw). Pixelcolors are basically a composition of diffuse, specular, ambient and emissive light. At least, if your aim is to render "physically correct". Notice that you don't always have to calculate all 4 of them. In old games materials rarely had specular attributes (too difficult to render nicely), ambientLight was a simple fake, and very few materials are emissive (= emitting light themselves). Also in real life, most light you receive is the result of diffuse lighting, so focus on that first.

We (can) use a diffuse- or albedo image to tell the diffuse reflectance color per pixel. The actual lighting math is a bit more complicated, although most games reduced it to a relatively simple formula, called "Lambertian lighting":
// Compute how much diffuse light 1 specific lamp generates
// 1. First calculate the distance and direction vector from the pixel towards
// the lamp.
float3 pixToLightVector = lamp.position - pixel.position;
float  pixToLightDistance = length( pixToLightVector ); 
       pixToLightVector = normalize( pixToLightVector ); // Make it a direction vector
// 2. Do Cosine/Lambert lighting with a dot-product.
float diffuseTerm = dot( pixel.normal, pixToLightVector );
      diffuseTerm = max( 0, diffuseTerm ); // Values below 0 get clamped to 0

// 3. Calculate attenuation
// Light fades out after a distance (because lightrays scatter)
// This is just a simple linear fall-off function. Use sqrt/pow/log to make curves
float attenuation = 1 - min( 1, pixToLightDistance / lamp.falloffDistance );

// 4. Shadows (not standard part of Lambert lighting formula!!)
// At this point (in this shader) you are not aware if there are obstacles 
// between the pixel and the lamp. If you want shadows, you need additional 
// info like shadowMaps to compute whether the lamp affects the pixel or not.
float shaded = myFunction_InShadow( ... ); // return 0..1 value

// 5. Compose Result
lamp1Diffuse = (diffuseTerm * attenuation * shaded) * lamp1.color;

// ... Do the same thing for other lights, sum the results, and multiply 
// with your albedo/diffuse texture


Want to learn programming this? Print this picture and hang it on the toilet so you can think about it each time when pooping. Works like a charm.

You don't have to understand the code really. Just realise that your pixels get lit by a lamp if:
A: The angle between the pixel and lamp is smaller than 90 degrees (dot)
B: The distance between pixel and lamp is not too big (attenuation)
C: Optional, there are no obstacles between the pixel and the lamp (casting shadows)

One last note about diffuseMaps. If you compare nowadays textures to older game textures, you may notice the modern textures lack shading and bright highlights. The texture is more opaque... because it's a diffuseTexture. In old games, they didn't have the techniques/power to compute shadows and specular highlights in realtime, so instead they just drew ("pre-baked") them into the textures to add some fake relief. Nowadays many of those effects are calculated on the fly, so we don't have to draw them anymore in the textures. To some extent, it makes drawing diffuseMaps easier these days. I'll explain in the upcoming parts.



SpecularLighting & SpecularMaps
--------------------------------
Take a look back at one of the first pictures. Diffuse light scatters in all directions and therefore is view-independent. Specular lighting on the other hand is a more concentrated beam that reflects on the surface. If your eye/camera happens to be at the right position, you catch specular light, otherwise not. Specular lighting is typically used to make polished/hard materials “shiny”.

Again, this is one of those techniques that came alive along with per-pixel shaders. Specular lighting has been around for a long time (a basic function of OpenGL), but it wasn't used often in pre-2000 games. A common 3D scene didn't (and still doesn't) have that many triangles/vertices. Since older lighting methods were interpolating between vertices, there was a big lack of accuracy. Acceptable for diffuseLighting maybe, but not for specular highlights that are only applied very locally. With vertex-lighting, specular would appear as ugly dots. So instead, games used fake reflection images with some creative UV mapping ("cube or sphere mapping") to achieve effects like a chrome revolver. Nowadays, the light is calculated per pixel though, so that means much sharper results.

If you look carefully, you can literally count the vertices on the left picture. Also notice the diffuse light (see bottom) isn't as smooth, although that's less noticeable.

That gave ideas to the render-engine nerds... So we calculate light per pixel now, and we can vary the diffuse results by using a diffuse/albedo texture... Then why not make an extra texture that contains the specularity per pixel, rather than defining a single specular value for the entire object/texture? It makes sense, just look at a shiny object in your house, a wood table for example. The specularity isn't the same for the entire table. Damaged or dirty parts (food / dust / fingerprints / grease) will alter the specularity. Don't believe me? Put a flashlight on one end of the table, and place your head at the other end. Now smear a thin layer of grease on the middle of the table. Not only does it reduce the specularity compared to polished wood, it also makes the specular appear more glossy/blurry, as the microstructure is more rough. You can only see this btw if the flashlight rays exactly bounce into your eyes. If you stand above the table, the grease may become nearly invisible. That's why we call specular lighting "view dependent".

Dirty materials, or objects composed of multiple materials (old fashioned wood TV with glass screen and fabric control panel) should have multiple specular values as well. One effective way is to use a texture for that, so we can vary the specularity per pixel. Same idea as varying the diffuse colors in an ordinary texture (albedo / diffuseMap). A common way to give your materials a per-pixel specularity property is to draw a grayscaled image where bright pixels indicate a high specularity, and dark pixels a low value. You can pass both textures to your shader to compute the results:
float3 pixToLightVector   = lamp.position - pixel.position;
float  pixToLightDistance = length( pixToLightVector ); 
       pixToLightVector   = normalize( pixToLightVector );
float3 pixToEyeVector     = normalize( camera.Position - pixel.position );

// Do Cosine/Lambert lighting
float diffuseTerm = dot( pixel.normal, pixToLightVector );
      diffuseTerm = max( 0, diffuseTerm );
float3 diffuseLight = diffuseTerm * lamp.color;

// Do Blinn Specular lighting
float3 halfAngle  = normalize( pixToLightVector + pixToEyeVector );
float  blinn      = max( 0, dot( halfAngle, pixel.normal.xyz ) );
float  specularTerm = pow( blinn , shininessPowerFactor );
float3 specularLight= specularTerm * lamp.color; // eventually use a 2nd color

...apply shadow / attenuation...

// Fetch the texture data
pixel.albedo       = readTexture( diffuseMap, uvCoordinates  );
pixel.specularity  = readTexture( specularMap, uvCoordinates );

diffuseLight  *= pixel.albedo;
specularLight *= pixel.specularity;
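The Blinn half-angle trick from the shader above, as a small Python sketch (made-up helper name; the direction vectors are assumed to point away from the pixel, towards the light and the eye):

```python
import math

def blinn_specular(pixel_normal, to_light_dir, to_eye_dir, shininess):
    """Blinn specular term for one light, no attenuation/shadowing."""
    # Half-angle vector between the light- and view direction
    half = [to_light_dir[i] + to_eye_dir[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in half))
    half = [c / length for c in half]                        # normalize
    blinn = max(0.0, sum(half[i] * pixel_normal[i] for i in range(3)))
    return blinn ** shininess

# Light and eye both straight above the surface: the half-angle vector
# equals the normal, so the specular term hits its maximum of 1.0.
print(blinn_specular((0, 0, 1), (0, 0, 1), (0, 0, 1), 64))  # 1.0
# Eye at a grazing angle instead: the term collapses to (nearly) nothing.
print(blinn_specular((0, 0, 1), (0, 0, 1), (1, 0, 0), 64))
```

This is the view-dependency from the table experiment in code form: move the eye away from the reflection direction and the highlight disappears.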

Notice that this is not truly physically correct lighting; it's an approximation suitable for realtime rendering. Also notice that the method above is just one way to do it. There are other specular models out there, as well as alternate diffuse models. Anyway, to save some memory and to make this code a bit faster, we can skip a "readTexture" by combining the diffuse- and specularMap. Since the specularity is often stored as a grayscale value, it can be placed in the diffuseMap alpha channel (resulting in a 32-bit texture instead of 24-bit). The code would then be:
// Get pixel surface properties from vertices and 2 textures
float4 rgbaValue   = readTexture( diffuseSpecMap, uvCoordinates );
pixel.albedo       = rgbaValue.rgb;
pixel.specularity  = rgbaValue.a;

However, some engines or materials choose to use a colored specularMap, and optionally use its alpha channel to store additional "gloss" or "shininess" data. This is a coefficient that tells how sharp the specular highlight will look. A low value will make the specular spread wider over the surface (more glossy), while high values make the specular spot very sharp and narrow. Usually hard polished materials such as plastic or glass have a high value compared to more diffuse materials.
spec = power( reflectedSpecularFactor, shininessCoefficient )
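A quick way to get a feel for that coefficient (plain Python; 0.9 stands for a reflection slightly off the perfect mirror angle):

```python
# The same slightly-off-mirror reflection factor (0.9), raised to different
# shininess exponents. High exponents kill off-angle light quickly, giving
# a small sharp highlight; low exponents leave a wide glossy one.
for shininess in (2, 8, 32, 128):
    print(shininess, round(0.9 ** shininess, 4))
```

At an exponent of 2 the off-angle pixel still glows at 0.81, while at 128 it is effectively black; that is the whole glossy-versus-sharp difference in one `pow` call.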



EmissiveLighting & EmissiveMaps
--------------------------------
This one is pretty easy to explain. Some materials emit light themselves. Think about LEDs, neon letters, or computer displays. Or radioactive zombies. When using diffuse- and specular light only, pixels that aren't affected by any light will turn black. You don't want your TV appear black in a dark room though, so one easy trick is to apply an EmissiveMap. This is just another texture that contains the colors that are emitting light themselves. Just add that value to the result, and done. The shader code from above would be expanded with
resultRGBcolor = blabla diffuse blabla specular blabla ambient
resultRGBcolor = resultRGBcolor + textureRead( emissiveMap, uvCoordinates );
Notice that emissiveMapping is just another fake though. With this, your TV screen will stay visible even in dark rooms. And it may also get reflected and cause a "bloom" (blur) around the screen in modern engines. BUT, it does not actually emit light onto other surfaces! It only affects the pixels you apply this texture on, not the surrounding environment... unless you throw some complex code against it, but most games just place a simple lightsource inside the TV, or ignore it completely.

The charming man on the TV and the lamp in the background use emissive textures. However, the TV does not really cast light into the scene (other than the blurry reflection on the ground).



I could go on a bit longer, but let’s take a break. I remember when I had to read this kind of stuff ~7 years ago… Or well, probably I didn’t really read it. My head always starts to hurt if I see more than 3 formulas with Sigma and 10th order stuff. But don’t worry, just go out and play, you’ll learn it sooner or later, and documents like this will confirm (or correct) your findings. The next and last part will reveal the famous “bumpMapping” technique, plus a few more texture techniques that can be used in modern games. Ciao!

Thursday, October 4, 2012

3D survival guide for starters #1

Ever been discussing new games with friends? Are you one of those guys/girls that go like "yeah yeah yeah, that's r-r-right" (what’s the name of that green annoying cartoondog?), once a friend mentions EngineX looks even better due to parallax-normal-displacement-crunch-mapping, while you have no clue what the hell he is talking about? Are you one of those aggressive game-forum guys, arguing about whether the Xbox or PS3 looks better, but never really understanding your own arguments? Or maybe you are just curious what those other nerds are talking/arguing about, when they drop fancy-pants words like “specular-map”? If you are already a 3D artist or did shader programming, you can safely skip, but otherwise this and 1 or 2 upcoming posts may make you sound even cooler on those forums. Or well, at least you know what you’re talking about then.
Not a superb shot, this unfinished room, but can you decipher the techniques being used here?

There's a lot of shit arguments on the internet all right. For some reason, boys want to dominate digital discussions by saying difficult tech-terms, like a dog marks territory by pissing against poles. And the funny thing is, a lot of arguments are based on "something they heard", and then misinterpreted. I remember the “Next-Gen” vibe 5 years ago. New technologies are treated like magical ingredients only possible on platform-X, but the truth is that any modern platform can do all tricks, as long as the programmers are smart enough and the platform fast enough to handle it realtime (meaning at least 25 times per second). A Wii can do Parallax-Occlusion-Mapping or realtime G.I. too, but at such a large cost it's unlikely to be implemented in a game. If you really want to compare game consoles on visual capabilities, then look at the videocards, available memory, and shader instruction sets.

Either way, good graphics start with five main components:
A: Proper Lighting
B: Proper Texturing
C: 3D models / worlds
D: Technicians & Hardware making it possible
E: Creative people making good use of A,B,C and D
How you do it doesn't matter, until you start meeting the platform limitations (which is pretty soon, usually). These posts focus on the crucial component “Texturing”, and the technology behind it. Oh, for starters, with a texture we mean the images being pasted on 3D models. High-end engines can still look like shit when using bad textures. Low-end engines can still look good when using good textures. Usually when we develop a new Tower22 environment, the first versions look pretty bad. The same happens when an amateur makes a map in UDK or whatever powerful engine. Techniques such as shadows and reflections mask the ugliness a bit, but in the end it's still like smearing expensive make-up on a pig. Then on the other hand, games such as Resident Evil 4 (Gamecube/PS2), God of War (PS2) or Mario Galaxy (Wii) still look good, while their engines really weren't exactly cutting-edge technical miracles. Not even back then, compared with PC engines such as Source (Halflife2), CryEngine (Farcry) or IdTech (Doom3/Quake4).
How your grandpa used to Render:
Ok, but how does it work? Probably you heard about bumpMaps and stuff, but maybe you have no idea what they really do. Let's just start with the very basics then. Remember Quake1 (1996)? That was one of the first (if not the first commercial) true 3D games, where the worlds were made of 3D polygon models, using textures to decorate the walls and objects... Or maybe it wasn’t the first 3D game actually. The Super Nintendo already had a couple of games using the Super FX chip, like Star Fox and Stunt Race FX. These games rendered flat (animated) pictures called "sprites", and simple 3D geometry shapes (triangles, cubes, floor-planes) with a certain color. The space ships for example were a bunch of gray and blue triangles, with an orange triangle as a booster. Not much detail of course, but some of those triangles even carried an image to give them some more detail. See the cockpit above.

And also Doom, Hexen, Wolfenstein and Duke Nukem were sort of 3D (“2.5D”) of course, although the technology used back then is way different from what Quake1 used, which is still the footprint for nowadays games. The way Wolfenstein rendered actually looks more like how a raycaster works; each screen pixel (and the screens didn’t have many pixels back then, fortunately) would fly away from the camera, and see where it would intersect a wall, floor or ceiling. Because of the limited CPU power, the collision detection had to be fast of course, and that explains the very simple level design. But what Wolfenstein also already did, was texturing its walls. Depending on where the screen-ray intersected, the renderer would pick a pixel from the image applied on that wall.
Although old 2.5D rasterizers don't really compare to current techniques, the texture-mapping (texture-coordinates, UV-mapping, whatever they call it) is founded on the same principles.



Texturing in the 21 century
Texturing basically means to "wrap" a 2D image over a (3D) model. Typical game data therefore contains a list of 3D models and image files, paired together. Aside from having triangles, the 3D models also tell how to map the image on them, by giving so-called “Texture” or “UV” coordinates. As for the textures, those are just images you can draw in MS Paint or Photoshop really. Nothing special. Although games often use(d) compressed files to save memory. The SNES didn't have enough memory & horsepower to texture each and every 3D shape, so most objects in those 3D games just used 1 or only a few colors.

You can do the math; an average jpeg photo from your camera or phone already takes a few megabytes. In other words, a single photo is larger than the total storage capacity of a SNES cartridge! But if you make a tiny image with only a few colors, you can save quite a lot though. Save a 32x32 pixels image as a 16-color bitmap, and it will only take 630 bytes = 0.62 KB = 0.0006 MB. Now that's more like it. Only problem was, the SNES still only had super little RAM memory, a matter of (64?) kilobytes. So it could only keep a few things active in its work-memory at a time.
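Checking that 630-byte figure: assuming a classic uncompressed 16-color BMP layout (a 54-byte header, a 16-entry palette of 4 bytes each, and 4 bits per pixel), the numbers add up exactly:

```python
# Where the 630 bytes of a 32x32, 16-color bitmap come from:
header_bytes  = 54            # standard BMP file + info header
palette_bytes = 16 * 4        # 16 palette entries, 4 bytes each
pixel_bytes   = 32 * 32 // 2  # 4 bits per pixel -> half a byte per pixel
print(header_bytes + palette_bytes + pixel_bytes)  # 630
```

Shrinking the palette is where the real win sits: at 16 million colors the same 32x32 image would need 3 bytes per pixel, six times the pixel data.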

Quake1 fully utilized texturing though. Every wall, floor, monster, gun or other object used a texture. Of course, PC's back then still had very limited memory and data bandwidth, so the textures were small and only had 256(or less?) color palettes. But for that time, it looked awesome. Either way, since Quake1 also became "Mod-able" for hobbyists at home, we ordinary gamers came in touch with editors, 3D models, sprites, UV coordinates, textures, and whatsoever. So, in resume:
- You make a 3D model (with 3D software such as Max, Maya, Lightwave, or game-map builders; those are often made by hobbyists, or provided with the game(engine) itself).
- A game model is a list of triangles. Each triangle has 3 corners, called "vertices". A vertex is a coordinate in 3D space, having an X, Y and Z value. It can carry more (custom) data, but we talk about that later.
- So, the 3D model you store is basically a file that lists a bunch of coordinates.
- An image is made for the object. Since 3D objects can be viewed from all directions, we need to unfold (unwrap) the 3D model all over the canvas.
- Each vertex(corner) also has a texture-coordinate (a.k.a. UV coordinate). These coordinates tell where the triangles are mapped on the 2D image.

The selected green triangles on the right are (2D) UV-mapped in the texture canvas on the left

That's how Quake1 worked, and that's how Halflife3 will still work (although... HL3 might not appear this century, who knows what technology we have then). So, basically this means all 3D objects will get their own texture. We also make a bunch of textures we can paste on the walls, floors and ceilings. Like putting wallpaper in your own house. Obviously, the texture quality goes hand in hand with the artist skills, and the dimensions of those textures. A huge image can hold a lot more detail than a tiny one. These days, textures are typically somewhere between 512x512 to 2048x2048 pixels, using 16 million colors. The file and RAM usage size varies somewhere between 0.7 and 4 MB for such textures, if not compressed.

The results also depend on how the UV coordinates were made. You can map a texture over a small surface, so the small surface receives relatively many pixels (but also repeats the texture pattern all the time). Or you stretch the texture over a wide surface, making it appear blurry and less detailed. This typically happened on large outdoor terrains in older games. Even if you have a 1000x1000 pixel texture, a square meter of terrain will still receive only 2x2 pixels if the terrain is about 500 x 500 meters big. Ugly blurry results. Games like the first Battlefield often fixed that by applying a second “detail texture” over the surface, which repeated a lot more times. Such a detail texture could contain the patterns of grass or sand for example.
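The terrain example in numbers (just a back-of-the-envelope check, not engine code):

```python
# Texel density: a 1000x1000 texture stretched over 500x500 meters of terrain.
texture_size_px  = 1000
terrain_size_m   = 500
texels_per_meter = texture_size_px / terrain_size_m
print(texels_per_meter)  # 2.0 -> only 2x2 texels per square meter, blurry mess
```

A detail texture tiled, say, every 2 meters would bring hundreds of texels back onto that same square meter, which is exactly why the trick works.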
We still use detail-mapping these days. The fine wood-grain structure on the closet is too detailed to draw in the closet texture. So we use a second "wood-grain" texture that repeats itself over the surface quite a lot of times.


Next time we crank up with lighting in (modern) engines and the kind of textures being used to achieve cool effects for that. For now, just remember textures are used to give color to those 3D models. Nothing new really, but an essential part of the process. As for the next X years in rendering evolution: Textures are here to stay, so deal with them. Keep your friends close, and your enemies closer -- Sun Tzu