Saturday, December 29, 2012

Overwatch: 2012

Heh, that last post wasn't scheduled, but I had a real urge to write that weird dream down that morning. What I really wanted to write before 2012 ends (on the normal calendar, not that stupid Mayan one) was sort of an overview.


Bummer, no new demo was released this year, nor was there any other spectacular news. Symptoms of yet another overambitious, slowly dying project? No, and those demos sure will come in 2013. Yet I can’t say there was a lot of progress this year. And if we as a team really want to make a game, or even a playable demo prototype, we need to step on it. More manpower, more commitment, and more sweat and direction from my side. Instead of just saying “when it’s done”, I think we owe the readers of this blog, and the gamers that would just like to see this horror game happen one day, insight into the development process. Because waiting sucks.

To be Busy, or not to be
---------------------------------------------
All right, what happened? The year started well with a new demo movie and unexpected attention from several game websites at the end of 2011. New people joined, and the plans for 2012 were made: doing more (conceptual) design & making yet another demo. But this time the demo should also be part of the actual game itself, so it can be used to make a start on the maps & gameplay implementation as well. As for the technical progress, I usually just make whatever is needed for a demo, varying from new graphical techniques to sound support or a UI. I’m the kind of guy that needs visual input to work with; I won’t just go coding an AI system without having a cool animated monster. An “event-driven” workstyle.


Sounds like the right ingredients. But even good plans, skills and personnel are still no guarantee you’ll accomplish your goals. The magic curse word this year, probably recognizable in any unpaid job: “(very) busy”. If you can’t help fix the neighbor's car because you really are busy, you say you’re busy. If you can’t help because you want to stay in bed, you text that you are busy. If you didn't do shit because you spent the last week beating Fifa 2012, you mail that you were busy. Or you don’t mail / call / reply at all, because you were too busy to do even that. “Busy” is like a Star Trek deflector shield to bounce off work. And obviously, when working with people on the other side of the world, it’s hard to verify whether the busy-argument is valid or not.

Most people I know aren’t as busy as they say/think. And yes, with multiple jobs, a family, friends and a house, I know a little bit what I’m talking about. And no, I don't consider myself very busy. “Priority management” is what’s really going on. Many people just prefer to spend their free time watching TV, playing games, doing sports, getting drunk, sleeping or whatever, rather than doing difficult stuff. Such as making game content. And of course, there is nothing wrong with that. Every person should do whatever he damn pleases in his free time. It would be different if I were paying salaries, but I’m not, so whether I like it or not, I’ll have to accept it. At the same time though, I wonder why people offer help if they’re really that busy.
Just a sketch for one of the rooms being made (and not finished yet) this year.

Sure, making time for a project is harder than it may sound. First, the quality bar is set pretty high, so the artists that join are usually talented… which means they also make their living with that talent. And since artists often work on a freelance basis, where the client wants his product ASAP, T22 drops to second place as soon as things get stressful. And maybe… maybe the artist wants to touch something other than a Wacom tablet when there finally are some free hours. That brings T22 down the priority ladder.

Another thing I have to understand is that T22 is not their baby project. In my case, T22 gets priority over TV, gaming, shaving, eating and sleeping, because it’s my favorite waste of time. But an outsider doesn’t have this bond with the project, of course. Most people that offer services would like to sharpen their 2D/3D skills, or just like horror games in general and found the T22 movie cool. But in order to work like a horse on something, you need to get triggered. A horse you can spank, but with humans it works a bit differently. Any idea why your boss keeps asking the same “when is it done?” questions over and over again? To remind you, to trigger you. By nature, humans are sort of lazy. If nobody guides you at work, you probably do the fun tasks first and delay the boring stuff for later. Or you just play Minesweeper until 5 o'clock. In the case of T22, I can’t spank, nor threaten to cut salaries or fire people. Triggering has to be based on giving fun, satisfying tasks. Something the artist can learn from or be proud of.

But that’s not so easy either. Like in any game, much of the T22 content consists of boring barrels, furniture, wallpapers, junk and corridors. It’s not that we’re making monsters, intestines and never-seen-before scenery all the time. And even then, making a game is not exactly instant satisfaction, as it requires a LOT of energy and patience. Even relatively simple assets can take hours before the mesh and textures are well done. And then you may still have to wait for others before your work can shine in a polished screenshot or movie. The audio guys made a bunch of awesome soundtracks recently, but it will take a while before they can hear them in a finished room. Not so motivating to keep making tracks. This is why smaller (indie) games are far more realistic to accomplish: satisfying results arrive faster, which is the fuel behind any unpaid project.

Then, last but not least, I think newcomers are often disappointed after a while. Quite a few people joined T22 this year, but more than half of them also left. You’re not stepping into a spectacular horror-ride. No, you have to help me push the car first. An ugly little car, stuck in the mud on a hill. Because the team is small and “busy”, there is little momentum. Nor are there 100%-coolness development kits such as UDK to start working with. And neither will you get access to the game story, or to more interesting positions such as lead artist. You will get access eventually, but first you have to prove yourself; we work with a “quarantine” system. Sadly, many people that join just aren’t motivated or skilled enough, or change their minds soon. So of course, I won’t reveal all the secrets until I can “trust” someone. Which is hard enough already in a virtual relationship. It would help a lot if we could meet and see each other. And I’m not talking about Skype, but about working at the same physical location. All these missing factors and the reasons given above degrade T22 further on a person’s priority ladder. Below visiting grandma on Sunday.
Though this is a UDK shot, Entropy & Vertex Painting and a bunch of textures like these were a good addition to the engine. They allow us to vary the surfaces, such as the worn brick spots here.


2013 Battle goals
---------------------------------------------
Right, that’s an explanation for the slow progress. But more interesting: what are we gonna do about it? Well, I have a bunch of plans, although they still require cooperation. You can plan all you want, but without help it’s still worth nothing. Therefore, I try to set a few simple-to-understand goals that should be doable within the next 6 months. Goals like “finish the game” are too vague, especially for newcomers that have no clue what the size and plans of your project really are. Instead, try to make goals the artist sees and thinks ”Shit, I can do that!”. And also, focus on priorities. Don’t plan too much. As said, a year is shorter than you think, and due to the fact that T22 will be relatively low on people’s priority ladders, it’s just not possible to finish as much as you would like. Changing plans all the time is a sign of bad management as well. Keep it real.

Below you will see the 2013 goals, divided over sub-teams. Yet one more reason for the slow progress is that we all have to wait for each other. So this year, I want to do things more in parallel. Put a few people on task A, a few others on B, et cetera. Anyhow:
3D Team A - Detail (Julio, Federico, Diego, Colin):
• Finish Demo4
• Continue with props & textures for Demo3 & more

3D Team B - Mapping (Jonathan, Rick, ...? ):
• Find 1 or 2 mappers to create empty maps on a more global level
• Make (global) floor1

3D Team C - Characters (Robert, Antonio, Julio, ...? ):
• Find 1 3D character modeler
• Finish Monster(s)
• Finish Player
• Animate them

2D / Design Team (Borja, Pablo, Federico, Diego, Rick, "UI" Pablo):
• Continue game design (drawing & discussion)
• Make mood drawings for each game section
• Floor 1 drawings
• Make a UI

Audio Team (David, Carsten, Cesar):
• Gun sounds / impacts
• Demo4 audio
• Monster sounds
• Player / UI sounds

Coding team (Rick):
• Upgrade Global Illumination lighting
• Upgrade (skeleton) animations system
• Enhance player controls
• Implement AI monster basics

Publishing ( Brian, Rick, others ):
• Publish Demo4
• Pascal Game Development magazine interview
• Open a WIP thread on Polycount
• Update website


2013 Prime Tactics
---------------------------------------------
That's about 2 to 4 goals per person. Of course, the goals above can be split further into subtasks, but I won’t give you the exact details as that would spoil things. But let me explain some of them. Most important, probably, is to fix the “Chicken & Egg” issue. We need talented & devoted boys/girls to get the 3D tasks done. It’s not that our current guys can’t make a room, player model or UI, but if they stay as busy as last year, it just won’t happen. So, looking for extra people makes sense. However, if you pay peanuts, you get monkeys. I don’t even have peanuts. The only way to reach such people is by making something really nice and publishing it wisely… but it requires talented people in the first place to create that “nice thing”.

What I’m saying is: the current team should try to finish Demo4 (which isn’t that big, but contains a few complicated assets). When it’s done, we publish it. Sure, we did that twice before as well. It delivered some extra manpower, but not as much as I hoped for. I think we need a more aggressive strategy this time if we really want to push forward. So:
• Pitch the demo to a popular games magazine in the Benelux (Netherlands / Belgium)
• Possibly do the same thing for Spain
• Article in the Pascal Gamer Magazine
• Start a WIP thread on typical 3D gathering sites such as Polycount
• Let the artists earn a bit of money by making some of their assets sellable in 3D webshops, meaning they can put some of their work in the Unity Asset Store, for example.
• Spread flyers above North Korea

Why focus on Spain and the Netherlands? Well, I’m Dutch, and 80% of the team are Spaniards. Not that we want to discriminate against the rest of the world, but I think it just works better if we have “clusters”. A few of the Spaniards live in or near Madrid and actually meet each other. That surely helps motivation, as they can talk to and help each other. A team isn’t just a bunch of people doing the same thing; there must also be an emotional bond with your colleagues. But how do you create one with people you’ve never seen, speaking a different language? Most of us can write English pretty decently, but making a joke or having an interesting conversation about anything other than T22 is hard. This isolates members from the team, as they mostly communicate only with me. Not exactly how a team should work.

So, creating sub-teams with geography in mind might work better to start with. Other than that, I just feel the Netherlands might welcome a project like this, as we don’t have a booming gamedev scene here. The other points mentioned above are also meant to get more attention. But again, it requires work from the artists first before we can paste interesting screenshots or drawings on forums or in a magazine.

Let’s see, what more was on the list? Making a UI. Which requires a UI artist, obviously. We actually found a guy who would like to help, but so far I haven't heard much. You know… busy. I’ll give it some more time, and otherwise we may have to look for someone “less busy”. Then we had “game design” and “making maps on a more global level”. What does that mean?


Further: “global mapping”. What the hell does that mean? We can keep making demos forever, but at some point we want something playable too. Not just for you, but also for ourselves, internally. It would surely boost morale if artists could see their stuff while playing the game, instead of receiving screenshots from me. But to get something playable, you need a bunch of maps, at least 1 or 2 monsters, a player, and game rules as well. Not too long ago, I started “Wiki tasks”. We have a private Wiki where I write about pretty much everything in the game: story elements, what a certain section or room should look like, whether the player can jump or not, and so on. Writing this is a dynamic process. Some elements have to be playtested before you can confirm them, and in other situations you may want concept art first in order to decide what looks hot or not. It would be cool if artists threw drawings and ideas into the mailbox on their own initiative, but as said, you need to trigger them. So, every one or two weeks I point a few of the artists to a specific Wiki page. Then we discuss that thing and eventually generate tasks out of it, such as making conceptual drawings. When done, I update the page with pictures and a final description. By basically forcing a task every week or two, the discussion forum stays a bit alive. Although… you know, busy.
Global mapping. It looks like shit, but the goal is not to make something good-looking (yet). The goal is to have a room you can walk through, use for the game, and decorate later.

Making maps on a more global level means that one or more guys should start making empty versions of the rooms, corridors, stairs, or whatever. In my experience, making the room itself isn’t that much work (even I can do it, as long as no super architectural tricks are used). What really eats time is making the textures, decals and props (furniture, devices, boxes, decorations, …). Even a simple room quickly generates dozens of props to make. So far I made a bunch of empty (demo) rooms, and then we tried to fill them. But filling takes so damn long, and the playground needed to actually play a game (= explore, get chased by a monster, solve a puzzle) stays very small. What if we focused a bit more on making this “playground” without beautifying it right away? It won’t produce nice screenshots for this blog, demo movies or magazines, but it does provide one of the most important ingredients of a playable game.

A nice side note I should make about that: making these maps isn’t so difficult… meaning that less experienced people could be “hired” for it. So far I filtered out quite a lot of offers, as I found the quality not good enough for T22. Sounds arrogant maybe, but I just don’t want ugly graphics, and many (beginning) artists fail at producing good textures. But obviously, that filters out most of the help I could get as well. Talented people have paid jobs, so they don’t notice T22, or only have very little time for it (you know… busy). Students on the other hand might be more motivated, as they want to learn a lot and have more time. Maybe their texturing or hi-poly skills aren’t good enough yet, but those skills aren’t needed that much for making the raw maps.


2012 achievements
---------------------------------------------
I feel this post is getting too long again. And maybe too negative. Let’s finish with the good stuff from 2012.
• Found several 3D guys. Unfortunately, most have left or are inactive (you know… busy), but guys like Federico and Diego are doing their best.
• Promoted Federico to a lead artist
He has skills, teaches 3D students in his daily life, and keeps in touch with me. That’s what we need.
• Found an animator, Antonio
Now I've got to make an FBX file importer (mainly to get the rig & animations). If you are interested in making an FBX importer DLL or writing export scripts for Maya / Blender, please contact me. I hate writing those.
• Found a UI artist, Pablo
• Cesar offered help on the audio
• David and Cesar produced several horror tracks recently
• Borja and Pablo joined and are making 2D artwork
• Started weekly Skype meetings
• Started the Wiki design discussions with Federico, Diego, Borja and Pablo
• Which delivered some nice plans for the T22 exterior, to name one thing
• Made most of the Demo3 and Demo4 environments, as well as some surrounding areas. Still got to beautify and fill them though.
• Made assets such as floor and carpet textures, a (movable) elevator, some furniture and closets, and a gun.
• Implemented an API and entity system, which allows us to freely program interactive objects such as devices, guns, monsters, doors, or whatever you can operate.
• Improved SSAO, added RLR reflections, improved HDR coloring & eye adaptation, and fixed a really annoying bug that has been making the graphics slightly blurry for the past years.
• Implemented AVI (video) file support to make animated textures
• Implemented Compute Shader (OpenCL) support
• Implemented a sort of Voxel Cone Tracing for GI lighting
• A whole lot of other little optimizations and bug fixes


All in all, it’s not that we slept through the whole year. But for the show, only finished products matter. Hopefully we can finish what we started soon in 2013, and get the extra nitro injection to lift the project to where it should be.
My personal biggest achievement of this year (and to be finished in 2013): realtime GI. Spent hundreds of fucking hours on it. The GI topic has come by several times already, but so far it never resulted in a technique I really liked. Too slow, too ugly, too blehh. But this time, I think we're finally getting somewhere. To be continued...

Saturday, December 22, 2012

Waiting room of Death

Inside a medium-sized hospital room with large windows at its front, we were observing them from a small distance. A few rows of old couples were sitting next to each other in armchairs, behind some sort of wooden cabinet with a monitor in it. Embracing each other, calmly waiting for Eternal Sleep to gently lift them, nearly simultaneously, out of their bodies. Nurses, young girls with a life still ahead of them, were quietly walking between the chairs. Hands on their backs, sometimes stopping for a second to look at the monitors or to make sure their patients were comfortable. Sweet whispers between couples would fade out as they peacefully ended in the arms of the ones they loved. A strange serenity in this cold, greyish-looking room.

It was our turn. We sat down in one of the chairs. No idea what the monitor in front of us was really doing. It didn't matter. I embraced my girl and closed my eyes. Realizing these would be my last words, I told her I loved her, and thanked her for all those years of being with me. She didn't respond, but firmly closed her arms around me. There was no need to talk really; we both knew what we meant to each other.


I tried to just let it come. With my eyes closed and my mind sort of cleared, I only felt the warmth of my girl. My friend, mother of my daughter, my better half. Who would go first? I still heard the soft footsteps of the nurses a bit, meaning we were still awake. As I waited and tried to reach tranquillity, a numb state, I got scared. Without saying anything, I'm trying to say farewell to my girl. As well as trying to take leave of my own "me". Everything I remember, everything I learned this life, everything I am. It would vanish. But I don't want to...

It's getting completely silent and hollow in my head. Is Agnes still there? Not sure if I still feel her. This is it. They say dying is a peaceful process. Getting released from this world, entering a new reality. But I don't feel it that way. It's dark, and I'm scared. Where am I going? Is there even a place to go to? Will I ever see Agnes again? Will I ever become as happy as I was in this life? ... It's quiet and dark, my thoughts and concerns are stopping ... is this ... being dead?



I feel, with tears in my eyes, that I'm lying with Agnes, my girl, in my arms. In my bed. It still takes some seconds before I finally realise that I can open my eyes. Maybe not dead, but I'm in paradise.

Boys and girls, be happy with what you have, love the ones around you, have peace with yourself. I wish you a warm Christmas.

Thursday, December 6, 2012

3D survival guide for starters #4, Light/AO/HeightMaps

Arh! A bit later than it should be. And I don't have a good excuse, really. Busy as usual, and forgot the clock. Or something. Doesn't matter, let's rush towards the last "Starters Guide" post and smack your ears with a few more texturing techniques: Ambient Occlusion mapping, and two beautiful old ladies, miss HeightMap and miss LightMap. The good news is that these are easier (to understand) than normalMapping. So lower your guard and relax.
Tears. One of my first lightMap generator results, including support for normalMapping.


HeightMaps
=====================================================
This is one of those ancient, multi-purpose techniques that didn't quite die yet. If I were a 3D-programming teacher, I would first assign my students to make a terrain using heightMaps. And slap them with a ruler. The heightMap image contains... height, per pixel. White pixels indicate high regions, dark pixels stand for a low altitude. Basically, the image below can be seen as a 3D image: the width and height stand for the X and Z axes, the pixel intensity for the Y (height) coordinate.

An old trick to render terrains was/is to create a grid of vertices. Like a lumberjack shirt. Throw this flat piece of lumberjack cloth over a heightMap, and it folds into a 3D terrain with hills and valleys. That's pretty much how games like Delta Force, Battlefield 1942 or Far Cry shaped their terrains. And aside from rendering, this image can be used to compute the height at any given point for physics & collision detection as well. Pretty handy if you don't want your player to fall through the floor. Another cool thing about heightMaps is that any idiot can easily alter the terrain. Just use a soft brush in a painting program and draw some smooth bumps. Load the image in your game, and voilà, The Breasts of Sheba. In older games it was often possible to screw up the terrain by drawing in their map data files.
On the eighth day, God created height.
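To make that a bit more concrete, here's a minimal sketch in Python (made-up names and a silly 3x3 "image"; the actual engine obviously doesn't look like this) of draping a vertex grid over a heightMap, plus the cheap height lookup you'd use for collision:

```python
# Minimal heightMap terrain sketch (hypothetical names, not engine code).
# heightmap: 2D list of 0..255 grayscale values; terrain_size in meters.

def build_terrain_vertices(heightmap, terrain_size, max_height):
    """Drape a vertex grid (the "lumberjack cloth") over the heightMap."""
    rows, cols = len(heightmap), len(heightmap[0])
    vertices = []
    for z in range(rows):
        for x in range(cols):
            world_x = x / (cols - 1) * terrain_size          # X axis
            world_z = z / (rows - 1) * terrain_size          # Z axis
            world_y = heightmap[z][x] / 255.0 * max_height   # intensity -> Y
            vertices.append((world_x, world_y, world_z))
    return vertices

def height_at(heightmap, terrain_size, max_height, world_x, world_z):
    """Cheap collision query: nearest-pixel height (no interpolation)."""
    rows, cols = len(heightmap), len(heightmap[0])
    px = min(int(world_x / terrain_size * (cols - 1)), cols - 1)
    pz = min(int(world_z / terrain_size * (rows - 1)), rows - 1)
    return heightmap[pz][px] / 255.0 * max_height

hm = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]   # tiny "image": a bump in the middle
print(build_terrain_vertices(hm, terrain_size=20.0, max_height=5.0))
print(height_at(hm, 20.0, 5.0, world_x=10.0, world_z=10.0))  # top of the bump
```

A real terrain would of course interpolate between the four surrounding pixels instead of grabbing the nearest one, otherwise your player bumps over invisible stair-steps.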

Yet heightMaps aren't as popular anymore, as they have several issues. First, a heightMap provides only 1 height coordinate per grid point, so you can't make multi-story or overlapping geometry, such as a tunnel or a cliff hanging over your terrain. Second, your image resolution is limited, as always. Let's say you draw a 1024 x 1024 image for a 4096 x 4096 meter (~16 km2) terrain. You can make your terrain grid as detailed as you want, but there is only one height sample per 4096 / 1024 = 4 meters; a crater or manmade ditch smaller than a 4 x 4 m patch simply can't be represented. Games solved this problem a bit by chopping their terrains into smaller chunks, each with its own heightMap, to allow more detail. More complex shapes such as steep cliffs, rocks, tunnels or manmade structures would be added on top as extra 3D models.

So with the ever-increasing detail and complexity, games switched to other methods. But don't throw away your heightMaps just yet! More advanced shader technologies made The Return of the HeightMap possible. To name a few:
- normalMap generators often use heightMaps to convert into a normalMap
- Entropy
- Parallax Mapping
Entropy? Isn't that a tropical drink? Not really, but it is a cocktail for making your surfaces more realistic with rust, dirt, moss or other layers caused by outside influences such as nature. I can't really explain it in one word, but the way a surface looks often depends on environmental factors. If you take a look at the old brick shed in your garden, you may notice that moss likes to settle in darker, moist places. If it snows, snow will lie on the upward-facing parts of the bricks, not on the downside. Damaged parts are more common at the edges of a wall. The overall state and coloring of the wall may depend on how much sun and rain it caught through the years. All these features are related to geometrical attributes: their location. By mixing textures with some Entropy logic, you can make your surfaces look more realistic by putting extra layers of rust, moss, worn paint, cracks, frost, or whatsoever on LOGICAL places.

If we want to blend in moss, we would like to know the darker parts of our geometry. And here, a heightMap can help a lot. For a brick wall or pavement, the lower parts (dark pixels) usually indicate the gaps between the stones. So start blending in moss at “low heights” first. The same trick can be used for fluids: if it rains, you first make the lowest parts look wet & reflective. This allows you to gradually increase the water level without needing true 3D geometry. Pretty cool huh?
Like cheese grows between your toes first, this shader should blend in liquids at the lowest heights first.
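As a rough sketch of that idea (shader logic written as plain Python; the names and thresholds are just illustrative, not our actual shader), the blend weight simply comes from comparing the heightMap value against a moss or water "level":

```python
def smoothstep(edge0, edge1, x):
    """The classic smooth 0..1 interpolation, same as the GLSL/HLSL one."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def entropy_blend(brick_color, moss_color, height, moss_level=0.3):
    """Blend moss into the low (dark in the heightMap) parts first.
    height: 0..1 from the heightMap; moss_level: how far the moss crept up."""
    moss_weight = 1.0 - smoothstep(moss_level - 0.1, moss_level + 0.1, height)
    return tuple(b * (1.0 - moss_weight) + m * moss_weight
                 for b, m in zip(brick_color, moss_color))

# The gap between bricks (low height) turns mossy, the brick tops stay clean:
print(entropy_blend((0.6, 0.3, 0.2), (0.2, 0.5, 0.1), height=0.1))
print(entropy_blend((0.6, 0.3, 0.2), (0.2, 0.5, 0.1), height=0.9))
```

Raise moss_level (or a rain "water level") over time, and the gaps flood first, exactly like the cheese between your toes.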

Another technique you may have heard of is Parallax Mapping. I won't show you shader code or anything; let's just tell what it does. As you could read in the third post, normalMapping is a way to "fake" more detail and relief on your otherwise flat polygon surfaces. NormalMapping gives this illusion by altering the incoming light flux on a surface, depending on the surface normals. However, if you look at a steep camera angle, you will notice the walls are really still flat. Parallax Mapping does some boob implants to fake things a bit further.

With parallax mapping, the surface still remains flat. But by using a heightMap, pixels can be shifted over the surface, depending on the viewing angle. This is cleverly done in such a way that the surface seems to have depth. Enhanced techniques even occlude pixels that are behind each other, or cast shadows internally. All thanks to heightMaps, which help your shader trace whether the eye or lamp can see/shine on a pixel or not.
The lower part shows how the shifting works. Where you would normally see pixel X on a flat surface, now pixel Y gets shifted in front of it, because it's intersecting the view-ray.
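Ok, a tiny bit of code after all; a hedged Python sketch of the most basic single-step version, just to show the core idea. Real implementations live in the pixel shader, and the fancier variants (steep parallax / parallax occlusion mapping) march along the view-ray in multiple steps:

```python
def parallax_uv(uv, view_dir_tangent, height, scale=0.05):
    """Shift the texture coordinate along the view direction, based on height.
    uv:               (u, v) texture coordinate being shaded
    view_dir_tangent: normalized view vector in tangent space, z > 0
    height:           0..1 sample from the heightMap at uv
    scale:            how deep the fake relief appears"""
    vx, vy, vz = view_dir_tangent
    offset = height * scale
    # The steeper you look (small vz), the bigger the shift:
    return (uv[0] + vx / vz * offset, uv[1] + vy / vz * offset)

# Looking straight down: no shift. A more grazing angle: a bigger shift.
print(parallax_uv((0.5, 0.5), (0.0, 0.0, 1.0), height=1.0))
print(parallax_uv((0.5, 0.5), (0.8, 0.0, 0.6), height=1.0))
```

After the shift, you simply sample your diffuse- and normalMaps at the new coordinate; that's the whole illusion.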

As for the implementation, it's just another image, using only 1 color channel. I usually stuff my height values into the normalMap. Although for expensive parallax mapping, it might be better in some cases to use a smaller separate image. Got to confirm that with the GPU performance gurus though.



LightMaps
=====================================================
Yes. LightMaps again. I just like to write about them for some reason, and if you're not quite sure what these are, you should at least know about their existence. In post #2, I wrote how lighting works. Eh, basic lighting in a game, I mean. To refresh your mind (see the sketch below this list):
- For each light, find out which polygons, objects or pixels it *may* affect
- For each light, apply it to those polygons, objects, pixels, whatever
- Nowadays, shaders are mainly used to apply a light per pixel:
.....- Check the distance and direction between lamp & pixel
.....- Optionally, check the directions between lamp, pixel & camera (for specular lighting)
.....- Optionally, check with the help of shadowMaps whether the pixel can be lit ("seen") by the lamp
.....- Depending on these factors, compute the light result, multiplied with the light color/intensity and the pixel's albedo/diffuse and specular properties
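Squeezed into a Python sketch (assumed conventions and made-up names, not actual engine code), those steps for a single point light boil down to this:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def shade_pixel(pixel_pos, pixel_normal, albedo, lamp_pos, lamp_color, lamp_range):
    """Basic per-pixel diffuse lighting for one point light."""
    to_lamp = sub(lamp_pos, pixel_pos)
    dist = math.sqrt(dot(to_lamp, to_lamp))
    if dist > lamp_range:                            # outside the light's reach
        return (0.0, 0.0, 0.0)
    n_dot_l = max(0.0, dot(pixel_normal, normalize(to_lamp)))  # facing the lamp?
    attenuation = 1.0 - dist / lamp_range            # simple linear falloff
    return tuple(a * c * n_dot_l * attenuation
                 for a, c in zip(albedo, lamp_color))

print(shade_pixel((0, 0, 0), (0, 1, 0), (0.8, 0.8, 0.8),
                  lamp_pos=(0, 2, 0), lamp_color=(1.0, 0.9, 0.7), lamp_range=10.0))
```

The specular and shadowMap tests would slot in right where the list says, before the final multiply.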

That doesn't sound like too much work, but back in the old days, computers used leprechauns to calculate and carry bits from the processor to the monitor. If you were lucky, your computer had a "turbo!" button to increase the clock speed from 1 Hertz to 2 Hertz, whipping the little guys to run faster with those bits. Ok ok. The point is, computers weren't fast enough to perform the tasks above. Maybe for a few lights, but certainly not for a complex (indoor) scene, let alone realtime shadows. Oh, and neither did shaders exist btw. Although shaders aren't 100% necessary to compute lighting, they sure made life easier, and the results 100 times better. Really.

Platform games like Super Mario World didn't use "real" lighting. At most some additive sprites, or tricks to adjust the color palette to smoothly change the screen colors from bright to dark or something. Doom & Duke didn't give a crap about "real" lights either. How did they make a flat building cast a shadow? Just by manually darkening polygons. When true 3D games like Quake or Half-Life came, things changed though. id Software decided it was about time to make things a bit more realistic. Back then, gamers and reviewers were mostly impressed by the polygon techniques that allowed more complex environment architecture and replaced the flat monster sprites and guns with real 3D models. But probably most of us didn't notice the complexity behind its lighting (me neither, I was still playing with barbies then). The lightMap was born, and it is still an essential technique in game engines now.

As said, computing light at a larger scale in 1996 was a no-go zone, let alone dealing with shadows or "GI", Global Illumination. In fact, GI is still a pain in the ass 16 years later, but I'll get back to that. Instead of blowing up the computer with an impossible task, they simply pre-computed the light in an "offline" process. In other words, while making their maps, a special tool would calculate the light falling on all (static) geometry. And since this happened offline, it didn't matter whether this calculation took 1 second or 1 day. Sorry Boss, the computer is processing a lightMap, can't do anything now. Anyway, the outcome of this process was an image (or maybe multiple) that contained the lighting radiance. Typically, 1 pixel of that image represents a piece of surface (wall, floor, ceiling, ...) of your level geometry. The actual color was the sum of all lights affecting that piece of surface. Notice that "light" is sort of abstract here. You can receive light from a lamp or the Sun, but also indirect light that bounced off another wall, or scatters from the sky/atmosphere. The huge advantage of lightMaps was -and still is!- that you could make the lighting as advanced as you would like. Time was no factor.
A "LightMap Generator" basically does the following things (for you):
1- Make 1 or more relative big empty images (not too big back then though!)
2- Figure out how to map(unwrap) the static geometry of a level (static means non-moving, thus monsters, healthkits and decoration stuff excluded) onto those image(s).
3- Each wall, floor, stair, or whatsoever would gets its own unique little place on that image. Depending on the image resolution, size of the surface, and available space, a wall would get 10x3 pixels for example.
4- For each surface "patch" (pixel), check which light emitting things it can see. Lamps, sky, sun, emissive surfaces, radioactive goop. Anything that shines or bounces light.
5- Sum all the incoming light and average it. Store color in the image.
6- Eventually repeat step 4 and 5 a few times. Apply the lightMap from the previous iteration to simulate indirect lighting.
7- Store the image on disc
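To give you an idea, here's a toy version of steps 4 to 6 in Python. Everything is heavily simplified and hypothetical (two texels, one lamp, a fake raycast); a real baker spends most of its sweat on the unwrapping and on proper visibility tests:

```python
import math

def vsub(a, b): return tuple(x - y for x, y in zip(a, b))
def vdot(a, b): return sum(x * y for x, y in zip(a, b))

def gather_light(pos, normal, emitters, blocked):
    """Steps 4/5 for one lightMap texel: sum what every emitter contributes.
    emitters: (position, color) pairs -- lamps on the first pass, lit patches
    on later passes. blocked(a, b) stands in for a raycast into the level."""
    total = [0.0, 0.0, 0.0]
    for e_pos, e_color in emitters:
        if blocked(pos, e_pos):
            continue                               # in shadow of this emitter
        d = vsub(e_pos, pos)
        dist = math.sqrt(vdot(d, d))
        cosine = max(0.0, vdot(normal, d) / dist)  # grazing light counts less
        for i in range(3):
            total[i] += e_color[i] * cosine / (1.0 + dist * dist)
    return tuple(total)

# One floor texel and one wall texel, one lamp, nothing blocking:
floor = {"pos": (0.0, 0.0, 0.0), "n": (0.0, 1.0, 0.0)}
wall  = {"pos": (2.0, 1.0, 0.0), "n": (-1.0, 0.0, 0.0)}
lamps = [((0.0, 3.0, 0.0), (10.0, 9.0, 8.0))]
never_blocked = lambda a, b: False

for p in (floor, wall):                            # steps 4/5: direct light
    p["direct"] = gather_light(p["pos"], p["n"], lamps, never_blocked)

for p, other in ((floor, wall), (wall, floor)):    # step 6: one indirect bounce;
    bounce = gather_light(p["pos"], p["n"],        # lit patches re-emit light
                          [(other["pos"], other["direct"])], never_blocked)
    p["final"] = tuple(d + 0.5 * b for d, b in zip(p["direct"], bounce))

print(floor["final"], wall["final"])
```

Since this all happens offline, nobody cares that it's brute force; repeat the bounce pass a few more times and you get smoother indirect light.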

The hard parts are unwrapping the scene geometry efficiently onto a 2D canvas, and of course the lighting itself (step 4). I won't go into that, and I should note that many 3D programs and game engines have lightMap generators (or "light baking" tools), so you don't have to care about the internals. The point is that you understand the benefits of using a pre-generated image. Using a lightMap is as cheap as a crack-ho, while achieving realistic results (if the generator did a good job). In your game, all you have to do is the following (sketched in code right after this list):
* Store secondary texture coordinates in your level geometry. Those coordinates were generated by the lightMap tool in steps 2/3.
* Render your geometry as usual, using the primary texture coordinates to map a diffuseTexture on your walls, floors, ceilings, ...
* Read a pixel from the lightMap using your secondary texture coordinates. Multiply this color with your diffuseMap color. Done.
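In code, that per-pixel work is almost embarrassingly small (sample() below is just a hypothetical stand-in for the GPU's texture fetch):

```python
def apply_lightmap(sample, diffuse_map, light_map, uv1, uv2):
    """uv1: primary coordinates (diffuseMap), uv2: secondary (lightMap)."""
    albedo = sample(diffuse_map, uv1)
    light = sample(light_map, uv2)
    return tuple(a * l for a, l in zip(albedo, light))   # that's it, done

# Fake 1x1 "textures", just to show the multiply:
sample = lambda texture, uv: texture
print(apply_lightmap(sample, (0.7, 0.5, 0.3), (1.0, 0.8, 0.6), (0, 0), (0, 0)))
```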

It isn't all flowers and sunshine though. If you read about the capabilities of a modern game engine, they are probably ashamed to admit they still use lightMaps for some parts. Why is that? Well, the first and major problem is that you can't just rebuild your lightMap when something changes in the scene. A wall collapsed, a door opened, the sun went down, you shot a lamp. Just some daily cases that would alter the light in your scene. If each change triggered a 2-minute lightMap rebuild, your game would obviously become unplayable. And there is more to complain about. I wrote the words "static geometry" several times. Ok, but how about the dynamic objects? Monsters, barrels, boxes, guns, vehicles? Simple: you can't use a lightMap on them. In games, these objects would fall back on a simplified "overall" light color used inside a room or a certain area. Yet another reason to dislike lightMaps is that you can't naturally use normalMapping. Why? Because normalMapping needs to know from which direction the light came. We store an averaged incoming light flux in our lightMaps, but we don't know where it came from. Yet Valve (Half-Life 2) found a trick to achieve normalMapping, sort of: they made 3 lightMaps instead of 1. Each map contained light coming from one global direction, so the surface pixels could vary between those 3 maps based on their (per-pixel) normal. That technique was called "Radiosity NormalMapping" btw.
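A hedged sketch of that Half-Life 2 trick. The three tangent-space basis directions below follow Valve's published ones, but treat the weighting details as approximate rather than gospel:

```python
import math

# Valve's three tangent-space basis directions; each of the 3 lightMaps
# stores light arriving roughly from one of these directions.
BASIS = [( math.sqrt(2.0 / 3.0),  0.0,                   1.0 / math.sqrt(3.0)),
         (-1.0 / math.sqrt(6.0),  1.0 / math.sqrt(2.0),  1.0 / math.sqrt(3.0)),
         (-1.0 / math.sqrt(6.0), -1.0 / math.sqrt(2.0),  1.0 / math.sqrt(3.0))]

def radiosity_normal_map(lightmap_samples, normal_ts):
    """Blend the 3 lightMap samples by the per-pixel tangent-space normal."""
    weights = [max(0.0, bx * normal_ts[0] + by * normal_ts[1] + bz * normal_ts[2]) ** 2
               for bx, by, bz in BASIS]
    total = sum(weights) or 1.0
    return tuple(sum(w * lm[i] for w, lm in zip(weights, lightmap_samples)) / total
                 for i in range(3))

# A flat normal mixes all three maps evenly; a tilted one favors a direction:
maps = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(radiosity_normal_map(maps, (0.0, 0.0, 1.0)))
print(radiosity_normal_map(maps, (0.8, 0.0, 0.6)))
```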

Wait, I have one more reason. The sharpness. As said, the generator needs to unwrap the entire scene on a 2D image canvas. Even with a huge texture, your surfaces still won't get that many pixels. That explains blocky shadow edges and light leaks. Even in more modern engines like UDK.
No sir, I don't like it.

Well, 16 years ago we could choose between either nothing or a lightMap with its shortcomings. People had never heard of normalMapping, and the blocky shadows weren't spotted between all the other blocky, low-poly, pixelated shit. In other words: good enough. But in the 21st century, lightMaps were pushed away. Faster hardware and shaders made it possible to compute (direct) lighting in realtime. So hybrids appeared: partially lightMapped, partially realtime lights. Far Cry is a good example of that. Doom3 even discarded lightMaps entirely...

...only to get punished remorselessly. Yes, Half-Life 2 was technically less advanced, but it still looked more photo-realistic, thanks to its pre-baked, cheap-ass lightMaps. Despite a lot of great things, Doom3 (and Quake4) didn't exactly look realistic due to their pitch-black areas. Everything not directly lit by a lamp simply appeared black. Why did id do that?! Well, let's bring some nuance. Black areas won't quickly appear here on Earth, because light always gets reflected by surfaces. That's "Global Illumination". But the id guys simply couldn't compute GI back then. Not in realtime. Hell, we still can't properly do that, though promising techniques are closing in. So they just forgot about the unlit areas. In the sci-fi Mars setting, it was sort of acceptable. But their way of rendering things certainly wasn't useful for more realistic "earth" or outdoor scenes. It explains the lineup using idTech4 (the Doom3 engine): Doom3, Quake4, Prey. All sci-fi space games.
Due to the complexity, most games store GI in lightMaps. But Doom3 didn’t use GI at all…

What they could have done, and should have done, is use a good old lightMap (which they practically invented themselves with Quake!) for the indirect and/or static lights. Of course, that also has shortcomings, but hybrid engines like UDK (and I believe Crysis 1 as well) that take the best of both are still the way to go as we speak. Eventually, we will kill and bury lightMaps once and for all, as soon as we get a realtime GI technique that is fast, accurate, scalable and beautiful at the same time. Implementing realtime GI is a sort of nerd sado-masochism. Telling people you've got realtime GI is cool, but the good old lightMap still looks better and is a billion times faster for sure. Although... Crassin's Voxel Cone Tracing comes close (but not without issues)...



Ambient Occlusion (AO) Maps
=====================================================
Now that you know everything about lightMaps, let's give you one more modern technique that spied on this old lightMap grandmother: "Ambient Occlusion", or "AO" maps. Just another texture that can be mapped on your world geometry like a lightMap, or on a model like a gun or monster, the same way you would map any other texture on it. AO tells how much a pixel is occluded from the global, overall ambient light. It doesn't really check if & where there are lightsources. It just checks how many surrounding surfaces are blocking *potentially* incoming light from any direction. It's pretty simple: put a pile of blocks on your table, and you will notice the inner blocks get darker once you place more blocks around them. Because the outer blocks occlude light coming from a lamp, floor, ceiling, sky, flames, or whatever source.
Since GI or ambient lighting is already a giant fake in most game engines, the unlit parts of a scene (thus not directly affected by any light) often look flat. As if someone peed a single color all over your surfaces. NormalMapping doesn't work here either, since we don't have information about the incoming light flux (unless you do something like Valve's Radiosity NormalMapping). Ambient Occlusion eases the pain a bit by further darkening corners and gaps, giving a less "flat" look and making things a bit more realistic.

Yes, AO is one of those half-fake lighting tricks. What you see in reality is not based on "occlusion factors", but on how light photons manage to reach a particular surface, directly or by bouncing off other surfaces. AO skips the complex math and only approximates the incoming light for a given piece of surface. Doing it correctly requires letting the surface send out a bunch of rays and checking if & where they collide. The more rays collide, and the shorter their travel distances, the more occlusion. This is stored as a single, simple value. Typically, AO maps are grayscale, whitish images, with darker parts in the gaps, corners and folds of your model. Since a single color channel is enough, you can store the Ambient Occlusion factor inside another image, such as the diffuseMap.
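In sketch form (hypothetical scaffolding; a real baker fires its rays against the actual mesh), baking one AO texel looks like this, and applying the result later is a single multiply:

```python
import math, random

def random_hemisphere_dir(normal):
    """Random direction on the hemisphere around the surface normal."""
    while True:
        d = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
        length = math.sqrt(sum(x * x for x in d))
        if 0.001 < length <= 1.0:
            d = tuple(x / length for x in d)
            if sum(a * b for a, b in zip(d, normal)) > 0.0:
                return d

def bake_ao(pos, normal, ray_hit, rays=64, max_dist=2.0):
    """Fire random rays over the hemisphere; nearby blockers occlude more.
    ray_hit(origin, direction, max_dist) stands in for a raycast and
    returns the hit distance, or None when nothing is hit."""
    occlusion = 0.0
    for _ in range(rays):
        hit = ray_hit(pos, random_hemisphere_dir(normal), max_dist)
        if hit is not None:
            occlusion += 1.0 - hit / max_dist
    return 1.0 - occlusion / rays            # 1 = fully open, 0 = buried

def ambient_term(albedo, ambient_color, ao):
    """Applying the baked value at runtime: one cheap multiply."""
    return tuple(a * c * ao for a, c in zip(albedo, ambient_color))

# A texel on an open floor (no hits) versus one deep in a corner (close hits):
open_floor = bake_ao((0, 0, 0), (0, 1, 0), lambda p, d, m: None)
corner     = bake_ao((0, 0, 0), (0, 1, 0), lambda p, d, m: 0.2)
print(open_floor, corner)
print(ambient_term((0.8, 0.8, 0.8), (0.3, 0.3, 0.35), corner))
```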


AO construction
There are 3 ways of computing AO. We can do it in realtime with SSAO (Screen Space Ambient Occlusion). The idea is the same, although SSAO is even faker, as it relies on crude approximations by randomly sampling close neighbor pixels that may be occluding you. Testing properly how much each pixel on your screen is occluded -also by objects further away- would be too expensive. SSAO has the advantage that it updates dynamically on moving or animated objects; you don't have to pre-process anything. The disadvantage is that it tends to create artifacts such as white or black "halos" around objects, and it only works over short distances. Bad implementations of SSAO look more like an Edge Detector effect than like "AO".
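To give a feel for how crude SSAO really is, here's a toy version that works on a plain depth buffer. Consider it a caricature; real SSAO works in view space with normals, a proper sample kernel and blur passes:

```python
import random

def ssao_pixel(depth, x, y, radius=4, samples=8, bias=0.02):
    """Toy SSAO: sample random on-screen neighbors and count how many are
    closer to the camera; those *may* be occluding this pixel.
    depth: 2D list of normalized depth values (0 = near, 1 = far)."""
    h, w = len(depth), len(depth[0])
    occluded = 0
    for _ in range(samples):
        sx = min(max(x + random.randint(-radius, radius), 0), w - 1)
        sy = min(max(y + random.randint(-radius, radius), 0), h - 1)
        if depth[sy][sx] < depth[y][x] - bias:   # neighbor sticks out in front
            occluded += 1
    return 1.0 - occluded / samples              # 1 = open, lower = darker

# A depth "step": pixels just behind the edge get darkened, the rest not.
depth = [[0.3] * 4 + [0.6] * 4 for _ in range(8)]
print(ssao_pixel(depth, x=2, y=4))   # on the near ledge: little occlusion
print(ssao_pixel(depth, x=5, y=4))   # just behind the edge: some occlusion
```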

Another way is to compute it mathe-magically. This works if you only have 1 or 2 dominant lightsources (the sun). You can "guess" how much light a pixel catches by looking at its surface normal (facing downwards = catching light from the floor, upwards = catching light from the sky/sun) and some environment information. Fake, but dynamic & cheap.
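That "mathe-magical" guess can be as simple as hemisphere lighting; a tiny sketch with assumed colors:

```python
def hemisphere_ambient(normal, sky_color, ground_color):
    """Fake ambient: blend between what the sky and the ground would
    contribute, based purely on where the surface normal points."""
    up = normal[1] * 0.5 + 0.5        # 1 = facing the sky, 0 = facing the floor
    return tuple(g + (s - g) * up for s, g in zip(sky_color, ground_color))

print(hemisphere_ambient((0.0, 1.0, 0.0),  (0.5, 0.6, 0.8), (0.2, 0.15, 0.1)))
print(hemisphere_ambient((0.0, -1.0, 0.0), (0.5, 0.6, 0.8), (0.2, 0.15, 0.1)))
```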

The third method, as described above, is baking AO into a texture, like you would bake light into a lightMap. Again, programs such as Maya or Max have tools to compute these images for you. Once made, you can apply AO to your world or objects at an extremely low cost. The downside, as with lightMaps, is that the AO doesn't adjust itself when the environment changes. Yet AO maps are a bit more flexible than lightMaps (so don't confuse them!). Occlusion factors are light-independent; directions or colors don't matter. So you can also bake a map for a model like a gun, and transport it through many rooms with different lighting setups. The AO image of such a gun only contains its internal occlusion: gaps, screws and concave parts being a bit darker. Also, for static worlds, you can change the overall ambient color and multiply it with a pre-generated AO map. Not sure, but I can imagine a world like the Grand Theft Auto cities contains pre-computed AO-like maps (or values stored per vertex maybe) that tell how much a piece of pavement gets occluded by the surrounding buildings. Then multiply this occlusion factor with an overall ambient color that depends on the time of day & weather.



Oops, the post got a bit long again. Well, I didn't want to split it up again into yet another post. I want to write about Compute Shaders, so I had to finish this one. I hope it was readable and that you learned something!