Tuesday, March 27, 2012

UBO Wan Kenobi


Nothing to do with this entry, but some of the current programming progress: Screw up the repeating patterns by (vertex)painting details on your textures!

I thought it would be nice to have some nerd-talk between all those "Making of" posts. Tonight's special guest: UBOs. No, UBOs are not a new shoe brand or Unidentified-Blowjob-Objects. Though "Uniform Buffer Objects" kinda sounds the same. If you make extensive use of shaders -and any modern game engine does- you may have noticed you are passing quite a lot of parameters to these shaders. For example, pretty much all shaders that involve lighting or reflections somehow need to know the camera position and/or light properties such as color, falloff, position, or the projection matrix in case of spotlights.

Tower22 has a few hundred different shader programs (built from ubershaders based on selected options). And the number is growing as the amount of options grows. Normally, I would need to pass parameters such as the camera position or light props again for each shader. Feels like a waste of time, since those values are the same for all shaders. Or how about shaders that need a large array of parameters? For example, you may want to do all lights at once in a single shader program. And "Vertex Skinning" (animating via the GPU) is also a technique that requires knowing a big number of matrices (or quaternions) somehow. It's perfectly fine to pass all those parameters one by one before the rendering starts, but it's not the fastest way. Too bad shaders can't grab data from a fixed location somewhere in the videocard memory... Or can they...

If those issues sound familiar, UBOs can be your angel in darkness, the spray can in a stinky toilet. Not only may they give a (slight) performance boost, they also allow programming in a more natural way with structs and stuff. Instead of passing all those parameters individually, you make a data buffer, which is basically just a block of vectors (float4) stored somewhere in the videocard memory. Pretty much the same idea as texture buffers or vertex buffers:

1- Make a buffer
2- Fill it with data (once, or at the start of each renderCycle, or whenever an update is needed)
3- Define the same buffer in your shadercode. As an array or struct for example.
4- At creation, get the buffer-parameter from your shader and link it with the block you made earlier

In other words, the parameters have been moved from the CPU/RAM to the GPU/videomem, so we don't have to pass all the values (and slow down the pipeline) anymore. Down below you can find a simple implementation, using OpenGL and Cg. When using GLSL, it works almost the same though. Probably even simpler.



Preparations
Before hitting your coding typewriter, make sure your drivers are prepared though. First of all, if you are using Cg (like me), download Cg 3.1 or higher. And if you are in love with OpenGL, make sure you are using OpenGL 2.x at least. AND, you may need to update your videocard drivers as well!! I tried and I tried, but making a buffer the pure OpenGL way caused crashes. Then I realized the laptop videocard comes from 2009. UBOs weren't even born back then, or at least still pooping their diapers. A videocard driver update did the magical fix in my case.

Unless you are using up-to-date headers, you may need to define some new OpenGL functions first. Well, I'm using Delphi, so don't count on up-to-date functions. Ditto for the Cg libraries. Luckily adding new functions is pretty simple. If you can't find a particular function, just search for it and the OpenGL docs tell you exactly how the function works, what it returns, what parameters to give, et cetera. Also good to know: the nVidia OpenGL 10 SDK has some examples as well.
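For example, loading one of the newer binding functions in Delphi could look something like this. A sketch, assuming Windows/wglGetProcAddress; the token value comes from the OpenGL registry, the procedure name is mine:

const
  GL_UNIFORM_BUFFER = $8A11;  // token from the OpenGL registry

type
  TglBindBufferBase = procedure( target, index, buffer : cardinal ); stdcall;

var
  glBindBufferBase : TglBindBufferBase = nil;

procedure LoadUboFunctions;
begin
  // Ask the driver for a pointer; nil means old drivers (or a typo in the name)
  glBindBufferBase := TglBindBufferBase( wglGetProcAddress( 'glBindBufferBase' ) );
  if not Assigned( glBindBufferBase ) then
    showMessage( 'glBindBufferBase not found - update your videocard drivers!' );
end;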



Example: lighting with structs
---------------------------------
Now let's make a practical example. Lights. Our space-ice-hockey game uses a number of simple pointlights, let's say 128 at max. Several shaders, such as the environment and character shaders, need those lights. We could define all lights as follows:

struct PointLight
{
    float3 position;
    float  range;        // Falloff distance
    float3 diffuseColor;
};
// ! Don't use this code! There is a problem with the layout, I'll explain below

struct SceneLights
{
    int        lightCount;
    PointLight light[128];
} _sceneLights;

With such a struct, we could do the lighting inside a shader as follows:

for (int i=0; i < _sceneLights.lightCount; i++)
{
    PointLight light = _sceneLights.light[i];

    float attenuation = saturate( getAttenuation( pixelPos, light.position, light.range ) );
    float3 diffuse    = saturate( dot( pixelNormal,
                                  normalize(light.position - pixelPos) ) );
    diffuse *= light.diffuseColor.rgb * attenuation;
    totalDiffuse.rgb += diffuse;
} // for i

And chaps, don't forget you can also still pass traditional parameters to use as lookup indices. This can be useful when rendering lots of stuff in a single breath, when using instancing for example:

... uniform int myID )
{
    MyData d = dataArray[ myID ];  // traditional uniform, used as an index into the UBO array
    ...

1 Making the buffer:
---------------------------------
Pretty cool huh? Step 1 is to make a buffer. Simple stuff:

{ Create an empty buffer }
glGenBuffersARB( 1, @ubo.glHandle );
glBindBufferARB( GL_UNIFORM_BUFFER, ubo.glHandle );
err := glGetError;
if err <> 0 then
    ubo.glHandle := 0; // Arh! Check your drivers matey

{ Size & fill it (or pass nil if you don't want to fill it yet) }
if isDynamic then
    glBufferDataARB( GL_UNIFORM_BUFFER, byteSize, dataPtr, GL_DYNAMIC_DRAW_ARB )
else
    glBufferDataARB( GL_UNIFORM_BUFFER, byteSize, dataPtr, GL_STATIC_DRAW_ARB );

glBindBufferARB( GL_UNIFORM_BUFFER, 0 ); // Detach
{ Make a Cg buffer }
ubo.cgHandle := cgGLCreateBufferFromObject( cgContext, ubo.glHandle, CG_FALSE );


Some notes. First, you can define how your buffer will be used with the GL_DYNAMIC_DRAW_ARB parameter. I showed 2 variations, but there are more flavors; check the OpenGL documentation. Basically you need to decide how often the buffer will get updated. Only once? Each cycle? Even more often?
* Another note: at the bottom I'm making a Cg-specific variant of this buffer to use it with Cg shaders. If you use GLSL or something else, you can skip that line. Another note to self, need to buy milk for tomo... wait, nevermind.


2 Filling the buffer & Layout:
---------------------------------
We already saw glBufferDataARB being used to fill the buffer. In this case the data would be a pointer to the struct(s) I showed earlier. You can update the (sub)contents with:

// OpenGL
glBindBufferARB( GL_UNIFORM_BUFFER, ubo.glHandle );
glBufferDataARB( GL_UNIFORM_BUFFER, byteSize, dataPointer, GL_DYNAMIC_DRAW_ARB );
glBufferSubDataARB( GL_UNIFORM_BUFFER, byteOffset, byteSize, dataPointer );
// Cg
cgSetBufferData( ubo.cgHandle, byteSize, dataPointer );
cgSetBufferSubData( ubo.cgHandle, byteOffset, byteSize, dataPointer );

There are some catches though. First of all, you can't just mix datatypes like I did (float3, float, int, ...). If you do, you may get weird results. Correct me if I'm wrong, but by default OpenGL expects the datablock to use the std140 layout for formatting. There are documents out there describing it. But if you take the lazy path like me, just make sure everything is using float4 (or float4x4 for matrices) or int4 types:

struct PointLight
{
    float4 positionRange; // XYZ = position, W = falloff distance
    float4 diffuseColor;  // RGB = color,    A = not used
}; // 2 x 16 = 32 bytes

struct SceneLights
{
    int4       lightCount; // X = pointlight count
    PointLight light[128];
} _sceneLights;
// 16 + 128 * 32 = 4112 bytes


Yes, that may give some overhead (unused fields on the color and count variables). You don't have to use this way of formatting, but filling the buffer gets a whole lot more difficult then, as you need to know the offsets for each variable in that case. OpenGL and Cg have functions to calculate those, btw.

Second rule: be aware there is a maximum size. For now, 4096 float4's to be more precise. That means I could define up to 2047 lights in the example above (don't forget the int4 lightCount variable), as each pointlight takes 2 float4's. If that is not enough for you, you can bind multiple UBOs at the same time. You could make one UBO with all pointlights, another one with all spotlights, and so on.
* Oh, and Delphi boys, don't forget to pack your records ( TPointLight = packed record )!
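Speaking of which, the matching Delphi records could look like this. A sketch; TFloat4 and the field names are mine:

type
  TFloat4 = packed record
    x, y, z, w : single;                      // 16 bytes, matches a shader float4
  end;
  TPointLight = packed record
    positionRange : TFloat4;                  // XYZ = position, W = falloff distance
    diffuseColor  : TFloat4;                  // RGB = color,    A = unused padding
  end;                                        // 32 bytes
  TSceneLights = packed record
    lightCount : array[0..3] of longint;      // int4, only X is used
    lights     : array[0..127] of TPointLight;
  end;                                        // 16 + 128 * 32 = 4112 bytes

Filling the whole thing is then a single glBufferDataARB( GL_UNIFORM_BUFFER, sizeOf(TSceneLights), @mySceneLights, GL_DYNAMIC_DRAW_ARB ) call.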

3 Defining the structs in your shader
---------------------------------
It depends on the language you use, but it's pretty much the same as you'd do it in C++, Delphi, or whatever it is you are using. Below a Cg example. The "BUFFER[x]" is an optional addition that tells Cg on which fixed "slot" the buffer is bound. Like textures, you can bind up to 32 (I think) UBOs at the same time. If you don't care about the specific index, just type ": BUFFER;".
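Something like this, using the fixed structs from above (a sketch; the slot index is just an example, and the exact semantic syntax may differ per Cg version):

struct PointLight
{
    float4 positionRange;  // XYZ = position, W = falloff
    float4 diffuseColor;   // RGB = color,    A = unused
};

struct SceneLights
{
    int4       lightCount; // X = pointlight count
    PointLight light[128];
};

// Bound on slot 0; the name must match what you look up on the CPU side later on
uniform SceneLights _sceneLights : BUFFER[0];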


4 Final step, connect shader parameters with the UBO's
---------------------------------
Not sure how it's done with GLSL, but with Cg you need to find the UBO parameter first, then pass the cgHandle we got before from cgGLCreateBufferFromObject(). For each program that uses UBOs:

var
    uboParamHandle : CGparameter;
    uboName        : string;
begin
    uboName := 'SceneLights';
    uboParamHandle := cgGetNamedProgramUniformBuffer( programHandle, pchar(uboName) );

I bet there are more ways to wire up the whole thing, but this is at least one of them. Anyway, now that we have the parameter handle, we can pass the UBO:
    cgSetUniformBufferParameter( uboParamHandle, ubo.cgHandle );

As said, use the buffer handle we got via cgGLCreateBufferFromObject(), thus not the one we got from OpenGL with glGenBuffers(). Unless you have crazy ideas, you only have to pass this value once by the way. So typically that would be at the start. Some final important notes (and maybe I'm doing something wrong):

* When defining large arrays in your shader, the compile time can get a LOT longer. It seems the entire array gets unrolled.
* That's why I suggest pre-compiling the shaders and loading those, as long as you didn't make changes.
* Too bad cgGetNamedProgramUniformBuffer() does not seem to work with pre-compiled shaders... I think this is a bug in the Cg 3.1 library, so I asked on the nVidia forums... with no result yet.

May the UBO be with you

Wednesday, March 21, 2012

Making of Radar demo #4: Gameland Immigration services


Off-topic, but nice to show nevertheless, some first-concepts for one of the many corridors...

Not too long after producing a bunch of ideas, the first maps rolled out. And with a map I mean an empty room/corridor. Like buying a new house, you still have to argue about where to place the sofa, and whether to use pink paint or clown wallpaper. But that's for later concern. First, a new map has to pass the Engine22 immigration services.

A finished model does not yet make a suitable game-map. This is because a "freestyle" 3D package like Max, ZBrush, Lightwave or Blender does not have to follow any rules. Where games have to deal with physics/collisions, closed worlds, size boundaries, portals, spatial sorting and all other kinds of optimizations, a 3D package does not have to worry about performance, memory usage, or how the model will be used. Here we're talking about making environments, but you could just as well model flying spaghetti.

That's why we have border watch: an importing tool that tests if the supplied model meets the requirements, scraps shit, and eventually adds additional data. This is common for pretty much every engine or game-tool, though the bigger professional engines like UDK or Source often come with their own map-editors that force you into a certain way of modelling. Sorry if I bother you with ancient terms, but let's take Quark as an example. A long time ago, when the Halflife1-Saurus still wandered our planet, I used Quark to make some custom maps, just for fun. It can be compared a bit to Hammer, or the Unreal Editor.

Quark. 3 closed rooms, hollow primitive shapes -> Brushes.

Polycount
------------------------------------
In Quark, you couldn't just throw triangles wherever you pleased. No, you had to work with "brushes", sort of big hollow Lego blocks. Although I don't know the in-depth details, using those blocks for rooms made sense, as they assured your world would:
- have a floor to walk on
- be closed (walls, floor, ceiling (or skybox))
- have a definition of "rooms" / "sectors". And this would help split up the world in logical sections, used for lightMaps, AI calculations, physics, culling, collision trees, and whatever was needed.

The downside however was that those blocks were a bit sturdy. It was hard, if not impossible sometimes, to make complex (organic) shapes. If you needed small or complex details, you had to insert "props". Boxes, furniture, barrels, but also doors, lamps, pipes and railings are examples of props. That doesn't sound very handy, but keeping the maps as simple as possible makes sense, especially for older hardware. In many (older) games, props follow slightly different rules than the static world that carries them. Props don't reserve space in a lightMap, but use (simplified) lighting. Props use no, or simplified, collision hulls (resulting in slightly less accurate but faster collision tests). Props can fade out and be hidden after X meters, or toggle to a lower-detailed version to win some polygons and speed. Props can be moved around, while static parts of the world such as the streets, buildings or walls can't. The player can be inside a room, but not inside a prop (unless it's a vehicle or something). Drawing a strict line between the static world and props is done in pretty much any game, although modern games manage to make things more flexible. No lightmaps for the static world, destructible worlds, accurate collision detection with props so you can climb and enter them, et cetera.

The maps (= static environment), using relatively simple cube-like meshes like those Quark brushes, have a low polycount. Props on the other hand have a very high polycount compared to them. For example, the monster in the Radar Station demo uses almost more polygons than all radar station walls/floors/ceilings/pillars together. A computer model takes about 1,200 triangles in T22. A simple room with a window and a door only needs 400 triangles or so. Having a low polycount for the environments is useful for various reasons. First of all, the fewer polygons, the faster things render. You can fade out a 1.2k-triangle computer object after 30 meters or so. But you can't fade out the Empire State Building in a GTA-like game. Not even after a few thousand meters. Using a low-poly model for this (background) building is a solution.

Another reason to watch your polys is collision testing. When you fire a bullet, the engine needs to check where it intersects a wall, or a head, or... The most stupid thing you can do is looping through ALL triangles in the world, then checking for each one if the bullet intersects. You don't have to be a programmer to understand that testing this for thousands, maybe millions, of triangles is not such a good idea. For that reason, games often split up their worlds in invisible cubes (or another type of spatial grouping). With octrees for example: imagine 1 huge cube around the entire world. Then you can split that cube into 8 sub-cubes. If your bullet flies somewhere in the upper-left-front cube, you can already skip thousands of triangles that are not (partially) inside this cube. Each cube can be divided again in 8 sub-sub-cubes, and so on. How many times you subdivide depends on you. You could for example keep dividing until either the cube-size is less than 1.0 M3, or there are only 2 or fewer polygons intersecting. The idea is to minimize the amount of triangle checks. Ray-versus-triangle checks are expensive, while sorting out in which sub-sub-(...)cube your bullet is, is cheap. So with the help of an octree, BSP, quadtree, or whatever, you can dive to the deepest level for a certain position in your world, then test collisions with the triangles that are inside or intersect that octree-node.
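In (pseudo) Delphi, that subdivision rule could look like this. A sketch: TOctreeNode, TBoundingBox, BoxVolume, SubCubeBounds and FilterIntersecting are made-up helpers, just to show the recursion:

type
  TOctreeNode = class
    bounds   : TBoundingBox;                  // made-up AABB type
    polygons : TList;                         // triangles touching this cube
    children : array[0..7] of TOctreeNode;    // nil = leaf node
  end;

procedure SubDivide( node : TOctreeNode );
var i : integer;
begin
  // Stop rules from the text: cube smaller than 1 M3, or 2 or fewer polygons left
  if ( BoxVolume( node.bounds ) <= 1.0 ) or ( node.polygons.count <= 2 ) then
    exit; // leaf; bullets only test the few triangles stored here

  for i := 0 to 7 do
  begin
    node.children[i]          := TOctreeNode.Create;
    node.children[i].bounds   := SubCubeBounds( node.bounds, i );  // 1 of the 8 sub-cubes
    node.children[i].polygons := FilterIntersecting( node.polygons, node.children[i].bounds );
    SubDivide( node.children[i] );
  end;
end;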

Now, worlds that use a relatively low number of big polygons will have simple, thus easy-to-access, octrees as well. A lot of (small) triangles on the other hand will still require many checks, and/or an octree with many subdivisions (costing some performance, but also memory).

One more very good reason to keep the worlds relatively simple and to separate props: lightMaps. Now Tower22 doesn't use lightMaps (though they might return). But many games did, and still do. When using lightMaps, each polygon needs its own spot reserved somewhere in an image, so you can store its incoming light values on those pixels. Images are not infinite though. A 512x512 image for example has "only" a quarter million pixels. The amount of space you need depends on the polygon size (large walls need more pixels than tiny stuff), but also on the polycount. If each polygon needs at least 1 pixel, you can't store lightdata for more than a quarter million polygons in a 512x512 image. For simple walls that are made of just a few triangles, this is not a problem. But a stupid sphere-shaped doorknob, no matter how small, may already use 40 triangles. So you already need 40 pixels, or at least 6 if you pack them together based on the XYZ axis direction they're facing. A better idea would be to kick out that doorknob and use another (simplified) lighting method on it. Who will notice the difference anyway?


Importing maps in T22
------------------------------------
Back on topic. Although we do have a Map Editor, it's not suitable for actually constructing the mesh. We use the Map Editor to insert props, paint walls, hang up the lamps, attach sounds, and write scripts. And eventually do some small cosmetic surgery such as welding vertices, shifting the UV coordinates, or removing a polygon. The modeling itself still happens in another 3D program. Why reinvent the wheel?

You can't expect me to make an equal or even better modeling tool within a few months while the 3D Max or Blender boys have been working on theirs for years and years. No, way too little time. Instead the artist models the worlds (and props) in his/her favourite program, then exports them to OBJ files, an old and simple industry standard. When importing these files for the first time (thus when adding a whole new map to the game), the OBJ first has to pass border watch though. And that's me & my loyal sidekick Birdman, uhrm, Lightwave.

Like explained before, a 3D modeling program or OBJ file doesn't have to follow any rules. An OBJ file is not much more than a listing of coordinates (vertexdata) and the relation between them (polygons made of X vertices). But we need to know a bit more to make it suitable for a Tower22 map. A map is more than a bunch of visual geometry. For example, we also need to define collision shapes, and possibly special trigger zones (water, lava, ladders, teleporters). That's why I supply the model with some more layers in Lightwave. One nifty feature in Lightwave is to work in layers, each containing its own data. That makes it easier to separate things while importing. Here's an idea of what a map is made of in Tower22:
1- LODs (the visual geometry in several variants, from full to low detail)
2- Collision geometry
3- Sound occlusion geometry
4- Reverbs
5- Triggers
6- Portals (to see other neighbour rooms defined in another map)
7- Cloth
8- Rails
9- ...Probably more to come...



1- LODs
-----------
In the LOD layers we basically store the model as you see it in the game. But talking about levels-of-detail (LOD), ever noticed buildings suddenly becoming more detailed when approaching them in a game like GTA? That's because they use multiple versions of each map-area. When looking from a distance, a simple textured cube could be sufficient for a flat. You don't see the small details such as normalMaps, antennas, ornaments, signs or other architectonic quirks anyway. Same principle in Tower22. Each map has a HIGH, MEDIUM, LOW and ULTRALOW (optional) mesh. Depending on the distance, but also on whether you can see it or not (T22 = mainly indoors, thus lots of occlusion), a version is picked for rendering.
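Picking a version is no rocket science by the way; in its simplest form it's just a distance check (the thresholds below are made up):

function PickLOD( distanceToCamera : single ) : integer;
begin
  // 0 = HIGH ... 3 = ULTRALOW. Tune the distances per game (and per platform!)
  if      distanceToCamera < 15.0 then result := 0   // HIGH
  else if distanceToCamera < 40.0 then result := 1   // MEDIUM
  else if distanceToCamera < 90.0 then result := 2   // LOW
  else                                 result := 3;  // ULTRALOW
end;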

Not only does the geometry use fewer triangles, the surfaces can also use simplified materials where all the special shader tricks such as normalMapping, parallax or entropy are disabled. This also helps loading the sectors smoothly in the background. First load the simple mesh, then the medium, et cetera. You can actually see this happening in GTA San Andreas when you drive faster than the world can load. Eventually you end up in a weird void with flying cars and pieces of pavement here and there.

Notice the PC version being more detailed (dig that XBox boys!)? That's most probably because the lower LOD variants fade-in earlier on the XBox to gain some speed.

2- Collision Geometry
-----------
Normally what you see is also what you can touch. But not always in gameland. Sometimes games define invisible walls to ensure the player-idiot doesn't fall off a roof. Or vice-versa, they remove the collision for a piece of wall so you can jump through a painting like Super Mario 64 did. Very useful for ghost stuff or making secret hallways. If your player has problems with stair-climbing physics, and believe me, climbing stairs is difficult, then it may help to make a simplified, invisible variant of the stairs. But mostly, the collision geometry is exactly the same as the highest LOD in our case.


3- Sound occlusion geometry
-----------
With all the visual violence, we often forget our ears. But to make things sound realistic in an indoor game, sound needs to follow some physical rules as well... like getting absorbed when travelling through a thick wall. Luckily FMOD allows you to define a 3D world, where you set the occlusion factors for each polygon. So what I do is make a copy of the LOW LOD mesh (no need to let small crap block sound), and define its materials. A "medium concrete wall" for example may occlude 60% of the volume.
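For the curious, with the FMOD Ex geometry API that roughly boils down to this. A sketch using the C-style calls; fmodSystem, the vertex array and the counts are assumed to exist, the exact Delphi binding may pass parameters slightly differently, and error checking is omitted:

var
  geometry  : FMOD_GEOMETRY;  // handle type from the fmod headers
  polyIndex : integer;
begin
  // Reserve room for the occlusion mesh (the LOW LOD copy)
  FMOD_System_CreateGeometry( fmodSystem, maxPolygons, maxVertices, geometry );

  // "Medium concrete wall": occlude 60% of the direct sound (and its reverb)
  FMOD_Geometry_AddPolygon( geometry,
                            0.6,   // direct occlusion
                            0.6,   // reverb occlusion
                            1,     // double sided
                            vertexCount, @wallVertices[0], polyIndex );
end;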

4- Reverbs
-----------
Here we define spherical zones that contain a "Reverb", another cool sound feature. Reverbs are basically sound modifiers. Here is an experiment:

produce a fart in A:the bathroom B:a concert hall C:a cave D:under water
E:(optional) next to your girlfriend in the livingroom.

Maybe you didn't smell the difference, but you should have heard the difference. Due to acoustics, each room sounds different. Reverbs add echoes, and do all kinds of crazy math I have no idea about. But it sounds cool. In Tower22, you can define such an effect for each room. But if needed, you can also do it more locally. If a corner of the room has airduct metal around it, place an "airduct" reverb there. The occlusion volumes will not only block sound, but also the effect of reverbs, by the way.


5- Triggers
-----------
Ever since the Atari, game worlds have special zones that give you bonus points, kill you, or warp you to the next level. Whenever the player (or something else) enters a zone, something happens. A practical example would be a water volume or hazardous zone. If your player intersects a water volume, he has to toggle to swimming (or drowning) mode. In the map, I can define zones (which do not have to be cubes btw) and name them with a specific identifier + parameters in some cases. "GRAVITY 0 -9,8 0", "LADDER +Y", "TRIGGER eventX", et cetera. Basically these volumes help you drive the player state-machine.
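A sketch of how such volumes could drive the state-machine each frame (the types and names here are made up):

// Somewhere in the player update loop
for i := 0 to map.triggerVolumes.count - 1 do
  if PointInVolume( player.position, map.triggerVolumes[i] ) then
    case map.triggerVolumes[i].kind of
      tkWater   : player.state := psSwimming;  // ...or psDrowning
      tkLadder  : player.state := psClimbing;  // climb along the given axis
      tkTrigger : RunScript( map.triggerVolumes[i].param );  // "TRIGGER eventX"
    end;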

6- Portals
-----------
I could tell a whole lot, or just refer you to portal culling. In short, we split up the entire Tower22 in sectors, which are typically rooms or corridor (pieces). So 1 Lightwave file contains 1 sector. Usually rooms are connected via doors, holes, or windows. In this layer we define those portals via simple quads. The engine will figure out which neighbour sector is connected via this shape and link them up. If it fails to find anything, you can also do that manually. Or you can play Valve Portal / Prey tricks with it, by defining an entirely different sector behind a portal.
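The classic render loop that falls out of this looks roughly like below. Simplified; TSector and TFrustum are sketch types, and real implementations also mark visited sectors so you don't recurse forever:

procedure RenderSector( sector : TSector; frustum : TFrustum );
var i : integer;
begin
  sector.Render;
  for i := 0 to sector.portals.count - 1 do
    with sector.portals[i] do
      if frustum.IntersectsQuad( quad ) then
        // Shrink the view to the portal window, then step into the neighbour
        RenderSector( neighbourSector, frustum.ClippedTo( quad ) );
end;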

7- Cloth
-----------
A special kind of geometry are surfaces that use cloth physics. Think about flags, sheets, curtains or Batman capes. You can hang up a (highly) sub-divided shape, and define which vertices are attached, and which are free to move (by gravity / wind / collisions).
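One common way to implement that is Verlet integration (not necessarily what the engine does exactly): free vertices get integrated, attached ones simply skip the step.

// Assuming a vector type with overloaded operators
for i := 0 to high( cloth.verts ) do
  with cloth.verts[i] do
    if not pinned then
    begin
      temp   := pos;
      pos    := pos + ( pos - oldPos ) + gravity * ( dt * dt );  // Verlet step
      oldPos := temp;
    end;
// Afterwards, relax the edge constraints a few times so the cloth keeps its shape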


8- Rails
-----------
A set of 2D lines to set out paths for cameras, animated sequences or nodes for AI routing. This way, you could for example make all possible routes a car could drive through your city, then pick a rail and let the car (globally) follow the nodes. Or use it for a Tour of Duty patrol.




Oops, forgot a triangle. Now what?
----------------------
That's quite a lot, ey mate? Well, most maps only define the LODs, physics, sounds and portals initially, which are mostly extracted parts or copies of the High mesh. Extra stuff like triggers or cloth can be imported later on. Once the Lightwave file has the required layers filled, it can be imported into the game via our own map-editor. This editor will then save it to our own map file format, which contains additional data such as ambient info, scripts, props, and other T22-specific stuff. From then on, we work with the T22 map file...

However, what to do if the artist wants to make a last-minute change? Forgot a polygon, move a vertex, resize a bit... Yeah, we programmers don't expect such things to happen, but it does. The big downside of using importers/exporters, or any other kind of extra steps in between, is the extra work the artist has to do in order to get his model working. Boring work, and bigger chances the artist forgets a certain step. If your importer does not catch those flaws, it could result in weird bugs, non-working maps, frustration, and flying keyboards.

To reduce the chance of that, the Map Editor itself has a few tools to make simple adjustments. UV-maps can be remapped, textures can be changed, vertices can be welded, polygons can be removed. Operations that do require re-importing the model can be done locally. With that I mean you don't have to throw away and rebuild the entire map. A specific component of the map, such as the UV-coordinates for a few polygons, or the collision mesh, can be reloaded on its own, while preserving the rest of the map.

It's still not as user-friendly as Hammer, Quark, the UDK Editor or a Crysis Sandbox, but hey, you got to make some compromises with a $0, 1-man programming team. And that's why I import all the maps instead of the artists, protecting them from frustrations. Plus the urge to fix something is bigger when you are confronted with the bugs yourself, rather than getting complain-mail ;) Next time: Making Props.

And there you have your map imported... burp. I'm always disappointed after working X hours, then seeing an ugly (faulty) mesh like this. But hey, don't let the first impressions take you down! New born babies are ugly too ;)

Wednesday, March 7, 2012

Making of Radar demo #3: Planning and 3D Maps

Seems quite a lot of new people joined the T22 blog the past months! That's great, because my goal is not to make a cool game, but to beat Perez Hilton's blog of course. Nah, it's nice to have you here. And don't underestimate your own role in this; the more attention this project gets, the bigger the chance it succeeds.


Asset planning
---------------------------------------
Are you familiar with that phenomenon: lots of ideas and discussion, but little productivity? Whether you make a game, or have a meeting at work about implementing X, people are enthusiastic about producing ideas and generally good at chatting. But as soon as you really need to pick up the worktools...

Not familiar with that? Good, that means you are a "worker" :) So we had ideas, floorplans and a few shitty sketches ready for this Radar Station. Now we just had to... build the whole damn thing. Talent and enthusiasm all over the place or not, the team needs to be instructed. If you expect all your "employees" to take initiative and fill your mailbox with finished goodies every day, you'll be disappointed. If you are in charge of a project, it's your duty to push, stimulate, give feedback, and set clear goals. Artists are often ready and happy to do something, but they need your confirmation. Like a sniper needs a "fire" signal. Like the red telephone needs to ring before launching the missile.

Ok, so what I did were 2 things:
- Made a simple asset listing in a Notepad file
- Made an Asset Database where details, progression and author info could be stored per task


Making a global list is simple, but important. I knew which rooms we had to make, and had a rough idea of the room-contents. So for each room I made a list of sub-assets. Floor textures, wall concrete, dirt decals, a rusty bed model, a metal rack with stinky boxes, and so on. Aside from visual assets, there were also audio assets. For example the sound of a computer or airvent. And of course programming tasks. If we want snow falling in, it also requires particle generators and shaders in the engine. For each week, I assigned a couple of assets to the team members. PersonA does a concrete and pavement texture, personB makes a barrel object, personC records goat sounds, and so on. And so I planned the next 10 weeks forward.

Assets, but in a database. Properly managing a database takes extra effort and discipline though, so a simple textfile might be best to start with after all.


The magical factor of successful planning is feasibility. If you load too much on personA, you can redo his planning 2 weeks later, because he already got behind. If you plan tasks for personB that he doesn't like, the same will likely happen. If you leave no room for people being busy (work, sick, family, abducted...), same story. And don't forget more assets will automatically pop up on the go. Unless you can look into the future, it's near impossible to foresee each possible (sub)task. Be aware the amount of tasks will grow. If the planning is way off from reality, people will discard it, and it becomes a worthless piece of paper…

The easiest way to prevent this is planning very little. But hey, that's not very stimulating either, and it will delay the release of course. It's ok to push people, as long as there is time to breathe. Plan for the unforeseen. What I did was set a "deadline" at the end of November, though I already knew we would most probably need the entire month of December as well. I also planned 1 empty week halfway, just to catch up on things. On top, I "sinused" the weeks. 1 busy week with 3 assets for example, 1 easy week with only 1 asset, then a busy week again, et cetera. If personA would be busy fighting with his girlfriend in week 3 for example, he could move his tasks to the next "easy week". And of course, I asked everyone if the planning was doable.


The nice thing about plannings and deadlines... Once people agree with them, they have a commitment. If the team nicely accomplishes their tasks according to the planning, the individual feels obliged too. No one likes to let down the team of course. That may sound a bit like being forced to do boring homework for school, but don't forget that we signed up voluntarily to make a game. Shaking Turbo Pascal code or making 3D vases is our hobby... Uhm, if the pace goes well, and progress can be shown each week, it's fun to put effort into it. The demo movie and hopefully positive reactions are the reward. And an actual game release, making filthy money, getting famous, wearing Lady Gaga masks, sniffing heroin and being on the set with 5 naked girls in a new Snoop Dogg track (or just getting a contract at Nintendo) is the pot of gold at the end of the rainbow.


Mapping
---------------------------------------
All right, time to actually do something. So I called Morgan Freeman for voice acting, Sean Connery for texture drawing, Neil Armstrong for the sound and Madonna for the 3D props. They were busy though, so I had to rely on my few team members. Julio on the drums (textures & drawing), Sergi on the guitar (3D objects to fill the rooms with), Brian on the flute (website), David on the synthesizer (audio). Later on Robert on the contrabass (monster), and me on the xylophone (programming & maps).

There are a million ways to accomplish something like a Radar Station. Do the textures/props/audio all separately, in parallel. Hire a couple of Polish construction workers. Or use the empty rooms as a framework. I chose the latter. Once you have rooms, you've got something to show. From that point, it's easier for others to imagine how a room could look with textures and objects, or how it should sound (see pic above). As a bonus, their assets can be directly tested within that room, inside the game or editor. Don't underestimate the effect of direct feedback! If one makes a chair that will be used a few months later, it's difficult to tell if the chair really fits there. And getting back to your old models months later sucks. Just bring it on and get it over with at once.


Baking the mesh
So, I made all the empty rooms first. Normally mapping is not my task, but since this bunker has relatively simple shapes, and given the lack of manpower, I took the job. Plus I got pretty fast with Lightwave, having done tricks with cubes for years and years ;) Nevertheless, if you have experience with 3D modelers you'd better close your eyes the next few paragraphs, or you may get a heart attack.

It's pretty simple really. First I plot the floor points, then connect them as 1 complex polygon. Then grab that polygon and extrude it into the height. Flip the faces inwards, voila, a basic room. Repeat this for all rooms in the scene so you have a bunch of cube-like rooms. Done… Wait, they need doors and windows. In Lightwave, you can toggle to a second layer and create cubes (or other shapes) between the rooms. With Boolean operations like "Subtract", "Add" or "Union" you can use these second shapes to drill holes in walls on the first layer, or to connect the meshes.

It already starts looking like a structure now. But to make the rooms more interesting, it needs some Gaudi twinkles. Pillars, skirting, windowsills, height variations and stairs, pipes, et cetera. The knife-cutting, extrude, and boolean tools are your friends in Lightwave.

Got to mention that the rooms are not equipped with fine details initially. Stuff like wall sockets, electro cables, doorknobs, broken bricks, furniture, metal railings, doors or lamps is added later as separate 3D objects, or as a special detail layer in the map. The map itself has a relatively low polycount to optimize things like collision detection or background rendering passes that don't need the little details. Then again, surfaces do get subdivided into smaller square tiles, and that quickly adds a lot of triangles again. Why is that? Because we store data per vertex for "terrain drawing" and static occlusion values for simple ambient lighting. The more vertices you have, the more detail you can bring in that way. Plus as a general rule, you shouldn't make huge or very thin & long polygons anyway, as they can cause slight interpolation errors. If the normal on vertex X is a bit bent, the connected polygon will show that off in its lighting. If that polygon is huge, you can clearly see the triangular structure of the map, which is usually not good.

A folded, but flat quad... looking like shit due to bad triangulation.

UV
Last but not least are the UV coordinates. No, that's not a term for sunburn degrees. These are (secondary) coordinates that tell how a picture is wrapped on your walls, floors, or whatever surface. With these coordinates you can rotate, shift, stretch or deform the way a picture appears on your walls. Doing UV's is simple with Lightwave... But doing them exactly right takes time, and is as exciting as watching Karate Kid 8 times in a row. Very common errors each gamer has probably seen in the past 14 3D years:

- Scaled too small. Do I need glasses, or do I see the same crack 10 times over and over again? See floor in pic below.
- Scaled too big. The wall from nearby looks like a blurry N64 texture. See right cube in pic.
- Wall shifts. The painted stripe is 3 cm higher or lower on the neighbour wall. See red arrow.
- Stone tiles do not exactly fit in the room; a small strip still appears next to a wall because the scale and/or position was a tiny bit off.
- Truck-ran-over-cat stretched stripes. Shit happens. Sometimes the artist forgets to define the UV coordinates for a corner. See center cube.


Honey, who stretched the kids?

But because I found UV-mapping just too boring, the Tower22 map editor has some paint tools that help with automatic scaling and shifting, which saves a lot of time. So, no, I didn't do any UV-mapping at all in Lightwave. Except for the complex shapes such as pipes with curves. That's because the Tower22 UV tools are too stupid to handle those (for now). And that's also the reason you may see some badly mapped polygons in the movie here and there...
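The trick behind such auto-mapping tools is usually simple planar projection: take the world coordinates, pick the two axes perpendicular to the dominant direction of the polygon normal, and scale. A sketch with made-up names (n = polygon normal, pos = vertex position):

// Project a vertex to UV, based on the dominant axis of the polygon normal
if ( abs( n.y ) >= abs( n.x ) ) and ( abs( n.y ) >= abs( n.z ) ) then
begin // floor / ceiling
  u := pos.x * scale;   v := pos.z * scale;
end else
if ( abs( n.x ) >= abs( n.z ) ) then
begin // wall facing +/- X
  u := pos.z * scale;   v := pos.y * scale;
end else
begin // wall facing +/- Z
  u := pos.x * scale;   v := pos.y * scale;
end;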


Enough for now. Next time: the last step of map-building, importing the maps into the game!

Close, but no cigar