Saturday, February 26, 2011

G.I. Joe

Time for another technical-themed post... if it weren't that I hardly touched the keyboard last week(s). At least, the fingers didn't produce Turbo Pascal. Way too busy finding a 21st-century telephone for my girl. She might be friends with a programmer, but she really doesn't know how those damn electronic things work. Ha, there was an initiative in Europe stating that, besides food, a house and a bed, Internet has to become another basic necessity of life... Well, I can safely say that High Speed Internet was the last concern in their house. Finding (and keeping) work to finance those other three needs is slightly more important in Poland...

Anyway, time to show her some next-century technology. Here, have a look at the PC! Internet is this button, you can also do Minesweeper but that sucks, and this button starts Microsoft Visual Studio. Ok ok, just type Google, then "shoes" + "cheap" please. No, left mouse button. Clicking once is enough. Here, see? No, the computer isn't crazy, you just clicked that minus sign, making the screen disappear. No, you don't stop by pressing the power button. Great stuff, right, computers and Internet? Just when she finally has enough courage to start the computer with a cup of coffee... Internet page not found. Router cannot be found. Windows has recovered from a severe error (what to click?! Panic!). Google Chrome crashed, send report?

Or how about this one: "Windows delayed write disk failed". Reboot... Windows system.dll cannot be loaded. Windows cannot start. Don't know how she does it, but that happened about three times. For some reason, my Samsung HD103UJ disk suddenly starts failing to read/write. Well, suddenly... Maybe those little fingers of our petit gnome have something to do with it. Programming stuff like mad, then suddenly "zap", screen black, computer off. And I hear someone smiling half a meter below.

Samsung has a fix tool called shdiag.exe. But I simply don't trust that cursed piece of metal anymore. A (new!) disk crashing three times, fuck that. I'm going to enjoy torturing that thing with the hammer. But, dry those tears. A new HD has just arrived, and more importantly, I didn't lose any T22 code... Haven't reinstalled everything yet, but when I saw those "write failure" balloons I already knew: "make a back-up. NOW.". Which was a smart thing to do.


Indirect light... Without it, only the small spot on the floor would be lit.

So, that's why I didn't program much. But I might just tell a few details about the T22 programming work I've been doing lately. All graphics programmers have probably heard of & seen Crysis' new approach to realtime G.I. (Global Illumination, ambient light). Which basically means: when light falls on a surface, it reflects ("bounces") further in all directions. And again, and again... The reason why most places, even at night, aren't completely black.

Easy does it, but while most visual phenomena have been tricked by the game industry (reflecting water, dynamic shadows, smoke, refractions, god-rays, ...), doing realtime GI appeared to be one son of a bitch. Most games still aren't much further than Doom1 or Quake: either a simple "ambient color" per area, and/or pre-baked lightMaps. Of course the lightMap quality is much better nowadays, and Occlusion Maps have been added to the weapon arsenal for somewhat more flexibility. But that's pretty much it.

For scenery that doesn't change much (switching lights, large moving objects, day/night cycle), lightMaps are usually sufficient. For bigger/outdoor scenes with only a few global lights (sun), an occlusionMap or even a simple fixed ambient color may do the trick, because the (indirect) light doesn't vary much. So, developing GI wasn't that urgent, and the required computing power simply wasn't worth the deal. Of course, lots of techniques have been tried in the background, but even on today's hardware, most of the tricks have serious drawbacks. Too slow, doesn't work with moving stuff, ugly, can't be applied to bigger scenes, damn difficult to tweak right, and so on. The bottom line is, a good old pre-fabricated lightMap still looks better (and is a hell of a lot faster).

But, stagnation means decline. Someone has to do the dirty job. And Crytek just did. Supergraphics with realtime GI, that even run on an Xbox 360 or PS3. And the Wii can render the skybox, without clouds. Problem solved? No, no, no. Even the brains at Crytek weren't able to tackle all problems. The solution is physically inaccurate, and the detail is low (don't expect a bookcase model with small details to have cool GI; think in terms of cubic meters). And it only does one bounce, which means a little light falling through a window still doesn't light up the whole room. But of all solutions so far, it is probably the best one available. It handles huge areas, needs no pre-calculations at all, it works for dynamic objects, and for sort of a fake it looks pretty good. Also important, the calculation time is relatively low. The Xbox/PS3 prove it.


Now I'm not exactly copying Crytek's approach for the T22 graphics (I'm too stupid for Spherical Harmonics anyway), but I took some of the components and combined them with my current realtime GI "solution". Yes, T22 has had realtime GI for 2 years already. But the quality is so bad that you hardly notice it. How does it work? If you were a piece of wall, you might see other pieces of the opposite wall. Or a floor, ceiling, piano, skybox. Everything you see could send (reflect diffuse) light towards you. You could raytrace to figure out the relations between all "surface patches", but you could also pre-calculate the relations. In my old solution, each patch has about 256 other patches to collect reflected light from. So what it did:

- Generate lightMap texture coordinates for a room.
- Each pixel on that map was converted to a patch (3D position, normal, surface emissive & reflectivity value).
- Calculate for each patch which 256 other patches it can "see" (by raycasting, for example) when looking in its normal direction.
- Store this data together with the room.
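
To give an idea of what gets stored, the patch data could look something like the record below. Mind you, this is just a Delphi-style sketch with made-up names, not the actual T22 code:

type
  TPatch = record
    position     : array[0..2] of Single;    // 3D world position
    normal       : array[0..2] of Single;
    emissive     : array[0..2] of Single;    // RGB light the patch emits itself
    reflectivity : Single;                   // fraction of incoming light bounced back
    seenPatch    : array[0..255] of Integer; // indices of the ~256 patches it can "see"
  end;

var
  roomPatches : array of TPatch; // one patch per lightMap pixel

Those 256 indices alone are already ~1 kilobyte per patch, so even a modest 64x64 lightMap weighs about 4 megabytes. Which brings us to the problem below.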

Since you quickly get hundreds of thousands of data records, the memory storage size grows rapidly. So, one of the biggest issues with this technique is the extremely low-resolution lightMaps needed to temper the size. Thinking about it now, I would have been better off with fewer relations per patch, but more patches. Anyway, at runtime this data is used to spread (or collect, if you prefer) direct light via the pre-calculated relations:

- Render a sector with lights (and shadowMaps) applied, to a flat 2D texture.
- Read the texture back to the CPU.
- Let the CPU spread the light (I used 4 bounces) via the pre-calculated patches.
- Draw the results back into a texture (3 actually, for simple normalMapping) and use that as an "indirect lightMap".
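
The CPU spreading (third step) boils down to something like this. Again a rough sketch using the TPatch record from above; the names are hypothetical, and a real implementation would also weight each relation with a form-factor instead of plainly averaging:

type
  TColor3f = record r, g, b : Single; end;

// One bounce: every patch collects light from the ~256 patches it can "see".
// "incoming" initially holds the direct light that was read back via glReadPixels.
procedure BounceOnce( const patches  : array of TPatch;
                      const incoming : array of TColor3f;
                      var   outgoing : array of TColor3f );
var
  i, j, k : Integer;
  sum     : TColor3f;
begin
  for i := 0 to High(patches) do
  begin
    sum.r := 0; sum.g := 0; sum.b := 0;
    for j := 0 to 255 do
    begin
      k := patches[i].seenPatch[j]; // pre-calculated relation
      sum.r := sum.r + incoming[k].r;
      sum.g := sum.g + incoming[k].g;
      sum.b := sum.b + incoming[k].b;
    end;
    // Average, attenuated by how reflective this patch is
    outgoing[i].r := sum.r * patches[i].reflectivity / 256;
    outgoing[i].g := sum.g * patches[i].reflectivity / 256;
    outgoing[i].b := sum.b * patches[i].reflectivity / 256;
  end;
end;

Call that 4 times, feeding each result back in as the next input, and you have the 4 bounces.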

Besides the lightMaps being so small that the quality was awful (just incorrect), it had more issues:

- The average room already required a few megabytes of disk & memory space to store the patch data.
- Can't use it on dynamic objects (well, you can, but not by nature)
- The glReadPixels operation stalls the rendering pipeline. Speed drops.
- Realtime? Needing ~200 milliseconds to update a map (in a background thread) isn't exactly realtime. With multiple rooms, you could clearly see the light being updated in steps.

However, I still like the idea of having pre-calculated info. It can give somewhat more accurate results, and doing multiple bounces is far less of a problem than with the Crytek approach. The screenshots here, for example, use 4 bounces. So, I tried to take the best of both worlds, resulting in my new "Ambi-Gather" technique. I won't reveal yet how it works (it still has to prove it works at all!) but so far the progress is steady. This time it runs entirely on the GPU, it updates every cycle (thus really realtime), it requires far less memory, and it just looks better so far. Smarter, faster, bigger, better, Robocop 2.0, Terminator 3.0, Rambo 4.0.

Ok, give me some more time, and pray to the Gods that the hard-drive stays intact for a change.

Stupid light bouncing around all realistically… Horror scenes aren't supposed to be bright. Luckily there are still parameters to adjust, telling how strong the indirect light should be and what colors it should have, for a specific lightsource or for the entire scene. In this case I chose acid green.

Tuesday, February 15, 2011

The Horror


Scooby Doo. We all like to torture ourselves with shocks and climaxes that make you want to run away. Well, many of us. But what exactly is scary? A 13-year-old Hentai ghost girl with pitch-black hair? Mice and rats? Empty environments with a threatening tune? Burning zombies that puke out their own balls? Let's see how Tower22 tries to make you cry.


When discussing stuff like environments or characters with my fellows, it appears that describing something scary is utterly difficult. Not just because I lack the English words to do so, but also because fear is a very personal thing. The nightmares / "things" that scared me most are probably normal or just vague to others. If I described them, the reaction would probably be "and then?" or "oh....".

But let's give it a shot anyway. Luckily I've seen quite some horror things. Virtual dramas, I mean. Still can't compare my "experience" with children who saw their family getting murdered or something, of course. Nah, Liveleaks movies don't make me vomit, but when drama becomes reality I'd rather skip it. A Taliban shot in half is not really my cup of tea. Guess that's normal... there is a difference between a vivid fantasy and a sick fantasy :) No, when it comes to inspiration I focus on games, movies, dreams, random thoughts, personal phobias and history. Yes, our history is full of drama. From mythological stories to gas chambers.


Too bad I have some memory-leaks as well, so I forgot a lot of things. But if I had to list a few of the scariest movies/games, the first things that come to mind are:
- The Shining
- Resident Evil 1 Remake
- Amnesia (only did the demo recently, but that was already pretty damn scary)

What these three titles have in common might be the lack of gore. I enjoy intestines, monsters and tomato sauce. But over-the-top horror usually doesn't scare me that much (although Doom3 and Silent Hill did a good job). No, you won't be disgusted by these titles when comparing them to Braindead, The Thing, Cannibal Holocaust (wtf) or Dead Space. Blood and extreme situations are fun, and I'll surely put them in T22 (more than in the Resident Evil games, for example). We won't turn our heads from guts, bizarre chambers, nudity or extreme violence. Hey, the upcoming demo movie will have a pretty dirty scene! I wouldn't mind if the PEGI rating becomes so high that I'm officially too young to program my own game :) BUT, blood, boobs and gore are not the Core Business.

Not a beautiful scene, but I was playing around with some new textures Julio made. Making an uncomfortable environment is at least as important as angry monsters.

Tower22 horror foundations
Extreme shock moments are tasty, but they only work a few times, so we'll have to place them wisely. What could make a game/movie really scary then? As said, opinions differ. But for me, "can't place it" and "continuous terror" are the keywords. With "can't place it" I mean situations so strange that your brain gets a blue-screen when trying to reason about them. Dreams are usually a good example. Scenery that might look pretty normal still doesn't make sense, as it doesn't have to obey the rules of logic. Rooms are suddenly replaced with something else. Persons transform into something different even while you are talking to them. And of course, guns always change into Super-Soakers where you have to shout the "Bam!" gunfire yourself (don't worry, we won't use that trick). Silent Hill is a good example. Environments could suddenly change, putting you from one nightmare deeper into another. The character dialogues are so strange that you trust nobody, and even common objects or rooms don't feel like they were designed by humans. A few other examples are Cube (the movie) and some of the Clive Barker books (famous from Hellraiser).


The "continuous terror" thing is the feeling that you could get in troubles any time. Well. In action games like F.E.A.R. you actually are in trouble continuously. But once you learned how to fight it, it becomes "normal" (until you see that damn little girl again). Older Resident Evil games or Amnesia are much slower paced. You won't be fighting 600 zombies per minute. But when it happens, you'll get in panic. RE controls are difficult and you always lack ammunition. Amnesia doesn't give the player a defence mechanism at all. So, with every step you take, you are 200% alert. Even the safe-rooms with calming music in RE feel like a dangerous place. That's why ghost movies with cliché scary moments still work good in common.

Last, maybe we should try to add some emotional drama? You know, the player lost his 3-year-old son, his father is also his mother, witnessing how your best buddy gets eaten... No. You'll be all alone in Tower22. For one, to enhance the claustrophobic, lonely feeling. Second, I don't believe in game-emotions. Don't know about you, but I'm just too sober to create a bond with a polygonal character. When Richard gets brutally eaten by a shark in Resident Evil, I'm only interested in the shotgun he leaves behind. Thank you.


Concept work
All right, put these ingredients in the blender, add some blood-sauce, and it should give a Tower22 meal. Should be easy for Jamie Oliver. But of course, making up a character that actually scares you (after seeing so many dirty monsters in other games/movies already), or an empty environment that makes you nervous just by its looks and sound, is darn difficult. So how should we accomplish this task? Every room, object, sound, music, creature or piece of story that goes into the game should get a "DTS" (Damn-that's-scary) label first. Considering that the content should make up a game that takes at least 8 hours (still short if you ask me), this will be an even more challenging task than programming an engine that can render blood decals.

Made within 13 minutes. And my digital drawing skills really aren't good! Making quick concepts first helps you create an "inspiration source", and prevents wasting a lot of time on working out your first silly ideas in detail right away. Would this curious guy make it into the game?

It all starts with making concepts. Don't work out a super-cool one-armed dude with a burned skin texture and a loose intestine right away. The more hours you put into a drawing or model, the harder it gets to distance yourself from it (and the more precious time is wasted when it gets discarded). Maybe the other team members don't really like it (but don't dare to say so, because you put so much effort and love into it). Or you'll realise it's pretty stupid actually, a few days or weeks later.

Therefore, start with quick concept drawings or reference pictures. When a weird idea pops up, write and draw it on paper. If you accidentally browse onto a cool photo, store it. If you saw a nice movie, list it. The more ideas, the better. Instead of trying to make the perfect bogeyman right from the start, collect as many ideas as possible. It doesn't matter whether your concept drawing sucks or not. Since you didn't spend much time on it, you can throw it away without hard feelings. Or just extract the good parts from it, like the pose, anatomy or other gadgets.

After making a new concept, let it rest. See if you & the others still like it one week later. If so, you can work out the concept (or parts of it) into a more detailed version. Still keep it simple, so adjustments can be done quickly. If your team still likes it after that, the concept can finally move to the production assembly line. Make a cool detailed poster as input material for the 3D modellers, etcetera. Sure, that's a lot of work. But at least your game (or whatever you are making) won't be littered with spoilsports. Remember, one dull character can seriously hurt the tension of a horror game. And if the environments don't work, the game simply gets boring.


Would that be enough to make it scary? Who knows. But I always keep a saying from an American Vietnam veteran in mind. Don't know exactly how it went anymore, but he wrote:
"It wasn't the (few) actual enemy encounters that scared us. It was the constant knowledge that we *could* encounter the enemy, any time". Peace bro.


Same hall, but with our Red Alert friend. Can't see it here yet, but I added "mirrors" to the engine. You know, for highly reflective marble floors, water or... mirrors.

Tuesday, February 8, 2011

Mega-Structures; Geometry Shaders #2

Two weeks ago I showed a stupid "electro bolt" example that can be done with geometry shaders. Uhm... Quake1 already had lightning-bolt polygons as well, so do we really need geometry shaders for that? The answer ist 'nein' of course, but it was just to show what those programs look like. Still, it took me some thinking before I could come up with a really good reason to implement Geometry Shaders, even though they have existed for a couple of years now. So, here are some more advanced applications. Bon appétit!


--- Tesselation tricks ---
The most interesting trick you can do with a GS is generating new geometry. We're still toying around with bumpMaps and wacky parallax tricks to improve the illusion of relief on our surfaces. But what if we could really use hardcore triangles instead of normalMapping & co? You know, instead of drawing bricks, you actually model them. Obviously, the problem is that this would cost millions and millions of triangles for a somewhat complex scene. Maybe billions, looking at a typical GTA scene.

You don't need all that detail for distant stuff though. Who cares if there is a small hole in that brick wall 100 yards away? Level of Detail (LOD) is a common trick to reduce the polycount for terrain or objects that are further away from the camera. Either replace a high-detail mesh with a lower one, or dynamically reduce the quad division for heightMap/grid geometry such as terrains. Now with a GS, you could take brilliant advantage of that. Imagine a brick wall again, originally modeled as just a flat quad. That's good enough normally, but when the camera comes closer, you could let a GS subdivide that quad into 1, 4, 8, 16, ... pieces. For each newly generated "sub" vertex, interpolate the texture coordinate, fetch the height, then recalculate the vertex 3D position (and normals). Since there is real relief in the wall now, you don't need normalMaps for shading.

When using a 512x512 heightMap on a surface, you could effectively subdivide into 512x512 tiny quads. That is, ouch... 524,288 triangles. Hmmm, still too much for modern hardware, even if you only use it for geometry within a few meters of the camera. But hey, who knows. Over the last 15 years, the maximum polycount has been multiplied by a factor of 500. If not more.
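
Host-side you would of course clamp how far the GS may go. A trivial Delphi sketch with made-up distances (not actual T22 code); the chosen level could be passed to the GS as a uniform. Each level splits every quad into 4, so level L turns one quad into (2^L)x(2^L) quads:

// Hypothetical: pick a subdivision level for a wall quad,
// based on how close the camera is. Level 0 = keep the quad flat.
function PickSubdivLevel( distToCamera : Single ) : Integer;
begin
  if distToCamera < 2.0 then
    Result := 5        // 32x32 quads = 2048 triangles
  else if distToCamera < 8.0 then
    Result := 2        // 4x4 quads = 32 triangles
  else
    Result := 0;       // flat quad, rely on the normalMap
end;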


--- Rendering CubeMaps ---
Aight, that still leaves us with nothing. But wait, there is another extremely handy trick that can be done with Geometry Shaders, making them worth implementing after all: drawing to cubeMaps or 3D textures in a SINGLE pass. And yes, there are certainly reasons to want that these days. When you have to draw data in the background (shadowMap, deferred engine, etc.), you can render onto simple 2D target textures. But in some cases, you may also need to render onto cubeMaps, 2D array textures, or a 3D (volume) texture:
- cubeMaps: reflections, (light?) probes, point shadowMaps
- 3D textures: injecting points (ambient render techniques)
- 2D array textures: cascaded shadowMaps (sun)
In essence, all these "special" texture types are just collections of 2D textures. A 3D texture with 32 layers is... 32 2D textures stacked on each other. And a cubeMap is made of 6 textures. So when you'd like to render the environment into a cubeMap for reflections, you'll need to render 6 times:
- Switch to cubeMap face -X → render everything on the left
- Switch to cubeMap face +X → render everything on the right
- Etcetera.

Now a Geometry Shader cannot only generate extra output, it can also tell on which "layer" to render a primitive. So, instead of rendering the environment 6 times for a cubeMap, we can also render it just once, then let the geometry shader decide on which side(s) to put each primitive. The easiest (and probably fastest) way is to simply duplicate each incoming primitive 6 times. Use the 6 matrices you'd normally use to set up the camera before rendering the -X, +X, -Y, +Y, -Z or +Z side, to transform the primitive into the proper perspective for each layer. In practice that means the primitive will fall outside the view-frustum for most of the 6 faces, so no pixels will get harmed.

Wait a minute... You are still rendering everything 6 times in the end, right? Yes, in essence both tricks do the same. In case you can do 100% perfect culling the traditional way, the same amount of triangles gets pushed. However, the GS way still has the advantage of:
- The CPU only has to bother once. No swapping the same textures/parameters multiple times.
- Culling is simpler (100% perfect culling has its costs as well).
- No switching render targets.

The speed gain may not be gigantic, but there is more. What if you have a large amount of "probes", collecting... ambient light or something? You could render into a 3D texture with many layers, and duplicate those triangles even more times. As long as the probes work with the same set of input data (geometry pushed by the CPU), you can do it in a single call.
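
Before we get to the shader, the host first has to fill those six camera matrices. Here's a Delphi-style sketch of how that could be done (assumed names, not the actual T22 code), using the standard OpenGL cubeMap face directions. Note that a 90-degree, aspect-1 projection matrix still has to be applied on top of these view matrices:

type
  TMatrix4 = array[0..15] of Single;

const
  // Standard OpenGL cubeMap face directions: +X, -X, +Y, -Y, +Z, -Z
  faceDir : array[0..5, 0..2] of Single =
    ( (1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1) );
  faceUp  : array[0..5, 0..2] of Single =
    ( (0,-1,0), (0,-1,0), (0,0,1), (0,0,-1), (0,-1,0), (0,-1,0) );

// eyeX/Y/Z = world position of the cubeMap center
procedure BuildCubeMapMatrices( eyeX, eyeY, eyeZ : Single;
                                var cameraMatrix : array of TMatrix4 );
var
  i : Integer;
begin
  for i := 0 to 5 do
  begin
    glMatrixMode( GL_MODELVIEW );
    glPushMatrix();
    glLoadIdentity();
    gluLookAt( eyeX, eyeY, eyeZ,
               eyeX + faceDir[i,0], eyeY + faceDir[i,1], eyeZ + faceDir[i,2],
               faceUp[i,0], faceUp[i,1], faceUp[i,2] );
    glGetFloatv( GL_MODELVIEW_MATRIX, @cameraMatrix[i][0] );
    glPopMatrix();
  end;
end;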

I'd love to place a working cubeMap Geometry Shader now. But sadly, my parrot died, so I couldn't try it yet. You'll have to make do with this pseudo code:

TRIANGLE void main(
    AttribArray<float4> position  : POSITION,  // Coordinates
    AttribArray<float2> texCoord0 : TEXCOORD0, // Custom stuff. Just pass that
    uniform float4x4 cameraMatrix[6]           // Camera matrices for each direction
){
    for (int i=0; i<6; i++)
    {
        // Set cubeMap target face 0..5
        flatAttrib( i : LAYER );

        // Pass the geometry data, transformed
        // into the current camera direction
        for (int j=0; j<3; j++)
        {
            float4 transformedPos = mul( cameraMatrix[i], position[j] );
            emitVertex( transformedPos : POSITION,
                        texCoord0[j] : TEXCOORD0 );
        }
        restartStrip(); // close this triangle before starting the next face
    }
}

Not sure about the layer thing, and the matrix calculation is incomplete (which matrix: view only, or view & projection?), but this is basically it.


--- Rendering points into a 3D texture ---
Instead of playing with cubes, I was experimenting with a new realtime G.I. method. Just like Crytek's LPV method, it requires a massive amount of points to be inserted into a volume texture (multiple volume textures, actually). The position inside the 3D texture depends on the world-coordinate of the point, where Y (height) tells on which layer to render. Easier said than done.

Just like with cubeMaps, you first have to tell on which layer you are going to put the upcoming stream of geometrical crap. That would be fine if that point-array were ordered on Y coordinates, but it's a complete random mess of points. So either we sort that array first, or we render the whole damn thing again and again for each layer we activate:

fbo.setRenderTarget( some3Dtexture );
for i := 0 to 31 do
begin
  fbo.setOutputLayer( i );
  bigDotArray.draw( again, sigh );
end;

Eventually the CPU or shader has to test whether each given point belongs on the current layer. In other words, @#$@#%. Then there is a third option: "bliting"... or is it "blitting"? Anyway, OpenGL has this glBlitFramebufferEXT command. Some sort of advanced version of glCopyTexSubImage2D. Basically you copy a rectangle from one buffer to another. So, what you can do is render everything onto a flat, simple 2D texture first, then copy it into the 3D one:

- Unfold the 3D texture to a simple 2D one: 32x32x32 → 1024x32.
- Render all the stuff in one pass onto the 2D target, spread out over the width. Let the vertex-shader calculate an offset based on the Y or Z coordinate (whatever you like). So everything that belongs on the bottom layer gets an offset of 0, layer[4] gets an offset of 4 x (1/32), etcetera.
- Make a loop that "blits" a subrect from the 2D target onto each 3D texture layer:

fbo.setRenderTarget( some3Dtexture );
for i := 0 to 31 do
begin
  fbo.setOutputLayer( i );
  // Copy 32x32 subrect number "i" from the 2D target onto 3D layer "i"
  glBlitFramebufferEXT( i*32, 0, i*32+32, 32,   // source rect
                        0,    0, 32,      32,   // target rect
                        GL_COLOR_BUFFER_BIT, GL_NEAREST );
end;

You still have to loop through that whole 3D texture, but at least you only have to draw the point array once, without worries. Still, it doesn't win the Nobel Peace Prize. With a Geometry Shader however, it becomes child's play. Just act like a fool that doesn't know anything, bind the 3D texture as a renderTarget, and render that stupid array already. Your shaders remain the same as well, except for one thing:

POINT void main( AttribArray<float4> position : POSITION )
{
    // Given positions are 3D texcoords!
    // xy between 0 and 1, z is the layer index.
    // Thus when placing something on layer 4 of 16 layers in total, z = 0.25
    // Eventually you need to add a half, (1/16)*0.5, to Z
    float4 p = position[0];

    // Select the 3D texture slice based on the Z coordinate
    flatAttrib( p.z : LAYER );
    p.z = 0;
    emitVertex( p : POSITION );
}

Done. Or wait, one last thing. You need to attach the target as a LAYERED texture, otherwise all your points still get dumped on layer 0 only. There is a slight difference between setting up an FBO where you render to one specific layer, and a layered one where you can access all layers via the geometry shader. Too bad I don't have the code here right now, too bad it's 01:15 already, and too bad I'm too lazy to look for it now :p But just keep that in mind. I also had to disable the depth-buffer attachment (or use a "layered depth-buffer"?), otherwise it didn't work.
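
Since the real code isn't at hand, here's the difference as I remember it from the EXT_geometry_shader4 extension; take it as a sketch and double-check against the spec. "some3Dtexture" and "layerIndex" are placeholders:

// A) Render to ONE specific layer: the GS "LAYER" output is useless here
glFramebufferTexture3DEXT( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                           GL_TEXTURE_3D, some3Dtexture, 0, layerIndex );

// B) LAYERED attachment: no layer argument at all, so the geometry
//    shader can route each primitive to whatever layer it likes
glFramebufferTextureEXT( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                         some3Dtexture, 0 );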

Any screenies then? You'll have to make do with this bucket.

Wednesday, February 2, 2011

Charlie, check that manhole

Late, late. Access to the Internet is limited due to medieval techniques in this house. Internet only works in a dark corner, since the wireless connection decided to give me the middle finger. And I'm too lazy to fix it.


How's the demo? Easy, it's planned somewhere in July for now. 3D content is being made as we speak, but don't forget it takes some extra time now. 3D models will get more detail, and all the textures are made by ourselves. And... those guys probably have something better to do than spend 4 or 5 hours a day on T22 like me :p

To make it worse, artist Jesse lost his drawing-arm for a while. While biking, Apache Indians took him under fire, he hit a car-door, crashed into a ravine, the fuel leaking out of his bike exploded, and wild vultures attacked him. No, but that car-door really did hurt. So, his arm has to rest before he can put something on canvas again. Too bad, I really like to show you some art-teasers here once in a while. Oh well, at least he didn't explode or get eaten by vultures. Always look on the bright side of life. Anyway, sorry for the lack of fresh screenshots lately; we have to be a little patient.

Ok, one shot then. Realtime ambient lighting in progress. Normally this scene would be completely black, except for the spotlight in the corner. Now light from this source and from outside is bounced several times. Runs pretty damn fast (with 5 bounces), and at least it's far more accurate than the previous "solution", at a lower memory cost as well. However, the visual errors (due to low-resolution 3D textures) are still too big to accept. To be continued.

The good news then. "We" (still need a team-name, but I can't get further than Doodle-Soft or Ape-Avalanche Studio) are getting help from three extra men from yLAn Studios (Milan, Italy). Uh, men... actually two of them are women. Yeah, I didn't know that either, but game-producing girls do exist. Hey, seems this nameless team is getting quite diverse! Americans, women, black & white, moms, dads, Mediterranean... Though we still need a transsexual Viking in a wheelchair with Arab & Chinese parents to get this party complete :)

But seriously, I closed the "help-needed" doors a month ago. The only thing the team still needs at the moment is a really, really good environment modeler. But before asking again, I guess it's wise to first finish a second demo, to use as fishing bait & filter. Really, before asking for help, make sure you have something to show. So what are the boys & girls from Milan suddenly doing here then? Positive discrimination? No; to be honest, I thought I was talking with a guy until Julio explained her name "Giulia" is pronounced the same as my own daughter's name "Julia". Doh! Like I said, you don't expect the other side to be responding to game-thingies. But probably I'm just old-fashioned.

Anyway, whenever someone offers help, it's always hard to make a verdict. Can they help you, based on just a few images or videos in their portfolio? And can you help them? Inviting everyone is easy, but if you don't have interesting tasks that fit their skills & interests, the motivation will be gone soon. Initially I had my doubts here. For one thing, three extra people on a relatively small demo is like hiring 60 carpenters to construct a dog kennel in the front yard. Giulia, head of yLAn, explained they were looking for another challenge. You know, sharpening skills and such. So I thought about the offer again, and suddenly an idea popped up... Prototyping! Say what?


T22 is an ambitious project, and we're full of ideas. Charles Dickens' "Great Expectations". But deep in my heart I know it's impossible to release such a game, unless a full battalion of Russian Spetsnaz is going to help. Luckily time is on our side, as no one is pushing to release this game within a few years. The worst thing that can happen is that I get hit by a dumptruck, or that I'll have to rebuild parts of the engine because the graphics are outdated again. But even so, we need more manpower (or girl-power, for that matter), when it comes to environment modeling to start with.

The problem with making maps, levels, stages, environments, or whatever you like to call them, is not just the amount of work to model & draw them. Each game has a set of requirements that a new piece of map has to meet. Examples:

* Halflife -->
Science-fiction, slightly horror, HL settings (City 17, Area 51, labs, prisons, factories, ...), linear gameplay, combat scenes (crates and pillars to take cover, flanking routes, plenty of stuff to destroy).

* Hidden & Dangerous (huh? Yes, the BEST and most BUGGY tactical WWII game ever. Period) -->
Multiple approaches to solve mission X (from Rambo to Stealth), Ze Germans, WWII style, sniping positions, enemy patrol routes to study, something to blow up.

* Zelda -->
Big outdoor map, secret locations (grottos, paths, dungeons, backdoor entrances), has to fit the map-section theme (town, dungeon, farm, forest, lake, desert, …). Occupied by either (simple) enemies or friendly creatures to talk with. Covers a couple of puzzles or items to find. Maybe a "vista" like a giant castle, big bridge, or floating structure. Fits the overall Zelda fantasy theme; cartoonish, fun, yet a little bit scary sometimes = fairy tales.

* Super Mario -->
Difficult jumps / climbing / swimming / ape stunts, 1 important theme for each level (giant boss, something huge to climb, King Buduba's missing key, pirate ship, ...), overall Mario fantasy setting (mushroom kingdom, snow, water, forest, castle), kid-friendly, nice colors.


Just felt like posting a pic of this misunderstood ugly-duck game. I played Operation Flashpoint, Arma, Rainbow Six, and several other tactical action games. But honestly, I found them pretty boring and empty. Hidden & Dangerous, on the other hand, was despite all its bugs and clumsy characters a brilliant game. Because every inch of those maps was thought about. I spent hours and hours crawling to reach the perfect sniping positions, felt one with the ground. Every little bump, wall or obstacle had its function; they weren't just thrown into the maps as decoration.

And so Tower22 has its wish list. I dare say the requirements are even more difficult than for the average game here, as the "horror-vibe" is an extremely fragile thing. Most "scary" movies make me yawn. How to give a skyscraper varying, interesting environments, but with respect for the main theme (horror, building, old, Soviet)? And even more difficult: how to guarantee that "horror-vibe"? The game has to shock, or at least make the player feel uncomfortable, for hours in a row. A horror game is considered a success when the pH value of someone's underpants is -2 or lower after 2 hours of playing. But doing so is extremely difficult. One wrong enemy, music tune or badly chosen environment can change "scariness" into "laughter" or just "boredom". Humans adapt quickly to situations, so having a spooky dusty flat-corridor or a couple of intestines here and there isn't going to work forever.

* T22 -->
Overall unsafe feeling, here and there disgusting or bizarre things to keep it varying, proper climax-speed, main theme (Soviet, building, old, strange), maze-like structure, hidden areas. Make the environments suitable for puzzles and caretaker jobs. The environment has to suit the storyline. Maps support the type of enemy interaction (multiple escape routes, hiding spots, ...).

So besides being dreadful, the environment also has to deliver support for puzzles and the type of enemy A.I. Kinda like Zelda or Metroid maps. But how to know whether something will work out or not? Don't know about you, but my ideas usually start with some random thoughts, then I try to pour them into a few locations. With that in mind, I start drawing a 2D map. You know, the kind of floor plan you'll find in Resident Evil. But I'm always afraid the extra rooms/halls between the main locations won't support the type of game, or will just be boring once visualized in a real 3D model. And the big problem with puzzle games is that you may need to make adjustments at location B, 200 kilometers away, when event A changes. Tower22 doesn't work with isolated levels that can easily be replaced if the beta-test team thinks it's shit. It is one huge map, connected to itself like one big happy inbred family.


So, back on track: that's why concept art is important. The more "proven" (visualized) ideas, the more ammunition to fill a game. And that's also why prototyping could work very well. Before wasting lots of time putting your 3D/drawing gurus on a map section, first check if it works at all. Maybe the map simply doesn't play very well. Maybe the chosen style just isn't scary, or is misplaced in the overall setting. Maybe puzzle X just doesn't make sense. Maybe the environment doesn't lend itself to a good old chase-scene. It's hard to judge anyway, since as a developer you'll be biased and numbed after seeing the same maps about 3 billion times. Therefore, passing ideas between separated "departments" can prevent tunnel-vision.

With people that can work "ahead", you can make pilots, try out ideas. Then show it to someone else who wasn't involved in the design process, for a preview. If it has potential, the prototype team can add more stuff and address the points of critique. Once the second version is approved, it can be shoved to the guru department. The 2D/3D experts will tune or rebuild the maps in full detail. Since gurus are scarce, it's wise to let them focus on the most important tasks, those most likely to actually end up in the game. So, in short, the flow would be:

Isn't that a nice cup-a-soup manager-model? At school we had millions of these meaningless models, so here's my legacy. Seems like an awful amount of steps, but keep in mind that the "departments" can work pretty much in parallel here. While the experts are working out previously approved ideas, the prototypers can work out new ideas made by the creative minds (dolphins with balls in an aquarium ;) ). Also, this flow isn't applied to a single toilet-map, but to bigger game sections.


It's like sketching. I read somewhere that the movie Star Wars had thousands and thousands of (quick) drawings for the creatures, spaceships, planet architecture and whatsoever in the Star Wars universe. 95% of those drawings probably never saw daylight, but in the end it's all about the "approved" five percent. That is what will make your movie or game. Sounds logical, but doing so requires time and patience. People quickly fall in love with their own work, or just don't want to throw away all those hours of work. Meaning that their creation MUST be implemented somehow... even if it does suck. Part of prototyping work is accepting that your ideas are likely to get criticized, killed or adjusted. It only works well though if the prototypers walk ahead, scouting the route. The whole purpose is to speed up the process by avoiding putting your experts on a wrong mission. Well, yLAn Studios is 3 "men" strong (Giulia, Mara, Jacob), so they can move a relatively large amount of work.

Well, let's hope we can help each other out: yLAn by scouting the environments, we by giving them nice assignments to work on, and a chance to gain some game-design experience.