Wednesday, October 31, 2012

3D survival guide for starters #3, NormalMapping

Hey, fatman! Just heard a funny fact that makes you want to buy a horrorgame like T22, if you're too heavy that is. Seems watching scary movies or playing horror games is pretty healthy: watching The Shining burned more than 180 kilocalories! Because of fear/stress, you eat less and burn more. So go figure... playing T22 on a home-trainer screen should be even more tense than Tony Little target training or Zumba dancing.


Normals
-----------------------
Enough crap, let's continue this starters' guide with the world-famous "bumpMap". Or I should say "normalMap", because that is what the texture really contains: normals. But what exactly is this weird purple-blue looking texture? So far we've shown several techniques to affect surface properties per pixel:
- albedo (diffuse) for diffuse lighting
- specularity for specular lighting
- shininess for specular spreading
- emissive for making pixels self-emitting

Another crucial attribute is the surface "normal". It's a mathematical term you may remember from school. Or not. "A line or vector is called a normal if it's perpendicular to another line/vector/object". Basically it tells in which direction a piece of surface is facing. In 2D space we have 2 axes, X (horizontal) and Y (vertical). In 3D space we have -surprise- 3 axes: X, Y and Z. Your floor is facing upwards (I hope so at least), your ceiling normal is pointing downwards. If the Y axis means "vertical", the floor normal would be {x:0, y:+1, z:0}, and the ceiling normal {x:0, y:-1, z:0}. Your walls would point somewhere in the -X, +X, -Z or +Z direction. In a 3D model, each triangle has a certain direction. Or to be more accurate, each vertex stores a normal, possibly bent a bit towards its neighbors to get a smoother transition. That is called "smoothing" by the way.
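For the programmers: the direction ("face normal") of a single triangle can be computed from its three corner positions with a cross product. A minimal HLSL-style sketch (the function name is just made up for illustration):

// Face normal of a triangle with corners a, b and c (assuming counter-clockwise winding)
float3 faceNormal( float3 a, float3 b, float3 c )
{
    float3 edge1 = b - a;
    float3 edge2 = c - a;
    return normalize( cross( edge1, edge2 ) );  // perpendicular to the triangle, length 1
}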

As explained in part 1, older engines/games did all the lighting math per vertex. I also showed some shader math to calculate lighting in part 2, but let's clarify a bit. If you shine a flashlight on a wall, the backside of that wall won't be affected (unless you have a Death Star laser beam). That's because the normal of the backside isn't facing towards your flashlight. If you shine light on a cylinder shape, the front will light up the most, and the sides will gradually fade as their normals face further and further away from the flashlight. This makes sense, as the surface there catches fewer photons. Lambertian (cosine) lighting is often used in (game) shaders to simulate this behavior:
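In shader code this boils down to a single dot product between the surface normal and the direction towards the light. A stripped-down sketch, using the same kind of pseudo-variables as the code further below (no attenuation or shadows yet):

float3 pixToLightVector = normalize( lamp.position - pixel.position );
float  diffuseTerm      = max( 0, dot( pixel.normal, pixToLightVector ) ); // cosine of the angle, clamped
float3 diffuseLight     = diffuseTerm * lamp.color;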

NormalMapping
-----------------------
Since we have relatively few vertices, per-vertex lighting is a fast way to compute the scene lighting, but it doesn't allow for a lot of geometrical detail/variation on your surfaces, unless you tessellate them (meaning: divide them into a LOT of tiny triangles). So, a football field made of a single quad (= 2 triangles, 4 corner vertices) would only calculate the lighting at 4 points and interpolate the lighting between them to get a smooth gradient. In this case, the entire soccer field would be more or less equally lit.

However, surfaces are rarely perfectly flat. A football field contains bumps, grass, sand chunks, holes, et cetera. Same thing with brick walls, or wood planks. Even a vinyl floor may still have some bumps or damaged spots. We could tessellate the 3D geometry, but we would need millions of triangles even for a small room to get sufficient detail. C'est pas possible. That's French for "screw it".

This is why old games drew shading relief into the (diffuse) textures. However, this is not really correct of course, as shadows depend on the lightsource locations. If you move a light from up to down, the shading on a wall should change as well. Nope, we needed something else... Hey! If we can vary diffuse, specularity and emissive attributes per pixel, then why not vary the normals as well?! Excellent thinking, chief. "BumpMapping" was born, and was implemented in various ways. The winning solution was "Dot3 normalMapping". As usual, we make yet another image, where each pixel contains a normal. Which is why the correct name is "normalMap" (not "bumpMap"). So instead of having a normal per vertex only, we now have a normal for each pixel (well, depending a bit on the image resolution of course). So, for a brick wall, the parts that face downwards will encode that normal value into this image, causing those pixels to catch less light coming from above.

Obviously, you can't draw multiple lighting situations in a diffuseMap. With normalMapping, this problem is fixed though. Below is a part of the brick normalMap texture:


Image colors
Now let's explain the weird colors you see in a typical normalMap. We start with a little lesson about how images are built up. Common image formats such as BMP, PNG or TGA have 3 or 4 color channels: Red, Green, Blue, and sometimes "Alpha", which is often used for transparency or masking. Each color channel is made of a byte (= 8 bits = 256 different variations possible), so an RGB image stores each pixel with 8 + 8 + 8 = 24 bits, meaning you have 256*256*256 = 16.777.216 different colors.

Notice that some image formats support higher color depths. For example, if each color channel gets 2 bytes (16 bits) instead of 1, the image size would be twice as big, and the color palette would give 281.474.976.710.656 possibilities. Having so many colors isn't useful (yet), as our current monitors only support 16 million colors anyway. Although "HDR" monitors may not be that far away anymore. Anyway, you may think images are only used to store colors, but you can also use them to store vectors, heights, or basically any other numeric data. Those 24 bits could just as well represent a number. Or 3 numbers, in the case of normalMapping. We "abuse" the
Red color = X axis value
Green color = Y axis value
Blue color = Z axis value
About directional vectors such as these normals: they are stored as "normalized" vectors (also called "unit vectors"). That means each axis value is somewhere between -1 and +1, and the length of the vector must be exactly 1. If the length is shorter or longer, the vector isn't normalized (a common newbie mistake when writing light shaders).

You can forget about "normalized" vectors for now, but it's important to understand how such a vector value is converted to an RGB color value. We need to convert each axis value (-1..+1) to a color channel value (0..255). This is because we have 1 byte (8 bits) per channel, meaning the value can be 0 to 255. Well, that is not so difficult:
((axisValue + 1) / 2) * 255
((-1 +1) / 2) * 255 = 0 // if axis value was -1
(( 0 +1) / 2) * 255 = 128 // if axis value was 0
((+1 +1) / 2) * 255 = 255 // if axis value was +1
Do the same trick for all 3 channels (XYZ to RGB):
color.rgb = ((axisValue.xyz + {1,1,1}) / {2,2,2}) * {255,255,255}
Let's take some examples. Your floor, pointing upwards would have {x:0, y:+1, z:0} as a normal. When converting that to a color, it becomes
red   = ((x: 0 + 1) / 2) * 255 = 128
green = ((y:+1 + 1) / 2) * 255 = 255
blue  = ((z: 0 + 1) / 2) * 255 = 128
A bright greenish value (note red and blue are half-gray values, not 0). If your surface faced in the +X direction, the value would be bright red. If it faced in the -X direction, it would be a dark green-blue value (no red at all).
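In (hypothetical) code, packing a normal into a color could look like this; the normalize() guards the "length must be 1" rule mentioned earlier:

// Pack a direction vector (-1..+1 per axis) into a 0..255 RGB color
float3 packNormal( float3 n )
{
    n = normalize( n );                          // length must be exactly 1
    return ( (n + float3(1,1,1)) / 2 ) * 255;    // -1..+1  ->  0..255
}
// packNormal( float3(0,+1,0) ) = {128, 255, 128}   floor   (bright greenish)
// packNormal( float3(0,-1,0) ) = {128,   0, 128}   ceiling (dark purple)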



Obviously, normalMaps aren't hand-drawn with MS Paint. Not that it's impossible, but it would mean you'd have to calculate the normal for each pixel. That's why we have normalMap generators: software that generates (approximates) normals from a height-image, or by comparing a super detailed (high-poly) model with a lower detailed (game) model. The high-poly model contains all the details such as scratches, skin bumps or wood reliefs as real 3D geometry. Usually programs like ZBrush or Mudbox are used to create those super high polygon models. Since we can't use those models in the game, we make a low-poly variant of it. By comparing the two shapes, a normalMap can be extracted. Pretty awesome right?

Either way, once you have a normalMap, shaders can read these normals. Of course, they have to convert the "encoded" color back to a directional vector:
pixel.rgb  = textureRead( normalMap, texcoords );
// ! note that texture reads in a shader give colors 
//   in the (0..1) range instead of (0..255)
//   So we only have to convert from (0..1) to (-1..+1)
normal.xyz = {2,2,2} * pixel.rgb - {1,1,1} 
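One practical note: texture filtering and compression can slightly mess up the decoded vector, so its length won't be exactly 1 anymore. That's why shaders usually renormalize right after decoding; a small sketch in the same pseudo-style:

normal.xyz = {2,2,2} * pixel.rgb - {1,1,1};
normal.xyz = normalize( normal.xyz );   // repair small length errors caused by filtering/compression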

You are not ill, you're looking at a 3D scene drawing its normals. This is often used for debugging, checking if the normals are right. If you paid attention, you should know by now that greenish pixels are facing upwards, reddish ones in the +X direction, and blueish ones in the +Z direction. You can also see the normals vary a lot per pixel, leading to more detailed lighting, thanks to normalMap textures.


Why is the ocean blue, and why are normalMaps purple-blue? > Tangent Space
--------------------
All clear so far? Good job. If not, have some coffee, play a game, eat a steak, and read again. Or just skip this if you don't give a damn about the further implementation of normalMapping. With the knowledge so far, a wood floor normalMap would be a mainly greenish texture, as most normals point upwards. Only the edges or big wood nerves would be reddish or blueish / darker. However, it seems all those normalMaps appear purple/blueish (see the bricks above). Why is that? The problem is that we don't always know the absolute "world" normals beforehand. What if we want to apply the same floor texture on the ceiling? We would have to invert the Y value in a separate texture. If we didn't do that, the lighting would become incorrect, as the shader still thinks the ceiling is facing upwards instead of downwards. And how about animated objects? They can rotate, tumble and animate in all kinds of ways. There is an infinite number of possible directions a polygon can take.

Instead of drawing a million different normalMaps, we make a "tangentSpace normalMap". What you see in these images is the deviation compared to the overall triangle normal. Huh? How to explain this easily... All pixels with a value of {128,128,255} -yes, that blue-purple color- represent a normal of {0,0,+1}. This means the normal is exactly the same as its parent triangle (vertex) normal. As soon as the color starts "bending" a bit (less blue, more or less green/red), the normal bends away from its parent triangle. If you look at these bricks, you'll see the parts facing forward (+Z direction), along with the wall, are blue-purple. The edges of the bricks and the rough parts start showing other colors.


Ok, I know there are better explanations that also explain the math. What’s important is that we can write shaders that do tangent normalMapping. In combination with these textures, we can paste our normalMaps on all surfaces, no matter what direction they face. You could rotate a barrel or put your floor upside down; the differences between the per-pixel normals and their carrier triangle-normals will remain the same.

It's also important to understand that these normals aren't in "World Space". In absolute coordinates, "downwards" would be towards the core of the earth. But "down" (-Y = less green) in a tangentSpace normalMap doesn't necessarily mean the pixel will actually face down. Just flip or rotate the texture on your wall, or put it on a ceiling; -Y would point in a different direction. A common mistake is to forget this, and to compare the "lightVector" (see the pics above or the previous post), which was calculated in world space, with this tangentSpace normal value. To fix this problem, you either convert the tangentSpace normal to a world normal first (that's what I did in the monster scene image above), or you convert your lightVector / cameraVector to tangent space before comparing them with the normal to compute diffuse/specular lighting.

All in all, tangent NormalMapping requires 3 things:
• A tangentSpace normalMap texture
• A shader that converts vectors to tangentspace OR normals to World space
• Pre-calculated Tangents and optionally BiTangents in your 3D geometry, required to convert from one space to the other (see the sketch below).
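For the shader folks, here is a minimal sketch of the "convert the normal to world space" option, using a so-called TBN matrix built from the interpolated tangent, bitangent and normal (variable names are illustrative, and readTexture is the same hypothetical helper as in the other snippets):

// Rotation matrix from tangent space to world space (rows: tangent, bitangent, normal)
float3x3 TBN = float3x3( normalize( input.tangent ),
                         normalize( input.bitangent ),
                         normalize( input.normal ) );

// Decode the tangent-space normal from the texture and rotate it into world space
float3 tangentNormal = 2 * readTexture( normalMap, uvCoordinates ).rgb - 1;
float3 worldNormal   = normalize( mul( tangentNormal, TBN ) );

// worldNormal can now safely be compared with world-space light / camera vectors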

Explaining how this is done is a bit out of the scope of this post; there are plenty of code demos out there. Just remember that if your lighting is wrong, you are very likely comparing apples with oranges. Or world space with tangent space.
When putting it all together, things start to make sense. Here you can see the different attributes of pixels. Diffuse(albedo) colors, specularity, normals. And of course to what kind of lighting results they lead. All that data is stuffed in several textures and applied on our objects, walls, monsters, and so on.


NormalMaps conclusion
-----------------------------------
NormalMaps are widely used, and I suppose they will stick around for a while. They won't be needed anymore once we can achieve real 3D geometry accurate enough to contain all those little details. Hardware tessellation is promising, but still too expensive to use on a wide scale. I should also mention that normalMaps have limitations. First of all, the actual 3D shape still remains flat. So when shining a light on your brick wall, or whatever surface, you'll see the shading adapt nicely. But when looking from the side, the wall is still as flat as a breastless woman. Also, the pixels that face away from the lightsources will shade themselves, but won't cast shadows on their neighbors. So normalMapping only partially affects the lighting. Internal shadow casting is possible, but requires some more techniques. See heightMaps.

So what I'm saying is: normalMaps certainly aren't perfect. But with the lack of better (realtime) techniques, you'd better get familiar with them for now.


I thought this would be the last post of this "series", but I still have a few more techniques up my sleeve. But this post is already big and difficult enough, so let's stop here before heads start exploding. Don't worry, I'll finish next time with a shorter and FAR easier post ;)

Tuesday, October 16, 2012

3D survival guide for starters #2

* Note: if any of you tried to contact us via the T22 website "Contact page", those mails never arrived due to some security doodle. It should work again now. Apologies!!

Let's continue this "beginner graphics tour". Again, if you made shaders or a lot of game-textures before, you can skip. But keep hanging if you want to know more about how (basic) lighting works in games, and what kind of textures can be used to improve detail.

Last time we discussed how the classic "texture" is applied on objects. Basically it tells which colors appear on an object, wall, or whatever. To add a bit of realism, that color can be multiplied with the color from the lightsources (sun, flashlight, lamps, ...) that affect this pixel. Being affected means the lightrays can "see"/reach this pixel directly or indirectly (via a bounce on another surface). A very crude lighting formula would be:
...resultColor = texturePixelColor * (light1Color + light2Color + ... + lightNcolor)

Thus we sum up all incoming light, and multiply it with the surface color (usually fetched from a texture). If there are no lights affecting the surface, the result appears black. Or we can use an ambient color:
...resultColor = texturePixelColor * (light1Color + light2Color + .... + lightNcolor + ambientColor)
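In (pseudo)shader form this is nothing more than a loop over the lights; a sketch, assuming a hypothetical lights[] array and lightCount:

float3 incomingLight = ambientColor;
for ( int i = 0; i < lightCount; i++ )
    incomingLight += lights[i].color;            // how much each lamp adds (distance/angle ignored for now)
float3 resultColor = texturePixelColor * incomingLight;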

This formula is simple, but checking if & how much a light affects a surface is a whole different story. It depends on distance, angles, surroundings, obstacles casting shadows, and so on. That's why old games usually had all this complex stuff pre-calculated in a so-called lightMap: just another texture, wrapped over the scene and multiplied with the surface textures. The big problem of having it all pre-computed is that we can't change a light anymore. So, old games had static lights (= not moving or changing colors), no shadows on moving objects, and/or only a very few physically incorrect lights that would be updated in realtime.

Shadows? Anyone? The little bit of lighting around the lamps is pre-baked in low resolution lightMaps. Halflife 1 was running on an enhanced Quake2 engine btw.

When shaders introduced themselves in the early 21st century, we programmers (and artists) suddenly had to approach lighting in a more physically correct way. Shaders, by the way, are small programs running on the videocard that calculate where to place a vertex, or how a certain pixel on the screen should look (color/value/transparency). With more memory available and videocards being able to combine multiple textures in a flexible, programmable way, a lot of new techniques popped up. Well, not always new really. The famous "bumpMapping" for example was already invented in 1978 or so. But computers simply didn't have the power to do anything with it (in realtime), although special 3D rendering software may have used it already. Think about Toy Story (1995).

Anyway, it caused several new textures to be made by the artists, and also forced us to name our textures in a bit more physically-correct way. The good old "texture" on objects/walls/floors suddenly became the "diffuseTexture", or "albedoTexture". Or "diffuseMap" / "albedoMap". Why those difficult words? Because the pixels inside those textures actually represent how the diffuse light should reflect, or the "albedo property" of your material. Don't worry, I'll keep it simple. Just try to remember your physics teacher telling you about light scattering. No?


The art of lighting

When lightrays (photons) fall on a surface, a couple of things happen:
- Absorption:
A part of the photons gets absorbed (black materials absorb most).
- Specular reflection:
Another portion reflects around the surface "normal". In the pic, the normal is pointing upwards. This is called "specular lighting". The "specularity property" tells how much a material reflects. Typically hard, polished surfaces like metals, plastics or mirrors have a relatively high specularity.
- Diffuse scattering:
The rest gets scattered in all directions over the surface hemisphere. This is called "diffuse lighting". The "albedo property" of a material tells how much gets scattered. So highly absorbing (dark) surfaces have a low albedo. The reason it scatters in all directions is the roughness of the material (on a microscopic level).
In other words, photons bounce off a surface, and if they reach your eye, you can see it. The amount of photons you receive depends on your position, the position and intensity of the lightsource(s), and the albedo + specular properties of the material you are looking at. The cones in your eyes convert that to an image, or audio waves if you are tripping on something. So, your texture -now called albedoMap or diffuseMap- tells the shader how much diffuse light (and in which color) it should bounce towards the camera. Based on that information, the shader calculates the resulting pixel color.
…
diffuseLight = sum( diffuseFromAllLights ) * material.albedoProperty;
specularLight = sum( specularFromAllLights ) * material.specularProperty; 
pixelColor = diffuseLight + specularLight + material.Emissive

// Notice that indirect light (GI) is also part of the diffuse- and specular light.
// However, since computing all these components correctly is way too expensive, GI is
// usually added afterwards. Could be a simple color, an approximation, could be anything…
pixelColor += areaAmbientColor * material.albedoProperty * pixelOcclusion;
This stuff already happened in old ('90s era) games, although on a less accurate level (and via fixed OpenGL/DirectX functions instead of shaders). One of the main differences was that old graphics engines didn't calculate the diffuse/specular lighting per screen pixel, but per vertex. Why? Because it's less work, that's why. Back then, the screen resolution was maybe 640 x 480 = 307.200 pixels. That means the videocard (or still the CPU back then) had to calculate the diffuse/specular for at least 307.200 pixels. Likely more, because where objects overlap each other, the same pixel gets overwritten and thus requires multiple calculations. But what if we do it per vertex? A monster model had maybe 700 vertices, and the room around you even less. So a room filled with a few monsters and the gun in your hands had a few thousand vertices in total. That is way less than 307.200 pixels. So, the render engine would calculate the lighting per vertex, and interpolate the results over each triangle between its 3 corner vertices. Interpolation is a lot cheaper than doing light math.


DiffuseLighting & DiffuseMaps
--------------------------------
I already pointed it out more or less, but let's see if we get it (you can skip the code if you are not interested in the programming part btw). Pixel colors are basically a composition of diffuse, specular, ambient and emissive light. At least, if your aim is to render "physically correct". Notice that you don't always have to calculate all 4 of them. In old games materials rarely had specular attributes (too difficult to render nicely), ambient light was a simple fake, and very few materials are emissive (= emitting light themselves). Also, in real life most light you receive is the result of diffuse lighting, so focus on that first.

We (can) use a diffuse- or albedo image to tell the diffuse reflectance color per pixel. The actual lighting math is a bit more complicated, although most games reduce it to a relatively simple formula, called "Lambertian lighting":
// Compute how much diffuse light 1 specific lamp generates
// 1. First calculate the distance and direction vector from the pixel towards
// the lamp.
float3 pixToLightVector = lamp.position - pixel.position;
float  pixToLightDistance = length( pixToLightVector ); 
       pixToLightVector = normalize( pixToLightVector ); // Make it a direction vector
// 2. Do Cosine/Lambert lighting with a dot-product.
float diffuseTerm = dot( pixel.normal, pixToLightVector );
      diffuseTerm = max( 0, diffuseTerm ); // Values below 0 get clamped to 0

// 3. Calculate attenuation
// Light fades out over distance (because lightrays scatter)
// This is just a simple linear fall-off function. Use sqrt/pow/log to make curves
float attenuation = 1 - min( 1, pixToLightDistance / lamp.falloffDistance );

// 4. Shadows (not a standard part of the Lambert lighting formula!!)
// At this point (in this shader) you are not aware if there are obstacles 
// between the pixel and the lamp. If you want shadows, you need additional 
// info like shadowMaps to compute whether the lamp affects the pixel or not.
float shaded = myFunction_InShadow( ... ); // returns a 0..1 value

// 5. Compose Result
lamp1Diffuse = (diffuseTerm * attenuation * shaded) * lamp1.color;

// ... Do the same thing for the other lights, sum the results, and multiply 
// with your albedo/diffuse texture


Want to learn how to program this? Print this picture and hang it on the toilet, so you can think about it each time you're pooping. Works like a charm.

You don't really have to understand the code. Just realise that your pixels get lit by a lamp if:
A: The angle between the pixel's normal and the direction towards the lamp is smaller than 90 degrees (dot)
B: The distance between pixel and lamp is not too big (attenuation)
C: Optional, there are no obstacles between the pixel and the lamp (casting shadows)

One last note about diffuseMaps. If you compare today's textures to older game textures, you may notice that the modern textures lack shading and bright highlights. The texture looks more plain/flat... because it's a diffuseTexture. In old games, they didn't have the techniques/power to compute shadows and specular highlights in realtime, so instead they just drew ("pre-baked") them into the textures to add some fake relief. Nowadays many of those effects are calculated on the fly, so we don't have to draw them into the textures anymore. To some extent, that makes drawing diffuseMaps easier these days. I'll explain in the upcoming parts.



SpecularLighting & SpecularMaps
--------------------------------
Take a look back at one of the first pictures. Diffuse light scatters in all directions and therefore is view-independent. Specular lighting on the other hand is a more concentrated beam that reflects on the surface. If your eye/camera happens to be at the right position, you catch specular light, otherwise not. Specular lighting is typically used to make polished/hard materials “shiny”.

Again, this is one of those techniques that came alive along with per-pixel shaders. Specular lighting has been around for a long time (it's a basic function of OpenGL), but it wasn't used often in pre-2000 games. A common 3D scene didn't (and still doesn't) have that many triangles/vertices. Since older lighting methods were interpolating between vertices, there was a big lack of accuracy. Acceptable for diffuse lighting maybe, but not for specular highlights, which are only applied very locally. With vertex lighting, specular would appear as ugly dots. So instead, games used fake reflection images with some creative UV mapping ("cube or sphere mapping") to achieve effects like a chrome revolver. Nowadays the light is calculated per pixel though, and that means much sharper results.

If you look carefully, you can literally count the vertices in the left picture. Also notice the diffuse light (see bottom) isn't as smooth, although that is less noticeable.

That gave ideas to the render-engine nerds... So we calculate light per pixel now, and we can vary the diffuse results by using a diffuse/albedo texture... Then why not make an extra texture that contains the specularity per pixel, rather than defining a single specular value for the entire object/texture? It makes sense, just look at a shiny object in your house, a wood table for example. The specularity isn't the same for the entire table. Damaged or dirty parts (food / dust / fingerprints / grease) will alter the specularity. Don't believe me? Put a flashlight at one end of the table, and place your head at the other end. Now smear a thin layer of grease on the middle of the table. Not only does it reduce the specularity compared to polished wood, it also makes the specular appear more glossy/blurry, as the microstructure is rougher. You can only see this btw if the flashlight rays bounce exactly into your eyes. If you stand above the table, the grease may become nearly invisible. That's why we call specular lighting "view-dependent".

Dirty materials, or objects composed of multiple materials (an old fashioned wood TV with a glass screen and a fabric control panel) should have multiple specular values as well. One effective way is to use a texture for that, so we can vary the specularity per pixel. Same idea as varying the diffuse colors in an ordinary texture (albedo / diffuseMap). A common way to give your materials a per-pixel specularity property is to draw a grayscale image where bright pixels indicate a high specularity, and dark pixels a low value. You can pass both textures to your shader to compute the result:
float3 pixToLightVector   = lamp.position - pixel.position;
float  pixToLightDistance = length( pixToLightVector ); 
       pixToLightVector   = normalize( pixToLightVector );
float3 pixToEyeVector     = normalize( camera.Position - pixel.position );

// Do Cosine/Lambert lighting
float diffuseTerm = dot( pixel.normal, pixToLightVector );
      diffuseTerm = max( 0, diffuseTerm );
float3 diffuseLight = diffuseTerm * lamp.color;

// Do Blinn Specular lighting
float3 halfAngle  = normalize( pixToLightVector + pixToEyeVector );
float  blinn      = max( 0, dot( halfAngle, pixel.normal.xyz ) );
float  specularTerm = pow( blinn , shininessPowerFactor );
float3 specularLight= specularTerm * lamp.color; // eventually use a 2nd color

...apply shadow / attenuation...

// Fetch the texture data
pixel.albedo       = readTexture( diffuseMap, uvCoordinates  );
pixel.specularity  = readTexture( specularMap, uvCoordinates );

diffuseLight  *= pixel.albedo;
specularLight *= pixel.specularity;

Notice that this is not truly physically correct lighting, it's an approximation suitable for realtime rendering. Also notice that the method above is just one way to do it. There are other specular models out there, as well as alternative diffuse models. Anyway, to save some memory and to make this code a bit faster, we can skip a "readTexture" by combining the diffuse- and specularMap. Since the specularity is often stored as a grayscale value, it can be placed in the diffuseMap alpha channel (resulting in a 32-bit texture instead of a 24-bit one). The code would then be
// Get pixel surface properties from a single combined texture
float4 rgbaValue      = readTexture( diffuseSpecMap, uvCoordinates ); 
   pixel.albedo       = rgbaValue.rgb;
   pixel.specularity  = rgbaValue.a;

However, some engines or materials choose to use a colored specularMap, and can optionally use its alpha channel to store additional "gloss" or "shininess" data. This is a coefficient that tells how sharp the specular highlight will look. A low value makes the specular spread wider over the surface (more glossy), while high values make the specular spot very sharp and narrow. Usually hard, polished materials such as plastic or glass have a high value compared to more diffuse materials.
spec = pow( reflectedSpecularFactor, shininessCoefficient )
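As a sketch of that variant (a colored specularMap with the shininess packed in the alpha channel; the remapping range is an assumption, every engine picks its own):

// rgb = specular color, a = gloss/shininess (0..1)
float4 specGloss     = readTexture( specularGlossMap, uvCoordinates );
float  shininess     = lerp( 2, 256, specGloss.a );     // remap 0..1 to a usable power range
float  specularTerm  = pow( blinn, shininess );         // 'blinn' as computed in the code above
float3 specularLight = specularTerm * specGloss.rgb * lamp.color;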



EmissiveLighting & EmissiveMaps
--------------------------------
This one is pretty easy to explain. Some materials emit light themselves. Think about LEDs, neon letters, or computer displays. Or radioactive zombies. When using diffuse- and specular light only, pixels that aren't affected by any light will turn black. You don't want your TV appear black in a dark room though, so one easy trick is to apply an EmissiveMap. This is just another texture that contains the colors that are emitting light themselves. Just add that value to the result, and done. The shader code from above would be expanded with
resultRGBcolor = blabla diffuse blabla specular blabla ambient
resultRGBcolor = resultRGBcolor + textureRead( emissiveMap, uvCoordinates );
Notice that emissiveMapping is just another fake though. With this, your TV screen will remain visible even in dark rooms. And it may also get reflected and cause a "bloom" (blur) around the screen in modern engines. BUT, it does not actually emit light onto other surfaces! It only affects the pixels you apply this texture on, not the surrounding environment... unless you throw some complex code against it, but most games just place a simple lightsource inside the TV, or ignore it completely.

The charming man on the TV and the lamp in the background use emissive textures. However, the TV does not really cast light into the scene (other than the blurry reflection on the ground).



I could go on a bit longer, but let's take a break. I remember when I had to read this kind of stuff ~7 years ago... Or well, probably I didn't really read it. My head always starts to hurt when I see more than 3 formulas with Sigmas and 10th order stuff. But don't worry, just go out and play, you'll learn it sooner or later, and documents like this will confirm (or correct) your findings. The next and last part will reveal the famous "bumpMapping" technique, plus a few more texture techniques that can be used in modern games. Ciao!

Thursday, October 4, 2012

3D survival guide for starters #1

Ever been discussing new games with friends? Are you one of those guys/girls that go like "yeah yeah yeah, that's r-r-right" (what's the name of that green annoying cartoon dog?), once a friend mentions EngineX looks even better due to parallax-normal-displacement-crunch-mapping, while you have no clue what the hell he is talking about? Are you one of those aggressive game-forum guys, arguing about whether the Xbox or PS3 looks better, but never really understanding your own arguments? Or maybe you are just curious what those other nerds are talking/arguing about when they drop fancy-pants words like "specular-map"? If you are already a 3D artist or did shader programming, you can safely skip this, but otherwise this and 1 or 2 upcoming posts may make you sound even cooler on those forums. Or well, at least you'll know what you're talking about then.
Not a superb shot, this unfinished room, but can you decipher the techniques being used here?

There are a lot of shit arguments on the internet all right. For some reason, boys want to dominate digital discussions by dropping difficult tech-terms, like a dog marks its territory by pissing against poles. And the funny thing is, a lot of arguments are based on "something they heard", and then misinterpreted. I remember the "Next-Gen" vibe 5 years ago. New technologies are treated like magical ingredients only possible on platform X, but the truth is that any modern platform can do all the tricks, as long as the programmers are smart enough and the platform is fast enough to handle it in realtime (meaning at least 25 times per second). A Wii can do Parallax Occlusion Mapping or realtime G.I. too, but at such a large cost that it's unlikely to be implemented in a game. If you really want to compare game consoles on visual capabilities, then look at the videocards, available memory, and shader instruction sets.

Either way, good graphics start with five main components:
A: Proper Lighting
B: Proper Texturing
C: 3D models / worlds
D: Technicians & Hardware making it possible
E: Creative people making good use of A,B,C and D
How you do it doesn't matter, until you start meeting the platform limitations (which is pretty soon usually). These posts focus on the crucial component "Texturing", and the technology behind it. Oh, for starters, with a texture we mean the images being pasted on 3D models. High-end engines can still look like shit when using bad textures. Low-end engines can still look good when using good textures. Usually when we develop a new Tower22 environment, the first versions look pretty bad. The same happens when an amateur makes a map in UDK or whatever powerful engine. Techniques such as shadows and reflections mask the ugliness a bit, but in the end it's still like smearing expensive make-up on a pig. Then on the other hand, games such as Resident Evil 4 (Gamecube / PS2), God of War (PS2) or Mario Galaxy (Wii) still look good while their engines really weren't cutting-edge technical miracles, not even back then, compared to PC engines such as Source (Halflife2), CryEngine (Farcry) or IdTech (Doom3/Quake4).
How your grandpa used to Render:
Ok, but how does it work? You probably heard about bumpMaps and stuff, but maybe you have no idea what they really do. Let's just start with the very basics then. Remember Quake1 (1996)? That is one of the first (if not the first commercial) true 3D games, where the worlds were made of 3D polygon models, using textures to decorate the walls and objects... Or maybe it wasn't the first 3D game actually. The Super Nintendo already had a couple of games using the Super FX chip, like Star Fox and Stunt Race FX. These games rendered flat (animated) pictures called "sprites", and simple 3D geometry shapes (triangles, cubes, floor planes) with a certain color. The space ships for example were a bunch of gray and blue triangles, and an orange triangle as a booster. Not much detail of course, but some of those triangles even carried an image to give them some more detail. See the cockpit above.

And Doom, Hexen, Wolfenstein and Duke Nukem were also sort of 3D ("2.5D") of course, although the technology used back then is way different from what Quake1 used, which is still the footprint for today's games. The way Wolfenstein rendered actually looks more like how a raytracer works: each screen pixel (and screens didn't have many pixels back then, fortunately) would fly away from the camera, and see where it would intersect a wall, floor or ceiling. Because of the limited CPU power, the collision detection had to be fast of course, and that explains the very simple level design. But what Wolfenstein also already did, was texturing its walls. Depending on where the screen ray intersected, the renderer would pick a pixel from the image applied on that wall.
Although old 2.5D rasterizers don't really compare to current techniques, the texture-mapping (texture-coordinates, UV-mapping, whatever they call it) is founded on the same principles.



Texturing in the 21st century
Texturing basically means "wrapping" a 2D image over a (3D) model. Typical game data therefore contains a list of 3D models and image files, paired together. Besides triangles, the 3D models also tell how to map the image on them, by giving so-called "Texture" or "UV" coordinates. As for the textures, those are just images you could draw in MS Paint or Photoshop really. Nothing special. Although games often use(d) compressed files to save memory. The SNES didn't have enough memory & horsepower to texture each and every 3D shape, so most objects in those 3D games just used 1 or only a few colors.

You can do the math; an average jpeg photo from your camera or phone already takes a few megabytes. In other words, a single photo is larger than the total storage capacity of a SNES cartridge! But if you make a tiny image, with only a few colors, you can save quite a lot though. Save a 32x32 pixel image as a 16-color bitmap, and it will only take 630 bytes = 0.62 KB = 0.0006 MB. Now that's more like it. Only problem was, the SNES still only had super little RAM memory, a matter of (64?) kilobytes. So it could only keep a few things active in its work memory at a time.

Quake1 fully utilized texturing though. Every wall, floor, monster, gun or other object used a texture. Of course, PCs back then still had very limited memory and data bandwidth, so the textures were small and only had 256 (or fewer) color palettes. But for that time, it looked awesome. Either way, since Quake1 also became "mod-able" for hobbyists at home, we ordinary gamers came in touch with editors, 3D models, sprites, UV coordinates, textures, and whatnot. So, in summary:
- You make a 3D model (with 3D software such as Max, Maya, Lightwave, or game-map builders. Those are often made by hobbyists, or provided with the game(engine) itself).
- A game model is a list of triangles. Each triangle has 3 corners, called "vertices". A vertex is a coordinate in 3D space, having an X, Y and Z value. It can carry more (custom) data, but we talk about that later.
- So, the 3D model you store is basically a file that lists a bunch of coordinates.
- An image is made for the object. Since 3D objects can be viewed from all directions, we need to unfold (unwrap) the 3D model all over the canvas.
- Each vertex (corner) also has a texture coordinate (a.k.a. UV coordinate). These coordinates tell where the triangles are mapped on the 2D image (see the little sketch below).
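To make that a bit more concrete, a vertex with a UV coordinate could be declared like this (a hypothetical minimal layout; real engines add more attributes, such as normals):

struct Vertex
{
    float3 position;   // X, Y, Z in 3D space
    float2 uv;         // texture (UV) coordinate: where this corner sits on the 2D image
};
// A triangle is just 3 of these vertices; a model is a long list of triangles.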

The selected green triangles on the right are (2D) UV-mapped in the texture canvas on the left

That's how Quake1 worked, and that's how Halflife3 will still work (although... HL3 might not appear this century, who knows what technology we'll have then). So, basically this means all 3D objects get their own texture. We also make a bunch of textures we can paste on the walls, floors and ceilings. Like putting wallpaper in your own house. Obviously, the texture quality goes hand in hand with the artist's skills, and the dimensions of those textures. A huge image can hold a lot more detail than a tiny one. These days, textures are typically somewhere between 512x512 and 2048x2048 pixels, using 16 million colors. The file and RAM usage varies somewhere between 0.7 and 4 MB for such textures, if not compressed.

The results also depend on how the UV coordinates were made. You can map a texture over a small surface, so the small surface receives relatively many pixels (but also repeats the texture pattern all the time). Or you stretch the texture over a wide surface, making it appear blurry and less detailed. This typically happened on large outdoor terrains in older games. Even if you have a 1000x1000 pixel texture, a square meter of terrain will still receive only 2x2 pixels if the terrain is about 500 x 500 meters big. Ugly blurry results. Games like the first Battlefield often fixed that by applying a second "detail texture" over the surface, which repeated a lot more times. Such a detail texture could contain the patterns of grass or sand for example.
We still use detail-mapping these days. The small wood-nerve structure on the closet is too fine/detailed to draw in the closet texture. So we use a second "wood-nerve" texture that repeats itself over the surface quite a lot of times.
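A rough sketch of how such detail-mapping can be done in a shader (the tiling factor and blend formula are assumptions; engines do this in various ways):

float3 baseColor   = readTexture( diffuseMap, uvCoordinates ).rgb;
float3 detailColor = readTexture( detailMap,  uvCoordinates * 16 ).rgb;  // repeat the detail texture 16 times
float3 finalColor  = baseColor * detailColor * 2;   // detail texture hovers around mid-gray; *2 keeps the brightness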


Next time we'll crank it up with lighting in (modern) engines, and the kinds of textures being used to achieve cool effects for that. For now, just remember textures are used to give color to those 3D models. Nothing new really, but an essential part of the process. As for the next X years in rendering evolution: textures are here to stay, so deal with them. Keep your friends close, and your enemies closer -- Sun Tzu