Thursday, May 31, 2012

Making of Radar demo #8: Morphing Animations

As promised, a Blog update about monster-animations without the usual delay. Ready for the European Football tournament btw? I'm ready, at least for drinking beer in the pub while the match is playing on a screen somewhere behind me. Let's hope Portugal, Germany or Denmark don't abruptly end my good excuse to visit the pub in the middle of the week.


Right. The "RadarBlob" monster wasn't supposed to be animated at first. Simple: lack of time. I still need to upgrade the entire skeleton-animation system. Support for other file formats (now it's only Milkshape...), additive blending, making good use of modern GPU techniques, ragdolls, et cetera. Another reason for skipping animation was the lack of a good animator, which is also the reason I still haven't implemented a renewed system. First I want a human player or monster with good animations. Sorry, but I can't code blind or on "dummies"; I need real test-subjects!

But... while looking at the static, "frozen" monster, I wondered how the heck we could finish that demo movie in a somewhat spectacular way. The model, textures and shading had been improved, but other than that it was as interesting as a vase. It would look even more ridiculous if the sound played dangerous music, angry monster-digesting sounds and steam blowers while nothing really happened visually. No, we needed movement, even if it was something simple. But how to do that fast & easy? The answer: Morphing Animations.


Teenage Morphing hero Blobs
------------------------------
Morphing. It sounds like a technique the Power Rangers would use, but in 3D terminology it means as much as changing shape-A into shape-B. It's pretty simple, and an ancient technique as well. Quake1 already used morphing animations for its monsters and soldiers. How does it work? Imagine a ball made of 100 vertices. Now, 3 seconds later, imagine the same sphere, but squeezed. The 100 vertices moved to other places in order to give a "squeezed" appearance to the same ball. The initial pose and the squeezed pose 3 seconds later can be called 2 "Keyframes". If we store those keyframes in the computer memory (thus storing all 100 vertex-positions per keyframe), we can interpolate the vertex positions between those 2 keyframes over the timeline. Useful, so we don't have to store hundreds or thousands of frames.
for each vertex
.....currentVertexPosition = lerp( frame1VertexPos, frame2VertexPos, frameDelta );
* frameDelta = a value between 0 and 1. 0.25 means we're 25% of the way towards frame2
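To make that concrete, here's the same per-vertex lerp as a tiny CPU-side sketch (plain Python with made-up numbers, not engine code):

```python
def lerp(a, b, t):
    # Linear interpolation between two scalars
    return a + (b - a) * t

def morph(frame1, frame2, frame_delta):
    # frame1 / frame2: lists of (x, y, z) vertex positions for two keyframes.
    # Returns the in-between pose for frame_delta in 0..1.
    return [tuple(lerp(a, b, frame_delta) for a, b in zip(v1, v2))
            for v1, v2 in zip(frame1, frame2)]

# A vertex moving from (0,0,0) to (4,0,0); at 25% towards frame2:
print(morph([(0.0, 0.0, 0.0)], [(4.0, 0.0, 0.0)], 0.25))  # [(1.0, 0.0, 0.0)]
```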


The math is pretty simple, and the file-formats are straightforward as well. Just store the vertex-positions (and normals) at certain keyframes. Another important note: Morphing animations are very flexible, making them suitable for abstract organic shapes like our RadarBlob. Unlike Skeleton animations, where the vertices are bound to a bone, we can place the vertices anywhere we like. You can change a humanoid into the Hulk, or a cube. Just as long as the vertex-count and their polygon-relations stay the same.

Yet Morphing animations aren't that common anymore. They have some serious issues. First, in the old Quake times, models were much simpler: a relatively low vertex-count (a few hundred or so), and just a few, relatively simple, animations. These days our monsters have much higher polycounts, and more + longer animations. The CPU would have to loop through much bigger vertex-lists to interpolate all positions, and the memory would also take a hit to store it all. For example, our RadarBlob has ~8.000 vertices. In optimized form with indices, it has ~3.100. It would mean the CPU has to update 3100 vertices, for each monster, each frame. And storing a single keyframe would cost at least 3100 x 12 (bytes) = ~36 kb. In practice that doubles, as you may also want to store normals.
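Back-of-the-envelope, the numbers above work out like this (Python; 12 bytes = 3 floats of 4 bytes each):

```python
vertex_count = 3100          # optimized RadarBlob vertex count
bytes_per_position = 3 * 4   # x, y, z as 32-bit floats = 12 bytes

per_keyframe = vertex_count * bytes_per_position
print(per_keyframe)          # 37200 bytes, ~36 kb per keyframe
print(per_keyframe * 2)      # roughly doubles once you store normals too
```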


Why Skeletor is more powerful
------------------------------
It's not that a modern CPU wouldn't be able to deal with these numbers. Hey, don't forget the hardware also grew in numbers since the Quake era. Yet it feels wrong to do it this "brute force" way. And it is wrong; I'll show you how the GPU can help down below. And last but not least, another good reason why Skeleton animations took over is the static restrictions of Morphing. You can calculate the vertex-positions on the fly, but more likely you'll read them from an animation file (like the good old MD2 files). The animations here are "fixed"; you can't just alter specific body parts during the animation. For example, having the upper-body or head/eyes follow a dynamic target gets difficult. Ragdoll animations, which are based on fully dynamic behavior calculated from collision volumes falling on the ground, are nearly impossible in combination with Morphing animations. You can use Verlet or Cloth physics to alter the vertex-positions, but it will make the character fall like a combination between pudding & a deflated sexdoll.

Skeleton animations don't store vertex positions at all. Instead, they store a "skeleton": a bunch of joints and their relations ("bones"). Vertices in turn are assigned to one or more bones. Your left hand for example would be assigned to the "Left-wrist" bone. Fingers on the same left hand are assigned to sub-bones. If the wrist rotates or moves, all sub-bones rotate and move along with it, and so do the assigned vertices. Yes, in essence this still means we have to recalculate all vertex-positions individually by multiplying their positions with their host-bone matrices. But skeletons have three major strengths over Morphing:
1- You only have to store the joint-matrices (or quaternions) per keyframe. An average humanoid game-skeleton only has 30 to 50 joints or so. That saves a lot of RAM, and makes the files smaller.
2- You can dynamically alter a single, or multiple bones, and all child-bones + their vertices will nicely follow. Very useful for aiming, looking at, ragdoll physics, IK, or other dynamic behavior that can't be stored in pre-calculated animation files.
3- You can easily combine animations. The legs run while the upper-body shoots bazookas, while the face talks shit.
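As a toy illustration of the skeleton idea (plain Python; "bones" only translate here to keep it short — a real skeleton uses full 4x4 matrices and per-vertex weights, and these names are made up):

```python
def transform(bone_offset, v):
    # Toy "bone matrix": just a translation (offset_x, offset_y, offset_z).
    return tuple(p + o for p, o in zip(v, bone_offset))

# Bone pose for one keyframe: the wrist moved up by 1 unit
bones = {"wrist": (0.0, 1.0, 0.0)}

# Each vertex stores its position plus the bone it is assigned to
vertices = [((0.0, 0.0, 0.0), "wrist"),
            ((0.5, 0.0, 0.0), "wrist")]

# Skinning: every vertex follows its host bone automatically
skinned = [transform(bones[b], v) for v, b in vertices]
print(skinned)  # [(0.0, 1.0, 0.0), (0.5, 1.0, 0.0)]
```

Move that one "wrist" entry and every assigned vertex follows, which is exactly the dynamic control (aiming, ragdolls, IK) that pre-baked morph frames can't give you.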


Morphing, the Revival
------------------------------
Ok, now we know why we shouldn't use Morphing, but yet we did. Look, if you make use of modern techniques, Morphing can still be a faster solution than skeletons, it's easier to implement, and it still suits organic shapes better. I mean, how the hell would you make a suitable skeleton for this abomination?
I tried, but no…

We didn't need bullet-time Trinity animations, just a disgusting sack of hydraulic blubber breathing a bit. So, Morphing would be a fine choice, sir, the waiter said. But how to make it a bit fast? First we would need to get rid of the CPU. I'm not a fan of moving *everything* to the GPU just to say "Got a 100% GPU solution here!"; those Quad-cores need to move their lazy asses as well. But it's just a fact that GPU's are much faster when it comes to processing big arrays that require vector math. Updating vertices on the CPU would be a disaster anyway, as it would prevent you from using Vertex-Buffer-Objects, unless you stream the updated vertices back each cycle. No go.

The VBO just contains the monster in its original pose. When we render it, the Vertex-Shader does the position-interpolation math, instead of the CPU. The math is simple: just "lerp" the vertex between the current and next-frame vertex-positions, then proceed as usual.
 // Get the vertex positions for the current and next frame 
 float3 frame1Pos= tex2D( positionTex, frame1TX ).xyz;
 float3 frame2Pos= tex2D( positionTex, frame2TX ).xyz;
 // Interpolate
 float3 vPos = lerp( frame1Pos, frame2Pos, frameDelta );
 // Output
 out.vPos = mul( modelViewProjMatrix, float4( vPos, 1.f ) );
But... how does the Vertex-Shader know what the current and next frame positions are? Easy does it: we use a texture. This (16-bit floating point) texture contains ALL vertex-positions for ALL keyframes. That sounds like a whole lot, but don't forget a whole lot of pixels fit in a 2D image. A single RGB pixel can hold an XYZ position (in local space), so do the math:
* RadarBlob: 3100 vertices
* 256 x 256 2D texture = 65.536 pixels
* 65.536 / 3100 ≈ 21

In other words, a 256 x 256 image would be able to store 21 keyframes for this particular model. When using a 512x512 texture the number quadruples, and don't forget you could eventually use a different image for each animation. Anyway, the Vertex-Shader has to fetch 2 of those pixels for each vertex. You can fill the image any way you want, but I just filled it the optimal way. Each vertex gets a unique ID, a number between 0 and 3100 in this RadarBlob-case. This ID is stored along with the vertex in the VBO. In my case I stored it in the texture coordinate.z. So the texture-lookup index of a vertex can be calculated as follows:
uniform int   frameNumber; // Current keyframe index (0..x)
uniform float frameDelta;  // Current position between current and next frame (0..1)
 
// index = Frame offset  +  vertex offset within frame
// index = 3100 * frameNumber + vertexID
int frame1Index = modelVertexCount * frameNumber +  in.vertexTexcoord.z;
int frame2Index = modelVertexCount * (frameNumber+1) +  in.vertexTexcoord.z;

// Change the 1D lookup index to a 2D texture coordinate, for a 256 x 256 pixel image,
// and normalize the pixel coordinate to the 0..1 range
float2 frame1TX = float2( frame1Index % 256, floor( frame1Index / 256.f ) ) / 256.f;
float2 frame2TX = float2( frame2Index % 256, floor( frame2Index / 256.f ) ) / 256.f;
 
// Add half a texel to access the center of a pixel in the texture.
// You may also want to turn off linear-filtering (use nearest) for the 256x256 texture btw
const float2 HALFTEX  = float2( 0.5f / 256.f, 0.5f / 256.f );
 frame1TX += HALFTEX;
 frame2TX += HALFTEX;
 
// Get the vertex positions for the current and next frame 
float3 frame1Pos= tex2D( positionTex, frame1TX ).xyz;
float3 frame2Pos= tex2D( positionTex, frame2TX ).xyz;

// Interpolate
float3 vPos = lerp( frame1Pos, frame2Pos, frameDelta );
// Optionally you can also lerp with the original pose if you want dynamic control
// over the animation "influence"
 vPos = lerp( in.originalVpos.xyz, vPos, animationInfluence );
 
// Output
 out.vPos = mul( modelViewProjMatrix, float4( vPos, 1.f ) );
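You can sanity-check that index math outside the shader. A small Python sketch (numbers assume the 256x256 texture and the 3100-vertex RadarBlob from above):

```python
TEX_SIZE = 256            # position texture is 256 x 256 pixels
MODEL_VERTEX_COUNT = 3100 # RadarBlob, optimized with indices

def frame_texcoord(frame_number, vertex_id):
    # 1D index: frame offset + vertex offset within the frame
    index = MODEL_VERTEX_COUNT * frame_number + vertex_id
    # 2D pixel coordinate inside the position texture
    x, y = index % TEX_SIZE, index // TEX_SIZE
    # Normalized texture coordinate, shifted half a texel to the pixel center
    return ((x + 0.5) / TEX_SIZE, (y + 0.5) / TEX_SIZE)

# Vertex 0 of keyframe 1 lives at 1D index 3100, i.e. pixel (28, 12)
print(frame_texcoord(1, 0))
```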

That's pretty much it. Feed the Vertex-Shader a texture that contains all animated positions, and you're good to go. Uhmmmm... how to get those textures? I quickly made a little program that imports a sequence of OBJ files. It just loops through all vertices and stores them in an array that is suitable to build an OpenGL texture with later on:
for each OBJfile
.....for each vertex in OBJfile
..........array[index++] = vertex.xyz

Sounds easy, and it is pretty easy, yet I'll have to WARN about a few things:
* Make sure all OBJ files have the exact same vertex-count and order. If one file uses a different ordering, your animation will turn into a polygon massacre.
* In case you want to smooth / share vertices to make use of indices, do it before storing them into this texture. The numbering and order must match the model VBO in your (game)app later on.
* Center the OBJ files in the same way you would in your program, or you'll get an offset on all coordinates.
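A minimal sketch of such an importer (plain Python; it only parses the "v x y z" lines of OBJ text, which is an assumption — a real importer also handles faces, normals, shared vertices, et cetera):

```python
def obj_positions(obj_text):
    # Collect the "v x y z" lines of one OBJ file, in file order
    positions = []
    for line in obj_text.splitlines():
        parts = line.split()
        if parts and parts[0] == "v":
            positions.append(tuple(float(p) for p in parts[1:4]))
    return positions

def build_keyframe_array(obj_files):
    # One flat float array: all positions of frame 0, then frame 1, ...
    # Every file MUST have the same vertex count and order!
    array = []
    for text in obj_files:
        for x, y, z in obj_positions(text):
            array += [x, y, z]
    return array

frame0 = "v 0 0 0\nv 1 0 0"   # two tiny fake "OBJ files"
frame1 = "v 0 1 0\nv 1 1 0"
print(build_keyframe_array([frame0, frame1]))
```

The resulting flat array is what you'd hand to glTexImage2D (or similar) as the 16-bit float position texture.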


Whaaa, FLAT shading?!
------------------------------
If you try the code above, it seems to work nicely at first glance, but take your magnifier and flashlight, Sherlock. See that? The lighting on the models seems.... weird. The RadarBlob breathes, but the lighting doesn't seem to change along with the movement. No shit Sherlock, that's because you didn't alter the normals yet (unless you already got suspicious and added some more code ;))!

If you rotate a polygon, the normal has to rotate with it in order to keep the lighting results correct. The only problem is that you can't do this in a vertex-shader, unless you know all neighbor vertex-positions as well. That's possible, but it requires a lot more sampling and duplicate calculations just to get the normal correct. Good thing we have Geometry Shaders these days. A Geometry Shader is actually aware of the entire polygon, as it takes primitives for breakfast. In other words, you'll get the three (morphed) vertex-positions, so you can relatively easily recalculate a normal and eventually the (bi)Tangents as well.


Problem solved? If you love FLAT shading, then yes. Otherwise, prepare to get shocked. The lighting will be correct, but the smoothing seems to be entirely gone. What happened?! Congratz, you just screwed up the smoothing and found out how flat shading works. Making a smooth shade basically involves averaging the normals of polygons that share the same vertices.
Your Geometry Shader however just calculated the (correct!) normal for each single triangle. What it should do is smooth the normals with neighbor triangles, but... again, that is not possible unless you store & pass additional data for each vertex. By default, a GS has no access to neighbor primitives.

The good old CPU morphing methods didn't just store the altered vertex-positions for each keyframe; they also stored the (bent) normals, and interpolated between them. So, why not just take the easy route and do this as well? Make a second texture that contains the normals, in the same fashion as we did with the vertex-positions. Oh, and don't forget to smooth the model BEFORE you insert the normals into this texture! Then in the vertex-shader, also sample the 2 (or 3) normals and interpolate them.
float3 frame1Nrm = tex2D( normalTex, frame1TX ).xyz;
float3 frame2Nrm = tex2D( normalTex, frame2TX ).xyz;
float3 vNrm = lerp( frame1Nrm, frame2Nrm, frameDelta );
.......vNrm = normalize( vNrm ); // don't forget, you naughty boy!
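That normalize is not optional: lerping two unit normals generally gives a shorter-than-unit vector, which would darken the lighting. A quick Python check of the same math:

```python
import math

def lerp3(a, b, t):
    # Component-wise linear interpolation of two 3D vectors
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

n1 = (1.0, 0.0, 0.0)
n2 = (0.0, 1.0, 0.0)
n = lerp3(n1, n2, 0.5)   # (0.5, 0.5, 0.0): length ~0.707, too short!
print(normalize(n))      # back to unit length
```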

Big chance you're using normalMapping as well, so you will also need the tangents and maybe biTangents. You could make some more textures, but if you are concerned about having so many textures, you can also give the Geometry-Shader a second chance. Now that the GS receives smoothed normals, it can calculate smoothed (bi)Tangents as well:
TRIANGLE
TRIANGLE_OUT
void main( AttribArray<float3> iPos      : POSITION,
           AttribArray<float3> iTexcoord : TEXCOORD0,
           AttribArray<float3> iNormal   : TEXCOORD1 // Smoothed!
         )
{
 // Just some remapping, lazy code
 float3 vert[3];
  vert[0] = iPos[0];
  vert[1] = iPos[1];
  vert[2] = iPos[2];
 float3 nrm[3];
  nrm[0] = iNormal[0];
  nrm[1] = iNormal[1];
  nrm[2] = iNormal[2];
 float2 tx[3];
  tx[0] = iTexcoord[0].xy;
  tx[1] = iTexcoord[1].xy;
  tx[2] = iTexcoord[2].xy;
 float3  tangent[3];
 float3  biTang[3];
  
 for ( int i=0; i<3; i++)
 {
  /* SORT */
  if ( tx[0].y < tx[1].y )
  {
   float3  tmpV = vert[0];
    vert[0] = vert[1];
    vert[1] = tmpV;
   float2 tmpTX = tx[0];
    tx[0] = tx[1];
    tx[1] = tmpTX;
  }
  if ( tx[0].y < tx[2].y )
  {
   float3  tmpV = vert[0];
    vert[0] = vert[2];
    vert[2] = tmpV;
   float2 tmpTX = tx[0];
    tx[0] = tx[2];
    tx[2] = tmpTX;
  }
  if ( tx[1].y < tx[2].y )
  {
   float3  tmpV = vert[1];
    vert[1] = vert[2];
    vert[2] = tmpV;
   float2 tmpTX = tx[1];
    tx[1] = tx[2];
    tx[2] = tmpTX;
  }  
  
  /* CALCULATE TANGENT */
  float interp;
  if ( abs(tx[2].y - tx[0].y) < 0.0001f )
   interp = 1.f;
  else
   interp = (tx[1].y - tx[0].y) / (tx[2].y - tx[0].y);
   
  float3 vt  = lerp( vert[0], vert[2], interp );
   interp = tx[0].x + (tx[2].x - tx[0].x) * interp;
   vt     -= vert[1];
   
  if (tx[1].x < interp) vt *= -1.f;
  float dt = dot( vt, nrm[i] );
   vt     -= nrm[i] * dt;
   tangent[i] = normalize(vt);
     
   
  /* SORT */  
  if ( tx[0].x < tx[1].x )
  {
   float3  tmpV = vert[0];
    vert[0] = vert[1];
    vert[1] = tmpV;
   float2 tmpTX = tx[0];
    tx[0] = tx[1];
    tx[1] = tmpTX;
  }
  if ( tx[0].x < tx[2].x )
  {
   float3  tmpV = vert[0];
    vert[0] = vert[2];
    vert[2] = tmpV;
   float2 tmpTX = tx[0];
    tx[0] = tx[2];
    tx[2] = tmpTX;
  }
  if ( tx[1].x < tx[2].x )
  {
   float3  tmpV = vert[1];
    vert[1] = vert[2];
    vert[2] = tmpV;
   float2 tmpTX = tx[1];
    tx[1] = tx[2];
    tx[2] = tmpTX;
  }
    
  
  /* CALCULATE BI-TANGENT */
  if ( abs(tx[2].x - tx[0].x) < 0.0001f )
   interp = 1.f;
  else
   interp = (tx[1].x - tx[0].x) / (tx[2].x - tx[0].x);
   
   vt  = lerp( vert[0], vert[2], interp );
   interp = tx[0].y + (tx[2].y - tx[0].y) * interp;
   vt     -= vert[1];
   
  if (tx[1].y < interp) vt *= -1.f;
   dt = dot( vt, nrm[i] );
   vt     -= nrm[i] * dt;
   biTang[i] = normalize(vt);  
 } // for
 
 // Output triangle
 emitVertex( iPos[0] : POSITION,  iTexcoord[0] : TEXCOORD0, iNormal[0] : 
                    TEXCOORD1, tangent[0] : TEXCOORD2, biTang[0] : TEXCOORD3 );
 emitVertex( iPos[1] : POSITION,  iTexcoord[1] : TEXCOORD0, iNormal[1] : 
                    TEXCOORD1, tangent[1] : TEXCOORD2, biTang[1] : TEXCOORD3 );
 emitVertex( iPos[2] : POSITION,  iTexcoord[2] : TEXCOORD0, iNormal[2] : 
                    TEXCOORD1, tangent[2] : TEXCOORD2, biTang[2] : TEXCOORD3 );
} // GP_AnimMorphUpdate

Hard to notice, but another little animation was oil streaming down. Just a timed fade-in of an oil texture. To make it "stream", the fade-mask moved from top to bottom.

Final tricks
------------------------------
We just made a Morphing solution that uses modern techniques to optimize performance, such as VBO's (allowing us to keep all data on the GPU instead of transferring vertex-data each time), without bothering the CPU with the interpolation math.

Two more tricks I'd like to explain are having an "influence factor", and updating data inside a VBO. Morphing animations just aren't flexible when it comes to dynamic controls. But there is at least one simple trick you can apply: "influence". In the demo movie, you'll see the RadarBlob breathing much faster and more intensely in the last seconds. We didn't make multiple animations though. We just sped up the animation-timer, and increased this mysterious "influence factor". Well, if you took a good look at the code you already saw how it works: you just do a second interpolation between the original vertex pose, and the animated pose.

Last but not least, don't forget you can actually store the updated positions/normals into a (second) VBO. In the case of Tower22, this monster gets rendered many times per cycle. Three shadowcasting lamps shine on its head, so it will appear in their depthMaps. The water reflection and glossy wall reflections also need this monster. All in all, this guy may get rendered up to 12 times in a single frame. Now the interpolation math isn't that hard, but redoing the tangent recalculation and all the texture fetches for every pass concerned me a bit. So instead, I update the monster VBO first, using Vertex-Streaming / Transform-Feedback. So first store the morphed vertex-positions/normals/tangents for the current time into a secondary VBO, then for all passes just apply that 2nd VBO so we don't have to calculate anything anymore. See the link below for some details about this technique:
http://tower22.blogspot.com/2011/08/golden-particle-shower.html

Case closed.

Thursday, May 24, 2012

Making of Radar demo #7: a little bit of coriander on top

Almost through. To conclude this "Making of", I'd like to cover how we made / animated the monster, finalized the rooms, and last but not least, how the sounds were done. But that's for the next post, as I still have to ask David how he did it. My audio-knowledge doesn't go much further than an FMOD implementation and MC Hammer :)

Pimp my Radar
------------------------
As we arrived in December, quite a lot of the textures and assets had been done by then. Yet, some rooms still felt empty, out of place, or just not right. Placing another concrete wallpaper or toying around with decals (those are transparent overlays such as the wall-dirt, cracks, cigarette butts or signs) can help a lot, and also simple light-flare sprites added a lot of swing. But even better was to grab Julio's hand and walk through each room for final adjustments.

So you made a bunch of rooms with fancy graphical tricks and quality textures. But what is the "message" or function of each room? As mentioned before, by nature a concrete bunker isn't the most exciting place. To make the (rather long) movie-fly-through somewhat interesting, each room needed at least one eye-catcher. For example, the otherwise boring tunnels were filled with green lamps and airvent "gasses". For each room, we took a few screenshots, and then Julio Photosouped them. Mainly the light-setup was enhanced (ambient light, contrasts, fog), and objects or decals were added. A hole in the left wall, some wires on the right wall, empty bottle on the table, poop on the ceiling fan, that kind of stuff.

Those "perfected" versions of the rooms then got back in the mailbox, and were used as master-reference models. Basically my task was to tweak shaders, lights or the scene setup until it matched the reference image as closely as possible. And in some cases, those images caused some extra modeling/texture work for Sergi and Julio. Other floors, pipes, canisters, et cetera. Maybe not the fastest method to get things done, but all in all, the quality of the rooms got a serious boost. Below is a short overview of what the Nanny did in the Radar household.
You know those TV programs where a bunch of guys (+ a woman standing in their way) make over your house?
The barracks > Added junk, rusty beds, cold wind from the outside, light flares
The dressroom > Wires with plastic sheets, large rusty ceiling fan
The stairs > Added some pipes, a lamp, and a canister
Central room > Snow and wind. Added particle clouds, a better skybox, red lamps.
Toilets > We actually hid that useless room with lightshafts :)
ControlRoom > Terminal, alarm, red lights
Lower central > Water, waterdrips, more lamps, a bit of snowdust clouds
Tunnels > Green lampflares, airvent "clouds", large trunks, rack
Monsterroom > Ice, particles, monster, nitro-tanks
Warehouse > Ceiling pipes, floor damage, wet floor pools, filled the racks


Call me "RadarBlob"
-----------------------
Another last-minute secret guest was our monster -his name is "RadarBlob" by the way, nice to meet you too-. Even with better looking rooms, the whole demo trip was still a bit dull. Originally I planned to show some physics instead: throwing barrels from the stairs and into the water. But since there were too many physics issues for a good Bob Hope show (at that time we were also upgrading to a newer version of the Newton physics engine), something else had to draw the attention. Since Tower22 is a horror-game, why not do something with erh… monsters?

With the limited time, we had to keep the monster simple. No AI or complex scripted stuff, and no animations either (since we don't even have a real animator yet). The room where the monster was placed was just a meaningless space so far, filled with water... It would have been quite an anti-climax to end the movie in that room, so I made a drawing of a turd-like thing, connected with hydraulic hoses to the ceiling. Yeah, I love the monster & mechanics combi. Reminds me of Doom, Quake, and all the hydraulic systems we deal with at my work.
Uhmmm... luckily Robert quickly made a more impressive, muscular turd variant. So, while we were pimping the rooms, Robert made a high-poly model and first-version textures. The first in-game versions still looked a bit dull though. It needed to be bigger in order to become a bit scary, and thus a new room with a higher ceiling. Also the specular lighting required a lot of tweaks to make it more nasty and icky. Unfortunately there was no time to make an advanced skin lighting technique (on the wish list, for sure), so I just used sharp specular highlights and a bit of "RIM" (wrap-around or "backlighting") to fix it. Furthermore, Julio upgraded the textures and added some ice chunks as well to give the scene some sense. Icy water and nitro-tanks next to it… let's unfreeze the beast during the last demo minute.
Still sucks.
Ah, have a Snickers, and there was Julio's master reference drawing. It's nice to be creative.

Maybe using some kind of animation wouldn't be bad either. Leaking oil streams, steam particles blowing out, and a "breath" animation to make it come alive. The only problem was/is that the skeleton-animation features haven't been updated in the engine yet, and we ran out of time. Besides, using bones for an organic blob like this probably wouldn't be the handiest way to animate it anyway. So instead, we went for good old "Morphing" animations.

In the next post, that will hopefully quickly follow for a change, I'll show some more in-depth details about how we did that.

Sunday, May 13, 2012

Loading... please wait

What the hell is wrong with computers? Or wait, what the hell is wrong with software these days? You would expect that future computers can handle our typical tasks without a sweat, but that's not exactly the case. Are the rising hardware-specs fooling us, or does the software get slower each iteration? Here's an emotional plea from someone who still gets bothered by sluggish, hanging, syrup computers.


Holy shit, Intel-Inside!
In 1998 -I'll pick a year that Windows started working a bit-, we were able to browse the internet –still an ‘innocent’ toy back then-, write an e-mail (what?!), print a Word(Perfect) document, manage the disk via Explorer, and even better, play Halflife. Of course, we also had charming blue-screens and it wasn't all that fast. Booting the computer took centuries, you couldn't open too many programs at the same time, and internet was as slow as the brown paste Robocop eats. But that was mainly due to the cables and hyperslow modems, being interrupted by the malformed robot-voice of your mother if she tried to call at the same time.

I don't know the exact numbers, but in our house, I believe we had a Pentium 233 or 300 at that time. Single core of course. The average user probably thought "multi-threading" was another word for warpspeed in Startrek back then. Memory... 128 MB or something? There was a Soundblaster, a 4 or 8 Gb harddrive called "Bigfoot" to make it sound even more awesome, and a 15 inch monitor that could be used to fire holes in boats with a pirate cannon. A 16x speed (16!) CD-Rom drive. And a disk-drive, just in case you needed to fix Windows. And yes, I've seen my father doing that a few thousand times. Not sure whether that was really necessary, or he just liked screwing around with Windows and the BIOS. We never really had stable computers, that's for sure.

Videocards. These days, for me, the videocard is the most important instrument in a computer. But back then it was a piece of luxury, not really needed. I believe we had a Voodoo 'something' card. And I never understood how it would make games run faster or more beautiful. Don't blame me, 1998 was also the year I started programming (after some Q-Basic in 1997). Anyhow, "Software rendering", meaning the CPU did all the 3D work instead of a specialized piece of hardware, was common.
Tower22 has become kitchen-interior rendering software.

Holy shit, 4 Intels inside!
Anyway, compare those numbers with what we have now. At least two or four 2.000 to 3.000 MHz cores. In dumb-theory, that should be about 20 to 40 times faster than the Pentium 300. In reality, it might be even faster for very specific tasks that fully utilize parallel processing, since Intel and AMD spent a lot of magic on Multithreading, Hyperthreading, and whatsoever. RAM memory exploded from 128/256 MB to 4 gigs, or 8+ in case you have a 64-bit system. At least 16 times more memory to work with (+ faster chips/buslines). Harddrives? I remember removing "big files" of 1 megabyte or more once in a while to make space for a new 300 MB game. Now I still have to remove "big programs" once in a while to make space on the 300 Gb drives. And 300 Gb isn't that much really; people manage to stuff terabytes with games/videos/porn. For the info, 1 terabyte is enough to back up 125x 8Gb disks, or capture 212 single-layer DVD's. Indeed, old disks couldn't even store 1 DVD. Then again, DVD didn't exist yet; instead we had ~700 MB CD-Roms. I won't compare CD-Rom reading speeds. Last time I used that thing was.... no idea. USB and online file transfer took over. As we laughed at our fathers with their 8" floppies and LP's, our kids will laugh at us, the Compact-Disc generation.

Last but not least, we have videocards these days. Big expensive ones. Videocards by themselves may have 1+ gig of memory (used to hold your game geometry and textures, for example), and a hotdamn fast set of GPU's. Now I can clearly see the videocards doing their work when it comes to running games. Every 2 or 3 years, I'll buy a new card (or sooner in case it burned again due to stuck fans). And a game like Tower22 often doubles or triples its framerate on it. Good job. As for you, just compare your PC game collections. 1998: Halflife, Sin, Carmageddon II versus 2010+: Crysis2, Battlefield, GTA V (almost!), ... Now as an old whiner I'm not saying all games are better these days, but if you can't see the (graphical) progress, you're as blind as a beaten-up mole.


Holy shit, nothing happens inside!
…Then WHY am I not seeing this progress in other software? When writing an e-mail, Windows Live Mail often hangs for 10 or 20 seconds because it's doing... something. An e-mail! Just text! Nice to have 4Gb RAM on my 32-bit system, but ~50% is used by Windows and background trash (yes, I check the start-up programs with msconfig). Why is a chatting program like Skype using 100 MB? Internet is shit too. On my comp, there are always 4 to 8 pages open in the background, consuming hundreds of megabytes. And I'm not talking about Radioplayers, video-streamers or Flashgames. Just forums and stuff. Pfff, Flash games. How is it possible that a simple zombie-killer flashgame takes up almost 100% CPU? It's a fucking Flash game, not Battlefield 6000. The NES did a better job. Using Chrome here by the way; Internet Explorer is even worse. If I click the "e" icon, I want to Google something within the next 3 seconds, not wait half a minute first. Each iteration of IExplorer seems to get worse, even though they threw away a lot of useless features only housewives who had followed an internet-course would use. Jesus Christ, we have fiberglass here, and it still feels like pushing turds through an 8mm plastic pipe.

More to complain? Sure, how about booting up. No matter how many times I tell Adobe to get the fuck out, it still keeps coming with updates. Every time. Is Adobe Reader so crappy it really needs an update every day? Probably it just doesn't install its updates very well, and keeps asking. Talking about updates: what on Earth is Windows Vista doing? Even if my computer is shut off from the internet, it still manages to find "updates" sometimes. And of course that always happens when I need to turn off quickly, or unannounced in the middle of the night. The Toshiba laptop battery-alarm suddenly starts screaming like a Russian nuclear bomb silo. What happened? Vista decided to suddenly restart the (closed) laptop, and of course dozens of opened text/image files weren't saved. Yes, I'm probably doing something wrong, but explain that to your grandma. Auto-update is ok, but you have to be retarded to make a feature work like that.
Toying around with blurry reflections and glossy specular highlights for materials like this linoleum floor last week.


Sometimes it feels as if computers deliberately slow down after 1 or 2 years. Back in the old days you would buy a TV or VCR expecting it to work for at least 60 years, so your grandchildren could inherit it in case cold, bad times would come. Nowadays, everything falls apart after a few years. Hey, people have money enough; make sure they buy our shit on a regular basis. Same with mobile devices. Apart from disintegrating after a year, they stay relatively slow as well. I'm pretty damn sure an average phone or industrial handheld still can't do the good old Pentium tricks, like running Halflife (software render) at a 800x600 resolution while downloading "Intergalactic" (RIP MCA) with Napster at the same time. I know that has to do with fitting mini-sized chips in a small casket, but look at the numbers… An industrial handheld barcode scanner with Windows CE/Mobile for example often runs on an ARM 533 MHz processor, with 128 MB RAM. That should be faster than the good old Pentium in theory. In practice, it doesn't even run a single (.NET) program at decent speed. And if it crashes, it just hangs instead of giving a cool blue-screen. What a rip-off!


Software: Culture of Greed
All in all, the same old tasks are just as slow as 14 years ago. Except that the devices aren't that big anymore, and websites, explorers, Word editors or email programs have more features and a "slick" look now. And sure, computers are doing a lot more simultaneously these days; it's not the hardware's fault. In my case, the computer partially got slow due to the huge number of programs. Virus scanners, Dropbox, creative tools + their drivers, and about 20 different programming tools. Now Delphi is pretty nice and quiet, but others interfere with the system like a meddlesome aunt.

Yet, that shouldn't be a problem if all programs backed off as long as I don't call them. But each of them dumps crap in the registry, has invisible stuff going on, bothers us with all kinds of extra functions, and acts like a spoiled princess claiming all computer resources. That might be the main problem: the programming philosophy. I can't speak for all, but it seems designers lean on the "infinite" resources of nowadays' computers. 4 gigs of RAM, so reserving 100 Megabytes more just in case can't hurt, right? The user probably likes our cool product to start up automatically, and check for updates in the background. Processor speed? Who cares, dual cores mate. Again, all of this wouldn't be a problem if there weren't many more programs trying to do the same simultaneously. You can't have 10 kings on one throne. Yep, having more resources makes you lazy, and I speak from experience since I'm also familiar with the other side: a few-megahertz microcontrollers with 1kb memory. Such systems force you to make smarter solutions, caring about each bit. While on a modern PC, you can turn a simple application into a behemoth for the sake of "easy maintainable programming".

Microsoft should know better with their Live and Internet Explorer products. I can't really speak for Windows 7 yet, but Vista gave a wrong example, as its programs were using too much memory and CPU cycles as well. Why does Live Mail work on web-based technologies anyway? It's asking for performance problems. Sure, it may give some more options in these modern cloud / social-media / device-synchronizing times. But in the end, I just want to check or write a mail and don't give a crap about all those features unless I explicitly ask for them. Usually I'm using Notepad over Word, or the old PaintShop V over PaintShop XI. You know why? Because it starts in a second, rather than half a minute including loads of prompts and questions. Software-designers should try to make things simpler again, don't you agree?

Ah, that's better.