Sunday, October 17, 2010

Another take on lightshafts

The original two-month plan for that movie was a little bit optimistic. But if you've ever done some programming, you'll know that's a common mistake, especially when working on something new. Someone asks how many weeks you need to create some shitty importer tool. "Weeks?! Minutes!", you shout proudly. But before you know it, you've already wasted 24 hours on some stupid pointer bug, or on the ever-changing list of demands. Unforeseen consequences.

But I can pretty safely say now that the movie will be finished somewhere around mid-November, and that even includes some margin. Unless... another pointer error strangles me, or recording with FRAPS becomes a pain in the ass (I'm kinda worried about the framerate, full-screen mode & sound). I won't have each and every point accomplished the way I'd like, but as a programmer you have to learn to make decisions and finish things, otherwise you'll stay in the early-alpha development stage forever. There is always a bug, something that could have been done better, or a new feature you'd like to add at the last minute... such as improved fog & lightshafts…

I stumbled upon this nVidia paper, and with my love for the graphical side of game programming, I couldn't resist. Improved volumetric light/fog is definitely one of the most important effects I'd like to realise soon. It can't hurt to have a quick peek, right?

An older shot with a crunchy tobacco filter. The volumetric light here is just an additive cone. Still, this effect can dramatically improve your scenery. Don't mind the pink picture btw; that's the dynamic ambient lightmap (one of the things to replace in 2011).

For the record, volumetric light is the effect you typically see when light falls into a dusty room. You can't see light rays of course, but when the air is filled with tiny particles such as dust or smoke, the light collides with them and scatters a little bit. Fog is basically the same effect, light colliding with (local) clouds of condensed water. However, graphical applications do not have this effect by default, as lighting is usually calculated on the receiving surfaces (walls, floors, objects --> polygons), not on tiny invisible particles floating in the air. We could fill the space with millions of little dots and apply lighting to them, but that would be tedious.

So far I've considered two ways of doing volumetric lighting (apart from simply modelling a lightshaft trapezoid with a dust shader into the scene). First there is the traditional pile-of-quads method. Wherever you expect lightshafts, you create a "volume" of receiving geometry: not tiny particles, but a (dense) stack of transparent quads that light up where the light rays intersect them. By mapping the shadowmap of a lightsource onto those quads, you can get quite accurate results. The downside, however, is that you need to put damn quads everywhere, which is especially nasty if the light can move around. Think of a day-night cycle where the sun shines into your building from many different angles. On top of that, having lots of transparent quads takes its toll on performance. Using fewer quads can fix that, but then you get jaggy results again.
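Just to sketch the idea (and only the idea; it's GLSL-ish pseudo-shader code, and the names such as shadowMap, lightViewProj and shaftColor are made up for this example), the fragment shader for one of those quads could look roughly like this:

```glsl
// Rough sketch of a fragment shader for one quad in the "pile of quads".
// Inputs assumed: the quad's interpolated world position, and a depth map
// rendered from the lightsource (its shadowmap).
uniform sampler2D shadowMap;     // depth map rendered from the light
uniform mat4      lightViewProj; // world space -> light clip space
uniform vec3      shaftColor;    // color/intensity of the dusty air
varying vec3      worldPos;      // world position of this quad pixel

void main()
{
    // Project this point into the light's shadow map
    vec4 lightClip = lightViewProj * vec4(worldPos, 1.0);
    vec3 lightUVZ  = lightClip.xyz / lightClip.w * 0.5 + 0.5;

    // Lit where our depth is not further away than the stored depth
    float storedDepth = texture2D(shadowMap, lightUVZ.xy).r;
    float lit = (lightUVZ.z - 0.001 <= storedDepth) ? 1.0 : 0.0;

    // Quads are blended additively, so shadowed parts simply output black
    gl_FragColor = vec4(shaftColor * lit, 1.0);
}
```

Stack enough of these behind each other with additive blending, and the lit parts build up into a visible shaft.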

Another method is a post effect, using blur streaks. This is what my engine currently uses.
1- Render a darkened variant of the scene. Only the lightsource (flares) and the skybox are colored; the rest is black.
2- For each screen pixel, step over the screen towards the lightsource's (2D) position. When passing bright pixels, add them to the result pixel (based on distance and strength settings). This creates streaks around the bright spots you rendered in step 1 (roughly sketched in the code below these steps).
3- Blur the results to smoothen.
4- Render the result on top of the normal scene (additive blend).
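Step 2 is basically one loop in a fragment shader, something like the sketch below (the brightPass / lightScreenPos names and the sample & decay values are just placeholders, not my actual settings):

```glsl
// Rough sketch of step 2: smear the bright pixels towards the light.
// 'brightPass' is the darkened scene from step 1; 'lightScreenPos' is the
// lightsource position projected into 0..1 screen coordinates.
uniform sampler2D brightPass;
uniform vec2      lightScreenPos;
varying vec2      texCoord;

const int   NUM_SAMPLES = 48;   // quality vs. speed trade-off
const float DECAY       = 0.96; // fade samples that lie further away
const float STRENGTH    = 1.0;

void main()
{
    vec2  coord = texCoord;
    vec2  delta = (lightScreenPos - texCoord) / float(NUM_SAMPLES);
    vec3  sum   = vec3(0.0);
    float fade  = 1.0;

    // Walk from this pixel towards the light, gathering bright pixels
    for (int i = 0; i < NUM_SAMPLES; i++)
    {
        coord += delta;
        sum   += texture2D(brightPass, coord).rgb * fade;
        fade  *= DECAY;
    }
    gl_FragColor = vec4(sum * STRENGTH / float(NUM_SAMPLES), 1.0);
}
```

Steps 3 and 4 are just an ordinary blur and an additive blend on top of this.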

This can give quite good results at a low cost. There is one serious drawback though: you need to actually see bright pixels (the skybox or the lightsource), otherwise there is nothing to smear out. It works for open areas, such as skylight falling in through the treetops. But for hallways with windows on your left or right side, it doesn't work unless you can actually look outside through the window.



Here's a third method. Not really new either, but maybe not used that much yet due to hardware performance limitations. It's also a post-effect, one that "only" requires a depth buffer of the scene (from the view perspective) and a shadow-depth map from one (or more) lights. I already had these ingredients, so that shouldn't be a problem. The next step is to fire a ray into space for each screen pixel. You know how far it will travel thanks to the scene-depth buffer. Now take a (big) number of samples, X, along the ray path towards the ray collision point. For each sample, check if it intersects a light volume, or other types, such as local fog volumes. The more samples the ray hits, the denser the shaft/fog (or whatever kind of volumetric effect) becomes for that screen pixel.

Don't mind the crappy picture. At least I can understand it.
Intersections can be checked in a similar way to shadow mapping: for each ray step, calculate the world position and check whether it can be "seen" by the light by comparing depths. Or check whether the ray position lies inside a fog volume. The theory is pretty simple; making it a practical realtime effect is another story though. I can't tell you all the implementation details, as I simply haven't coded it yet, but roughly it comes down to a pixel shader like the sketch below.
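A minimal sketch of that shader, again assuming GLSL-style code and made-up names (sceneDepth, invViewProj, lightViewProj and so on); the real thing would still need blurring, upscaling from a smaller buffer and lots of tuning:

```glsl
// Sketch of the ray-marching post effect. Everything here is illustrative:
// the uniform names, the step count and the way the world position gets
// reconstructed are assumptions, not finished engine code.
uniform sampler2D sceneDepth;     // scene depth buffer (view perspective)
uniform sampler2D lightShadowMap; // depth map rendered from the lightsource
uniform mat4      invViewProj;    // inverse camera view-projection matrix
uniform mat4      lightViewProj;  // world space -> light clip space
uniform vec3      cameraPos;      // camera world position
uniform vec3      shaftColor;
varying vec2      texCoord;

const int NUM_STEPS = 32;         // the "(big) X amount of samples"

// Reconstruct the world position of a screen pixel from its depth value
vec3 reconstructWorldPos(vec2 uv, float depth)
{
    vec4 clip  = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 world = invViewProj * clip;
    return world.xyz / world.w;
}

// 1.0 if this world position is "seen" by the light, 0.0 if it is shadowed
float litAmount(vec3 worldPos)
{
    vec4 lc  = lightViewProj * vec4(worldPos, 1.0);
    vec3 uvz = lc.xyz / lc.w * 0.5 + 0.5;
    if (uvz.x < 0.0 || uvz.x > 1.0 || uvz.y < 0.0 || uvz.y > 1.0)
        return 0.0;
    float storedDepth = texture2D(lightShadowMap, uvz.xy).r;
    return (uvz.z - 0.001 <= storedDepth) ? 1.0 : 0.0;
}

void main()
{
    float depth  = texture2D(sceneDepth, texCoord).r;
    vec3  endPos = reconstructWorldPos(texCoord, depth); // where the ray collides

    // March from the camera towards the collision point, counting lit samples
    vec3  stepVec = (endPos - cameraPos) / float(NUM_STEPS);
    vec3  pos     = cameraPos;
    float density = 0.0;
    for (int i = 0; i < NUM_STEPS; i++)
    {
        pos     += stepVec;
        density += litAmount(pos);
    }
    density /= float(NUM_STEPS);

    // The more samples were lit, the denser the shaft for this screen pixel
    gl_FragColor = vec4(shaftColor * density, 1.0);
}
```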

Besides making that raycasting pixel shader fast (plus some additional shaders to blur and upscale from a smaller input buffer), I'm facing another problem here... Environment awareness. Demo programs are usually fixed on a simple, small, static location. But in a large, complex environment the settings can differ for each square meter. Some hallways are filled with dust while others are clean. One sector has a slight orange fog while its neighbor needs thick, blue, smoky fog.

I was thinking about encoding "particle info" into a 3D texture (roughly like the snippet below), but unless you're making dynamic cascaded textures like Crytek does for its new ambient technique, you'll run into problems as well. 3D textures can't be too big, and as you move through a big roaming world, that texture needs to move with you. I think I have some easier/cheaper tricks to inform the shaders how to render the shafts & fog, but I need some more time to work them out. How about… two months?
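Purely as a thought experiment, that 3D texture lookup would be little more than this (fogVolume, volumeOrigin and volumeSize are invented names; the snippet would plug into the marching shader above):

```glsl
// Thought experiment: a 3D texture packed with local "particle info".
// RGB = fog color, A = density. 'volumeOrigin' and 'volumeSize' describe
// which part of the world the texture currently covers; in a big roaming
// world it would have to follow the camera around.
uniform sampler3D fogVolume;
uniform vec3      volumeOrigin;
uniform vec3      volumeSize;

vec4 sampleFogInfo(vec3 worldPos)
{
    vec3 coord = (worldPos - volumeOrigin) / volumeSize; // world -> 0..1 texture space
    return texture3D(fogVolume, coord);
}
```

Each ray step could then fetch its local fog color and density here instead of using one global setting.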

Finally some quality TV. In case you wondered what I did last week: the curtains somewhere in the background, blurred out by a lot of Depth of Field.

2 comments:

  1. I think you can easily find a job in the gaming industry if you show them what you worked on all these years.. :D

  2. Who knows, then again my math skills are pretty weak. But aside from that, I'd rather design games than program someone else's idea (unless it really kicks ass, but I'm very picky :) ). Thinking out your game is the real fun part; programming is just a tool to realise it. Although that can be great fun as well, of course.

    Being Miyamoto or something... that would be wonderful... Oh well, I'd better wake up and get back to designing harvesting machine interfaces for work again :)

    Cheers!
