flipCode - Tech File - Kurt Miller
Kurt Miller

E-Mail: kurt@flipcode.com
http://www.flipcode.com/kurt/grain/



   08/16/1999, Lightmapping Revisited (And My (Re)Introduction)


Yes, I've decided to try the whole tech file thing again. You may remember that I had a tech file here in the past, but I removed it and started my personal rambling page, primarily because I wasn't updating my tech file very often at all (and didn't want to set a bad example). Well, it turns out I haven't had time to update that page much anyway, but when I do, it will likely be personal garbage or specific engine-related stuff, whereas I'll try to keep the information in this tech file more general. I can't really guarantee I'll be updating this one very often either, but I'll certainly try. Anyway, that's that. On to the real update.

If you were visiting the site when I posted my original tech file, you may have noticed that I was rambling about lightmaps at the time. I never did update it after my initial post, but just recently I returned to the code and rewrote it again. Besides that, I see people asking how to do lightmapping all the time, and I've NEVER seen a complete doc that left the reader with enough to go and implement it. That doesn't necessarily mean this will be that doc, but hopefully I can explain things clearly enough for you to figure out your own implementation... or at least help you come up with some new ideas.

We'll start with the very basics.


What is lightmapping?


For our purposes, lightmapping can be loosely described as the process of using a lightmap, essentially the same as a standard texture, to store information about the illumination of a surface. Each surface in our 3D polygonal world has its own lightmap, which obviously makes things difficult for memory when you have large worlds. Typically the lightmaps are stored at a lower resolution than the textures, for example 64x64 or 32x32, to keep things trotting along nicely. The lightmaps are combined (multiplied) with the texture either at render time (if you're using hardware rendering) or via a surface cache to produce what appears to be an "illuminated scene".

The texture and lightmap are typically not combined into one during the lightmap creation process because even though each surface has its own unique lightmap, it usually does not have its own unique texture. Take for example a room where all of the walls use the same brick texture. If you combined the lightmap and texture for each surface at lightmap creation time, you would be wasting an incredible amount of memory because you'd be making a new copy of the brick texture for each and every surface. That's often impossible (or just plain crazy) in most Quake-style games because of the amount of texture re-use among surfaces. Here is an example of what a texture combined with its lightmap might look like.
[Images: a texture, its lightmap, and the result of Texture * Lightmap]
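To make the multiply concrete, here's a minimal sketch of modulating a texture by its lightmap, channel by channel. The names are mine, and for simplicity it assumes the lightmap has already been upsampled to the texture's resolution (in practice you'd sample the smaller lightmap, usually with bilinear filtering):

     // Modulate one 8-bit channel value by another; the multiply is
     // renormalized so a full-bright lumel (255) leaves the texel as-is.
     unsigned char Modulate(unsigned char texel, unsigned char lumel)
     {
         return (unsigned char)(((int)texel * (int)lumel) / 255);
     }

     // lit = texture * lightmap, for an RGB image of numTexels pixels
     void CombineSurface(const unsigned char *texture, const unsigned char *lightmap,
                         unsigned char *lit, int numTexels)
     {
         for (int i = 0; i < numTexels * 3; i++)   // 3 channels per texel
             lit[i] = Modulate(texture[i], lightmap[i]);
     }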
As you can imagine, the resulting image in a game using lightmaps + textures looks far better than textures alone and creates a sense of depth, because lightmaps can also be used for shadows. I'm sure you've probably played a game like Quake before, walked underneath stairs or in corners, and noticed how dark it is. That's the sort of thing you can achieve with lightmaps. Imagine that same world with just bare textures. Yuck! ;) The two major drawbacks of this sort of lightmap engine are, first, the memory requirements, and second, that it's only really practical for static lighting. All of the work is done beforehand, which makes it fast at render time, but it also makes it a pain to alter the lightmaps. That's not to say dynamic lights are not possible. You can easily use a separate system for dynamic lighting on top of your lightmap base. It's not often that the whole world will need its lighting information recalculated anyway, so in many cases it's worth it to pre-calculate pretty lighting (i.e., using a technique such as radiosity).



How do I generate my lightmaps?


Please note that this section is on generating lightmaps, NOT lighting. You can use any kind of lighting you want (i.e., the Lambertian equation, radiosity, whatever), but that's up to you. This section will simply explain one way of doing it. There are various ways of doing even that, so if you have a better way, I'd love to hear it. The method I'll describe below assumes that you're calculating lighting values at each sample point, but if you understand this, it should be easy to extend for whatever your purpose may be.

For starters, you need to create a blank new texture for each surface that you're working with. Let's assume that we're dealing with one surface right now. You can make the texture's size based on any factors that you want, or even fix it at a certain size (i.e., 64x64), but keep in mind that several 3D cards require power-of-2 texture sizes. Obviously the same applies to lightmaps since they are in fact just textures. That new texture is our lightmap. Now that you have your lightmap allocated and ready for a particular surface, what we want to do is store, at each lumel (referenced as a 2D point (lu, lv) in the lightmap), a value that represents the illumination at that location along the surface. I hope that makes sense. It's a little hard to explain if you don't understand up to this point, but I can clarify if you wish.
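As a quick illustration of the allocation step, a per-surface lightmap might be nothing more than this (the names and the greyscale format are just my own placeholders):

     // One lightmap per surface: just a small blank texture to fill in.
     struct Lightmap
     {
         int width, height;
         unsigned char *lumels;   // width * height illumination values
     };

     Lightmap *AllocLightmap(int w, int h)   // w, h should be powers of 2
     {
         Lightmap *lm = new Lightmap;
         lm->width = w;
         lm->height = h;
         lm->lumels = new unsigned char[w * h];   // filled in by the loop below
         return lm;
     }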

Now that's all well and good, but the most common question I've heard about generating lightmaps has got to be: how do I get from 2D texture (lightmap) coordinates to world space (so that I can calculate the value for that lumel)? You DO have access to the 3D points at the vertices, so it's not entirely 2D texture coordinates to world space (that would be impossible; many surfaces have the same texture coordinates). So then, how do you do it? This bothered me for a while because I wanted an "exact" solution (using the exact orthogonal UV vectors) for arbitrary surfaces, but as it turns out, an "exact" solution in many cases wouldn't look right because the textures don't line up. So I resorted to planar mapping. I honestly have no idea if this is the method that most engines use, but I'm guessing it is. The method I use to do this is nothing new or special, but it works very nicely. If you would like to know more about planar mapping, check out the archives on The Fountain Of Knowledge. Paul Nettle explains (in more than one response) how planar texture mapping works and why it is good. That would be nice to read if you don't know what I'm talking about right now. To generate my lightmaps, I basically do the following for each surface (minor clarifications added 08/18/99 by request):
  • Determine the primary axis plane based on the absolute value of the normal for this surface.
  • Assign the UV vectors based on this plane. This axis plane, the one that the surface is nearest to in orientation, is the one we will use for the planar mapping, so we assign the surface the U and V vectors of the plane (i.e., the normalized vectors pointing 'right' and 'down' respectively along that plane). You don't have to use 'right' and 'down', but it makes more sense to trace in those directions, as you'll see in the loop below.
  • Get the min and max (2D vertices) along that plane. This means using only the 2 relevant coordinate components for this particular plane. For example, if you're using the XY plane, only x and y at the vertices count for this step.
  • Determine the texture plane origin (p) from the min.
  • Determine the u and v lengths: max - min (on the plane).
  • Determine lightmap coordinates for each vertex:
    ltu = (pu - minu) / ulen and ltv = (pv - minv) / vlen
    where pu and pv are the vertex's 'x' and 'y' in PLANE space (not world space) -- that is, the 2 relevant coordinate components at the vertex for this particular plane.
  • Now you have the UV vectors and lightmap coordinates to do whatever you need. The actual sample point determination for each lumel when generating the lightmap can look like the following pseudo-code, unoptimized for clarity (a fuller sketch of the whole setup follows after this list):
    
     // world-space distance covered by one lumel along each planar axis
     ustep = ulen / lightmap_w;
     vstep = vlen / lightmap_h;

     for (int ly = 0; ly < lightmap_h; ly++)
     {
             for (int lx = 0; lx < lightmap_w; lx++)
             {
                    // walk out from the texture plane origin p along the
                    // plane's U and V vectors to reach this lumel
                    xs = uvec * (lx * ustep);
                    ys = vvec * (ly * vstep);

                    sample = xs + ys + p;    // world-space sample point

                    // ... actual lighting calculation at this sample point
                    // ... store the result at position (lx, ly) in the lightmap
             }
     }
    
    The sample point is in world space and that's what I use for lighting or whatever else. There are many obvious optimizations that you can make to the above, but the idea remains the same.
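Putting those list steps together, here's a hedged C++ sketch of the setup for one surface. The vec3 type, the helpers, and every name here are my own placeholders, not anyone's actual engine code; note the caveat in the comments about the plane origin:

     #include <cmath>
     #include <cfloat>

     struct vec3 { float x, y, z; };

     static float dot(const vec3 &a, const vec3 &b)
     {
         return a.x * b.x + a.y * b.y + a.z * b.z;
     }

     // Choose the axis plane the surface is nearest to in orientation
     // (largest component of the normal) and hand back that plane's
     // 'right' and 'down' vectors.
     static void PlaneVectors(const vec3 &n, vec3 &uvec, vec3 &vvec)
     {
         float ax = fabsf(n.x), ay = fabsf(n.y), az = fabsf(n.z);
         if (ax >= ay && ax >= az)      { uvec = {0, 0, 1}; vvec = {0, 1, 0}; } // YZ plane
         else if (ay >= ax && ay >= az) { uvec = {1, 0, 0}; vvec = {0, 0, 1}; } // XZ plane
         else                           { uvec = {1, 0, 0}; vvec = {0, 1, 0}; } // XY plane
     }

     // Project the vertices onto the plane, take the 2D min/max, and
     // turn each vertex into a lightmap coordinate in [0, 1].
     void ComputeLightmapCoords(const vec3 *verts, int numVerts, const vec3 &normal,
                                float *ltu, float *ltv,        // out: per-vertex coords
                                vec3 &uvec, vec3 &vvec,        // out: plane vectors
                                float &ulen, float &vlen, vec3 &p)
     {
         PlaneVectors(normal, uvec, vvec);

         float minu = FLT_MAX, minv = FLT_MAX;
         float maxu = -FLT_MAX, maxv = -FLT_MAX;

         for (int i = 0; i < numVerts; i++)
         {
             float pu = dot(verts[i], uvec);   // the vertex's 'x' in plane space
             float pv = dot(verts[i], vvec);   // the vertex's 'y' in plane space
             if (pu < minu) minu = pu;
             if (pu > maxu) maxu = pu;
             if (pv < minv) minv = pv;
             if (pv > maxv) maxv = pv;
         }

         ulen = maxu - minu;   // assumes the surface has extent on both axes
         vlen = maxv - minv;

         // Texture plane origin from the min corner. NOTE: this point lies
         // on the axis plane, not on the surface itself; a real implementation
         // should project it back onto the surface's plane along the dropped axis.
         p.x = uvec.x * minu + vvec.x * minv;
         p.y = uvec.y * minu + vvec.y * minv;
         p.z = uvec.z * minu + vvec.z * minv;

         for (int i = 0; i < numVerts; i++)
         {
             ltu[i] = (dot(verts[i], uvec) - minu) / ulen;
             ltv[i] = (dot(verts[i], vvec) - minv) / vlen;
         }
     }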

I hope I didn't make any mistakes in the above explanation. If I did, it's likely something I typed wrong or forgot to type, because the algorithm works (as far as I know ;). This approach is very easy and the results look nice. Obviously my worst case would be when a polygon is slanted at 45 degrees to some plane, but it still doesn't look too bad. I was talking to a fellow coder, Luke Hodorowicz, about an approach like this that he's working on as well, and he agrees that the skewing isn't really that noticeable even in those worst case situations, especially in conjunction with techniques like bilinear filtering.

Again, the lighting performed at that sample point can be anything you wish, but to get your feet wet, if you want to use the Lambertian lighting equation, you can take your sample point, subtract it from the light position, normalize the result, then dot it with the (normalized) polygon normal. That's the bulk of the calculation. That dot product is the important one for Lambert, and you can use it with as many other factors as you'd like; for example, I have a distance/radius ratio in there. That's up to you -- experiment with it or look up the exact formula if you seek exactness. Oftentimes in game programming, coming close enough to 'realistic' but fudging it with a hearty helping of 'looks good' sauce works nicely.
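As a sketch of that calculation, reusing the vec3 and dot() helpers from above (the linear distance/radius falloff here is just my own fudge, not a canonical formula):

     // Lambertian term for one world-space sample point, with a simple
     // distance/radius attenuation tacked on. Returns an intensity in 0..1.
     float LightSample(const vec3 &sample, const vec3 &normal,
                       const vec3 &lightPos, float radius)
     {
         vec3 toLight = { lightPos.x - sample.x,
                          lightPos.y - sample.y,
                          lightPos.z - sample.z };

         float dist = sqrtf(dot(toLight, toLight));
         if (dist <= 0.0f || dist >= radius)
             return 0.0f;                          // out of range (or degenerate)

         toLight.x /= dist;                        // normalize
         toLight.y /= dist;
         toLight.z /= dist;

         float lambert = dot(toLight, normal);     // N . L
         if (lambert <= 0.0f)
             return 0.0f;                          // surface faces away from the light

         return lambert * (1.0f - dist / radius);  // fudge: linear falloff
     }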

Here's a quick example that I threw together for illustration purposes only:

[Images: plain textured scene x lightmaps = final lit scene]

The above scene is a triangle mesh with 77 verts and 44 faces. There's a very strong blue light and a faint purplish light as well. Note the difference in "realism" or "depth" between the plain textured version and the final version.



Rendering with lightmaps...


I must admit that when I first implemented lightmaps several months ago, I was already using 3D hardware for rasterization, which means I've never written a surface cache for software rendering, so you'll have to look elsewhere for information on that. It's a straightforward idea, so it shouldn't be overly difficult, but I won't describe it here. Check the reference section at the end for links.

I suppose you could still use a surface cache if you're using hardware, but it's not really all that necessary. Most 3D hardware these days can perform single-pass multitexturing (or you can resort to two separate passes). This is what I use at this time. You simply need to set up your multitexturing function to multiply the texture and lightmap together at render time, and you'll have your nicely illuminated surface.
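For example, with OpenGL's ARB_multitexture extension, the state setup might look something like this (a sketch, not my exact code; the texture IDs are placeholders, and the ARB entry points have to be fetched from the driver first):

     // Unit 0: the base texture.
     glActiveTextureARB(GL_TEXTURE0_ARB);
     glEnable(GL_TEXTURE_2D);
     glBindTexture(GL_TEXTURE_2D, textureId);
     glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

     // Unit 1: the lightmap, modulated with the output of unit 0,
     // giving texture * lightmap in a single pass.
     glActiveTextureARB(GL_TEXTURE1_ARB);
     glEnable(GL_TEXTURE_2D);
     glBindTexture(GL_TEXTURE_2D, lightmapId);
     glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

     // ...then each vertex gets two sets of texture coordinates:
     // glMultiTexCoord2fARB(GL_TEXTURE0_ARB, tu, tv);
     // glMultiTexCoord2fARB(GL_TEXTURE1_ARB, ltu, ltv);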



Other questions and answers...


How do you generate color (RGB) lightmaps?

Piece of cake. One idea would be to simply generate a "red", "green", and "blue" lightmap separately (in the same loop, of course), then combine and clamp them when you're finished. Using this approach, you can easily use arbitrary RGB colored lights.
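In the sample loop from earlier, that might look like the following, where LightSample() is the hypothetical helper from the Lambert sketch above and 'lights' is a made-up array with a position, radius, and RGB color (0..255) per light:

     // Accumulate every light's contribution per channel, clamp at the end.
     float r = 0.0f, g = 0.0f, b = 0.0f;

     for (int i = 0; i < numLights; i++)
     {
         float s = LightSample(sample, normal, lights[i].pos, lights[i].radius);
         r += s * lights[i].r;
         g += s * lights[i].g;
         b += s * lights[i].b;
     }

     // clamp and store this lumel as RGB
     lumel[0] = (unsigned char)(r > 255.0f ? 255.0f : r);
     lumel[1] = (unsigned char)(g > 255.0f ? 255.0f : g);
     lumel[2] = (unsigned char)(b > 255.0f ? 255.0f : b);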



How come I can see my lightmaps just as clearly as my textures after the final render?

Hehe, take a look at this screenshot... a portion of one of my very first lightmap shots (quite old).

[Screenshot: part of a lightmapped ceiling with clearly visible rings]

That's part of the ceiling in a small world. See how you can clearly see the rings of the lightmap? I started getting nervous and began to think I should dither/filter/blur/something them. The reason for this is not that I was doing anything wrong. The answer, which my buddy Mr. Nettle mentioned to me a while back, is simply that those textures aren't textured enough. I grabbed those textures from a 'free web textures' site, and thus they're aimed at web site backgrounds, not games. If you view textures from various games such as Half-Life or Unreal, you'll notice how very textured they are. This greatly improves the image quality when rendering with lightmaps. After kicking myself, I downloaded some free GAME textures to play with (references at the end of this doc), and of course everything looked beautiful. You can filter your lightmaps, and if you're using basic shadows (next question...), you definitely should. But to be honest, they still look great without it if your textures are sexy enough. At least to this coder.
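If you do decide to filter, a minimal box-filter sketch over one greyscale lightmap could look like this (names are mine; a real version would handle RGB and probably weight the center lumel more heavily):

     // Simple 3x3 box filter to soften a lightmap; edges are handled
     // by skipping neighbors that fall outside the map.
     void FilterLightmap(const unsigned char *src, unsigned char *dst, int w, int h)
     {
         for (int y = 0; y < h; y++)
         {
             for (int x = 0; x < w; x++)
             {
                 int sum = 0, count = 0;
                 for (int dy = -1; dy <= 1; dy++)
                 {
                     for (int dx = -1; dx <= 1; dx++)
                     {
                         int sx = x + dx, sy = y + dy;
                         if (sx < 0 || sy < 0 || sx >= w || sy >= h)
                             continue;
                         sum += src[sy * w + sx];
                         count++;
                     }
                 }
                 dst[y * w + x] = (unsigned char)(sum / count);
             }
         }
     }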



How do I handle shadows?

That completely depends on how you're doing your lighting. If you're using simple point light sources and simple lighting equation calculations, you can easily determine where shadows should be by checking, for each point in the lightmap, whether the light actually reaches this particular surface at this particular point (i.e., whether any other polygon in the level is blocking it). That involves much ray casting and many intersection tests, which means it's quite expensive, but keep in mind that this is all still computed offline and stored in the lightmap, hence it won't really affect your actual rendering performance. This process can also be optimized fairly well with a few simple 3D tricks. If you use this approach for shadows, I highly recommend that you filter your resulting lightmaps to get rid of the hard edges and give the scene a much more natural look.
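As a sketch of that test, with Polygon and RayIntersectsPoly() standing in for whatever geometry types and segment/polygon intersection routine you already have:

     // Does anything block the path from this lumel's world-space sample
     // point to the light? If so, the lumel gets no contribution from it.
     bool InShadow(const vec3 &sample, const vec3 &lightPos,
                   const Polygon *polys, int numPolys, int thisSurface)
     {
         for (int i = 0; i < numPolys; i++)
         {
             if (i == thisSurface)
                 continue;   // don't test against the surface being lit
                             // (or nudge 'sample' out along the normal instead)

             if (RayIntersectsPoly(sample, lightPos, polys[i]))
                 return true;   // occluded: this lumel is in shadow
         }
         return false;
     }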

This is of course not the only way to determine shadows.



How do you do volumetric fog with lightmaps?

This is another easy add-on when using a lightmap-ish architecture. You can use a separate set of lightmaps (usually much smaller than the texture, because the fogging is view-dependent and thus expensive to calculate repeatedly) and store fogging values, which are usually alpha values, then combine them with the actual surface at render time, giving the illusion of volumetric fog. The fog values are determined by a few simple intersection tests with your fog volume. For more information, check out the reference section at the end of this doc.
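Very roughly, and assuming a spherical fog volume as in the reference below, each fog-map value might be derived like this (SphereSegmentLength() is a placeholder for a ray/sphere clipping routine):

     // Fog alpha for one fog-map lumel: how far the segment from the eye
     // to the sample point travels inside the fog sphere, scaled by density.
     float FogAlpha(const vec3 &eye, const vec3 &sample,
                    const vec3 &fogCenter, float fogRadius, float density)
     {
         float inside = SphereSegmentLength(eye, sample, fogCenter, fogRadius);
         float alpha = inside * density;
         return (alpha > 1.0f) ? 1.0f : alpha;   // clamp to fully fogged
     }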



Closing...


In this doc, I've attempted to explain the idea behind lightmapping as it is used in the majority of modern 3D game engines currently on the market (FPS engines, anyway). The lighting itself and the implementation details/concerns vary and are up to you, but hopefully you now understand how to generate and use lightmaps. I make no claims as to the validity of the info in this file. It's mostly from personal experience, so I could be wrong. There are as many ways to do this as there are kernels of popcorn on a crowded movie theatre floor, so if you've got a better approach or suggestions, by all means please let me know. Don't just sit there and say 'he's doing things weird'... actually tell me, and I'll update this doc if necessary. I consider myself a student of the arts (3D, that is) and I'm always looking for better, more efficient, and more correct ways to do things. Anyway, please let me know what you think of this doc. I hope you found it useful, because I've tried to create a doc on a subject which at this time is mysteriously under-documented on the internet. Thanks for reading.

Kurt Miller
http://www.flipcode.com


Sources / Further Reading


Color & Lighting by John DiCamillo
This is literally the only other site I can remember seeing on the internet with solid info on lightmapping until very recently. It does a nice job explaining the concepts, but it's a few years old (using palettes and software rendering) and the pseudo-code has some HTML problems. Other than that, definitely check this one out if you're looking for a good introduction to lightmaps.

The Fountain Of Knowledge: Lightmaps
The Fountain entry on lightmapping by Paul Nettle has some solid information on planar mapping and the fundamentals behind lightmapping, in response to a question that someone sent in. You can find this in the archives section.

PolyGone: Various Lightmap Docs
I came across these docs on Tobias Johansson's web site only very recently. There are some interesting ideas about lightmapping, including a piece by Warren Marshall which describes how to obtain the world space sample point using a bounding box. I haven't really read over these docs in detail yet, but if you're interested, you can find them there.

Spherical Volume Fog
This document covers volumetric fog (using fog maps) in much more detail than I mentioned.

The Texture Studio
Most of the textures I use in my engine demos came from this site (with permission). He's got some excellent game-ish texture pack releases available for download. Very highly recommended.

KAGE Document On Surface Caching
Here's another doc written by Paul Nettle, this time about surface caching in the KAGE engine. Again, I don't use a surface cache, but if you're interested in the topic then you might want to try here.

Also, greets/thanks go out to the various people I've talked to about lightmaps and related topics.








