Submitted by Paul Nettle, posted on October 19, 2001




Image Description, by Paul Nettle



I guess this is just as much a public announcement as it is an IOTD. So if you're not interested in any announcements I might be inclined to make, then feel free to ignore the text and enjoy the pic.

I've been working on some radiosity code lately. I've paid a lot of attention to accuracy and correctness. In the process, I found myself comparing my results to the original Cornell box results.

On the left, you'll see the original Cornell box image (as generated at Cornell University), and on the right, you'll see the one generated by the new radiosity processor I've been working on.

If you look closely, you'll see that the images are not exact. This is for a few reasons. First, the original Cornell box was not processed with RGB light, but rather with a series of measured wavelengths, so I guessed at the RGB values and surface reflectivities when I generated my image. I also didn't bother matching their camera position and FOV -- I just "moused" it. :) The image on the right was post-processed with a gamma value of 2.2 -- no other post-processing was used. Finally, the image on the right is rendered using lightmaps.

I don't know how long it takes to generate the original Cornell box, but it takes me about 12 seconds on my 1GHz P3. It required over 1200 iterations to reach a convergence of 99.999% (theoretically, you'd need to run an infinite number of iterations to reach a full 100% convergence.) If you're impatient, a very reasonable result (with a negligible difference from the one above) can be had in just a couple of seconds.

About the tool:

It's also a lightmap generator/packer; it will generate tightly packed lightmaps for an entire dataset and generate proper UV values for the lightmaps. In other words, if you want lighting for your game, you just hand your geometry to this tool and it spits out your lightmaps and UV values.
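
To make that concrete, here is a rough sketch of the kind of output such a tool might hand back. The names and layout here are purely illustrative -- they are not the tool's actual interface:

    // Illustrative sketch only -- not the tool's real API.
    // Each input polygon comes back with a lightmap page index and UV
    // coordinates into that page, alongside the generated lightmap images.
    #include <vector>

    struct LightmapPage {
        int width, height;                  // e.g. 128 x 128
        std::vector<float> texels;          // RGB triples, width * height * 3
    };

    struct LitPolygon {
        int lightmapIndex;                  // which LightmapPage this polygon uses
        std::vector<float> lightmapUVs;     // one (u, v) pair per polygon vertex
    };

    struct RadiosityOutput {
        std::vector<LightmapPage> pages;    // the tightly packed lightmaps
        std::vector<LitPolygon>   polygons; // UVs matched 1:1 with the input geometry
    };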

A lot of new concepts went into the development of this tool:
  • No use of hemicubes or hemispheres; uses a completely analytical solution with a perfect visibility database generated by a fairly well optimized beam tree. There isn't a single ray traced in the generation of the image above.

  • Geometry is stored in a clipping octree, and at each octree node, a clipping BSP tree is built. The BSP uses a new generation technique that has proven to vastly improve the speed of building the BSP, as well as greatly reducing tree depth and splits. From this, we perform the radiosity process on the precise visible fragments of polygons.

  • A new adaptive patch technique which is anchored to the speed of the processing itself. As with all progressive refinement radiosity processors, the further along you get, the slower it gets. This adaptive patch system is keyed to that progress and keeps the processing running at a nearly linear speed. This trades accuracy for speed, but only when the amount of energy being reflected is negligible. This is also completely configurable.

  • Other accuracy options, including using the actual Nusselt Analogy for form factor calculation (about 5% slower, much more accurate.) Also, for accuracy, every single light map texel is an element, and it gets this almost for free. :)
  • I'm considering doing a 12-week course at GameInstitute.com on radiosity sometime in the future (if they'll have me :). The course will most likely include this technique as well as other common techniques. It will probably also cover lightmap generation and the other techniques (new and old) employed in this tool.

    At some point, I also hope to release full source to this tool to the public.


    Message Center / Reader Comments:
     
    Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
     
    fluffy

    October 21, 2001, 12:00 AM

    Sheep, you've never heard of polygon clipping or beamtrees or the like? What you can easily do is for each patch, clip it against a BSP or beamtree or the like to determine what's visible of it, then calculate the visible portion's area. I'd imagine that Midnight's uberleet visibility algorithm which he's been so secretive about for as long as I've known him (hint: it predates the launch of the original Voodoo Graphics) would make this very simple.

     
    lycium

    October 21, 2001, 12:54 AM

    the things you find on programming websites...

     
    Thr33d

    October 21, 2001, 03:46 AM

    Impressive, especially the minimal time required to compute such realistic results.

    The only differences I see (other than camera position and slight gamma difference) are that in the left image (the original) the global illumination seems slightly higher. The corners of the 'box' and the shadowed areas tend to fade out more slowly (for lack of light) than in the image on the right. Also, probably due to your use of RGB, the green in your image (left wall) seems (and is) brighter than that of the other image (in comparison to the other walls). I'm not sure of the exact cause of these differences, and I'm sure they're unimportant unless you're going for scientific precision.

    Anyway, excellent tech, a nice addition to the IOTD, and I hope to see more of your work on this program down the line.

    -Michael

     
    Manuel Astudillo

    October 21, 2001, 04:43 AM

    Hi,

    Looking at these results, I was just thinking about how many people are able to code a radiosity renderer compared to how many people are able to code a lightmap engine...
    And well, it seems to me that most people won't implement a radiosity lightmap generator, so it would actually be very positive for the scene if somebody could create a tool like the one above and release it to the public. I think a lot of people would greatly appreciate it, since it would dramatically increase the quality of the images produced by most of the engines out there.

    just an idea.


    greets,

    Sarwaz.

     
    MK42

    October 21, 2001, 08:03 AM

    This looks really cool. 12 seconds is quite fast. Your use of beam trees caught my attention. How do you use them? Since you mention them in the same item as hemicubes/-spheres, I assume that you use the information gathered from the beamtree for form-factor calculation. Beamtrees are great for extracting the visible fragments of polygons for a given point (point-to-area visibility), but a form factor is a representation of area-to-area visibility. How do you use the beamtree for that? If you approximate a patch just by its center point, then I see how you do it, but maybe you figured out a better way ... just curious ;)

    - Marco

     
    Crowley9

    October 21, 2001, 08:45 AM

    Hi.. Great pics. One question though: I noticed that you claim to compute "perfect visibility" analytically. Can you please elaborate on this? To my knowledge, this is an extremely difficult problem.

    Many thanks..
    Bye..

    -- Crowley 9

     
    MidNight

    October 21, 2001, 11:11 AM

    Each of my answers gets longer and longer... *sigh*

    About the visibility:

    Yes, it's from the center of a patch. This is why the patch resolution is so important (using a 1:1 ratio of patches:elements early on in the process.)

    No, it does not use any visibility algorithms that I've been working on in the past.

    It uses a self-closing beam tree. I call it that, because I've never seen any references to an actual beamtree implementation. Maybe all beamtrees are "self-closing"? Basically, it boils down to being able to detect when a branch of the beam tree is "closed" (occupied by polygons) and closing it. This helps a bit, because beamtrees are (by nature) quite poorly balanced.
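
    (A minimal sketch of the "self-closing" idea, purely illustrative and not the actual code: when every child of a beamtree node has become fully occupied, the node marks itself closed, so later polygons and octree nodes can be rejected at that level without descending further.)

        // Illustrative sketch of a "self-closing" beam tree node (not the real code).
        struct BeamTreeNode {
            bool closed = false;            // true when this beam volume is fully occluded
            BeamTreeNode* front = nullptr;  // child on the front side of the splitting plane
            BeamTreeNode* back  = nullptr;  // child on the back side

            // Called after inserting a polygon into a child branch: if both
            // halves of the beam are now occupied, this branch closes too.
            void propagateClosure() {
                if (!closed && front && back && front->closed && back->closed)
                    closed = true;
            }
        };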

    This beamtree is used in conjunction with the octree -- octree nodes are tested against the beam tree and if visible, they are inserted.

    Currently, the octree tests are not implemented, so it scales horribly. Even so, it only takes about 30 minutes to process a 10K polygon scene. Of course, you have to use the right settings to get good quality with that kind of performance. If you crank up the quality (beyond that required for lightmaps), you can get it to spend hours on a scene. I think this is good, because it allows the user to choose how long they want to wait for the results.

    I call it an analytical solution because, historically, an analytical solution was to simply calculate form factors and do the work, without regard to visibility. The beamtree gives me this -- I get a list of visible pieces and then "calculate form factors and do the work." This means there are no precalculated form factors. I considered adding a cache of this information, but for large (read "practical") scenes, this cache would have very few hits. I may still add it. I haven't decided yet.

    Arne asked about the 99.999% convergence: Yes, it is important. On many scenes, you'll spend half of your time distributing the first 98% of energy and the other half of your time processing the last 2% of energy. That last two percent is just as important, because the beauty of radiosity is in the details. Those subtle details that trick your brain into thinking it's looking at reality. If you actually watched the color values for a single lightmap pixel, you would see the first iteration take the lightmap color from [0,0,0] to [189,100,208]... a lot of light for a single iteration. If you watch the last few iterations, you'll find that it might take 20 iterations (of very small fractional increments) to increment one of those RGB component values by a single integer value. After a few thousand iterations, all of those small increments add up to a proper solution.
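
    (For reference, a bare-bones sketch of what "run to 99.999% convergence" means in a progressive refinement loop. The Scene/Patch names here are hypothetical, not the processor's actual code:)

        // Illustrative progressive-refinement loop (hypothetical Scene/Patch types).
        // Keep shooting the patch with the most unshot energy until only a
        // negligible fraction (here 0.001%) of the original energy remains unshot.
        void solveRadiosity(Scene& scene, double convergence = 0.99999) {
            const double initialEnergy = scene.totalUnshotEnergy();
            while (scene.totalUnshotEnergy() > (1.0 - convergence) * initialEnergy) {
                Patch& shooter = scene.patchWithMostUnshotEnergy();
                scene.distributeEnergyFrom(shooter);   // form factors + visibility
                shooter.clearUnshotEnergy();
            }
        }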

    I think I answered most of AGPX's questions, except this one: I store a quad of floating-point values per lightmap texel. The first three values are the RGB (floating point resolution) color for the lightmap texel, and the fourth value represents the area of that texel when clipped to the polygon. This area is important because it is necessary for deciding, when a texel is visible, how much of that texel is visible for proper anti-aliasing.
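
    (In struct form, that per-texel quad might look something like this -- the names are mine, not the tool's:)

        // Illustrative layout of the per-texel data described above.
        struct LightmapTexel {
            float r, g, b;   // accumulated radiosity, full floating-point precision
            float area;      // area of the texel clipped to its polygon, used to
                             // weight partially covered texels (anti-aliasing)
        };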

    About the form factors:

    The form factors are done analytically. I actually run through the formal form factor calculation per element, per pass. I was pleasantly surprised at how slow this ISN'T. :) For the standard evaluation, I use:

    Fij = (cos(Theta_i) * cos(Theta_j)) / (PI * distanceSquared) * Hij * dAj;

    I do this for every visible texel of every visible polygon (from the beamtree) during every iteration. Note that none of the values in that equation are cached, and must be calculated on a per-visible-texel basis. This is a "double estimation" in that it estimates energy leaving only the center of a patch, and arriving only at the center of an element.
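
    (Spelled out as code, the point-to-point estimate above looks roughly like this. This is a sketch with its own minimal vector type; it is not the processor's actual code, and 'visibleFraction' simply stands in for Hij as delivered by the beamtree:)

        #include <algorithm>
        #include <cmath>

        // Minimal vector helpers for the sketch below (illustrative only).
        struct Vec3 { float x, y, z; };
        static Vec3  operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static Vec3  operator-(const Vec3& a)                { return {-a.x, -a.y, -a.z}; }
        static float dot(const Vec3& a, const Vec3& b)       { return a.x*b.x + a.y*b.y + a.z*b.z; }
        static Vec3  normalize(const Vec3& a)                { float l = std::sqrt(dot(a, a)); return {a.x/l, a.y/l, a.z/l}; }

        // Point-to-point form factor estimate, matching the equation above.
        // 'visibleFraction' plays the role of Hij, 'texelArea' is dAj.
        float formFactor(const Vec3& patchPos,  const Vec3& patchNormal,
                         const Vec3& texelPos,  const Vec3& texelNormal,
                         float visibleFraction, float texelArea) {
            const float kPi = 3.14159265f;
            Vec3  dir   = texelPos - patchPos;
            float dist2 = dot(dir, dir);
            Vec3  d     = normalize(dir);
            float cosThetaI = std::max(0.0f, dot(patchNormal, d));   // cos(Theta_i)
            float cosThetaJ = std::max(0.0f, dot(texelNormal, -d));  // cos(Theta_j)
            return (cosThetaI * cosThetaJ) / (kPi * dist2)
                   * visibleFraction * texelArea;                    // * Hij * dAj
        }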

    There is a way to improve this estimation (an option in this rad processor) that calculates the energy leaving only the center of a patch and arriving at the entire visible surface area of an element. This is called Nusselt's Analogy. If you don't know what this is, look it up. :) That calculation is quite a lot more involved: it includes building a polygon of the texel, projecting it onto the surface of a unit hemisphere, then projecting that down onto the unit circle at the base of that hemisphere, and then calculating the area of that final projection. Yikes! :) But it's only about 5% slower.
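
    (For the curious, a much-simplified sketch of that double projection. It cheats by projecting only the polygon's vertices -- the projected edges are really curves, which a proper implementation would have to subdivide or integrate -- so treat it as an approximation of the idea, not the actual code:)

        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };

        // Nusselt analogy, approximated: the texel polygon's vertices are given in
        // the patch's local frame (patch normal along +z). Each vertex direction is
        // projected onto the unit hemisphere, then dropped onto the base plane; the
        // area of the resulting 2D polygon, divided by pi, estimates the form factor.
        float nusseltFormFactor(const std::vector<Vec3>& texelVertsLocal) {
            const float kPi = 3.14159265f;
            std::vector<float> px, py;                 // doubly projected 2D vertices
            for (const Vec3& v : texelVertsLocal) {
                float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
                px.push_back(v.x / len);               // onto the unit hemisphere...
                py.push_back(v.y / len);               // ...then straight down onto the base
            }
            float area = 0.0f;                         // shoelace formula for the 2D area
            for (size_t i = 0, n = px.size(); i < n; ++i) {
                size_t j = (i + 1) % n;
                area += px[i] * py[j] - px[j] * py[i];
            }
            return std::fabs(area) * 0.5f / kPi;       // projected area / pi
        }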

    All in all, the processor spends very little time doing all of this.

    Both of these are MUCH more accurate than the standard hemicube/hemisphere. First, they avoid the aliasing artifacts associated with those two techniques. Second, those two techniques work by adding very small fractional values to an element, which eventually add up to the proper amount of energy. Unfortunately, a lot of information is lost with such small values, even when using double-precision floating point (something I learned the hard way with the KAGE rad processor.)

    Manuel commented on the combination of lightmap/rad. This processor is specific to lightmapping (the actual emission routines emit energy right into the lightmaps.) So if you're not using lightmaps, this would be pretty useless without a lot of modification.

    On the topic of lightmaps, I have to admit that I'm quite proud of the lightmap packer's region fitting algo. I don't know if this algo already exists -- I've always been under the impression that region fitting algos were slow and didn't work very well. This one is quite the opposite: it's fast and effective. It works by calculating a complex polygon (formed by adjoining, coplanar polygons in the scene) and inserting it into the lightmap. It then fills in the holes and gaps with a recursive, but rather simple, algorithm.

    You can see what this looks like here. This is a 128x128 lightmap upsized by a factor of four so you can see it. Each individual polygon gets a unique color assigned to it. The black pixels are unused within the lightmap. Note that a polygon _must_ have a single-pixel border around it on all sides (for bilinear filtering purposes), so a 1x1-pixel region really requires a 3x3-pixel region of the lightmap. Note that the polygons rendered into that example image include their 1-pixel border, so much of the unused (black) area is unusable. I compared this to the Quake lightmap packer, and this one outperforms it (in terms of storage) by about 25%. This lightmap packer also doesn't collapse solid-colored polygons into a single pixel like the Quake lightmapper does - I may add that later. Add to that the fact that this lightmapper doesn't require a post-process low-pass filter, which means we're able to decrease our texel density by 50% and achieve the same results (actually better), and we end up saving more than half of the lightmap memory usage. And while I'm on the topic of bragging about the lightmap packer, it's fast too. :)
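
    (The border cost is easy to quantify: a region that needs w x h texels of actual lighting data consumes (w+2) x (h+2) texels of the lightmap. A tiny illustrative helper, not the packer's code:)

        // Lightmap texels actually consumed by a region, including the mandatory
        // 1-pixel border on all sides (for bilinear filtering). A 1x1 region
        // therefore consumes 3x3 = 9 texels.
        int paddedTexelCount(int width, int height) {
            return (width + 2) * (height + 2);
        }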

    I also noticed that the Quake lightmapper does some strange things - for example, it will combine polygons that are planar but do not have the same D value. Think of a floor, like a chess board, where the red squares are recessed into the floor. The top faces of all blocks (red and black) would be merged into a single, large lightmap region. This means that they also share bilinear filtering through the lightmap. Because of this, you get dark halos on some of the top blocks and a lack of shadows on the blocks that are sunken. If you compare the results of doing this versus not, you'll find that you lose a lot of information by doing this. They probably did this to improve their lightmap storage.

    A quick note to Crowley9: perfect visibility is not difficult, just slow. This isn't a realtime application, so I was afforded a bit more leniency on the visibility performance.

     
    MidNight

    October 21, 2001, 11:20 AM

    I've got a few test renders that I did of some simple (read: programmer art) scenes... you can view them here. This includes a full resolution version of the image of the day.

     
    =[Scarab]=

    October 21, 2001, 11:34 AM

    WOW! That link contains some mighty impressive stuff. And I thought the IOTD was nice. :) I can't wait till I get advanced enough to do stuff like this. (For now I'm working on my little OpenGL 2D space shoot'em up. ;)) Radiosity and photonmapping never cease to amaze me.

     
    Sheep

    October 21, 2001, 03:21 PM

    Sorry, I think I expressed myself badly.
    I was wrongly assuming that this program was using correct form factor evaluation, as this is a fairly ugly problem that is difficult to solve (i.e. surface to surface rather than point to surface). This solution makes some of the same approximations as other radiosity techniques (though it very nicely avoids the main aliasing problems with the form factor generation).

     
    Pinky

    October 21, 2001, 03:21 PM

    Jiri Bittner did publish a paper with an analytical solution to the area visibility problem, BTW. He did say it was the first analytical solution to his knowledge, so it does indeed seem to be a difficult problem.

     
    Mattman

    October 21, 2001, 04:04 PM

    It's a shame that the code you posted returns false. :-P

     
    Laurent MASCHERPA

    October 22, 2001, 04:22 AM

    Very nice results. I must try global illumination soon...

     
    Crowley9

    October 22, 2001, 06:47 PM

    Pinky: Not only is it difficult, but Jiri Bittner's solution only solves for visibility in the plane (or at least 2.5D) ;)

     
    deadsoul

    October 26, 2001, 03:24 PM

    When can we get our filthy paws on the tool?
    Is it gonna be publicly available, or is it gonna cost? If so, how much?

    - Deadsoul

     
    MidNight

    October 26, 2001, 09:44 PM

    Full source and binary are now available here.

     
    The Legend

    October 28, 2001, 02:08 PM

    While browsing through that page, an old question came back to my mind: how would I define, or fake, for example, a triangular light source for real-time rendering when I'm limited to an API like OpenGL?

    The Legend

     
    This thread contains 47 messages.