Submitted by Paul Nettle, posted on October 19, 2001




Image Description, by Paul Nettle



I guess this is just as much a public announcement as it is an IOTD. So if you're not interested in any announcements I might be inclined to make, then feel free to ignore the text and enjoy the pic.

I've been working on some radiosity code lately. I've paid a lot of attention to accuracy and correctness. In the process, I found myself comparing my results to the original Cornell box results.

On the left, you'll see the original Cornell box image (as generated at Cornell University) and on the right, you'll see the one generated by the new radiosity processor I've been working on.

If you look closely, you'll see that the images are not exact. This is for a few reasons. First, the original Cornell box was not processed with RGB light, but rather using a series of measured wavelengths. So I guessed at the RGB values and surface reflectivities when I generated my image. I also didn't bother matching their camera position and FOV -- I just "moused" it. :) The image on the right was post-processed with a gamma value of 2.2 -- no other post-processing was used. Finally, the image on the right is rendered using lightmaps.
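For reference, that gamma step is just a per-texel power curve applied to the linear radiosity result before it is quantized for display. A minimal sketch of the idea (illustrative C++ only, not code from the tool itself):

    // Apply an inverse-gamma curve to a linear-space lightmap value and
    // quantize it to 8 bits. The function name and the 8-bit quantization
    // are illustrative assumptions, not the tool's actual post-process code.
    #include <cmath>

    inline unsigned char gammaCorrect(float linear, float gamma = 2.2f)
    {
        if (linear < 0.0f) linear = 0.0f;        // clamp to the displayable range
        if (linear > 1.0f) linear = 1.0f;
        float corrected = std::pow(linear, 1.0f / gamma);   // linear -> display space
        return static_cast<unsigned char>(corrected * 255.0f + 0.5f);
    }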

I don't know how long it takes to generate the original Cornell box, but it takes me about 12 seconds on my 1GHz P3. It required over 1200 iterations to reach a convergence of 99.999% (theoretically, you'd need to run an infinite number of iterations to reach a full 100% convergence.) If you're impatient, a very reasonable result (negligible difference to the one above) can be had in just a couple of seconds.

About the tool:

It's also a lightmap generator/packer; it will generate tightly packed lightmaps for an entire dataset and generate proper UV values for the lightmaps. In other words, if you want lighting for your game, you just hand your geometry to this tool and it spits out your lightmaps and UV values.
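To give a feel for what a lightmap packer does in the simplest case, here is a generic shelf-packing sketch: allocate a rectangle of lightmap texels per surface, then derive UVs from the allocated offset. All names are hypothetical and this is only an illustration of the general idea, not the packing scheme this tool actually uses.

    // Minimal shelf packer: rectangles are placed left to right on the current
    // shelf; when a rectangle doesn't fit, a new shelf is started below it.
    struct LightmapRect { int x, y, w, h; };

    class ShelfPacker
    {
    public:
        explicit ShelfPacker(int size) : atlasSize(size), shelfX(0), shelfY(0), shelfH(0) {}

        // Reserve a w x h block of texels; returns false if the atlas page is full.
        bool allocate(int w, int h, LightmapRect &out)
        {
            if (shelfX + w > atlasSize) { shelfY += shelfH; shelfX = 0; shelfH = 0; }
            if (w > atlasSize || shelfY + h > atlasSize) return false;
            out = { shelfX, shelfY, w, h };
            shelfX += w;
            if (h > shelfH) shelfH = h;
            return true;
        }

        // Map a texel coordinate inside an allocated block to an atlas UV.
        void texelToUV(const LightmapRect &r, float tx, float ty, float &u, float &v) const
        {
            u = (r.x + tx) / float(atlasSize);
            v = (r.y + ty) / float(atlasSize);
        }

    private:
        int atlasSize, shelfX, shelfY, shelfH;
    };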

A lot of new concepts went into the development of this tool:
  • No use of hemicubes or hemispheres; uses a completely analytical solution with a perfect visibility database generated by a fairly well optimized beam tree. There isn't a single ray traced in the generation of the image above.

  • Geometry is stored in a clipping octree, and at each octree node, a clipping BSP tree is built. The BSP uses a new generation technique that has proven to vastly improve the speed of building the BSP, as well as greatly reduce tree depth and splits. From this, we perform the radiosity process on the precise visible fragments of polygons.

  • New adaptive patch technique which is anchored to the speed of the processing itself. As with all progressive refinement radiosity processors, the further along you get, the slower it gets. This adaptive patch system is keyed to that progress and keeps processing running at a nearly linear speed. This trades accuracy for speed, but only when the amount of energy being reflected is negligible. This is also completely configurable.

  • Other accuracy options, including using the actual Nusselt Analogy for form factor calculation (about 5% slower, much more accurate). Also, for accuracy, every single lightmap texel is an element, and it gets this almost for free. :) A sketch of one analytic form-factor formulation appears at the end of this post.
  • I'm considering doing a 12-week course at GameInstitute.com on radiosity sometime in the future (if they'll have me :). The course will most likely include this technique as well as other common techniques. It will probably also cover lightmap generation and the other techniques (new and old) employed in this tool.

    At some point, I also hope to release full source to this tool to the public.
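To illustrate what "completely analytical" can mean for form factors, here is a generic contour-integral (point-to-polygon) form factor in the style of Baum et al. It assumes the polygon is fully visible from the receiving point and lies above the receiver's horizon, and it is not necessarily the exact formulation used in this tool.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3  sub  (const Vec3 &a, const Vec3 &b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3  cross(const Vec3 &a, const Vec3 &b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
    static float dot  (const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static float len  (const Vec3 &a)                { return std::sqrt(dot(a, a)); }

    // Differential-area-to-polygon form factor at point p with unit normal n.
    // Polygon vertices are assumed counter-clockwise as seen from p.
    float pointToPolygonFormFactor(const Vec3 &p, const Vec3 &n, const Vec3 *verts, int count)
    {
        const float PI = 3.14159265358979f;
        float sum = 0.0f;
        for (int i = 0; i < count; ++i)
        {
            Vec3 r0 = sub(verts[i], p);
            Vec3 r1 = sub(verts[(i + 1) % count], p);

            // Angle subtended at p by this polygon edge.
            float c01 = dot(r0, r1) / (len(r0) * len(r1));
            if (c01 > 1.0f) c01 = 1.0f; else if (c01 < -1.0f) c01 = -1.0f;
            float gamma = std::acos(c01);

            // Unit vector perpendicular to the plane spanned by p and the edge.
            Vec3 cr = cross(r0, r1);
            float cl = len(cr);
            if (cl < 1e-8f) continue;          // degenerate edge; skip it

            sum += gamma * dot(n, { cr.x / cl, cr.y / cl, cr.z / cl });
        }
        return sum / (2.0f * PI);
    }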


    Message Center / Reader Comments:
     
    Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
     
    Politik

    October 19, 2001, 03:06 PM

    What can I say? It's like looking at god.

     
    Hoyle

    October 19, 2001, 03:13 PM

    Pretty colors . . . very cool.

     
    Tobias Franke

    October 19, 2001, 03:17 PM

    ("First Post" == "What can I say? Its like looking at god.") &&
    ("First Post" == "Pretty colors . . . very cool.")




    ;-)

    The lightmapping sounds interesting, I need some for my terrain (dynamic light is sooooooo slow).

     
    ROOKIE

    October 19, 2001, 03:21 PM

    Can you explain to a beginner the significance of the cornell box? I notice that the edges of the boxes in the right image are not as smooth when compared to the boxes in the left image. I also notice some slight lighting differences between the two, though I can't say which would be more desirable. Is there a goal with the lighting?

     
    Raigan Burns

    October 19, 2001, 03:43 PM

    "The BSP uses the a new generation technique..."

    I don't suppose there's any chance of a brief outline of the new technique?

     
    Pinky

    October 19, 2001, 03:54 PM

    What are clipping octrees/BSPs? :) (they fail the Google test) Or does that just mean surfaces are split to fit exactly in the space subdivisions?

     
    ector

    October 19, 2001, 04:23 PM

    rookie: noticed the slight glow on the other walls from the red and green ones? you don't get that with ordinary lighting, neither do you get those very nice soft shadows..

     
    malkia

    October 19, 2001, 04:39 PM

    Paul probably didn't render with antialiasing - but that's not a big deal.

     
    hurri

    October 19, 2001, 04:43 PM

    Looks very good, but I'm much more interested in how you got there. Sign me up for the course!

     
    Dr.Mosh

    October 19, 2001, 05:02 PM

    I think it's rendering artifacts due to the lightmaps

     
    Ant

    October 19, 2001, 05:02 PM

    Hmmm... I too was unfamiliar with what a Cornell box was and found out that, "computer graphics simulations will never become predictive of reality unless we correctly model the physics of light reflection and light energy propagation within physical environments." To break it down, I think it means since everything reflects light, basically everything is a light source; so everything in an environment should reflect (no pun intended) this. The Cornell box simulation/experiment/study is an attempt to do this.

    More found here.
    http://www.graphics.cornell.edu/online/box/

    Feel free to correct me.

     
    Dr.Mosh

    October 19, 2001, 05:03 PM

    Exactly my thought!

     
    Aurora

    October 19, 2001, 05:32 PM

    Hey Ant, thanks for that ref. I have played with the Cornell box with some shaders before. It's a great standard to work against, like a sphere in raytracing :)

    What I never knew was that they did their work based off an actual physical model and CCD image. Now that's something real to play against.

    Thanks again

     
    MidNight

    October 19, 2001, 05:51 PM

    Can you explain to a beginner the significance of the cornell box?

    The link posted (http://www.graphics.cornell.edu/online/box/) should give you a complete background, but in short, they pointed a camera at a physical box, and then they pointed a camera at a monitor with the rendered image of the box on it. They asked a bunch of people to try to tell the difference, and the results were impressive.

    I notice that the edges of the boxes in the right image are not as smooth when compared to the boxes in the left image. I also notice some slight lighting differences between the two, though I can't say which would be more desirable. Is there a goal with the lighting?

    The edges are aliased because this was rendered in realtime with pre-calculated lightmaps. I don't know what the original box was rendered with, probably a ray tracer. The lighting differences are because I had to guess at the RGB values (read text that accompanies image.) The goal is to emulate physical lighting effects from perfect diffuse reflectors (i.e. surfaces that reflect evenly in all directions.)

    "The BSP uses the a new generation technique..." i don't suppose there's any chance for a breif outline of the new technique?

    Yes, there is. It's been available for some time on my website. The URL is: http://www.FluidStudios.com/publications.html -- look for the link to Fast BSP Tree Generation Using Binary Searches. Note that the document explains that the technique is unproven. Not anymore. :) Although the actual implementation varies slightly from the described technique, it's close enough for government work. :)

    What are clipping octrees/BSPs? :) (they fail the Google test) Or does that just mean surfaces are split to fit exactly in the space subdivisions?

    Yes, you're right.

    rookie: noticed the slight glow on the other walls from the red and green ones? you don't get that with ordinary lighting, neither do you get those very nice soft shadows..

    This is called "Color Bleeding". It's one of the things that radiostiy models.

     
    lycium

    October 19, 2001, 06:27 PM

    very nice! i'm a real sucker for global illumination (albeit not with radiosity), so this is a nice change from the landscapes ;) no offence to the landscapers, it's just my preference.

    how well does this scale to big scenes?

     
    =[Scarab]=

    October 19, 2001, 06:34 PM

    Looks mighty impressive. Next step is to make it real-time. ;P

     
    Jigro

    October 19, 2001, 06:51 PM

    Hey, that looks almost exactly like the rendering utility "Lightflow". Hmmm... I wonder...

     
    Dr.Mosh

    October 19, 2001, 07:27 PM

    Well, I guess it is realtime if he is rendering it using lightmaps :)

    I know what you mean tho

     
    Jan Marguc

    October 19, 2001, 08:03 PM

    The images definitely speak for themselves concerning the high quality of your radiosity processor.
    The use of a completely analytical solution makes it very interesting.
    It would be nice if you could provide us with some more detailed information about the scene rendered, in particular lightmap dimensions, patch subdivision level and total number of polygons before octree and bsp processing.

     
    Steve Wortham

    October 19, 2001, 09:03 PM

    It is looking really good. 8-)

    I have a question for you though. Basically, I want to know the difference between radiosity and raycasting. I have read descriptions of both. And I have developed an algorithm that most closely resembles raycasting, but I still don't know the whole story. Have you ever used raycasting to produce lightmaps?

     
    disableddan

    October 19, 2001, 09:10 PM

    Very nice!

     
    Jare

    October 19, 2001, 09:52 PM

    Nice job Paul, good to hear from you again.

    The biggest differences I found between the two pictures are these (let's see how I can explain it):

    - The right side of the right box is colored in red, and the left side of the left box is colored in green. Somehow these colors look "washed out" in the left image, but with more contrast in yours, somewhat darker. Do these colors come as reflected light from the walls? If that's the case then I like your image better; IMHO the reflected colored light shouldn't be so powerful or it would have to bleed much more visibly on the front wall. If these faces are actually painted in red & green, then I'm not sure.

    - In fact, there's hardly any bleeding of red & green light on the front wall's sides (or the floor & ceiling, for that matter), and it doesn't look right in either image. Are material properties purposely defined so? The thing is, if the sides don't reflect enough light to bleed on the walls, then the sides of the boxes, since they aren't being hit by direct light, should be much darker.

    - Also, regarding the same faces of the boxes: the bottom (where they meet the ground, heheheh) is lit continuously in the left image (the edge between the face and the ground is barely visible) while in the right image it's very visible, looks like the box is producing a small shadow. On one hand, it looks better because it looks more natural, but at the same time it gives the impression that the boxes are floating slightly over the ground, which is not real.

    Are those issues related in any way to your accuracy trade-offs, and if so, can you comment on their significance?

    I know this sounds like nitpicking, but when you are in front of Something Good, the only comments I can make are (a) praise and (b) nitpicking. "Mine is better" does not quite apply here. :-)

     
    MidNight

    October 20, 2001, 06:21 AM

    It would be nice if you could provide us with some more detailed information about the scene rendered, in particular lightmap dimensions, patch subdivision level and total number of polygons before octree and bsp processing.

    There are only 42 polygons in the scene. The lightmaps are 128x128 and contain combined polygons. I believe the floor uses a lightmap area of 38x38 lightmap texels. Because the visibility result is exact, the lighting is rendered into the lightmaps with precise anti-aliasing, so there's no need to sample with multiple rays.

    The original was rendered with 112 patches and 166 elements. The new one was rendered with 8856 patches and 8856 elements (that's a 1:1 correlation of patches to elements.)

    Now.. that's not entirely true.. only mostly true -- this is where the adaptive patch technique comes in.

    Because it is using a progressive-refinement-like approach (each iteration, the patch with the most energy is shot), the first few iterations are very important for quality. If, for example, I had used a patch subdivision of 4x4 (which would be a 4x4 grid of elements per patch), then the patches would be fewer, but the shadows might not look as smooth as they do (remember, the scene is discretized.) Of course, it is much slower to use a 1:1 correlation. So, the user enters a threshold for energy emission, and when the energy drops below the threshold, each group of 2x2 patches is combined into a single patch (at which point, a patch would be a group of 2x2 elements.) Combining patches like this means the amount of energy effectively quadruples for each successive iteration. So the process speeds up. Once the amount of energy being shot for an iteration drops below the threshold (again), groups of 2x2 patches are combined again (now each patch represents a 4x4 grid of elements.) This continues until all energy is shot, or until the size of a patch reaches (another) user-specified limit.

    This may sound severe, and it can be, if used improperly. I tend to use a threshold of 0.01% (i.e. when the amount of energy being shot for a single iteration is less than 1/10,000th of the total scene energy.) And I limit the maximum patch size to 8x8 (64 elements.) But by the time it reaches 8x8, the remaining energy is so small that it contributes only a negligible amount to the scene.

    If I had to guess, I would say that roughly 90% of the energy in the scene was rendered with a 1:1 (patches:elements) ratio. It probably then stepped up to a 2x2 patch size until about 94%.
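    A rough sketch of that adaptive-patch loop (all types and helper functions below are hypothetical stand-ins, not the tool's actual API): shoot from the brightest patch each iteration, and whenever the energy shot per iteration drops below the threshold, merge 2x2 groups of patches.

        struct Scene;                                        // patches / elements / lightmaps
        float  totalSceneEnergy (const Scene &);             // total emitted energy
        float  totalUnshotEnergy(const Scene &);             // energy not yet re-emitted
        float  maxUnshotEnergy  (const Scene &);             // brightest patch's unshot energy
        void   shootFromBrightestPatch(Scene &);             // one progressive-refinement step
        void   mergePatches2x2  (Scene &);                   // 1x1 -> 2x2 -> 4x4 ... elements per patch

        void solveAdaptive(Scene &scene,
                           float shootThreshold = 0.0001f,   // 0.01% of total scene energy
                           float unshotTarget   = 0.00001f,  // stop near 99.999% convergence
                           int   maxPatchSize   = 8)         // stop coarsening at 8x8 elements
        {
            const float total = totalSceneEnergy(scene);
            int patchSize = 1;                               // start with one element per patch

            while (totalUnshotEnergy(scene) > unshotTarget * total)
            {
                shootFromBrightestPatch(scene);

                // When each shot contributes a negligible fraction of the scene's
                // energy, trade accuracy for speed: merging 2x2 patches quadruples
                // the area (and unshot energy) handled per iteration.
                if (maxUnshotEnergy(scene) < shootThreshold * total && patchSize < maxPatchSize)
                {
                    mergePatches2x2(scene);
                    patchSize *= 2;
                }
            }
        }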

    I have a question for you though. Basically, I want to know the difference between radiosity and raycasting. ... Have you ever used raycasting to produce lightmaps?

    Yes, I've used ray casting. The results don't compare to true radiosity. :) And if the radiosity solution is accurate, then the results are that much better.

    What is radiosity? Pretty simple... think of a ray tracer: you ray trace from the eye to a surface, maybe reflect the ray, and check for shadows. You then plot the pixel color to the screen. This is far from the way things happen in real life (radiosity, too, is pretty far from reality, but it's a lot closer!) In the real world, radiative flux (i.e. energy, light) leaves a light source (which has area, rather than an infinitely small point) and lands on walls. Some of that energy is absorbed (based on the reflectivity of the surface) and the rest is bounced back into the scene. Some of this energy enters your eye, and you see the surface. Some of this energy hits other walls, where some more energy is absorbed and the rest is bounced back into the scene. You keep bouncing light until you reach "convergence" (all energy is absorbed.)
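    To make that concrete, a single "shoot" in a progressive-refinement style solver might look roughly like this (the data layout and the form factor helper are illustrative assumptions, not this tool's code):

        #include <vector>

        struct Color { float r, g, b; };

        struct Element
        {
            Color reflectivity;  // fraction of incident light bounced back, per channel
            Color gathered;      // reflected energy accumulated here; lightmap value = gathered / area
            Color unshot;        // energy received but not yet re-emitted into the scene
            float area;
        };

        // Fraction of the energy leaving 'from' that arrives at 'to' (however it is
        // computed: hemicube, analytic formula, visibility database, ...).
        float formFactor(const Element &from, const Element &to);

        void shoot(Element &shooter, std::vector<Element> &elements)
        {
            for (Element &receiver : elements)
            {
                if (&receiver == &shooter) continue;

                float ff = formFactor(shooter, receiver);

                // Energy arriving at the receiver, scaled by its reflectivity; the
                // part that is not reflected is absorbed and simply disappears.
                Color delta = { shooter.unshot.r * ff * receiver.reflectivity.r,
                                shooter.unshot.g * ff * receiver.reflectivity.g,
                                shooter.unshot.b * ff * receiver.reflectivity.b };

                receiver.gathered.r += delta.r;  receiver.unshot.r += delta.r;
                receiver.gathered.g += delta.g;  receiver.unshot.g += delta.g;
                receiver.gathered.b += delta.b;  receiver.unshot.b += delta.b;
            }

            // Everything has been shot; this patch stays dark until light lands on it again.
            shooter.unshot = { 0.0f, 0.0f, 0.0f };
        }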

    In response to Jare's questions, immediately above this post

    Heya, buddy! :)

    Yes, the red/green light (on the smaller boxes) does come from the light reflected from the walls. It's called "Color bleeding". You say you prefer mine... in reality, the left-hand image is much closer to the physical cornell box that was used in the original tests (where the rendered image was compared with the real thing.) I don't think mine is any less accurate -- I just used different surface colors and reflectivity values because I don't know the actual values. Had I known the actual values, I believe they would be so similar, you couldn't tell them apart.

    The reason you don't see as much color bleeding on the front wall, floor or ceiling is because the amount of energy being reflected from the colored walls is minimal compared to the amount of "white" energy landing on those surfaces. You see more red/green on the sides of the smaller boxes because they get less of the white light. And if you think about it, the sides of the smaller boxes ARE darker, because the amount of energy (red+green+blue) is less than that of the white surfaces.

    The edges where the boxes meet the floor are caused by lack of subdivision (in the left-hand image) and lack of lightmap resolution (in the right-hand image.) In order to eliminate these, one would need to cut a hole in the floor where the boxes sit to prevent the lightmap pixels from filtering under them.

    So (1) no, none of this is accuracy-related, unless you consider the lightmap a source of accuracy loss, and (2) no, I never said mine is better. :) My goal was never to improve upon the cornell box solution, but rather to reach it.

     
    Mads Andersen

    October 20, 2001, 08:42 AM

    Hi Paul.

    Nice job... one question though, you say that your solution "uses a completely analytical solution". Does this include the visibility as seen from an area (which a patch is), or only from the center of a patch?

     
    Jare

    October 20, 2001, 08:58 AM

    Hey Paul,

    thanks for the response. The lack of wall/ceiling color bleeding still looks strange to me, but I guess that's the way reality is. :) And oh, my joke about "Mine is better" applied to me, not you, but now I see I made a mess of the whole sentence. Darn English language! Nevermind... :)

     
    Steve Wortham

    October 20, 2001, 10:02 AM

    Thanks Midnight.

    OK cool, so it sounds like my algorithm does have some similarities to radiosity then. I am still confused what to call it. But I'll figure it out.

    Right now, I've got 3 programs with radiosity/raycasting rendered lightmaps. One of them is called "Radiosity" and has a downloadable exe. One is Terra3D (a brute force terrain engine with static lighting and multitexturing). And one is a First Person Shooter I am working on.

    All at www.gldomain.com


    But when it comes to realism and visual quality, I think you did a very good job man. =)

     
    =[Scarab]=

    October 20, 2001, 10:18 AM

    :)

     
    Arne Rosenfeldt

    October 20, 2001, 03:24 PM

    IMHO:
    I think the form factor generation is analytical.
    There is a PVS generator out there which uses beam trees??
    What is the sense of 99.99999..% convergence, when other things,
    like the low lightmap resolution or texture color depth, are more limiting?

     
    AGPX

    October 20, 2001, 06:00 PM

    Paul, great work.
    The images look great. However, for lightmap generation, the most important thing I'm interested in is the speed. On your 1GHz P3 you can do 100 iterations per second (with this scene). If you double the polygons, how does the visibility database processing time vary? That is, what's the complexity of the algorithm, approximately? And the memory usage? Often the memory limit represents a severe limitation.
    In my radiosity processor, I store in every patch a list of the patches that can contribute to it. (First, I use the hemicube method to construct a view frustum from the center of the patch, and then I do frustum culling through a KD-Tree that contains the scene. The far plane of the frustum depends on distance and on some occlusion heuristics. Also, I have stored the FF for every patch in the lists.) The convergence is quite fast (and I can't think how it could be faster: for every patch, only contributing patches are examined and FFs are precalculated, so I simply do a few sums), but memory usage is very BIG for anything but simple scenes. The Cornell box is ok, but what about complex scenes (like a game level)?

    Anyway, your method seems very interesting. It's a pity that I couldn't participate in your course! (I'm too far away! I live in Italy) :(

    "At some point, I also hope to release full source to this tool to the public."

    I hope too. ;)

     
    Sheep

    October 20, 2001, 08:20 PM

    Looking good.
    Could you please explain how you are determining the form factors? I can't think of any obvious techniques that don't involve hemicube/spheres or polygon-ray intersection tests.

     
    This thread contains 47 messages.