Submitted by Joshua Shagam, posted on July 02, 2001




Image Description, by Joshua Shagam



I haven't done any graphics programming in way too long, so tonight I finally got around to playing with an idea for a hardware-accelerated radiosity algorithm I've been pondering for a while. These are two screenshots from the 6-hour hacking session, both with two passes of my algorithm applied.

The algorithm is actually quite simple. Basically, for every vertex in the scene, render a low-resolution version of the scene from its viewpoint. Take the average of all the pixels and add in the surface's emission, and there you have the light value. I played with various weighted averages to try to account for angle of incidence and such, but I actually got the best results by just taking an unweighted average of the pixels on a 128x128 image rendered with a 60-degree field of view.
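A minimal sketch of what one such gather pass could look like in classic OpenGL (illustrative only -- Vertex, draw_scene(), and every name here are stand-ins, not the actual engine code):

/* One radiosity gather pass, as described above: render the scene
 * from each vertex along its normal, average the pixels, and add
 * the surface emission.  All names are hypothetical stand-ins. */
#include <GL/gl.h>
#include <GL/glu.h>

typedef struct {
    float pos[3], normal[3];   /* position and (unit) surface normal */
    float emission[3];         /* emissive term for this vertex      */
    float light[3];            /* result: gathered light value       */
} Vertex;

#define RES 128                /* 128x128 target, 60-degree FOV      */

void gather_pass(Vertex *verts, int nverts, void (*draw_scene)(void))
{
    static unsigned char pix[RES * RES * 3];

    glViewport(0, 0, RES, RES);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 1.0, 0.01, 100.0);

    for (int i = 0; i < nverts; i++) {
        Vertex *v = &verts[i];

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        /* look out from the vertex along its normal (the up vector
         * must not be parallel to the normal; real code would have
         * to handle that case) */
        gluLookAt(v->pos[0], v->pos[1], v->pos[2],
                  v->pos[0] + v->normal[0],
                  v->pos[1] + v->normal[1],
                  v->pos[2] + v->normal[2],
                  0.0, 1.0, 0.0);

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        draw_scene();

        /* unweighted average of all pixels, plus the emission */
        glReadPixels(0, 0, RES, RES, GL_RGB, GL_UNSIGNED_BYTE, pix);
        double sum[3] = {0.0, 0.0, 0.0};
        for (int p = 0; p < RES * RES; p++)
            for (int c = 0; c < 3; c++)
                sum[c] += pix[p * 3 + c] / 255.0;
        for (int c = 0; c < 3; c++)
            v->light[c] = (float)(sum[c] / (RES * RES)) + v->emission[c];
    }
}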

The algorithm is pretty slow. Each pass took about a minute and a half for the top image (the bottom image was a lot faster, though I didn't time it). However, once the lighting is computed, it renders at about 150 FPS on my machine (Matrox G400 running under Linux on a Duron 850). Of course, after the radiosity is calculated, the engine is a pure polygon-pusher, so the framerate isn't really *that* impressive. :)

A fun thing about the implementation is that you get to watch the radiosity calculations as they're going. I got pretty dizzy when it was working on the spheres though. :)

--
Joshua Shagam
joshagam@cs.nmsu.edu
www.cs.nmsu.edu/~joshagam


Message Center / Reader Comments:
 
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
 
Lucid

July 02, 2001, 11:33 AM

"A fun thing about the implementation is that you get to watch the radiosity calculations as they're going. I got pretty dizzy when it was working on the spheres though. :)"

That sounds cool... demo?

 
Pants

July 02, 2001, 11:38 AM

Yeah, cool. We want a demo! Or at least an AVI of it running...

 
Jeroen

July 02, 2001, 11:39 AM

Very cool result for 6 hours worth of coding!

Isn't your approach similar to the "hemi-cube" algorithm?

And do you subdivide large polygons into patches, or are the floor/walls made up of smaller polygons already?

Jeroen

 
bwalmisley

July 02, 2001, 12:42 PM

Very nice. Good to see something a bit... different. Like everybody else I would like to see a demo.

Benedict

 
Rectilinear Cat

July 02, 2001, 12:44 PM

I'm implementing something like that as well, using GL to render each side of the hemicube, except I'm actually trying to accurately compute the form factor (kinda confusing really...). I'm not going to subdivide any of the geometry. You're just doing vertex lighting, right? None of that lightmap stuff?

 
shrike

July 02, 2001, 02:47 PM

I have heard about that idea before, but using a light map instead. It is too bad that even with hardware acceleration it is too slow to do real-time lighting. Now, that would be awesome.

 
SamS

July 02, 2001, 03:21 PM

Wow, very interesting way to do radiosity (a poor man's approach). Is there any way to do the lighting calculations on the fly in real-time? I guess not, if the scene has to be rendered from each vertex's perspective. Unfortunately, this means that the scene has to remain static or the calculations will have to be redone. Also note that objects that are further away should contribute much less to the illumination (because of attenuation) -- perhaps you could use the depth buffer to handle this?
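That depth-buffer idea might look something like this in sketch form (purely illustrative; RES and the pixel buffer follow the earlier sketch, and the attenuation model here is made up):

/* Hypothetical depth-weighted average, following the suggestion
 * above: read back the depth buffer and down-weight distant pixels.
 * Raw depth values are nonlinear, so a real version would linearize
 * them first; this only shows the shape of the idea. */
void weighted_average(const unsigned char *pix, float out[3])
{
    static float depth[RES * RES];
    double sum[3] = {0.0, 0.0, 0.0}, wsum = 0.0;

    glReadPixels(0, 0, RES, RES, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
    for (int p = 0; p < RES * RES; p++) {
        double w = 1.0 - depth[p];        /* nearer pixels count more */
        wsum += w;
        for (int c = 0; c < 3; c++)
            sum[c] += w * (pix[p * 3 + c] / 255.0);
    }
    for (int c = 0; c < 3; c++)           /* normalize by total weight */
        out[c] = (float)(sum[c] / wsum);
}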

Novel work.

 
Arne Rosenfeldt

July 02, 2001, 03:27 PM

Does anyone use a "pyramid" algorithm? 3 faces instead of 5?

 
David Frey

July 02, 2001, 03:53 PM

How do you determine the view direction for each vertex when rendering the scene from its viewpoint? Vertex to eye?

 
goltrpoat

July 02, 2001, 03:54 PM

in the direction of the pseudonormal, i'd assume

 
Nick

July 02, 2001, 04:15 PM

Not really a reply to the IOTD, but might be interesting for the people that don't know much about radiosity yet:

The good-looking textured light-sourced bouncy fun smart and stretchy page: Radiosity

 
Catfish

July 02, 2001, 05:47 PM

Interesting... Would it be possible to boost the resolution and do poor man's raytracing with that? Obviously it'd need some work to handle refractions and so on, but it should be possible to get the light sources, given a 180-degree FOV.

 
fluffy

July 02, 2001, 06:38 PM

Okay, to answer some questions:

A demo probably wouldn't do many people any good, since most people here don't run Linux. :) If anyone still cares, though, I could post a binary. (I'm not willing to post source just yet, since this is still on my research engine which has a few things I'm still developing and writing papers on and such.)

The meshes are already tessellated with a lot of polygons. If I were to do this seriously, I'd do some sort of adaptive tessellation, but this was just a simple project to do while bored.

I haven't read up much on radiosity, so I'd never heard of the hemicube approach. I didn't think this was very novel anyway. :) This isn't quite a hemicube though - I'm only rendering one face. I'm also not doing any compensation for the overall area or anything; I originally was doing a weighted average, but it was much slower and didn't really look any different.

The view direction from each vertex is just that vertex's surface normal. Should be self-explanatory.

I've thought of a few ways of trying to allow for realtime modification to it (i.e. only recalculate visible vertices etc.), but that is also outside the scope of my silly hackjob. If I'm even going to work on this idea any further, I'll be working on simplified photonmapping, and not realtime lighting calculations. I mean, not everything has to be 100% on-the-fly... and having realtime raytracing-quality walkthroughs of a precomputed environment seems to be an admirable goal too. :)

 
fluffy

July 02, 2001, 06:49 PM

Oops, missed a few. I think they should all be answered by this though.

This is per-vertex. No lightmapping involved. Lightmapping doesn't really seem all that useful to me - after all, with this technique there'd only be one sample per vertex anyway, so it's just easier to do Gouraud shading instead of dealing with textures and such, especially since not every piece of geometry can be easily mapped to a 2^m x 2^n lightmap. It's not like vertex lighting is immediately crap just because it's "old."

Raytracing and radiosity are entirely different things. In radiosity, you don't have a light source; instead you have polygons which have a high amount of emissive/ambient light. There's a middle ground (photonmapping), which this hackjob was a basic proof-of-concept for, where instead of having a single diffuse parameter per vertex, you have (basically) a per-vertex environment map. This makes it easy to handle reflection (which includes specular highlights - specular highlights are just a cheap way of pretending to be reflective) and refraction.

Oh, and I might make an MPEG or something. Under Linux I don't have any decent tools for putting out AVIs, sorry.

 
Ampere

July 02, 2001, 06:54 PM

It's too bad there aren't some flashy textures on there to draw people in; I find this IOTD one of the more interesting ones.

Anyway, I was wondering how low of a resolution you can render the viewpoints at and still get good results. I'd imagine that at least for most cases you could go pretty small without degrading image quality.

Also, even though I don't use linux, I wouldn't mind seeing the results other people get from the binary. I'd imagine that the constant readback is a major bottleneck, so I'd like to see if another card whose drivers are more optimized for that case can speed it up significantly.

Hmmm... on that note maybe the best approach would be to keep rendering small viewports until you've used the whole screen and then read it back all at once. Of course, maybe that's what you're doing already, though I wouldn't imagine you spent much of your 6 hours on that part of the code. :)

 
Kim Pallister

July 02, 2001, 08:00 PM

How did you do the weighted average? In software? You might get a speedup (or maybe not, just thinking out loud here) by doing some render-to-texture stuff:

1 - Have a texture that corresponds in size to the tessellation of the object (e.g. a 32x32 vertex grid gets a 32x32 texture)

2 - Have a 'scratch pad' of textures (you'll see why) that are 4x4, 8x8, 16x16, etc. in size, up to some maximum size, like a mipmap chain

For every vertex (yikes. this still is going to be slow)
{
3 - render scene (from vertex point of view) into the highest res texture of your scratchpad

4 - render the 32x32 scratchpad texture using a quad onto the 16x16 scratchpad. The bilerping should blend on minification. Keep going down to the 4x4 texture

5 - render that 4x4 texture onto a quad that overlaps only a single texel on the object's unique texture (from step 1).
} //end for
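In rough OpenGL, steps 3-5 might look something like this (a sketch only: the scratchpad textures are assumed pre-created with GL_LINEAR filtering, and identity projection/modelview matrices are assumed for the quad draws):

/* Sketch of the scratchpad reduction.  tex[0..2] are hypothetical,
 * pre-created 32x32, 16x16 and 8x8 textures.  Each half-size redraw
 * lets bilinear minification average 2x2 blocks, so the final 4x4
 * image approximates a box-filtered view of the scene. */
void reduce_to_4x4(void (*draw_scene)(void), GLuint tex[3])
{
    glViewport(0, 0, 32, 32);
    draw_scene();                               /* step 3 */

    glEnable(GL_TEXTURE_2D);
    for (int size = 32, k = 0; size > 4; size /= 2, k++) {
        /* grab the current size x size result into a texture */
        glBindTexture(GL_TEXTURE_2D, tex[k]);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, size, size);

        /* step 4: redraw it over a half-size viewport */
        glViewport(0, 0, size / 2, size / 2);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();
    }
    /* step 5 would now copy the 4x4 result onto the single texel of
     * the object's unique texture that belongs to this vertex */
}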

K

 
DirtyPunk

July 02, 2001, 10:59 PM

This is pretty damn cool :)

If you do take this any further, discontinuity meshing would be a great way to go - you can increase the detail of your lighting significantly (for vertex-interpolated stuff) while dropping your actual vertex count. Just search for discontinuity meshing on the net. (Just a thought. :)

I'm pretty sure you could calculate form factors per pixel too (although, maybe not on a G400).

 
MK42

July 03, 2001, 03:08 AM

This looks really nice, especially considering the simplicity involved. I don't know if you are already doing this, but for hemicubes there's a pretty cheap trick to get rid of some aliasing artifacts: instead of always using the same view frustum for a vertex, you rotate it by a random amount around the pseudo-normal ... when visualizing this, it should make you really dizzy.
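Per vertex, that jitter might look something like this (an illustrative sketch, not code from the demo):

#include <stdlib.h>
#include <GL/gl.h>
#include <GL/glu.h>

/* Roll the camera by a random angle about its view axis (the vertex
 * normal) before rendering, so the sampling pattern differs between
 * vertices and passes.  pos and normal are hypothetical inputs. */
void look_from_vertex_jittered(const float pos[3], const float normal[3])
{
    float angle = 360.0f * (float)rand() / (float)RAND_MAX;

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(angle, 0.0f, 0.0f, 1.0f);   /* roll around the view axis */
    gluLookAt(pos[0], pos[1], pos[2],
              pos[0] + normal[0], pos[1] + normal[1], pos[2] + normal[2],
              0.0f, 1.0f, 0.0f);
}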


Keep up the nice work,

MK42

 
Raspberry

July 03, 2001, 05:31 AM

Novel approach. I applaud your creativity, at the very least! I must admit that the images, even though not as nice as some, show a kind of perfect simplicity. If you were to add reflection (which I'm not sure about, but it shouldn't be too hard, as you already have the outward view available from the texture), then you might be able to make some quite nice static images. (Sure, reflections wouldn't be real-time, but they'd be a bit more accurate than most real-time systems.)

 
Lucid

July 03, 2001, 06:14 AM

thank you

 
Joakim Hårsman

July 03, 2001, 08:30 AM

nVidia cards support hardware mipmap generation under OpenGL via the SGIS_generate_mipmap extension. So you could just turn that on, render to the base texture, and have the card update the other mip levels automagically. This would work really well, except you get an average instead of a sum of pixel values.
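Turning it on is just a texture parameter (a sketch; assumes GL_SGIS_generate_mipmap was found in the extension string first):

#include <GL/gl.h>

/* Token from the SGIS_generate_mipmap extension, in case the
 * headers don't define it */
#ifndef GL_GENERATE_MIPMAP_SGIS
#define GL_GENERATE_MIPMAP_SGIS 0x8191
#endif

/* With this set, the driver rebuilds the whole mip chain whenever
 * the base level changes (e.g. after glCopyTexSubImage2D), and the
 * 1x1 top level ends up holding the average of the whole image. */
void enable_auto_mipmaps(GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
}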

 
MauMan

July 03, 2001, 08:45 AM

I run linux. Please post the binary and data if you don't mind.

Thanks!

 
ironfroggy

July 03, 2001, 11:06 AM

Very nice looking. If this was used in a scene for a game, it could look fantastic!

Of course, I'd like to see a demo. Source would be nice too, or at least more description on how to do this.

 
fluffy

July 03, 2001, 01:34 PM

Yay, more stuff to reply to. :)

Ampere: I originally tried 256x256, then I went down to 128x128 and didn't notice any difference in quality (but it was about 4x as fast, obviously). It could probably realistically go down to 32x32 or even lower and still look pretty good. You're right about the readback being a bottleneck. On my brother's GeForce 2, it runs twice as fast even though he has a slower CPU than me (that could also be because of the hardware transforms, though I doubt it).

Kim: Good idea. Unfortunately, unextended OpenGL under Linux doesn't AFAIK have the capability for that, and I don't like playing with extensions since that makes my code harder to run on other systems. Also, it'd need hardware mipmap generation to work without slowing things down majorly. It's a lot easier to just average in software. :)

Raspberry: Radiosity doesn't handle reflection at all (it only does diffuse interreflections). The next thing I was going to hack was photonmapping, which handles reflections very well, though I'd have to tessellate my geometry up a LOT to get decent results with it (it does non-perfect dull reflections very well, though).

ironfroggy: Okay, okay, I'll post a demo. :) I think my IOTD description gave all the detail anyone would ever need, though - for each vertex in the scene, transform the camera to the vertex and render the scene from its viewpoint to make the new diffuse color parameter for that vertex. Not exactly brain surgery.

Anyway, I'll try putting up a demo in a bit. Film at eleven, or something. :)

 
Max

July 03, 2001, 01:39 PM

ironfroggy: If you need more information on techniques like this, I believe Hugo Elias wrote an article which uses a similar method for calculating radiosity.

Max

 
fluffy

July 03, 2001, 02:45 PM

Okay, fine, here's a demo. :) http://www.cs.nmsu.edu/~joshagam/temp/radiosity-demo.tar.gz

No support or anything. It runs on my machine, and that's all I cared about. :) I included a bunch of example files and some meshes and stuff. Have fun.

 
Rectilinear Cat

July 03, 2001, 03:44 PM

waahh... me running on crappy Sparc machine. can't run your prog :(

 
fluffy

July 03, 2001, 04:28 PM

cat: Sorry. I'll have to get a crappy old Sparc to compile it on.

 
altoids

July 04, 2001, 03:39 AM

Radiosity: A Programmer's Perspective, by Ashdown (www.helios32.com) covers such an algorithm (it uses a tetrahedral shape for form factor calculations).

 
Raspberry

July 05, 2001, 04:16 AM

Ah, but you are not doing radiosity; you said you were doing light sourcing by viewing away from each pixel on the screen? If you have a view from each pixel, then surely you can determine which viewpoint would be reflected back to the camera? Or am I imitating the sound of a dog at the bottom of the incorrect large plant lifeform?

 
This thread contains 30 messages.
 
 