
Submitted by Hector Yee, posted on June 15, 2001

Image Description, by Hector Yee

It's output from a Monte Carlo path tracer I wrote. It demonstrates motion blur (the flamingo), caustics (the reflection of the gold ashtray on the wall), color bleeding (the green of the leaves on the petals), soft shadows, and area light sources.

The lighting is physically accurate global illumination. It uses irradiance caching for the indirect illumination computation, augmented by a model of the human visual system that accounts for visual importance as well as sensitivity to spatial frequencies and movement.
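For readers curious what the Monte Carlo indirect-illumination step looks like in miniature, here is a hedged sketch (not Hector's code; the function names are illustrative): cosine-weighted hemisphere sampling of incoming radiance, as a path tracer would do at a diffuse hit point.

```python
import math
import random

def cosine_weighted_hemisphere_sample():
    """Sample a direction about the surface normal (0, 0, 1),
    weighted by the cosine term of the rendering equation."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))  # cos(theta), always >= 0
    return (x, y, z)

def estimate_irradiance(radiance_fn, n_samples=4000):
    """Monte Carlo estimate of irradiance at a point: average the
    incoming radiance over cosine-weighted directions, times pi."""
    total = 0.0
    for _ in range(n_samples):
        total += radiance_fn(cosine_weighted_hemisphere_sample())
    return math.pi * total / n_samples
```

Under a constant radiance field of 1.0, the estimate is exactly pi, the analytic irradiance, which is a handy sanity check for this kind of sampler.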



Message Center / Reader Comments:
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.

June 15, 2001, 12:52 PM

Wow... and I suppose it runs in real time... let's say... 60fps?

Your tracer seems very complex. Taking such things into account, it's almost RenderMan.

Good work.

What kind of project are you working on? A tracer for cinema?

The Wolf

June 15, 2001, 12:58 PM

This is some serious stuff, very nice.
I love the textures on the walls and such. Are they procedural?


June 15, 2001, 12:58 PM

Uh, I hope you're joking... 60fps?
You'd have to have some pretty powerful machine to do photon mapping at 60fps hehehe ;)

looks great, nice :)


June 15, 2001, 01:24 PM

No, none of the textures are procedural.


June 15, 2001, 01:44 PM

Love the image. That is some good stuff right there :)

Hector Yee

June 15, 2001, 01:46 PM

Hardly 60fps; maybe in 80 years or so. I rendered the entire 30-second video over a week on 128 800 MHz Pentium IIIs. (I rendered the scene a couple of times in different ways to get good timing estimates for the different techniques.)

Performance timings on:

The initial irradiance cache filling takes a couple of hours on a quad-processor machine, depending on lighting complexity. Subsequent frames take maybe 2-10 minutes on a quad-processor machine. For a single image it's OK to have fewer samples, but when you animate, you start seeing all sorts of artifacts, like light bleeding and animated noise, if you don't integrate enough.

I think I'll try density estimation or photon maps next. I believe irradiance caching has better control over the error though.
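The error control Hector mentions is the key feature of irradiance caching: a cached sample is reused only where its weight exceeds a user tolerance. A rough sketch of Ward-style cache weighting (a hypothetical simplification, not the renderer's actual code) looks like this:

```python
import math

def cache_weight(p, n, rec):
    """Ward-style irradiance cache weight. `rec` is a stored sample:
    (position, normal, harmonic-mean distance to surfaces, irradiance).
    The sample is reusable at point p with normal n when the weight
    exceeds 1/tolerance; otherwise a new sample must be computed."""
    pi, ni, ri, _ = rec
    dist = math.dist(p, pi)
    # Clamp the normal agreement into [0, 1] before the sqrt.
    dot = max(0.0, min(1.0, sum(a * b for a, b in zip(n, ni))))
    denom = dist / ri + math.sqrt(1.0 - dot)
    return float('inf') if denom == 0.0 else 1.0 / denom
```

The weight falls off with distance (scaled by how close nearby geometry is) and with normal divergence, which is exactly the "error tolerance measure" that makes the interpolation controllable.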


June 15, 2001, 02:25 PM

That is really nice work!


June 15, 2001, 02:43 PM

I watched the video. Definitely some cool work there. I'm wondering why you didn't apply motion blur to the moving objects, though... would it have turned your 1-week rendering time into 1 year? :)

I also notice that your non-procedural textures have no filtering on them (as seen in the third photo-gallery picture). Is there any reason for that? I can understand not wanting to do bilinear filtering (looks blurry), but why not bicubic (which would look really good for those marble textures)? Or, failing that, why not do your wall textures procedurally, or at least use a higher-resolution image?

How hard would it be to add depth of field? It doesn't seem like it'd be too hard to do properly (for example, you could simply build a 'camera' out of a lens and a piece of film and then render the piece of film :), but you could also just use a methodology similar to motion blur.

I love your illumination model, BTW. It's very convincing.
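The texture filtering the commenter is asking about, in its simplest (bilinear) form, is only a few lines; this is an illustrative sketch, not code from the renderer, and `bilinear_sample` is a made-up name:

```python
def bilinear_sample(tex, u, v):
    """Sample a 2D texture (list of rows of floats) at continuous
    texel coordinates (u, v), blending the four nearest texels.
    Bicubic filtering uses a 4x4 neighborhood instead of 2x2."""
    h, w = len(tex), len(tex[0])
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Sampling halfway between a 0.0 and a 1.0 texel returns 0.5, the blend the commenter describes as "blurry" compared to bicubic.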

David Olsson

June 15, 2001, 02:49 PM

Very very nice.

Just two complaints about the video. I think you need more motion blur when the camera moves fast, since you can see how the scene is sampled at discrete intervals. Also, the marble texture looks a bit blocky when the camera is up close. I realize this might just be artifacts from the MPEG encoding, but...

Otherwise very impressive work. Makes me wanna go back to raytracing again.

Jari Komppa

June 15, 2001, 04:32 PM

This is one of those 'couldn't resist' kinds of things.

A 30-second animation in a week on 128 machines.
That's, assuming 25 frames per second, 750 frames in 604,800 seconds, or approximately 0.00124 frames per second.

Now then, we want 60 frames per second on ONE machine, so we need approximately 6,193,152 times the processing power.

Now, if Moore's law holds ('CPU power doubles every 18 months')...

In 80 years, CPU power should double more than 53 times, and 2 to the power of 53 is 9,007,199,254,740,992.

However, 2 to the power of 23 is 8,388,608, which means that CPU power meets and exceeds the requirement in about 34.5 years.

(Okay, this whole calculation is based on huge assumptions and stuff, but hey, I was bored. Naturally there may be special ray-tracing accelerators or something similar in the next couple of decades too.)
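The back-of-the-envelope numbers above check out; a few lines of Python reproduce them (the constants are taken straight from the post):

```python
import math

FRAMES = 30 * 25          # 30 s of animation at 25 fps
SECONDS = 7 * 24 * 3600   # one week of rendering
MACHINES = 128

achieved_fps = FRAMES / SECONDS                    # whole-cluster rate
speedup_needed = 60 * SECONDS / FRAMES * MACHINES  # 60 fps on ONE machine

doublings = math.ceil(math.log2(speedup_needed))   # Moore's law doublings
years = doublings * 1.5                            # one doubling per 18 months
```

This gives a speedup of 6,193,152, needing 23 doublings, i.e. about 34.5 years, matching the figures in the post.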

Hector Yee

June 15, 2001, 04:50 PM

Yup, motion-blurring the video would have taken way more time than I wanted. Also, about texture filtering: there was none =P. At the time I didn't think much about pre-filtering. In our SIGGRAPH 2000 paper on the time course of adaptation (Pattanaik et al.) I had to remove the road texture and replace it with a dark gray Lambertian surface because of that problem; the video would crawl. The rendering itself is anti-aliased, but the textures weren't pre-filtered. Good observation!


June 15, 2001, 05:11 PM

There is anti-aliasing everywhere except for that white spot behind the blurred flamingo. Why is that?
This is the best-looking IOTD I've seen in some time.



June 15, 2001, 05:14 PM

can be seen here as well.


Hector Yee

June 15, 2001, 05:34 PM

The white spot is the area light source. Since the image is high dynamic range (each pixel stores floating-point intensities at multiple wavelengths), anything beyond (1, 1, 1) is clipped to white, since I didn't do any tone mapping. What you see isn't aliasing but dynamic range clipping.
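The clipping Hector describes, and the tone mapping he skipped, can both be sketched in a couple of lines (an illustrative simplification; the function names are not from his renderer):

```python
def clip_to_display(pixel):
    """Naive display mapping: clamp each float channel to [0, 1].
    Anything brighter than (1, 1, 1), like an area light source,
    clips to pure white -- no tone mapping applied."""
    return tuple(min(1.0, max(0.0, c)) for c in pixel)

def reinhard(pixel):
    """A simple global tone-mapping alternative (Reinhard-style):
    c / (1 + c) compresses the range smoothly instead of clipping,
    so bright regions keep some detail."""
    return tuple(c / (1.0 + c) for c in pixel)
```

A pixel of (3.2, 1.5, 1.1) clips to pure white under the first mapping, which is exactly the white spot behind the flamingo.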


June 15, 2001, 06:19 PM

Awesome lighting model... perfect use of radiosity in raytracing.

I was very impressed by the MPEG. It would be even better if the scene were darker, so that specific lighting features would be more apparent. Also, I don't think I saw any sort of motion blur anywhere...? Can it handle transparency levels?

Cool stuff.


June 15, 2001, 06:37 PM

"Awesome lighting model... perfect use of radiosity in raytracing."

He does not use radiosity; he uses Monte Carlo methods to evaluate Kajiya's rendering equation, which is a Fredholm integral equation of the second kind. Both approaches attempt to solve the problem of global illumination. Just wanted to make that clear.


June 15, 2001, 07:06 PM

Monte Carlo methods are some sort of stochastic sampling, aren't they? Seriously nice images, Mr. Yee; colour me impressed.


June 15, 2001, 07:26 PM

"Monte Carlo methods are some sort of stochastic sampling, aren't they? Seriously nice images, Mr. Yee; colour me impressed."

Indeed. Monte Carlo methods provide a probabilistic and efficient approach to evaluating multi-dimensional integrals. You can use this to evaluate the complete rendering equation described in [1]. When combined with ray tracing, this is normally referred to as path tracing. I think both "Computer Graphics: Principles & Practice" by Foley, van Dam, et al. and "Advanced Animation and Rendering Techniques" by Watt and Watt discuss the subject of global illumination in general and path tracing specifically.
[1] J. Kajiya. The Rendering Equation. Proc. of SIGGRAPH '86 (Dallas, TX, Aug. 18-22). Computer Graphics, 20(4):143-150, Aug. 1986.
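The property being described, that Monte Carlo estimates of a multi-dimensional integral converge regardless of dimension, is easy to demonstrate with a tiny estimator (a generic sketch, not tied to any particular renderer):

```python
import random

def monte_carlo_integral(f, dim, n=100_000):
    """Estimate the integral of f over the unit hypercube [0,1]^dim
    by averaging f at uniformly random points. The error falls off
    as 1/sqrt(n) independent of dim, which is why the method suits
    the high-dimensional integrals of the rendering equation."""
    total = 0.0
    for _ in range(n):
        total += f([random.random() for _ in range(dim)])
    return total / n
```

For example, the integral of x + y + z over the unit cube is exactly 1.5, and the estimator lands close to that with a modest sample count.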



June 15, 2001, 09:58 PM

Looks nice, but I have three reasons to criticize it:

1) Normal people usually don't like blur, because it 'kills' details and you see some spots instead of a 'bird made from paper'.
2) Same as above.
3) Same as above.

I would say the blur makes your picture bad. Maybe it's technically impressive, but in fact it KILLS the beauty :)

If you could remove all the things which are most definitely made from recycled (?) paper, your picture could look nice.

Lion V

June 16, 2001, 12:20 AM

Blawdy good, doncha know!
Man... that's some serious rendering time there...
Keep it up!


June 16, 2001, 05:26 AM

What psykotic should've said was: "It's like a raytracer, but, like, randomized, man." Much simpler.


June 16, 2001, 09:04 AM

I am always doing stuff like that; it is fun and surprising. It is not that long before we reach a 1 THz CPU in a desktop PC. Wahooo!


June 16, 2001, 11:28 AM

Well, that really depends on several bottlenecks: how fast certain parts of the processor can go, and how fast it is even physically possible for a processor to go. Then there's the timeframe; after all, we've only just reached 1 GHz, and it took a while to get there from 1 MHz. I think the development of non-deterministic computers to run our non-deterministic algorithms on is a far more interesting prospect. Goodbye, search! That would certainly be faster than any increase in processor speed, given a large enough problem :)

Now, if only I could figure out why people feel the need to say such silly things...


June 16, 2001, 01:22 PM

He has plenty of shots on the website without the blur. It's just a photorealistic effect to go with the rest of the PR stuff built into the engine. If the bird were moving really fast when someone took the picture with a long exposure, it would look just like that.

Hiro Protagonist

June 16, 2001, 04:37 PM

Very nice image; I haven't downloaded the animation yet. I have a couple of questions. I am stunned by the amount of processor power it took to render that image, and I am wondering: where do you work or go to school that you have access to 128 high-end machines for a week to do a render? Also, I used to work in the commercial industry, where we would make 30-second animated renders nearly every day for television commercials, usually on a dual-processor Mac G4 with 1 GB of RAM. It generally took a couple of hours for the same results (not including motion blur), and I was wondering if the motion blur is THAT computationally expensive.
Anyway, very well done. Keep posting as you go.

Hector Yee

June 16, 2001, 09:52 PM

Cornell has a few compute clusters donated by Intel, and the AC3 Velocity supercomputer. It's computationally expensive because it's doing a lot more than a regular ray tracer. In order to get color bleeding, you have to shoot rays in all directions once you hit a diffuse surface. A regular ray tracer will shoot tens of rays to get anti-aliasing. I had to shoot 8000 for the first bounce, 4000 for the second bounce, etc., per pixel... you can see the combinatorial explosion there. The irradiance caching helps, but for the integral to converge to the correct solution the only way is to shoot lots of samples.

Also, soft shadows are sometimes fudged with blurred shadow maps. In this rendering, no shortcuts were taken for soft shadows, caustics, or other effects. The only shortcut taken was to interpolate the irradiance, and even that is controlled by an error tolerance measure.

Furthermore, given enough time, and with the correct BRDF and geometry measurements, a Monte Carlo renderer will converge to an image that can be physically compared to a real-world image; i.e., the illuminance values will be very close to real-world values. It is an unbiased estimate of what the illumination would be given the lighting conditions. A normal ray tracer cannot guarantee that. Hope this explains why global illumination sucks up so much computer time.
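The combinatorial explosion Hector describes is easy to quantify (a hypothetical illustration; `rays_per_pixel` is not part of his renderer):

```python
def rays_per_pixel(samples_per_bounce):
    """Total rays spawned per pixel when every diffuse bounce branches:
    each first-bounce ray spawns its own set of second-bounce rays,
    and so on down the list."""
    total, branch = 0, 1
    for s in samples_per_bounce:
        branch *= s
        total += branch
    return total
```

With the figures from the post, 8000 first-bounce and 4000 second-bounce samples, that is 8000 + 8000 x 4000 = 32,008,000 rays per pixel, versus the tens of rays a regular ray tracer shoots for anti-aliasing.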


June 17, 2001, 05:51 AM

I'd be interested to see how Cinema4dXL(7) compares to this in terms of render quality and time. I'd suspect you could get another close estimate with a lot less work, and of similar visual quality.

Still, it's very nice work.

Arne Rosenfeldt

June 17, 2001, 10:28 AM

How fine is your spectrum (poor RGB, or 20 colors of the rainbow)?
(Maybe you only have RGB materials, no fluorescence.)
Do you consider polarization (Fresnel coefficients for reflection at glass)?

I guess diffraction and interference are not important at this scale.

(...second harmonic generation, speed of light for motion...)

Hector Yee

June 17, 2001, 01:05 PM

The renderer supports an arbitrary number of wavelength samples. However, since this scene was modeled in 3dsmax, the BRDF was limited to the usual three wavelengths: RGB.

This thread contains 44 messages.