Submitted by Devon Zachary, posted on May 30, 2001




Image Description, by Devon Zachary



I started work on this raytracer project a while ago, but only recently purchased 'Computer Graphics - Principles and Practice' - stonking good book (if slightly overwhelming!!). I had heard that raytracers were easy to program and looked nice, so I dove in. The book had the intersection formulas for a couple of primitives (plane, sphere, polygon), so I coded those in.
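The sphere case from the book boils down to solving a quadratic along the ray. A minimal sketch, assuming a normalized ray direction (function and parameter names here are illustrative, not from the original code):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Ray-sphere intersection; returns the nearest positive t or None.
    Assumes `direction` is normalized, so the quadratic's a == 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(a * d for a, d in zip(oc, direction))
    c = sum(a * a for a in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None          # ray misses the sphere entirely
    sq = math.sqrt(disc)
    for t in ((-b - sq) / 2.0, (-b + sq) / 2.0):
        if t > 1e-6:         # small epsilon avoids self-intersection
            return t
    return None              # both hits are behind the ray origin
```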

Lighting is just basic Phong lighting (N dot D + specular), and in the big purple picture you can see an example of a bug I was experiencing, in which negative values (resulting from the Phong equation) were creating a very bright bump in the centre - which was odd, because I only had two lights!! But I solved the problem, and the picture looked good, so I saved it. Lights can have linear, quadratic, and constant attenuation... (although wrong values of linear+quadratic attenuation cause weird colors along the border of the attenuation area)
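The bright-bump bug described above is the classic unclamped-diffuse problem: a negative dot product means the light is behind the surface, and letting those negatives into the sum produces artifacts. A minimal sketch of the fix, with illustrative names and made-up default coefficients:

```python
def phong_intensity(n_dot_l, r_dot_v, shininess, kd=0.7, ks=0.3):
    # Clamp to zero: negative N.L means the light is behind the surface.
    diffuse = max(0.0, n_dot_l)
    specular = max(0.0, r_dot_v) ** shininess if diffuse > 0.0 else 0.0
    return kd * diffuse + ks * specular

def attenuation(distance, constant=1.0, linear=0.0, quadratic=0.0):
    # The constant/linear/quadratic falloff mentioned above; clamping the
    # divisor guards against coefficient choices that make it tiny or negative.
    return 1.0 / max(1e-6, constant + linear * distance + quadratic * distance ** 2)
```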

Shadows are calculated using normal raytracing procedures (no shadow volumes or maps, although I would like to use shadow maps for softer shadows). The only interesting feature is a 'soft shadow sphere' feature, which creates a group of lights with low intensities to fake area lights.
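The 'soft shadow sphere' trick might be sketched like this - a group of jittered point lights splitting one light's intensity. This is my reading of the description, not the author's actual code; all names are illustrative:

```python
import random

def soft_shadow_sphere(center, radius, n, total_intensity, seed=0):
    """Fake an area light as n point lights jittered inside a sphere.
    Returns a list of (position, intensity) pairs."""
    rng = random.Random(seed)
    lights = []
    for _ in range(n):
        # Rejection-sample a point uniformly inside the sphere.
        while True:
            offset = [rng.uniform(-radius, radius) for _ in range(3)]
            if sum(c * c for c in offset) <= radius * radius:
                break
        pos = [c + o for c, o in zip(center, offset)]
        lights.append((pos, total_intensity / n))  # split intensity evenly
    return lights
```

Each sub-light then casts its own shadow ray, so penumbrae emerge where only some of the cluster is occluded.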

There is reflection - no refraction (need more time to wrap my head around it!). And support for basic surfaces, using Perlin noise (which I am currently doing very wrong... the noise is not band-limited!) as well as a checker function.

Looking at the POV-Ray source has been very helpful, at least in terms of structuring the program...

Output is just to RAW files. It also outputs a depth map channel, based on T from the ray equation; I'm not sure if this is more or less accurate for making stereograms than basic Z-buffers. (Can someone who knows tell me?)
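On the T-versus-Z question: with a normalized ray direction, T is the Euclidean distance to the hit point, while a Z-buffer stores the component of that distance along the view axis, so the two only agree at the centre of the screen. The conversion is a single dot product (illustrative helper; both directions assumed normalized):

```python
def ray_t_to_camera_z(t, ray_dir, forward):
    # Project the hit distance onto the camera's forward axis: z = t * (D . F).
    return t * sum(a * b for a, b in zip(ray_dir, forward))
```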

Anyway, if you have any questions, just ask!


Image of the Day Gallery
www.flipcode.com

Message Center / Reader Comments:
 
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
 
L.e.Denninger

May 31, 2001, 04:40 AM

So all you have to do is minimize the amount of intersection-calculations by using a smart spatial-hierarchy in one form or another.

 
D.P.

May 31, 2001, 05:04 AM

How many years can we expect to wait before real-time raytracing is standard for computer games? With a proper ray-tracing engine most special effects should be thrown in for free, such as shadows, advanced lighting, reflections, refraction, caustic effects...

 
Dom Penfold

May 31, 2001, 06:35 AM

If anyone's interested in realtime raytracing they should download heaven7. It's got some really nice stuff in it.

http://www.scene.org/file.php?file=/parties/2000/mekkasymposium00/in64/h7-final.zip&dummy=1

Quite some years ago there was a demo called MFX_TGR2, which raytraced a few hundred pixels per frame but used a circular colour block to draw each pixel. It gave these fuzzy images but looked really cool. I can't find a link to it, but if anyone knows where to find it I'd be interested to know. :)

Dom

 
L.e.Denninger

May 31, 2001, 06:44 AM

That was Transgression II by MFX, looked really cool at the time.
Try www.scene.org for this and more realtime-raytracing demos / intros.

 
Arne Rosenfeldt

May 31, 2001, 06:59 AM

I think it's hard to do this with backwards raytracing
(from eye to light)

 
Willem

May 31, 2001, 07:55 AM

Intersection calculations are performance killer #1 for *every* ray-tracer. That's why you want to prevent needless intersection calculations from happening. Briefly, two ways of reducing them:

1. Construct BV's for all objects in your scene
2. Construct a spatial subdivision of the scene with the BV's.

There are a lot more optimizations possible. In the end it turns out that getting the best performance means tweaking the layout of the scene data-structure *per* scene. I believe the best-case running time for ray-tracing a scene has been set at O(log n), with n the number of objects in your scene. That did require O(n^4) storage requirements IIRC. Not very feasible ;)
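Point 1 above - a bounding volume per object - can be sketched with the cheapest case, a bounding sphere. This is a rough reject test, assuming a normalized ray direction and an origin outside the sphere (names are illustrative):

```python
def ray_hits_bounding_sphere(origin, direction, center, radius):
    """Cheap reject test run before the expensive per-object intersection.
    Assumes `direction` is normalized and `origin` lies outside the sphere."""
    oc = [c - o for o, c in zip(origin, center)]
    proj = sum(a * b for a, b in zip(oc, direction))  # closest approach along ray
    dist2 = sum(a * a for a in oc) - proj * proj      # squared ray-to-center distance
    return proj >= 0.0 and dist2 <= radius * radius
```

If this returns False, the ray can skip the object's real intersection routine entirely.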

 
Willem

May 31, 2001, 08:01 AM

Have a look at RTNews if you want to find out more. For a spatial subdivision scheme, you want it to give the right intersection with:

1. The minimum amount of voxels traversed
2. The spatial subdiv. structure must have an object/voxel ratio with a small standard deviation (or else you will obviously get hiccoughs in your rendering), which means that uniform grids - while very fast to traverse - are not a good candidate. Not to mention they take up a lot of memory.
3. Memory requirements low
4. Voxel traversal running-time low (this depends on the particular subdiv. data scheme you use, and the particular scene you're rendering).
5. Early-out intersection rejection. This is aided by the spatial subdiv. scheme + voxel traversal (essentially front-to-back), and the layout of your intersection routine (ie, can it early out?).
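The front-to-back voxel traversal mentioned in points 1 and 5 is usually done with a DDA in the style of Amanatides & Woo. A 2D sketch of the walk over a uniform grid (illustrative code, not from the thread):

```python
def grid_voxels(origin, direction, cell=1.0, max_steps=4):
    """Walk the uniform-grid cells a 2D ray passes through, front to back.
    Returns the first `max_steps` (x, y) cell indices visited."""
    ix = [int(origin[0] // cell), int(origin[1] // cell)]
    step = [1 if d > 0 else -1 for d in direction]
    t_max, t_delta = [], []
    for i, (o, d) in enumerate(zip(origin, direction)):
        if d == 0:
            t_max.append(float("inf"))      # never crosses this axis
            t_delta.append(float("inf"))
        else:
            # Parametric distance to the next cell boundary on this axis,
            # and the distance between successive boundaries.
            next_bound = (ix[i] + (1 if d > 0 else 0)) * cell
            t_max.append((next_bound - o) / d)
            t_delta.append(abs(cell / d))
    visited = [(ix[0], ix[1])]
    for _ in range(max_steps - 1):
        axis = 0 if t_max[0] < t_max[1] else 1   # cross the nearer boundary
        ix[axis] += step[axis]
        t_max[axis] += t_delta[axis]
        visited.append((ix[0], ix[1]))
    return visited
```

Because cells come back in ray order, the first cell whose contents yield a hit lets you stop - the early-out in point 5.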

 
 
Serapth

May 31, 2001, 09:03 AM

Correct me if I'm wrong here, but aren't you describing ray casting? I thought in ray tracing, the light is what generated the ray, and it was traced to a finite point, or until it hit the camera, whichever came first. Thus, 8 lights in the scene, 8 ray generators.

 
L.e.Denninger

May 31, 2001, 09:21 AM

Ray-tracing doesn't specify whether you're tracing the ray as seen from the eye or as emitted from the lightsource.

To confuse people even more, both methods are being called 'forward-tracing' and 'backward-tracing' - which means there are people using the same terms, meaning exactly the opposite :-)

 
Jonas Collberg

May 31, 2001, 09:44 AM

What you're describing is called forward ray tracing, which works by tracing rays of light from the light source. This is as opposed to backward ray tracing, which follows the rays that are guaranteed to hit the eye in the opposite direction: from the eye, through some pixel, bouncing off objects in the scene towards a light source. This way, you avoid doing calculations for a lot of unimportant rays that end up making no contribution to the final image (the ones that bounce off towards "black space" and thereby never reach the eye).

 
DirtyPunk

May 31, 2001, 10:11 AM

No, what you should have said was the NUMBER of intersection calculations is the biggest killer :)

 
EGreg

May 31, 2001, 10:26 AM

Jaap --- people have been thinking that spheres are the most perfect shapes for centuries.

Of course, now we know better.

One could say a very good looking shape is that of a sexy kitten like Kirsten Dunst.

mm.......

-Greg

 
Willem

May 31, 2001, 10:27 AM

Yeah, that's what I meant.

 
David Olsson

May 31, 2001, 10:29 AM

Yeah I know, it's really weird.

I just call both raytracing... I mean, both are obviously raytracing.

 
Willem

May 31, 2001, 10:30 AM

I don't see why we should use ray-tracing anyway. I mean, take Renderman. It can generate some kick f*cking ass images, but it doesn't use ray-tracing at all. I believe we will head towards general shader-tree style languages (DX8 pixel/vertex shaders will eventually converge to this kind of stuff). I believe that's the way we're going. Ray-tracing isn't actually necessary to create (near) photo-realistic images.

 
EGreg

May 31, 2001, 10:34 AM

Oh by the way, about lighting in raytracers:

once you hit a non-reflective surface you just check if it's lit or not by tracing rays to all light sources and seeing how many of them actually make it there. Soft shadows require some ray jitter.

Two more things, though....

1) I'm not exactly sure how to do specular highlights with raytracing :)
2) How do you have sex with spheres? Ah, forget it...

-Greg

 
Altair

May 31, 2001, 10:36 AM

I have never seen anyone refer to forward raytracing as backward or vice versa, but it could be that I haven't paid too much attention to it. Anyway, it should be quite obvious that when you trace a ray starting from the eye, that's called backward raytracing, because you trace the light rays your eye receives in the opposite direction. But of course you may have a different opinion on this one if you are Superman.

Cheers, Altair

 
EGreg

May 31, 2001, 10:37 AM

Here are some suggestions for optimizing ray tracing:

1) Use a spatial subdivision scheme such as an octree. (BSP trees are of course out of the question, :-)
2) If you are using some simple mathematically defined figures, check out Theory and Practice issues 1 and 2 on collision detection. They describe how to figure out, using polynomial equations, when rays hit objects.
3) Give me money.

-Greg

 
Parashar Krishnamachari

May 31, 2001, 10:47 AM

MegaPOV can do caustics... but as you expected, it doesn't do it by backwards raytracing. It uses Photon mapping especially for caustics.

 
Parashar Krishnamachari

May 31, 2001, 10:51 AM

Hehe... as it so happens, one of the guys in my SIGGraph group is doing a research project on Programmable Shading languages in hardware. Supposedly, he's already burned a few functions into an FPGA. You should look him up -- http://www.uiuc.edu/cgi-bin/ph

Look up daschmid...

 
morgan

May 31, 2001, 11:10 AM

A lot of exceedingly attractive people are attractive because their faces *aren't* symmetrical, however. Cindy Crawford (mole), Liv Tyler (crooked smile), Lyle Lovett (I don't find him that attractive, but whatever), Jeri Ryan w/ asymmetrical Borg implant, are some examples.

-m

 
David Olsson

May 31, 2001, 11:18 AM

A ray is not a physical object, it's just an abstract thingy.
So why wouldn't it be just as correct to say that forward raytracing is when you trace a ray from the eye to a light?
The rays we trace here are not real light.
I don't look at raytracing as a model of reality, just a beautiful and simple abstract object.
But it's just a matter of taste, really.

I just read a paper on raytracing from some researcher a week ago, and he referred to backward raytracing as when you trace rays from the lightsource to the eye.

 
Altair

May 31, 2001, 11:22 AM

Your concepts are screwed, dude (=

Cheers, Altair

 
David Olsson

May 31, 2001, 11:50 AM

I suspected you'd say something like that.

May I ask why ?

 
Altair

May 31, 2001, 12:13 PM

It doesn't matter what element light emits - rays, photons or tennis balls - as long as light works as the sender and the eye as the receiver. If you go the other way around, you go in the opposite direction of the track they travel and trace the route of the emitted elements backwards to their source.

Cheers, Altair

 
David Olsson

May 31, 2001, 12:22 PM

I just explained that I don't think in terms of photons or whatever.
I shoot a mathematical ray from the eye and try to hit an object called a lightsource.
I think many people agree with me; that's where this double meaning of 'backward' comes from.

Anyway, I'm not going to start a flame war on this, way too tired for that. However, it's quite interesting to see how different people look at things.

 
L.e.Denninger

May 31, 2001, 12:23 PM

It all depends :)

if you follow the light-ray to the eye you could call it forward-tracing as you are following the ray in the direction it is heading.

If you trace a ray from your eye through the projection-plane you could call it forward-tracing too, as you are actually *casting* a ray from the eye, thus following that ray in the direction it's heading.

Likewise, both could also be called backward-tracing :)
(If you call the first case 'forward', then the second case is 'backward' and vice versa)

Confusing ?

:)

 
EGreg

May 31, 2001, 12:30 PM

Tracing rays forward or backward through time, think of it that way.

-Greg

 
This thread contains 79 messages.