Submitted by Devon Zachary, posted on May 30, 2001




Image Description, by Devon Zachary



I started work on this raytracer project a while ago, but just recently purchased 'Computer Graphics: Principles and Practice' - stonking good book (if a little overwhelming!). I had heard that raytracers were easy to program and looked nice, so I dove in. The book had the intersection formulas for a couple of primitives (plane + sphere + polygon), so I coded those in.
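For reference, the ray/sphere test from the book boils down to solving a quadratic in t. A hedged Python sketch (function and parameter names are mine, not from the project):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the smallest positive t where origin + t*direction hits the
    sphere, or None. `direction` is assumed to be normalized."""
    # Solve |o + t*d - c|^2 = r^2, a quadratic in t.
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c          # a == 1 for a unit direction
    if disc < 0.0:
        return None                  # ray misses the sphere
    sqrt_disc = math.sqrt(disc)
    for t in ((-b - sqrt_disc) / 2.0, (-b + sqrt_disc) / 2.0):
        if t > 1e-6:                 # nearest hit in front of the origin
            return t
    return None
```

The plane test is even simpler (one division); the polygon test is the plane test plus a point-in-polygon check.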

Lighting is just basic Phong lighting (N dot D plus specular). In the big purple picture you can see an example of a bug I was experiencing, in which negative values (resulting from the Phong equation) were creating a very bright bump in the centre, which was odd, because I only had two lights!! But I solved the problem, and the picture looked good, so I saved it. Lights can have linear, quadratic, and constant attenuation (although wrong values of linear and quadratic attenuation cause weird colours along the border of the attenuation area).
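The bright-bump bug described above is the classic symptom of letting the diffuse dot product go negative for lights behind the surface; clamping it at zero fixes it. A hedged sketch of per-light Phong shading with constant/linear/quadratic attenuation (names and default values are illustrative, not the project's actual code):

```python
def phong(normal, to_light, to_eye, dist, shininess=32.0,
          kc=1.0, kl=0.0, kq=0.0):
    """Diffuse + specular intensity for one light, with constant (kc),
    linear (kl), and quadratic (kq) attenuation over distance `dist`.
    All direction vectors are assumed normalized."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    # Clamp the diffuse term at zero: letting it go negative is exactly
    # the kind of "bright bump" bug described above.
    n_dot_l = max(0.0, dot(normal, to_light))
    # Reflect the light direction about the normal for the specular term.
    r = tuple(2.0 * n_dot_l * n - l for n, l in zip(normal, to_light))
    spec = max(0.0, dot(r, to_eye)) ** shininess if n_dot_l > 0.0 else 0.0
    atten = 1.0 / (kc + kl * dist + kq * dist * dist)
    return atten * (n_dot_l + spec)
```

Bad kl/kq values make the denominator dip near 1 (or below) right at the edge of the falloff, which is one plausible source of the border artifacts mentioned.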

Shadows are calculated using normal raytracing procedures (no shadow volumes or maps, although I would like to use shadow maps for softer shadows). The only interesting feature is a 'soft shadow sphere' feature, which creates a group of lights with low intensities to fake area lights.
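One way to read the 'soft shadow sphere' feature: jitter n low-intensity point lights inside a sphere around the original light, so their hard shadows average into a penumbra. A hedged sketch (names and parameters are illustrative guesses, not the project's code):

```python
import random

def soft_shadow_sphere(center, radius, total_intensity, n=16, seed=1):
    """Fake an area light as n point lights jittered inside a sphere,
    each carrying 1/n of the total intensity."""
    rng = random.Random(seed)
    lights = []
    for _ in range(n):
        # Rejection-sample a point inside the unit ball, then scale it.
        while True:
            p = tuple(rng.uniform(-1.0, 1.0) for _ in range(3))
            if sum(x * x for x in p) <= 1.0:
                break
        pos = tuple(c + radius * x for c, x in zip(center, p))
        lights.append((pos, total_intensity / n))
    return lights
```

The cost is n shadow rays per shading point instead of one, which is why n stays small.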

There is reflection, but no refraction (need more time to wrap my head around it!). And there's support for basic surfaces, using Perlin noise (which I am currently doing very wrong... the noise is not band limited!) as well as a checker function.
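The checker function mentioned above is usually implemented from the parity of the integer lattice cell a point falls in. A hedged sketch (colours and names illustrative):

```python
import math

def checker(point, scale=1.0):
    """Classic 3D checker: alternate two colours based on the parity of
    the sum of the integer cell coordinates the point falls in."""
    parity = sum(int(math.floor(c / scale)) for c in point) % 2
    return (1.0, 1.0, 1.0) if parity == 0 else (0.1, 0.1, 0.1)
```

Like the noise, this aliases badly in the distance unless it is filtered, since it isn't band limited either.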

Looking at the POV-Ray source has been very helpful, at least in terms of structuring the program.

Output is just to RAW files. It also outputs a depth map channel, based on T from the ray equation; I'm not sure if this is more or less accurate for making stereograms than basic Z-buffers. (Can someone who knows tell me?)
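On the stereogram question: T measures Euclidean distance along each ray, while a Z-buffer stores perpendicular (planar) depth, and the two differ by a cosine factor for off-axis rays, so the T map bulges outward at the image edges. A hedged sketch of the conversion (assuming the camera looks down +Z; names illustrative):

```python
import math

def ray_t_to_planar_z(t, direction, forward=(0.0, 0.0, 1.0)):
    """Convert distance-along-ray t into camera-space (planar) depth z.
    The two agree only for the centre ray; off-axis rays need the cosine
    of the angle between the ray and the view direction."""
    def norm(v):
        length = math.sqrt(sum(x * x for x in v))
        return tuple(x / length for x in v)
    d = norm(direction)
    cos_theta = sum(a * b for a, b in zip(d, forward))
    return t * cos_theta
```

Stereogram generators generally expect the planar (Z-buffer style) depth.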

Anyway, if you have any questions, just ask!


 
Message Center / Reader Comments:
 
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
 
Altair

May 31, 2001, 12:31 PM

David: I think many people agree with me, that's where this double meaning of "backward" comes from

Yeah, but the world is full of idiots, khehe

Cheers, Altair

 
L.e.Denninger

May 31, 2001, 12:42 PM

Altair - as long as most people in the world are stupid, the most convenient thing is to do everything the stupid way, 'cause in that case most people will understand what you're doing :)

 
David Olsson

May 31, 2001, 12:43 PM

Bah, lame ! :)

 
Altair

May 31, 2001, 12:49 PM

I don't really feel awkward making things the right way, as long as stupid people don't need to have a clue what I'm doing d:

Cheers, Altair

 
lycium

May 31, 2001, 01:32 PM

the fastest acceleration scheme for realtime raytracing would be axis-aligned bsp trees. but even though they're the fastest accel scheme, the overhead is still way too big for the number of objects you'll be using in realtime, so the fastest accel scheme is: NO accel scheme!

this will obviously change as RTRT scenes become more complicated.

spend time doing first-hit optimisations, shadow caches, and here's the big one: DON'T LET OPENGL DRAW YOUR PIXELS!! :) use a simple ptc or directdraw console to do this; opengl buffers the instructions, tries to optimise them, etc. (at least on nv cards), so you get MASSIVE overhead and hd swapping, etc.

try www.pouet.net for "fresnel", "fresnel 2", "heaven 7", "nature suxx" and "nature still suxx" for some examples. they all implement subsampling; you need that for RTRT at the moment.

 
ector

May 31, 2001, 01:34 PM

If you use MSVC, you have a profiler.
Just check the profiling checkbox somewhere in Project Settings (don't remember) and then Build->Profile i think.

 
lycium

May 31, 2001, 01:37 PM

polygon renderers are slowly approaching what can be done with ray tracing, and as the pixel / polygon ratio decreases, it becomes increasingly attractive to use ray tracing. i'm confident that ray tracing will one day replace polygon based rendering, we just need nvidia to support it! :) just some simple quadratic / cubic / quartic solvers in hardware would be cool for a start...

also, you get full clipping, 0% overdraw, shadows, bumpmapping, etc... all basically for free.

 
Bramz

May 31, 2001, 01:39 PM

Bwah, define the distance between 2 points P and Q to be dist(P, Q) = abs(p1 - q1) + abs(p2 - q2) + abs(p3 - q3) (which is much easier than your Euclidean distance) and your sphere looks like a cube now ...

Which one is more perfect? I would go for the cube, given the far easier definition of distance :)

Bramz

 
Bramz

May 31, 2001, 01:39 PM

i think that's called the "manhattan distance", but i'm not sure

 
Willem

May 31, 2001, 01:53 PM

Yup, that's Manhattan distance (or more mathematically [ie, linear algebra], the L1 norm). Euclidean distance is the L2 norm.
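One small correction to the "sphere looks like a cube" aside: the L1 (Manhattan) unit ball is actually an octahedron in 3D; it's the L-infinity (max) norm whose unit "sphere" is a cube. A quick sketch of all three distances:

```python
import math

def l1(p, q):
    """Manhattan (L1) distance: its unit ball is an octahedron in 3D."""
    return sum(abs(a - b) for a, b in zip(p, q))

def l2(p, q):
    """Euclidean (L2) distance: its unit ball is the usual sphere."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def linf(p, q):
    """Chebyshev (L-infinity) distance: its unit ball is the cube."""
    return max(abs(a - b) for a, b in zip(p, q))
```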

 
Kanda

May 31, 2001, 02:59 PM

Wanna see kewl real time raytracing ?
go to www.pouet.net and download
the 2 demos from Fan:

Nature Suxx

http://www.pouet.net/prod.php3?which=467
Direct download:
ftp://ftp.scene.org/pub/parties/2000/mekkasymposium00/demo/naturesuxx.zip

Nature Still Suxx

http://www.pouet.net/prod.php3?which=1924
Direct download:
ftp://ftp.takeover.nl/pub/MEKKASYMPOSIUM01/demo/naturestillsuxx_fan.rar

Really cool and impressive stuff!!!

 
Bramz

May 31, 2001, 03:23 PM

oh yeah, how could i have forgotten the L1, L2 and L37 norm? :)

I'm trying to use this for a fast ray/cube test. But I've found that whether or not there is an intersection depends only on the ray's direction, not on the origin.

Mmmh, weird.
Must have overlooked something.

Bramz

 
Bramz

May 31, 2001, 03:42 PM

refraction simply goes like this:

a ray (with direction vector R) hits a surface (with local normal vector N). We define the subspace the ray comes from as the outer side, and the other side of the surface as the inner side.

Split the vector R into 2 components: one orthogonal and one parallel to the surface:
R1 is the orthogonal one and is given by R1 = (R dot N) * N
R2 is the parallel one and can simply be calculated as R2 = R - R1

now, consider the transmitted ray with vector T. Like R, you can split T into 2 components T1 and T2. These components are calculated with Snellius.

Snellius says that the parallel component stays untouched, so that
T2 = R2

He also says that, if N_out is the refraction index of the outer side and N_in is the refraction index of the inner side, then T1 = R1 * N_out / N_in

and then obviously T = T1 + T2, but don't forget to normalize afterwards. T won't be a unit vector!

I hope this helps
Bramz
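A note of caution on the recipe above: in the exact vector form of Snell's law it is the tangential (parallel) component that scales, by eta = N_out / N_in, and the normal component is recomputed so T stays unit length; scaling the normal component and then normalizing only approximates the refraction angle. A hedged Python sketch of the exact form (names illustrative):

```python
import math

def refract(d, n, eta):
    """Refract unit direction d at unit normal n (n points against d).
    eta = N_out / N_in. Returns the refracted unit vector, or None on
    total internal reflection. The tangential component of d scales by
    eta; the normal component is rebuilt from cos_t to keep T unit."""
    cos_i = -sum(a * b for a, b in zip(d, n))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)   # Snell: sin_t = eta*sin_i
    if sin2_t > 1.0:
        return None                               # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * di + (eta * cos_i - cos_t) * ni
                 for di, ni in zip(d, n))
```

For eta = 1 it returns d unchanged, and past the critical angle it reports total internal reflection instead of a bogus direction.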

 
DarkReaper

May 31, 2001, 04:23 PM

Ok, so the cube is perfect also, but which looks better? The sphere is definitely cooler than the cube. =)

 
Punchey

May 31, 2001, 05:32 PM

I saw another one where they ran hundreds (maybe thousands) of facial metrics of men's and women's faces through a computer (both ugly and attractive) and averaged them out. They concluded that humans tend towards people who have the more "average" features. That is to say, average among all the possible facial metrics. So the next time someone says you're merely average looking, take that as a HUGE compliment!

BTW, isn't it odd that the Borg would have so much asymmetry, when they're the lifeform you'd most expect to desire the greatest degree of symmetrical order?

 
David Olsson

May 31, 2001, 05:34 PM

Yes you must have....

 
DirtyPunk

June 01, 2001, 03:54 AM

Soft shadows don't REQUIRE ray jitter, they just look better with jitter. Proper soft shadows come from area light sources and you can use straight tracing to get a number of shadow "samples".
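DirtyPunk's point can be sketched as a straightforward estimator: shoot shadow rays at sample positions on a rectangular area light and count the visible fraction. A hedged sketch, where `occluded(p, q)` is a caller-supplied stand-in for the tracer's shadow ray (names illustrative):

```python
import random

def shadow_fraction(point, light_corner, edge_u, edge_v, occluded,
                    n=64, seed=1):
    """Estimate what fraction of a rectangular area light is visible
    from `point`. Uniform random sampling; jitter over a regular grid
    would reduce banding but isn't required for soft shadows."""
    rng = random.Random(seed)
    visible = 0
    for _ in range(n):
        u, v = rng.random(), rng.random()
        # Pick a sample position on the light rectangle.
        sample = tuple(c + u * eu + v * ev
                       for c, eu, ev in zip(light_corner, edge_u, edge_v))
        if not occluded(point, sample):
            visible += 1
    return visible / n
```

Scaling the light's contribution by this fraction gives the penumbra directly from straight tracing.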

 
DirtyPunk

June 01, 2001, 03:56 AM

I think in the end it is just easier to say "from eye" and "from light".

 
RezDZ

June 02, 2001, 02:45 PM

Which is what CGPAP says.

'We call it Ray-Tracing-From-The-Light-Sources to avoid confusion'

It also explains, however, that technically, raytracing as we're doing here is *backward raytracing*, because the eye is not a light emitter! The lights and objects are. *Forward raytracing* would be tracing the path of the light from the sources to the eye.

At least according to my book.

 
This thread contains 79 messages.