 
 

Lightmapping a triangulated curved surface
 
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
 
pauljan

April 09, 2005, 08:53 AM

Hi fellow flipcoders,

My (ray-tracing) lightmapper basically uses two simple steps to determine whether a lightmap texel is lit or not (roughly sketched in code below the two steps):

1. Is the face facing the light (test normals)?

2. Cast a ray from the current location to the light: if it hits anything on the way (triangle test + octree for speedup) the light has no effect.
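In rough C++ (Vec3, Ray and Octree here are just placeholders, not my actual types), the per-texel test looks something like this:

    #include <cmath>

    // Placeholder types - not the actual lightmapper code.
    struct Vec3 { float x, y, z; };

    Vec3  operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    float Dot(const Vec3& a, const Vec3& b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }

    struct Ray    { Vec3 origin, dir; };
    struct Octree { bool HitsAnything(const Ray& r, float maxDist) const; }; // ray-triangle test + octree

    bool IsTexelLit(const Vec3& texelPos, const Vec3& faceNormal,
                    const Vec3& lightPos, const Octree& scene)
    {
        Vec3 toLight = lightPos - texelPos;

        // Step 1: is the face pointing towards the light?
        if (Dot(faceNormal, toLight) <= 0.0f)
            return false;

        // Step 2: shadow ray from the texel to the light; any hit in between blocks it.
        float dist = std::sqrt(Dot(toLight, toLight));
        Ray shadowRay = { texelPos, { toLight.x / dist, toLight.y / dist, toLight.z / dist } };
        return !scene.HitsAnything(shadowRay, dist);
    }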

The input I get is basically polygon soup, so for step 1 I used to work with face normals. That works well with angular shapes like cubes and pyramids, but results in the picture below for a sphere (or similar curved surfaces; I'll use a sphere as the example):

http://www.delgine.com/images/misc/sphereHardNormals.jpg

I recently introduced normal smoothing (edge-preserving, based on a user-provided crease angle). When I turn off step 2, this leads to a very nice and smooth result from step 1. However, because the ray-triangle test still tests against the actual faces (or at least, I am guessing that is the reason), if I turn on step 2 the sphere looks like this:

http://www.delgine.com/images/misc/sphereSmoothNormals.jpg

So my question is: how do I solve this? Should I account for the smoothed normal in step 2 as well? But how do I calculate a point of intersection (PI) that accounts for a smoothed normal? Iteratively? (i.e. calculate the PI with the face normal, calculate the smooth normal at that PI, recalculate the PI with the smooth normal, and if that fails, the ray misses the triangle?) That sounds horribly expensive, but maybe I'm wrong.

I am guessing this is a common problem (unless I am doing something fundamentally wrong and everyone else took a lot smarter approach to lightmapping), so I really hope someone can enlighten me here (excuse the pun).

Thanks in advance for any insights!

 
Reedbeta

April 09, 2005, 02:06 PM

You might want to ignore intersections when the ray hits the same object it started from, within a very short distance from the ray origin. That should fix up your silhouettes; you will get some false negatives on the backside of the sphere, but dotting the smoothed normal with the light vector will take care of these.
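Something like this for the hit test, per candidate intersection along the shadow ray (the epsilon value and names are made up):

    const float kSelfHitEpsilon = 1e-3f; // tune to the scene scale

    // Decide whether a candidate hit along the shadow ray really blocks the light.
    bool HitBlocksLight(int hitObjectId, float hitDistance, int sourceObjectId)
    {
        // Ignore hits on the object the ray started from, very close to the origin:
        // that's just the texel's own surface, not a real occluder.
        if (hitObjectId == sourceObjectId && hitDistance < kSelfHitEpsilon)
            return false;
        return true;
    }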

 
pauljan

April 10, 2005, 06:43 AM

As the size of the sphere increases (and the number of polygons used for the approximation stays the same), that epsilon would need to become really big (the minimum distance required to be safe is something like 1/3 of the size of a face, I guess; I haven't tried calculating it yet).

If I had meaningful object information, a solution based on this principle might be feasible, but as it is I just have polygon soup (for example, users might have merged all objects in the scene into one really big object, so the object information is of no real use here). I really want the lightmapper to work in situations where face sizes vary quite a lot.

Thanks for the suggestion though, it is very much appreciated!

 
Ono-Sendai

April 11, 2005, 08:12 AM

This seems to be a fundamental problem that stems from the mismatch between 'geometric' normals and 'smoothed' normals. Geometric normals are those perpendicular to the triangle surface. Smoothed normals are those interpolated from vertex normals. You can see there is a problem by considering a point on a tri near a vertex - the smoothed normal at this point will be similar to the vertex normal, which is not perpendicular to the triangle face! As a result you tend to get graphical mismatches between any two algorithms that use the two different types of normal.
The same effect is responsible for the jaggy edges around stencil shadow volumes at the seam where they meet a smooth casting object.
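To illustrate the mismatch (made-up Vec3 helpers such as Cross and Normalize; u and v are the barycentric coordinates of the point on the triangle):

    // Geometric normal: perpendicular to the actual triangle.
    Vec3 GeometricNormal(const Vec3& p0, const Vec3& p1, const Vec3& p2)
    {
        return Normalize(Cross(p1 - p0, p2 - p0));
    }

    // Smoothed normal: interpolated from the vertex normals. For a point near a
    // vertex (say u close to 1, i.e. near the second vertex) this is close to
    // that vertex normal, which is generally NOT perpendicular to the face.
    Vec3 SmoothedNormal(const Vec3& n0, const Vec3& n1, const Vec3& n2, float u, float v)
    {
        return Normalize(n0 * (1.0f - u - v) + n1 * u + n2 * v);
    }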

The only truly elegant solution I can think of is to abandon smoothed normals for some kind of actually smoothed surface. Something like recursive triangle subdivision with smoothing could do the trick, or Bézier patches, etc.

 
pauljan

April 11, 2005, 09:43 AM

Thank you for this additional insight!

However, commercial lightmappers (like Gile[s]) don't seem to suffer from this problem, and I don't think they took the road (ad-hoc subdivision or patches) you describe above. I think now is the time for me to just do a little experimenting; if I find a solution I'll post it here (if anyone is interested, that is).

 
Rui Martins

April 11, 2005, 10:23 AM

Usually when you define a supposedly smooth surface, you define a normal for every vertex instead of a normal for every poly. Usually these meshes are a discretization (triangulation) of a mathematically smooth surface. Of course you can approximate the real (mathematical) surface normal by averaging the triangle normals, but usually the real vertex normal should be given.

Finally, any decent rendering of such a mesh should NOT interpolate the normals linearly! Use cubic interpolation instead.

 
pauljan

April 11, 2005, 02:45 PM

Rui, as it is I don't have vertex normals defined (the editor doesn't support them yet), so I calculate them using the algorithm mentioned above, but whether or not I have vertex normals available does not affect my actual problem, which is: how to do 'smooth' intersection tests with nearby faces to avoid 'false' shadow casting, where 'false' means 'shadows that would not have been cast if the shape had exactly matched the one suggested by the smoothed normals'.

Or at least, that is what I think my problem is :)

The tip about cubic interpolation of my normals is very much appreciated, but what makes you think I am not already doing so?

 
Victor Widell

April 11, 2005, 08:24 PM

Move the intersection point outwards, along the normal of the triangle, until the new shadow ray just barely misses the edges of the shadowed triangle. This will introduce some shadow "stretching" artifacts in extreme cases of low tessellation, but it should not be noticeable in most cases.

 
Rui Martins

April 12, 2005, 05:16 AM

... The tip about cubic interpolation of my normals is very much appreciated, but what makes you think I am not already doing so?


Looking at your second picture, the boundary between light and shadow always seems to be linear, i.e. a straight segment across each triangle.

Also, I don't believe it is feasible (without a lot of strange tricks) to produce a smooth shadow that does NOT follow the geometry but instead follows a virtually smoothed shape. Mainly because I'm assuming you are interpolating along the edges, which is not enough; you would have to interpolate at each pixel, and even then I'm not sure you will get what you want.

An easier solution is to increase the tessellation of the surface.

 
pauljan

April 14, 2005, 09:20 AM

I just found out that Gile[s] also suffers from this issue. It is just that they introduce a nice little gradient instead of my initial crisp normal test (step 1), so the artifacts are mostly hidden. Turning on what they call 'toon rendering' made this very, very clear.

I am still pondering on Victor's answer, and I don't seem to understand it :S Where does 'moving the intersection point until the shadow ray barely misses the edges of the shadowed triangle' get me? What should I do with that intersection point? The longer I think about it, the less I seem to get what you mean there, sorry. I am very sure it is me, so if you could explain this a little bit further it would be very much appreciated!

 
Victor Widell

April 14, 2005, 07:19 PM

Sorry I wasn't clear. I guess my answer would be more understandable in the context of a traditional raytracer.

When you lightmap the surface you get points on the surface, from which you trace shadow rays. The artifact you experience is due to the fact that the triangles cause self-shadowing in a way that does not match the original, perfectly curved surface. The light is blocked by the edge between the current triangle and its neighbours, edges that simply don't exist on a true curved surface. To solve your problem, you can "look around the corner" by moving the self-shadowed points (shadowed by the current triangle) along the normal, until the point no longer has its own triangle in the line of sight.
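Very roughly (the step size, Triangle, IntersectRayTriangle and the Vec3 helpers here are just placeholders):

    #include <vector>

    // Push the sample point out along the triangle normal until the shadow ray
    // towards the light no longer clips the current triangle or its neighbours
    // (the edges that don't exist on the true curved surface).
    Vec3 NudgeShadowOrigin(Vec3 point, const Vec3& triNormal, const Vec3& lightPos,
                           const std::vector<Triangle>& selfAndNeighbours)
    {
        const float step     = 0.01f; // relative to the triangle size
        const int   maxSteps = 32;    // safety cap
        for (int i = 0; i < maxSteps; ++i)
        {
            Ray shadowRay = { point, Normalize(lightPos - point) };
            bool blocked = false;
            for (const Triangle& tri : selfAndNeighbours)
                if (IntersectRayTriangle(shadowRay, tri)) { blocked = true; break; }
            if (!blocked)
                break;                // the ray now just clears the nearby edges
            point = point + triNormal * step;
        }
        return point;
    }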

You could also simply skip checking the current triangle and its neighbours against the shadow ray. A bit more complex, depending on your data structures etc.
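In code it would be something like this (made-up names; the neighbour set is whatever adjacency you can extract from the triangle soup):

    #include <set>

    // Filter applied to every candidate occluder the shadow ray hits.
    bool HitBlocksLight(int hitTriangleId, int currentTriangleId,
                        const std::set<int>& neighbourIds)
    {
        if (hitTriangleId == currentTriangleId)     return false; // the texel's own triangle
        if (neighbourIds.count(hitTriangleId) != 0) return false; // triangles sharing an edge or vertex
        return true;
    }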

 
RAZORUNREAL

April 15, 2005, 02:18 AM

Very sorry if I missed the point entirely (happens frequently), but shouldn't you dot the normal with the direction to the light source, not just check if they face the same way? It won't entirely fix your self-shadowing problem, but what you could do about that is only test rays against back-facing triangles. That should theoretically work perfectly, as long as you don't do anything tricky with your geometry.
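Something like this for the second part (made-up names; by 'back facing' I mean facing away from the light):

    // Only triangles facing away from the light are allowed to block the shadow
    // ray. The triangles near the silhouette that the ray grazes still face the
    // light, so they no longer cast the false self-shadow.
    bool TriangleCanOcclude(const Vec3& hitTriNormal, const Vec3& dirToLight)
    {
        return Dot(hitTriNormal, dirToLight) < 0.0f;
    }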

 
Goz

April 15, 2005, 04:44 AM

I may be totally missing the point but are you not performing this lightmapping per vertex? Is this not your problem??

Games like Quake would divide a given triangle into a number of sub-"patches" that could then be lightmap-tested as you describe. This data was then coded directly into a texture map (more specifically, a LIGHTmap), and that lightmap is then textured over the sphere. Sure... you still get texture aliasing, but from your screenshots you REALLY don't appear to be lightmapping, just doing a per-vertex pre-calc...

 
pauljan

April 15, 2005, 10:27 AM

Victor: Aha, now I get what you mean. But I don't think it is the face itself that is self-shadowing (I already excluded it from the ray test), so I'll see if I can make 'skipping the neighbours' a feasible solution. No, my data structure is not exactly ideal for that :)

Razor: It is Friday afternoon so I am not exactly sharp and focused at the moment, but I can't find anything wrong with what you are saying. I'll try a culling ray-triangle test and see how that works!

Goz: That's just the screenshot; I kept it as simple as possible to clearly display the problem. In this particular test, a 32x32 lightmap gets mapped on every triangle of the sphere. If you want, I can add some shadow-casting objects and multiple light sources (including spotlight cones) to the scene, and enable dynamic subtexture sizes and box filtering on the lightmap, but that wouldn't exactly help in identifying what is wrong :D
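Roughly, the per-triangle bake loop looks like this (the names and the naive barycentric mapping are just illustrative, reusing the IsTexelLit sketch from my first post):

    const int kLightmapSize = 32; // a 32x32 lightmap per triangle in this test

    void BakeTriangle(const Vec3& p0, const Vec3& p1, const Vec3& p2,
                      const Vec3& faceNormal, const Vec3& lightPos,
                      const Octree& scene, float* lightmap /* kLightmapSize^2 */)
    {
        for (int y = 0; y < kLightmapSize; ++y)
            for (int x = 0; x < kLightmapSize; ++x)
            {
                // Map the texel centre to barycentric coordinates on the triangle.
                float u = (x + 0.5f) / kLightmapSize;
                float v = (y + 0.5f) / kLightmapSize;
                if (u + v > 1.0f)
                    continue; // texel falls outside the triangle half of the quad

                Vec3 texelPos = p0 * (1.0f - u - v) + p1 * u + p2 * v;
                lightmap[y * kLightmapSize + x] =
                    IsTexelLit(texelPos, faceNormal, lightPos, scene) ? 1.0f : 0.0f;
            }
    }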

 
Ray

April 16, 2005, 06:20 PM

Well, show some code. You shoot a ray at a triangle and get the point of intersection.
Now, how do you calculate the normal for that point you just intersected?

 
juhnu

April 16, 2005, 11:16 PM

Maybe you could calculate the curvature of the face and compute the exact point of collision (light/surface) using that. It would save you from the self-shadowing problems.

juhani

 
Willem

April 17, 2005, 02:45 AM

juhnu wrote:

"maybe you could calculate the curvature of the face and calculate the exact point of collision (light/surface) using that. [...]"

That wouldn't work, as the problem is not only the position from which you shoot the shadow ray, but also that what you're testing your shadow rays against is still a triangular mesh, not the real smooth surface. If you want to solve the problem completely you would need to generate the real smooth surface from the triangular mesh (subdivision surfaces are good for this), thereby abandoning all the benefits of using triangular meshes in the first place.

RAZORUNREAL's suggestion definitely works in the case of convex objects; however, in a more general setting you will run into the same problems again. The results will, in general, not be correct, because you're using the normals of the smooth surface to do one thing while still testing against a triangular approximation of it.

It's like trying to pretend your bicycle is a motorcycle by putting motorcycle tyres on it; it's asking for trouble ;)

Willem

 
pauljan

April 17, 2005, 02:33 PM

Thanks for all the input on the forum and through e-mails guys, especially you Willem!

I chose not to implement RAZORUNREAL's suggestion because I do need support for non-convex objects (non-closed and whatnot, whatever those wicked users feel like brewing in the triangle-soup pot :P).

However, it turned out my code contained a bug in the calculation of the influence of the angle of incidence on the light intensity (notice the too-crisp edge between diffuse and ambient in the screenshots). Fixing that caused the light to start decreasing from diffuse to ambient a lot sooner (duh), adequately hiding the problem, which is exactly the behaviour I see in programs like Gile[s], so for now I am happy.
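For reference, the usual form of that term is something like this (made-up names, not the actual fixed code):

    #include <algorithm>

    // The texel fades smoothly from ambient to full diffuse with the cosine of
    // the angle of incidence (smoothed normal versus direction to the light).
    float TexelIntensity(const Vec3& smoothNormal, const Vec3& dirToLight,
                         float ambient, float diffuse, bool visible /* result of step 2 */)
    {
        float nDotL = visible ? std::max(0.0f, Dot(smoothNormal, dirToLight)) : 0.0f;
        return ambient + diffuse * nDotL;
    }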

At some point in the future I'll probably want to implement a somewhat more advanced lighting model than what I am currently using, but I'd like to do that in combination with a real raytracer. But do not worry, I promise I won't post any IOTD's of it :D

 
RAZORUNREAL

April 18, 2005, 08:20 AM

OK, if you don't want the geometry restrictions, you could just record whether the ray has intersected a front-facing triangle. Then, if it doesn't intersect a back-facing one, check whether the triangle you are generating the lightmap for is front-facing or back-facing. If it's front-facing, shadow it; otherwise leave it up to the dot product to do it for you. That should work for any old triangle soup.
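In rough C++ (made-up names; front/back facing meaning towards/away from the light, as before):

    // hitFrontFacing:     the shadow ray crossed a triangle facing the light.
    // hitBackFacing:      the shadow ray crossed a triangle facing away from it.
    // receiverFacesLight: whether the triangle being lightmapped faces the light.
    bool IsShadowed(bool hitFrontFacing, bool hitBackFacing, bool receiverFacesLight)
    {
        if (hitBackFacing)
            return true;               // same as my earlier suggestion
        if (hitFrontFacing)
            return receiverFacesLight; // only shadow front-facing receivers; for the
                                       // rest the dot product darkens them anyway
        return false;                  // nothing crossed: lit
    }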

 
This thread contains 19 messages.
 
 