Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
 
Rai

July 02, 1999, 07:57 AM

Homogeneous coordinates vs. R3 space representation

Hi to all 3D coders out there, maybe you can help me with a question?

Backface culling in my little 3D engine works like this:
(1) Test the normals of the original, non-rotated faces against an inversely rotated view vector.
(2) If this dot product is > 0, discard the entire face.
(3) After clipping/shading, the polygon (face) is drawn to the screen (division by z).

So far, so good. The problem? Here it comes.
The (dot product > 0) test only works for cubes (where the faces are parallel to the Cartesian coordinate planes).
If I use a pyramid as the 3D model, I have to adjust the cut-off value ... dot product > 0.5 or something.

I guess the catch here is the division by z, i.e. the perspective projection.
The face normals get bent by this perspective division. Therefore, I need some face-normal
representation which takes this effect into account. Now, I heard that in homogeneous coordinates,
face normals are represented differently from other vectors. How is the perspective divide done in
these coordinates?
Jeff Hill (Tutorial section, Linear Algebra) gave a short summary of this, but I need the basics first.
Maybe someone can help?

Thanks folks & bye,
Rai.


Kurt Miller

July 02, 1999, 01:03 PM



Rai wrote:

>>but I need the basics first.
>>Maybe someone can help?

Alex Chalfin wrote an excellent document on the subject, which you can find here:
Homogeneous Perspective Transform

-kurt

 
Dima Michaelov

July 05, 1999, 10:57 AM


Rai wrote:
>>Homogeneous coordinates vs. R3 space representation
>>
>>Hi to all 3D coders out there, maybe you can help me with a question?
>>
>>Backface culling in my little 3D engine works like this:
>>(1) Test the normals of the original, non-rotated faces against an inversely rotated view vector.
>>(2) If this dot product is > 0, discard the entire face.
>>(3) After clipping/shading, the polygon (face) is drawn to the screen (division by z).
>>
>>

Greetings, Rai

If I understand it right, you are doing backface culling before you even apply the world transformation to your vertices. Your normals are fine, because they don't change when you back-transform the *viewing* vector into the object's coordinate system. I suggest you check that the vectors you are calculating the dot product from are both unit vectors (that is, have a length of 1.0, i.e. normalized). Also check whether the matrix you are applying to the viewing vector is the right one and is built in the right order.
As far as I know, the homogeneous representation has nothing to do with your miscalculation. And normal vectors normally ( ;) ) don't get divided by z.

Hope that helps. Have fun solving. :)

Dima Michaelov ( a.k.a. Dj Cloud )

 
Rai

July 06, 1999, 10:17 AM

>Dima Michaelov wrote:
>If I understand it right, you are doing backface culling before you even apply the world transformation to your vertices. Your normals are fine, because they don't change when you back-transform the *viewing* vector into the object's coordinate system. I suggest you check that the vectors you are calculating the dot product from are both unit vectors (that is, have a length of 1.0, i.e. normalized). Also check whether the matrix you are applying to the viewing vector is the right one and is built in the right order.
>As far as I know, the homogeneous representation has nothing to do with your miscalculation. And normal vectors normally ( ;) ) don't get divided by z.
>
>Hope that helps. Have fun solving. :)
>Dima Michaelov ( a.k.a. Dj Cloud )

Hi Dj Cloud!!

Thanks for your message! But I guess we have a little misunderstanding here, which I would like to clear up.

YES, I do backface culling BEFORE I apply the world transformation. Sounds weird, but the idea behind it is simple. Think about thousands of polygons which you would all have to transform before doing your backface culling. In effect, you throw away at least half of the faces, since they face away from your view point.
Let me give an example. Your face normal would be rotated 10 degrees around X, then 20 degrees around Y and finally 30 degrees around Z. Then you would scale/translate it. Maybe you do all this in one matrix, with 4x4 homogeneous coordinates. Anyway, you finally test the rotated/translated normal against your view vector (dot product), which normally points along the positive z-axis ... a lot of calculation for a face which isn't visible anyway, don't you think? Furthermore, the dot product only checks angles, so the applied translations are wasted.
Let's make that more efficient: test all original normals (which are calculated in object space) against an INVERSELY rotated view vector. If your transformation order for the model is rotX(10), rotY(20), rotZ(30), then rotate your view vector (which points along the positive Z axis) like this: rotZ(-30), rotY(-20), rotX(-10). You get the same result as with the first method, but you save a lot of calculation, since the inverse rotation is applied only once to the view vector, not to all the normals.
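
In rough C, the whole thing might look like this (Vec3, dotp and the rot_* helpers are just illustrative names, and the visibility sign depends on your winding convention, so treat it as a sketch rather than my actual code):

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dotp(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float rad(float deg)       { return deg * 3.14159265f / 180.0f; }

/* elementary rotations; each returns a rotated copy of v */
static Vec3 rot_x(Vec3 v, float d) { float c = cosf(rad(d)), s = sinf(rad(d));
    return (Vec3){ v.x, c*v.y - s*v.z, s*v.y + c*v.z }; }
static Vec3 rot_y(Vec3 v, float d) { float c = cosf(rad(d)), s = sinf(rad(d));
    return (Vec3){ c*v.x + s*v.z, v.y, -s*v.x + c*v.z }; }
static Vec3 rot_z(Vec3 v, float d) { float c = cosf(rad(d)), s = sinf(rad(d));
    return (Vec3){ c*v.x - s*v.y, s*v.x + c*v.y, v.z }; }

/* rotate the +Z view vector by the inverse angles in reverse order,
   then test every original (object-space) normal against it */
void cull_object_space(const Vec3 *normals, int *visible, int num_faces)
{
    Vec3 view = rot_x(rot_y(rot_z((Vec3){ 0, 0, 1 }, -30.0f), -20.0f), -10.0f);
    for (int i = 0; i < num_faces; ++i)
        visible[i] = (dotp(normals[i], view) <= 0.0f);  /* flip if your winding differs */
}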

Anyway, you have now ruled out all polygons which face away from you. The remaining polygons are processed further. And here is where my initially described problem comes in, I think.
Consider a face parallel to the left screen boundary. Its normal would point perfectly along the negative X axis. This polygon remained in the list and is processed further. After the perspective divide, the polygon's far edge is bent towards the center of the screen. Such a polygon is definitely not visible, since you would have to rotate it a while until it was parallel to the left screen boundary again.
Just to say this again, it's just a guess; I don't know more right now. It's definitely not an artifact of wrong matrix calculations or something, since the wireframe model behaves fine.
I could send you a demo, if you like. You should check it out for yourself ...

Anyway, thanks for your comments,
Rai.



 
SLI9000

July 06, 1999, 11:19 AM


>>Thanks for your message! But I guess we have a little misunderstanding here, which I would like to clear up.
>>
>>YES, I do backface culling BEFORE I apply the world transformation. Sounds weird, but the idea behind it is simple. Think about thousands of polygons which you would all have to transform before doing your backface culling. In effect, you throw away at least half of the faces, since they face away from your view point.
>>Let me give an example. Your face normal would be rotated 10 degrees around X, then 20 degrees around Y and finally 30 degrees around Z. Then you would scale/translate it.

Well, here's what I know about this.

OK, I think translating the normal would be a problem. You don't translate normals because even if your mesh is translated, all the faces still face the exact same direction as before.
If you translate the normals too, you distort them.

Also, you are doing culling by taking dot products. This is based on the fact that if you take
the dot product of u and v, you get the length of u projected onto v.
But note that this only works if the vector v is normalized.
So I think you should normalize your inverse view vector before doing the dot products, if it isn't normalized already.
You don't need to normalize the face normals though (I think); that would just be a waste of time.
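
For example, something like this (just a sketch, with Vec3 standing in for whatever vector type you use):

#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* normalize the inverse-rotated view vector once per frame before the
   dot products; the face normals don't need this treatment */
Vec3 normalize(Vec3 v)
{
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    Vec3 out = { v.x / len, v.y / len, v.z / len };
    return out;
}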

Also, you might try another (probably faster) method of culling that doesn't involve the dot product.

First, this method works on the 2D polygon coordinates in screen space, after the z divide.
The polygon has to be convex and the vertices all have to be clockwise or counterclockwise
for it to work.

Here it is:

h0 = polygon.vertices[0];   /* first three vertices, in screen space */
h1 = polygon.vertices[1];
h2 = polygon.vertices[2];
/* z component of the 2D cross product of the edges (h0 - h1) and (h2 - h1);
   its sign gives the winding order of the projected polygon */
hiddentest = (h0.x - h1.x) * (h2.y - h1.y) - (h0.y - h1.y) * (h2.x - h1.x);
if (hiddentest > 0)
    continue;   /* wound backwards on screen: skip this polygon */

You might have to reverse the > in the if, depending on your vertex orientation.
Also, this method works great for triangles, but if you have n-gons with n > 8 or so, it might not always work, because during the conversion to 2D coordinates, areas of the polygon could stop being convex due to small errors.

Hope this actually offered some kind of help :)
OK, thanks.
 
Dj Cloud

July 08, 1999, 06:33 AM



SLI9000 wrote:
>>
>>OK, I think translating the normal would be a problem. You don't translate normals because even if your mesh is translated, all the faces still face the exact same direction as before.
>>If you translate the normals too, you distort them.
>>

Greetings, SLI9000!

That's it! That is the exact problem I had when trying to do backface culling in my first
engine. I ran into it and didn't know what to do until I realized that handling backfaces
correctly *all* the time would require calculating the vector from the view point to one of the
vertices of the polygon in question and taking the dot product of that vector and the normal:

V' = P[1] - ViewPoint
cTheta = V' * N

And *that* is too expensive to do in real-time, because you may also want to normalize the
V' vector. :(

I guess that doing culling in screen space is better and faster. And it ensures that we
do culling only for those polys that are actually in our viewing sight. :) (I need to
put this in my notes for my upcoming engine.)

Thanx for your participation, SLI.

DJ Cloud.

 
tangent

July 14, 1999, 02:10 AM

Dj Cloud wrote:
>>V' = P[1] - ViewPoint
>>cTheta = V' * N
>>
>>And *that* is too expensive to do in real-time, because you may also want to normalize the
>>V' vector. :(

Your method *is* the correct method for backface culling. It is *not* expensive, since it
essentially drops a whole face with just three multiplies. Also, the calculation can
be done in object space *or* in view space, but it is best done in object space, since
there is no need to rotate the vertices if the test can be done without the rotation.
The value you have generated, cTheta, has a special meaning... it is the shortest
distance to the plane that the face lies on - perfect for collision detection routines!
The only thing you should know is that the value you generated is negative for front-facing polys.
For positive values, simply invert the equation:
DistanceToPlane=(CameraOrigin-AnyVertexOnTheFace) * SurfaceNormalOfPlane
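
In C, the whole test is just a subtraction and a dot product per face (a minimal sketch; Vec3 and the function names are made up here):

typedef struct { float x, y, z; } Vec3;

static float dotp(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* positive result: the camera is in front of the face's plane, so the face
   is visible; if the normal is unit length, the result is also the camera's
   distance to that plane, which is handy for collision detection */
float distance_to_plane(Vec3 camera, Vec3 any_vertex, Vec3 normal)
{
    Vec3 v = { camera.x - any_vertex.x,
               camera.y - any_vertex.y,
               camera.z - any_vertex.z };
    return dotp(v, normal);
}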

The layman's explanation of why this works is:
to see the front side of a face, you must be in front of it. This may seem trivial and
a bit of an insult, but I was faced with the *exact* same problem when I started coding.
This was my simple answer to the problem.

Trivia: What the calculation does is project a vertex onto an arbitrarily defined axis
at an arbitrary origin. This is exactly 1/3 of a vertex rotation: the rotation into view
space does this for all three axes!

Hope the added details gave others some insights!

 
This thread contains 7 messages.
 
 