Rai July 06, 1999, 10:17 AM 

>Dima Michaelov wrote:
>If I understand it right, you are doing backface culling before you even apply the world transformation to your vertices. Your normals are OK because they don't change when you back-transform the *viewing* vector into the object's coordinate system. I suggest that you check whether the vectors you are calculating the dot product from are both unit vectors (that is, have a length of 1.0, i.e. normalized). Also check whether the matrix you are applying to the viewing vector is the right one and is built in the right order.
>As far as I know, the homogeneous representation has nothing to do with your miscalculation. And normal vectors normally ( ;) ) don't get divided by z.
>
>Hope that helps. Have fun solving. :)
>Dima Michaelov ( a.k.a. Dj Cloud )
Hi Dj Cloud!!
Thanks for your message! But I think we have a little misunderstanding here, which I would like to clear up.
YES, I do backface culling BEFORE I apply the world transformation. Sounds weird, but the idea behind it is simple. Think of thousands of polygons which you would all have to transform before doing your backface culling. In the end, you throw away at least half of the faces, since they face away from your viewpoint. Let me give an example. Your face normal would be rotated 10 degrees around X, then 20 deg around Y and finally 30 deg around Z; then you would scale/translate it. Maybe you do all this in one 4x4 matrix with homogeneous coords. Anyway, you finally test the rotated/translated normal against your view vector (dot product), which normally points along the positive z-axis ... a lot of calculations for a face which isn't visible anyway, don't you think? Furthermore, the dot product only checks angles, so the applied translations are wasted on it anyway. Let's make that more efficient: test all the original normals (which are calculated in object space) against an INVERSELY rotated view vector. If your forward transformation order is rotX(10), rotY(20), rotZ(30), the inverse rotation is the same rotations in reverse order with negated angles. So rotate your view vector (which points along the positive z-axis) like this: rotZ(-30), rotY(-20), rotX(-10). Now you get the same result as with the first method, but you save a lot of calculations, since the inverse rotation is applied only once, to the view vector, not to all the normals.
Anyway, you have now ruled out all polygons which are facing away from you. The remaining polygons are processed further. And here is the part where my initially described problem comes in, I think. Consider a face parallel to the left screen boundary. Its normal would point perfectly along the negative X axis. This polygon remained in the list and is processed further. After the perspective divide, the polygon's far edge is bent towards the center of the screen. Such a polygon definitely is not visible, since you would have to rotate it a while before it stopped being parallel to the left screen boundary. Just to say it again: this is just a guess, I don't know more right now. It's definitely not an artefact of wrong matrix calculations or something, since the wireframe model behaves fine. I could send you a demo, if you like. You should check it out for yourself ... .
Anyway, thanks for your comments, Rai.
