
I've just started writing a software 3D engine and am doing something wrong when it comes to computing both the depth of a given polygon and whether a polygon is facing the eye.
My eye is set up so that it stores its position, its X, Y, and Z unit orientation vectors, and two angles in radians that also indicate its orientation. Points are transformed into eye space by translating by the negative eye coordinates and then projecting onto each of the eye's x, y, and z unit vectors. The polygon's normal is transformed by projecting onto the x, y, and z vectors as well.
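To make sure we're talking about the same transform, here is a minimal sketch of that eye-space step. The `Vec3`, `to_eye_space`, and `normal_to_eye_space` names are mine, not your actual structs, and I'm assuming `right`/`up`/`fwd` are your eye's x, y, and z unit vectors:

```c
typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Transform a world-space point into eye space: translate by the
   negative eye position, then project onto the eye's basis vectors. */
Vec3 to_eye_space(Vec3 p, Vec3 eye, Vec3 right, Vec3 up, Vec3 fwd) {
    Vec3 t = { p.x - eye.x, p.y - eye.y, p.z - eye.z };
    Vec3 r = { dot(t, right), dot(t, up), dot(t, fwd) };
    return r;
}

/* A normal is a direction, so it must only be rotated, never
   translated: project onto the same basis vectors WITHOUT first
   subtracting the eye position. */
Vec3 normal_to_eye_space(Vec3 n, Vec3 right, Vec3 up, Vec3 fwd) {
    Vec3 r = { dot(n, right), dot(n, up), dot(n, fwd) };
    return r;
}
```

One thing worth double-checking in your code: if the normal is being translated by the negative eye coordinates before the projection, that alone would make faces flip visibility as the eye moves.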
I project a point onto the uv plane with u = x/z * zoom + SCREEN_X and v = y/z * zoom.
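As a sketch, that projection step (with made-up parameter names, assuming `zoom` is the focal scale and `SCREEN_X` recentres u on the screen):

```c
/* Perspective projection: scale by zoom/z, then shift u so that
   eye-space x = 0 lands at the horizontal screen centre. */
void project(double x, double y, double z, double zoom, double screen_x,
             double *u, double *v) {
    *u = x / z * zoom + screen_x;  /* horizontal screen coordinate */
    *v = y / z * zoom;             /* vertical; no centring applied */
}
```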
Now, as far as determining the visibility of a polygon goes, I calculate D as
D = CurPos.x * CurNormal.x + CurPos.y * CurNormal.y + CurPos.z * CurNormal.z;
where CurPos is a point on the plane and CurNormal is the normal, both relative to the eye. This is the dot product of the two vectors, and therefore, if D >= 0, the plane should not be visible, because the component of the normal parallel to the position vector points in the same direction as the position vector. So I have the condition:
if (D >= 0) { visible = FALSE; return; }
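For comparison, here is a self-contained back-face test under the same convention (outward-facing normals, both vectors already in eye space); the function and type names are illustrative:

```c
typedef struct { double x, y, z; } Vec3;

/* Back-face test: with outward normals, a polygon faces away from the
   eye when the vector from the eye to the polygon points the same way
   as the normal, i.e. their dot product is non-negative. */
int is_visible(Vec3 pos, Vec3 normal) {
    double d = pos.x * normal.x + pos.y * normal.y + pos.z * normal.z;
    return d < 0.0;  /* visible only when the normal faces the eye */
}
```

Note that this only holds if every polygon's normal consistently points outward; mixed winding orders will produce exactly the kind of faces-popping-in-and-out behaviour you describe.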
However, this behaves erratically when I run the engine: faces disappear and reappear at odd angles.
I also use D in the calculation of the distance from the eye of a point on the polygon's plane. At (u, v) I calculate the depth as (SCREEN_X is half the width of the screen; v doesn't need correcting):
depth = D / (line->Parent->CurNormal.x * (u - SCREEN_X) / zoom + line->Parent->CurNormal.y * v / zoom + line->Parent->CurNormal.z);
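That formula does follow from the plane equation: with eye-space normal N and plane constant D = N·P, substituting the inverse projection x = (u - SCREEN_X) * z / zoom and y = v * z / zoom into N.x*x + N.y*y + N.z*z = D and solving for z gives your expression. A standalone sketch (names are mine, not your structs):

```c
typedef struct { double x, y, z; } Vec3;

/* Recover eye-space depth z at screen position (u, v) for a plane with
   eye-space normal n and plane constant d = dot(n, point_on_plane).
   Derivation: n.x*x + n.y*y + n.z*z = d, with x = (u - screen_x)*z/zoom
   and y = v*z/zoom, solved for z. */
double depth_at(Vec3 n, double d, double u, double v,
                double screen_x, double zoom) {
    double denom = n.x * (u - screen_x) / zoom + n.y * v / zoom + n.z;
    return d / denom;  /* caller should guard against denom == 0
                          (plane seen edge-on through this pixel) */
}
```

If your `>`s in that line are really missing `-`s only in the post and the code reads `->` with `(u - SCREEN_X)`, the depth math itself looks sound, which points the suspicion back at how the normal gets into eye space.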
Is there something wrong with my theory, or do I just have some coding error somewhere?
