Textures & Vectors & File I/O
Question submitted by (13 February 2000)
I am trying to build a simple research engine using DirectX 7 and have run
into a few snags. I have written file I/O functions to load both .MAP files
and .RAW files, parse them into vertices, and copy them into my data
structures. Several problems arise.
Neither vertex normals nor texture coordinates are saved in these file formats, so they have to be generated on the fly. How do I calculate vertex normals when I load the file?
When I try to apply a texture to the polygons in my object, the texture doesn't show, since there are no UV coordinates. If I fake some coordinates when I import the file, each polygon displays the same section of the texture instead of the texture being spread across the multiple polygons. (And I expected that.) But how can I easily paint the texture onto the polygons as a group instead of individually? I suspect I'll just have to get down and dirty with UV coords. Yuck.
Secondly, when I import .MAP files I notice that for a cube the file has 6 vertices and expects that vertices are shared. I expected to import 36 vertices, 3 per polygon, making 12 polygons for the cube. But that's not what I get, and it totally messes up the .MAP when viewed. How can I fix this?
I seem to recall that .MAP files do not contain vertices, but instead
contain planes that must be clipped to each other to generate a convex
hull. This convex hull is then tessellated to produce triangles. You'll
need to check the Quake tools and docs on the web for more info about them,
but I think what happens is you start with a really huge cube and then clip
the cube to each plane. This will produce additional triangles and reduce
the size of the cube and change its shape at each step. The final shape
will be a bunch of triangles that form a convex hull. When working with
very large numbers you have to be careful you don't run into resolution
problems, but other than that it shouldn't be that hard. You can check out
the Sutherland-Hodgman clipping algorithm if you don't already have a
triangle/plane clipping routine. It's designed for clipping triangles to
the view frustum, but it contains the needed triangle/plane clipping.
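Here's a minimal sketch of that polygon/plane clipping step in the Sutherland-Hodgman style. The `Vec3` struct, the function names, and the convention that the positive side of the plane is "kept" are my own assumptions for illustration, not part of the original answer or any Quake tool:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Signed distance from point p to the plane (n . p = d);
// positive means "inside" (the side we keep).
static double planeDist(const Vec3& p, const Vec3& n, double d) {
    return n.x * p.x + n.y * p.y + n.z * p.z - d;
}

// Linear interpolation between two points, used to find the
// exact spot where an edge crosses the plane.
static Vec3 lerp(const Vec3& a, const Vec3& b, double t) {
    return { a.x + t * (b.x - a.x),
             a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z) };
}

// Clip one convex polygon against one plane, keeping the portion
// on the positive side. Walk each edge: keep inside vertices, and
// emit an intersection point whenever the edge crosses the plane.
std::vector<Vec3> clipPolygonToPlane(const std::vector<Vec3>& poly,
                                     const Vec3& n, double d) {
    std::vector<Vec3> out;
    for (size_t i = 0; i < poly.size(); ++i) {
        const Vec3& cur = poly[i];
        const Vec3& nxt = poly[(i + 1) % poly.size()];
        double dc = planeDist(cur, n, d);
        double dn = planeDist(nxt, n, d);
        if (dc >= 0.0) out.push_back(cur);      // current vertex is inside
        if ((dc >= 0.0) != (dn >= 0.0)) {       // edge crosses the plane
            double t = dc / (dc - dn);
            out.push_back(lerp(cur, nxt, t));
        }
    }
    return out;
}
```

To build the brush you would start with the huge cube mentioned above and run every brush plane through this routine in turn; what remains is the convex hull, ready to be tessellated.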
To generate the normal for a triangle you take the three vertices and compute two vectors using subtraction.
Vector1 = Vertex0 - Vertex1;
Vector2 = Vertex2 - Vertex1;
You then take the cross product of those vectors and normalize the result. The order of the operands in the cross product determines whether the normal points "in" or "out". That gives you the normal for the triangle, but not for the vertex. Generating a vertex normal involves combining the normals of the triangles that share the vertex "correctly". The simplest method is to just average them all, but that will give something like a box the appearance of rounded corners. A better method is to take the dot product of the triangle normals (which gives you the cosine of the angle between them) and only average them together if the result is greater than some threshold. This preserves hard edges while smoothing the edges that should be smooth.
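The recipe above can be sketched as follows; the `Vec3` type, the helper names, and the single-reference-normal form of the threshold test are illustrative assumptions (a real mesh pass would group face normals per shared vertex):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y,
             a.z*b.x - a.x*b.z,
             a.x*b.y - a.y*b.x };
}
Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// Face normal exactly as described: two edge vectors by subtraction,
// then a cross product, then normalize. Swapping the operands of
// cross() flips the normal's direction.
Vec3 faceNormal(const Vec3& v0, const Vec3& v1, const Vec3& v2) {
    Vec3 e1 = sub(v0, v1);
    Vec3 e2 = sub(v2, v1);
    return normalize(cross(e1, e2));
}

// Vertex normal with crease preservation: average only the face
// normals whose angle to a reference normal is small enough, i.e.
// whose dot product exceeds the cosine threshold.
Vec3 vertexNormal(const std::vector<Vec3>& faceNormals,
                  const Vec3& ref, double cosThreshold) {
    Vec3 sum = { 0, 0, 0 };
    for (const Vec3& n : faceNormals) {
        if (dot(n, ref) > cosThreshold) {
            sum.x += n.x; sum.y += n.y; sum.z += n.z;
        }
    }
    return normalize(sum);
}
```

With `cosThreshold` near -1 every face contributes (the "rounded box" look); raising it toward 1 keeps hard edges hard.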
Doing this for planar polygons works the same way; you just pick three vertices to work with. Non-planar polygons are a huge mess you shouldn't be working with anyway :)
As far as UV coordinates go, there are three simple automatic mappings: planar, spherical, and cylindrical. They are covered in just about every graphics text out there (Computer Graphics: Principles and Practice, Advanced Animation and Rendering Techniques, etc). These coordinates can be generated for you by OpenGL using glTexGen(), and I think you can get them back from the API by using the feedback buffer. You can check out the Mesa source code for examples as well.
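As a sketch, all three mappings can also be computed directly when loading the model, which avoids the OpenGL round trip entirely (useful under DirectX). The axis choices, the function names, and the normalization ranges below are assumptions for illustration:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };
struct UV   { double u, v; };

constexpr double kPi = 3.14159265358979323846;

// Planar mapping: project onto the XY plane and scale into [0,1]
// using the object's bounding box (mn..mx).
UV planarMap(const Vec3& p, const Vec3& mn, const Vec3& mx) {
    return { (p.x - mn.x) / (mx.x - mn.x),
             (p.y - mn.y) / (mx.y - mn.y) };
}

// Cylindrical mapping: angle around the Y axis gives u,
// height along Y (scaled into [0,1]) gives v.
UV cylindricalMap(const Vec3& p, double minY, double maxY) {
    double u = (std::atan2(p.z, p.x) + kPi) / (2.0 * kPi);
    double v = (p.y - minY) / (maxY - minY);
    return { u, v };
}

// Spherical mapping: longitude gives u, latitude gives v,
// measured from the object's origin.
UV sphericalMap(const Vec3& p) {
    double r = std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z);
    double u = (std::atan2(p.z, p.x) + kPi) / (2.0 * kPi);
    double v = std::acos(p.y / r) / kPi;
    return { u, v };
}
```

For the cube-from-a-.MAP case, a planar mapping per face (projecting along each face's dominant normal axis) is what spreads one texture across a group of coplanar polygons instead of repeating it per triangle.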
Response provided by Tom Hubina
This article was originally an entry in flipCode's Fountain of Knowledge, an open Question and Answer column that no longer exists.