
Flexible structures for meshes
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.

May 01, 2005, 11:45 AM

Hello, everyone. My name is Adel Amro.
To design a mesh data structure (or class) that would be useful for all kinds of 3D programs (raytracing, Direct3D, OpenGL... etc.), we need to consider the following:

1) For Direct3D and OpenGL, you will have to keep two copies of each and every mesh: one for rendering (normally kept in video or AGP memory, both of which are out of the CPU's reading reach), and one in system memory for geometry analysis (collision detection, ray intersection, visibility determination, etc.).

2) There are two types of meshes:
A) Static meshes, whose vertex positions remain the same relative to each other throughout the lifetime of the mesh. Examples are walls and terrain (assuming there are no tanks!!).
B) Dynamic meshes. These include skinned characters, water surfaces, cloth, etc. In Direct3D and OpenGL they are handled quite differently from static meshes, so the data structures should be aware of that distinction.

3) Geometry analysis normally doesn't require access to such things as texture coordinates, vertex colors, etc.

4) Not all meshes have the same components: for some objects we need texture coordinates but not vertex colors, and for others it's the opposite. It would be nice if we could use the same structure for both kinds of mesh.

5) For hardware rendering (D3D & OGL), the faces should be ordered by texture to reduce texture swapping in and out of video memory. But for geometry analysis, faces should be ordered spatially.

There are other things to consider, of course, but these are what I have in mind. The best design I could come up with (which I'm still not very convinced by) is like this:

  class CTriMesh  // Let's all just use triangulated meshes to make life easier.
  {
  public:  // These shouldn't be public, but hey, we're just talking here!
      CVector3*  m_pVB;  // Vertex buffer.
      USHORT*    m_pIB;  // Index buffer.
      CVector3*  m_pNB;  // Normal buffer.
      DWORD*     m_pDB;  // Diffuse buffer.
      CVector2*  m_pTB1; // Texture coordinate set 1.
      CVector2*  m_pTB2; // Texture coordinate set 2.
      // ... etc.
  };

This should work for all system-memory meshes. If, for example, m_pNB is NULL, then that mesh doesn't contain normal data. Now the question is: what is the best data structure for a D3D (or OGL) mesh (both static and dynamic)? Let's hear your ideas, people.
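To make the "geometry analysis only needs positions and indices" point concrete, here is a minimal sketch of an analysis routine over a CTriMesh-style layout. The struct below is a hypothetical stand-in for the class above (field names follow the post; `m_VertexCount`/`m_IndexCount` are assumed additions), not the poster's actual code:

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical stand-ins for the thread's CVector3 / CTriMesh.
struct CVector3 { float x, y, z; };

struct CTriMesh {
    CVector3*       m_pVB;  // Vertex positions (always present).
    unsigned short* m_pIB;  // Triangle indices (always present).
    CVector3*       m_pNB;  // Normals; NULL means "no normal data".
    std::size_t     m_VertexCount;
    std::size_t     m_IndexCount;
};

// Geometry analysis touches only positions and indices; it never looks
// at normals, colors, or texture coordinates, so NULL optional buffers
// are simply never dereferenced.
void ComputeBounds(const CTriMesh& mesh, CVector3& mn, CVector3& mx) {
    mn = mx = mesh.m_pVB[0];
    for (std::size_t i = 1; i < mesh.m_VertexCount; ++i) {
        const CVector3& v = mesh.m_pVB[i];
        mn.x = std::min(mn.x, v.x); mx.x = std::max(mx.x, v.x);
        mn.y = std::min(mn.y, v.y); mx.y = std::max(mx.y, v.y);
        mn.z = std::min(mn.z, v.z); mx.z = std::max(mx.z, v.z);
    }
}
```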


May 02, 2005, 09:16 AM

interleaved is good.

struct vertex {
    float x, y, z;
};

struct TexturedVertex : public vertex {
    float s, t;
};

struct NormalTexturedVertex : public TexturedVertex {
    float nx, ny, nz;
};

struct render_object {
    int materialID;
    int mDataStride;
    vertex* mData;
};

This is basically the way I do it and it works great: a thin abstraction above my render system.
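The nice property of this layout is that a consumer can walk any of the vertex types through the base pointer plus the stride. A minimal sketch (structs as in the post, with `mCount` added as an assumption; this relies on the base subobject being laid out first, which compilers do in practice for this kind of hierarchy):

```cpp
#include <cstddef>
#include <cstdint>

struct vertex { float x, y, z; };
struct TexturedVertex : vertex { float s, t; };

struct render_object {
    int     materialID;
    int     mDataStride;  // sizeof the concrete vertex type
    vertex* mData;        // first vertex, whatever its real type
    int     mCount;       // assumed addition for this sketch
};

// Sum the x coordinates without knowing the concrete vertex type:
// step by mDataStride and read only the base (position) part.
float SumX(const render_object& ro) {
    float sum = 0.0f;
    const std::uint8_t* p = reinterpret_cast<const std::uint8_t*>(ro.mData);
    for (int i = 0; i < ro.mCount; ++i)
        sum += reinterpret_cast<const vertex*>(p + i * ro.mDataStride)->x;
    return sum;
}
```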


May 02, 2005, 05:56 PM

Hmmm, the structure you have proposed really isn't very flexible...every time you want a new bit of data, you have to modify the class.

I/you/we could come up with some kind of crazy flexible format system, maybe using some nice OO code, but it's really not necessary...

Here's an idea: for static data, you don't need to access the vertices at all. Just load a big block of memory directly into a vertex buffer. Use some kind of code value to initialize FVF or vertex declaration (or OGL equivalent) in a data-driven way.
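One way to picture the data-driven part: the asset file stores a component bitmask, and the loader derives the stride (and, in a fuller version, the FVF or vertex declaration) from it, so no C++ type ever changes per format. A sketch with illustrative flag names and sizes (these are not D3D's actual FVF values):

```cpp
// Hypothetical component flags stored alongside the mesh data on disk.
enum VertexComponents {
    VC_POSITION = 1 << 0,  // 3 floats
    VC_NORMAL   = 1 << 1,  // 3 floats
    VC_DIFFUSE  = 1 << 2,  // one 32-bit color
    VC_TEX0     = 1 << 3,  // 2 floats
};

// Derive the vertex stride from the mask alone; the same mask would also
// drive the FVF / vertex declaration (or the OGL equivalent) at load time.
unsigned ComputeStride(unsigned mask) {
    unsigned stride = 0;
    if (mask & VC_POSITION) stride += 3 * sizeof(float);
    if (mask & VC_NORMAL)   stride += 3 * sizeof(float);
    if (mask & VC_DIFFUSE)  stride += 4;
    if (mask & VC_TEX0)     stride += 2 * sizeof(float);
    return stride;
}
```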

IMO your collision data shouldn't use the same data as your rendering data. The needs of the two systems are totally different. Most collision shouldn't even use real polygonal data with, like, vertices and such - in most situations you can use various types of geometric primitives like boxes, spheres, cylinders, convex hulls....
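A taste of what primitive-based collision looks like: a sphere-vs-AABB test is a few lines and never touches a vertex buffer. A minimal sketch with hypothetical struct names:

```cpp
#include <algorithm>

// Hypothetical collision primitives; no mesh data involved.
struct Sphere { float cx, cy, cz, r; };
struct AABB   { float minx, miny, minz, maxx, maxy, maxz; };

// Clamp the sphere center to the box, then compare the squared distance
// from the center to that closest point against the squared radius.
bool SphereVsAABB(const Sphere& s, const AABB& b) {
    float x = std::max(b.minx, std::min(s.cx, b.maxx));
    float y = std::max(b.miny, std::min(s.cy, b.maxy));
    float z = std::max(b.minz, std::min(s.cz, b.maxz));
    float dx = x - s.cx, dy = y - s.cy, dz = z - s.cz;
    return dx*dx + dy*dy + dz*dz <= s.r * s.r;
}
```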

Most applications of dynamic data can be done in a vertex shader - for instance, skinning and water animation, which you mentioned, are easily and effectively done in vertex shaders. So again, your CPU never needs to know or care about the vertex format.
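For reference, the per-vertex math a skinning vertex shader performs is just a weighted blend through a bone-matrix palette. A CPU-side sketch of that math (plain 3x4 matrices, two bones per vertex; all names are hypothetical, and a real shader would read the palette from constant registers):

```cpp
// Hypothetical 3x4 bone matrix: rotation + translation, row-major.
struct Mat34 { float m[3][4]; };

// Hypothetical skinned vertex: position plus two bone indices/weights.
struct SkinnedVert {
    float x, y, z;
    int   bone[2];
    float weight[2];  // assumed to sum to 1
};

// Blend the vertex position through the palette -- exactly what the
// vertex shader does per vertex, written here for clarity.
void SkinVertex(const SkinnedVert& v, const Mat34* palette, float out[3]) {
    for (int r = 0; r < 3; ++r) out[r] = 0.0f;
    for (int i = 0; i < 2; ++i) {
        const Mat34& M = palette[v.bone[i]];
        for (int r = 0; r < 3; ++r) {
            float t = M.m[r][0]*v.x + M.m[r][1]*v.y + M.m[r][2]*v.z + M.m[r][3];
            out[r] += v.weight[i] * t;
        }
    }
}
```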


May 02, 2005, 06:59 PM

I think you're going in the wrong direction.
IMHO there can be no such thing as a general mesh that is suitable for different 3D APIs, graphics AND collision detection, AND processing.

In my opinion it's best to make a generic container for all the data you need,
do processing on that generic container,
and transform the result to whatever you need.

A MeshContainer would include positions, normals, uv-mapping, colors, whatever...
Process that in whatever way you like (gOptimizePositions(mMeshContainer); gOptimizeNormals(mMeshContainer); etc.)
But when you need to render it in D3D, create a specific mesh from that container (i.e. mD3DMesh = gCreateD3DMesh(mMeshContainer); delete mMeshContainer; )

Same for OpenGL, or something that suits your raytracer or collision-detection-lib.
If you try to do it all in one class you'll just end up with a HUGE pile of code and a lot of if-statements :)

Keep generic things generic,
keep specific implementations... uhhh... well... specific :)
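A minimal sketch of what a gCreateD3DMesh-style converter does with such a container: interleave the generic, separate arrays into the layout the target API wants, then throw the container away. The container layout and output format here are illustrative assumptions, not anyone's actual code:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical generic container: one separate array per component.
struct MeshContainer {
    std::vector<float> positions;  // x,y,z per vertex
    std::vector<float> uvs;        // s,t per vertex (empty = no UVs)
};

// Interleave into x,y,z,s,t per vertex -- the kind of flat buffer you
// would then upload to D3D or OpenGL and never touch from the CPU again.
std::vector<float> InterleaveForRenderer(const MeshContainer& mc) {
    std::size_t n = mc.positions.size() / 3;
    std::vector<float> out;
    out.reserve(n * 5);
    for (std::size_t i = 0; i < n; ++i) {
        out.push_back(mc.positions[3*i + 0]);
        out.push_back(mc.positions[3*i + 1]);
        out.push_back(mc.positions[3*i + 2]);
        out.push_back(mc.uvs.empty() ? 0.0f : mc.uvs[2*i + 0]);
        out.push_back(mc.uvs.empty() ? 0.0f : mc.uvs[2*i + 1]);
    }
    return out;
}
```

A converter for the raytracer or collision library would read the same container but emit a completely different structure, which is the whole point of keeping the container generic.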


May 03, 2005, 05:15 PM

>the structure you have proposed really isn't very flexible...every time you want a new bit of data, you have to modify the class.

Ok, what I meant is for this CTriMesh class to be a base class to be derived from. As for being able to do dynamic things like skinning and cloth in vertex shaders, doesn't that limit your audience to those whose cards support vertex shaders? Besides, what if we wanted to export our skinned character to our fancy ray tracer?

>In my opinion it's best to make a generic container for all the data you need,
>do processing on that generic container,
>and transform the result to whatever you need.

That is exactly what I mean. A very useful feature to facilitate this (which apparently made its debut in D3D 8) is multiple vertex streams. Check this out:
class CD3DMesh
{
public:
    HRESULT Create( LPDIRECT3DDEVICE9 pDevice, CTriMesh* pTriMesh )
    {
        // D3D arguments abbreviated here for readability.
        pDevice->CreateVertexBuffer( pTriMesh->GetVertexCount() * sizeof( CVector3 ), &m_pVB );
        // Then copy vertex positions from pTriMesh into the new vertex buffer.
        pDevice->CreateIndexBuffer( pTriMesh->GetIndexCount() * sizeof( USHORT ) );
        if( pTriMesh->GetDB() )
            pDevice->CreateVertexBuffer( vertexCount * sizeof( DWORD ), &m_pDB );
        // ... same for all components. Then dynamically create the vertex
        // declaration.
    }

    LPDIRECT3DVERTEXBUFFER9 m_pVB;  // Vertex positions.
    LPDIRECT3DVERTEXBUFFER9 m_pTB1; // Texture coords 1.
    LPDIRECT3DVERTEXBUFFER9 m_pDB;  // Diffuse buffer.
    // ... one buffer for each component in CTriMesh.

    CTriMesh* m_pTriMesh;
    CTriMesh* m_pCollisionMesh; // A simpler version for collision detection.
};

This frees us from the constraints of FVF codes, and it works perfectly well for static meshes. For dynamic meshes, we can manipulate the CTriMesh data directly and then inform the CD3DMesh object built on top of it of the update; at render time, the latter recopies the data from its m_pTriMesh member into a dynamic buffer and then renders.

When it comes to the ray tracer, I don't see why we couldn't use CTriMesh directly. In this way, CTriMesh should work for skinned characters, level data, ray tracing, and everything involving 3D meshes. So what do you think?
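To back up the ray tracer claim: ray/triangle intersection needs nothing but three positions per triangle, so a CTriMesh's m_pVB/m_pIB feed it directly. A minimal sketch of the standard Moller-Trumbore test (Vec3 and the helpers are stand-ins, not the thread's classes):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true on a hit and writes the ray parameter t; positions would
// come straight from a CTriMesh-style vertex/index pair.
bool RayTri(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t) {
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;  // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > 0.0f;
}
```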

This thread contains 5 messages.