Dirtypunk's Column - Issue 04 - View Independent Progressive Meshes
by Conor "DirtyPunk" Stokes (08 August 2000)



Welcome Back


Well, finally the time has come again when I can write this column. That is, the time when I have gone beyond my meagre existential plane to create something worthwhile to share with the loving flippers of code out there. To the rest of you I say "you suck", because you aren't going to be reading a flipcode column anyway, so I can say what I like about you.

Anyway, continuing on: the topic for this week is view independent progressive meshes. I've been meaning to write this for a while, but something important came up whenever I meant to. Now that it's the second week of uni and everything has settled down, I can write my column again, thanks to having, well, excess wasted time before a lecture.


What In The Name Of Yo Momma's Hairy Bunions Are View Independent Progressive Meshes?


Progressive meshing is simply a form of LOD (level of detail) performed on triangle meshes. Progressive meshes were introduced by Hugues Hoppe, who has published various papers on them which are available around the net (mainly at Microsoft Research).

A progressive mesh is made up of a series of edge collapses. Each edge collapse takes an edge and merges the points at its ends into one. This kills the edge, and the triangles (usually 2) which share it. In VIPM, the edges to collapse are chosen by an error metric that does not depend on the view. (The actual number of collapses you perform may still be view dependent.)
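To make the operation concrete, here is a naive sketch of a single edge collapse on a plain triangle list. This is only an illustration of what a collapse does; the efficient pre-computed run-time scheme comes later, and the function name is my own:

```cpp
#include <array>
#include <vector>

// Collapse vertex 'from' into vertex 'to' on a plain triangle list, dropping
// any triangle that becomes degenerate (i.e. the triangles that shared the
// collapsed edge). A naive illustration only.
std::vector<std::array<unsigned, 3>>
CollapseEdge(std::vector<std::array<unsigned, 3>> tris, unsigned from, unsigned to)
{
    std::vector<std::array<unsigned, 3>> out;
    for (auto t : tris) {
        for (auto& v : t)
            if (v == from) v = to;                 // merge the two endpoints
        if (t[0] != t[1] && t[1] != t[2] && t[0] != t[2])
            out.push_back(t);                      // keep non-degenerate triangles
    }
    return out;
}
```

Collapsing edge (1,2) on a small fan kills the two triangles that share the edge and rewrites the survivor's index.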

A VIPM has many advantages. Firstly, VIPM is reversible: you can reverse the collapse on any edge that has been collapsed. This means that instead of performing all the collapses you need from scratch every frame, you can collapse to decrease your LOD, and reverse collapse to increase it.

This brings another advantage to light: you can greatly decrease the time taken to perform the LOD, because the LOD only changes a rather small amount from frame to frame. In other words, it takes advantage of frame-to-frame coherency.

Two more related advantages are these. You don't need to change any of the vertices, or the vertex order in your vertex arrays, at run-time (unless you morph); only the indices change. This makes progressive meshes great for T&L, because the edge collapse order, and the collapses themselves, are totally pre-calculated. Also, you can easily apply weighting tools with which the artist can add precedence to the edges/points that are considered important to the aesthetics of the model.

Progressive meshing has a disadvantage in that it takes a little extra storage (for the edge collapses), but this is usually less than storing several smaller versions of the same model for LOD. How much extra data is needed depends on such lovely things as the size of your indices.


What Do I Need To Do In The Pre-Processing Stage?


Generally, the pre-process is fairly simple. The first thing you need is some sort of connectivity information. Probably the best structure you can construct for the task is a winged edge structure, which enables you to find which triangles correspond to which edges. Once you have this information, there are two things to do. The first is to compute an error metric for every edge, in both directions. Most people only calculate the error metric of an edge in one direction, but I feel that is something to avoid: determining which point collapses into the other on the edge can be good both for storage and for aesthetics.
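As a rough idea, a winged-edge-style record might look something like this. The field names and layout are my own, and I'm assuming a closed mesh where every edge borders exactly two triangles:

```cpp
// A minimal winged-edge-style record for connectivity queries, assuming a
// closed mesh where every edge borders exactly two triangles. Names and
// layout here are illustrative only.
struct WingedEdge {
    unsigned v0, v1;        // the edge's two endpoint vertices
    unsigned tri0, tri1;    // the two triangles sharing the edge
    unsigned next0, prev0;  // neighbouring edges winging around tri0
    unsigned next1, prev1;  // neighbouring edges winging around tri1
};

// With this record, "which triangles correspond to which edge" is a direct
// lookup; so is walking from one endpoint of an edge to the other.
inline unsigned OtherVertex(const WingedEdge& e, unsigned v) {
    return (v == e.v0) ? e.v1 : e.v0;
}
```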

Which error metric is the right one? Well, there are many different ones. I tend to use a fairly simple one (which I will explain in a minute). Many people use more complex metrics which try to preserve the volume of the model, and so on.

The metric I use takes four things into account:

	1) Collapsing edges shared by triangle(s) of a large surface area.
	2) Collapsing edges with a sharp angle between the two triangles being collapsed.
	3) Collapsing edges where there is a sharp angle between the edge and the other edges connected to it.
	4) The length of the edge.

The first three are things to avoid; the fourth weights the result so that longer edges cost more.

To address the first point, the metric takes the summed area of the 2 connected triangles into account.

To address the second, it uses the dot product between the normalised centroid vector of one triangle and the normal of the other.

For the third, we take the other edges connected to the point potentially being removed, and sum the dot products between the normalised vectors of those connected edges and the edge we are testing.

The fourth part is the simplest: it is just the length of the edge.

This error metric seems to work fairly well, but it is pretty much my own invention. Anyway, putting the pieces together:

	A1   = area of triangle one
	A2   = area of triangle two
	X    = centroid vector of triangle one
	N    = normal of triangle two
	E    = the edge's vector
	C    = vectors of the edges connected to the point potentially removed
	U(x) = function which normalises vector x
	Σ_C  = sum over all C

	metric = ((A1 + A2) * (N . U(X)) + Σ_C (1 - N . U(C))) * (|E| + 1)
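A literal transcription of that formula into code might look like the following. The vector helpers and all names are mine; the metric itself is exactly the expression above:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(Vec3 a)      { return std::sqrt(dot(a, a)); }
static Vec3  normalise(Vec3 a)   { float l = length(a); return {a.x / l, a.y / l, a.z / l}; }

// Edge-collapse error metric, transcribed from the formula in the text.
// a1, a2    : areas of the two triangles sharing the edge (A1, A2)
// centroid1 : centroid vector of triangle one (X)
// normal2   : normal of triangle two (N)
// edgeVec   : the edge's vector (E)
// connected : vectors of the edges connected to the point being removed (C)
float EdgeMetric(float a1, float a2, Vec3 centroid1, Vec3 normal2,
                 Vec3 edgeVec, const std::vector<Vec3>& connected)
{
    float sum = 0.0f;
    for (const Vec3& c : connected)
        sum += 1.0f - dot(normal2, normalise(c));          // Σ_C (1 - N . U(C))
    return ((a1 + a2) * dot(normal2, normalise(centroid1)) + sum)
           * (length(edgeVec) + 1.0f);                     // ... * (|E| + 1)
}
```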


The final stage is to sort the error metrics, take the smallest, and collapse the corresponding edge (saving the collapse structure, of course). Then recompute the error metrics of the other edges changed by the collapse, and re-sort the metrics (this only has to be a partial sort). Remember, the connectivity data you are using to test your collapses must be modified to reflect each collapse, but this data is only temporary.
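The sort-then-partially-re-sort bookkeeping can be handled with an ordered set used as a priority queue that supports removal, so only the affected edges are touched after each collapse. A sketch under my own naming, with edges as plain ints:

```cpp
#include <set>
#include <utility>

// Ordered metric queue: smallest error metric first. std::set lets us erase
// a stale (metric, edge) entry and re-insert the recomputed one, which is the
// "partial sort" from the text. Edge identities are plain ints in this sketch.
struct MetricQueue {
    std::set<std::pair<float, int>> byMetric;  // (metric, edge), smallest first

    void Insert(int edge, float metric) { byMetric.insert({metric, edge}); }

    // After a collapse, refresh the metrics of the affected edges only.
    void Update(int edge, float oldMetric, float newMetric) {
        byMetric.erase({oldMetric, edge});
        byMetric.insert({newMetric, edge});
    }

    int PopCheapest() {                        // next edge to collapse
        int edge = byMetric.begin()->second;
        byMetric.erase(byMetric.begin());
        return edge;
    }
};
```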


What You Need To Save Per Collapse During Pre Computation


The data structure you save with each collapse should contain: pointers to the 2 triangles which will collapse, pointers to the triangles using the point removed by the collapse, and pointers to both points on the edge.

Typically this data is stored in the winged edge structure during pre-processing. It is important to note that the winged edge structures themselves are modified with each collapse, to keep their connectivity information up to date.

These collapses are saved in the order they happen (a stack-style structure can be good for this). This is because all the vertices and triangles we have are going to have to be reordered for efficient use at run time.

Vertices and triangles are ordered in the way they are removed (this makes things much quicker at run time). The ideal way to do this is to first order the vertex array (which can be saved to disk in that order), then remap the new index of each vertex; each vertex carries its own index id, reflecting the order it was sorted into. Triangles are saved as 3 vertex indices in an index array (which can also be saved to disk).
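The reorder-and-remap step might be sketched like this, templated over the vertex type; names are mine:

```cpp
#include <vector>

// Reorder vertices into removal order and remap the index array to match.
// removalOrder[k] is the OLD index of the k-th vertex to be removed.
// A sketch only; names and the exact calling convention are my own.
template <typename Vertex>
void ReorderForVipm(std::vector<Vertex>& verts,
                    std::vector<unsigned>& indices,
                    const std::vector<unsigned>& removalOrder)
{
    std::vector<Vertex>   sorted(verts.size());
    std::vector<unsigned> remap(verts.size());
    for (unsigned newIdx = 0; newIdx < removalOrder.size(); ++newIdx) {
        sorted[newIdx] = verts[removalOrder[newIdx]];  // vertex in removal order
        remap[removalOrder[newIdx]] = newIdx;          // old index -> new index
    }
    verts.swap(sorted);
    for (unsigned& i : indices)
        i = remap[i];                                  // fix up the triangle list
}
```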

Finally, we must store the edge collapses themselves. This is the tricky part of the operation. The first part of an edge collapse is the index of the point which remains after the collapse. The second part is a count of how many triangles were modified, and the third is a set of indexes into the index array (these are the values that need to be changed by the collapse operation). How to gather the third part is even more confusing to put into words than the last couple of bits of this document, but it is relatively easy to do, so I'll leave it up to the reader.
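In struct form, the stored collapse might look like this (field names are mine, the three parts are those described above):

```cpp
#include <vector>

// One pre-computed edge collapse, with the three parts described in the text.
// Field names are my own.
struct EdgeCollapse {
    unsigned remainingVertex;          // index of the point left after the collapse
    unsigned trianglesModified;        // how many triangles the collapse touched
    std::vector<unsigned> indexSlots;  // positions in the index array whose value
                                       // must be rewritten when collapsing
};
```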


What Do I Do At Run-Time Already?


Well, the important thing to remember is that you get rid of the triangles and the vertices in the same order they are stored in the arrays, so we must maintain counters. Each edge collapse increments the start of the vertex array by one, and the start of the index array by 6 (two triangles of 3 indices each). When de-collapsing edges we must decrement instead.

The collapse itself does this: take your collapse structure, and for each index-array modification it lists, change the value to the index of the vertex that wasn't destroyed with the edge (the first part of the edge collapse structure).

To reverse the collapse, we simply change the values indicated by the collapse structure back to the newly decremented start of the vertex array (which corresponds to the point destroyed in the collapse).
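Putting the run-time scheme together, here is a sketch following the counters just described. The structure and all names are mine; the collapse record mirrors the stored structure from the pre-process:

```cpp
#include <vector>

// Run-time VIPM state: a collapse-ordered index array plus the two counters
// from the text. Names are my own.
struct VipmState {
    std::vector<unsigned> indices;  // index array, collapse-ordered
    unsigned vertStart = 0;         // first live vertex
    unsigned triStart  = 0;         // first live index (6 per collapse: 2 tris)
};

struct CollapseRecord {
    unsigned remainingVertex;          // survives the collapse
    std::vector<unsigned> indexSlots;  // slots that referenced the dead vertex
};

// Decrease detail by one collapse.
void DoCollapse(VipmState& s, const CollapseRecord& c) {
    for (unsigned slot : c.indexSlots)
        s.indices[slot] = c.remainingVertex;  // redirect to the surviving vertex
    s.vertStart += 1;                         // one vertex removed
    s.triStart  += 6;                         // two triangles (6 indices) removed
}

// Increase detail by undoing the collapse.
void UndoCollapse(VipmState& s, const CollapseRecord& c) {
    s.vertStart -= 1;
    s.triStart  -= 6;
    for (unsigned slot : c.indexSlots)
        s.indices[slot] = s.vertStart;        // the vertex just restored
}
```

Note that because vertices are stored in removal order, the vertex restored by an undo is always the one at the newly decremented start of the vertex array, so the record never needs to store the destroyed vertex's index.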


This Is All Very Complicated, Confusing And You Have Explained It Badly!


Read it a couple of times and try to visualise it. Describing the practical implementation of VIPM is rather hard, but this article should hopefully give you the idea.

If I get enough people writing to me saying "I didn't understand" I will knock up some sample code with comments which should sway the tide.

But here, for the purpose of aiding understanding, are some links to some nice aids:

Hugues Hoppe's webpage (the inventor of progressive meshes):
http://www.research.microsoft.com/users/hhoppe/
Lots of published articles on progressive meshing/LoD in general.

Charles Bloom's site has a VIPM article:
http://208.55.130.3/3d/
Nice resource for people wanting to learn intermediate 3d graphics knowledge.

Tom Forsyth's Download Site:
http://www.muckyfoot.com/downloads/tom.shtml
Tom Forsyth is a bit of a visionary when it comes to LoD, and the slides provided on this site cover not only VIPM, but a myriad of useful topics.

Conor "DirtyPunk" Stokes


Article Series:
  • Dirtypunk's Column - Issue 01 - Hardware Rendering Optimizations
  • Dirtypunk's Column - Issue 02 - Line Dualities: Plucker Space And You
  • Dirtypunk's Column - Issue 03 - Visibility Theory
  • Dirtypunk's Column - Issue 04 - View Independent Progressive Meshes
  • Dirtypunk's Column - Issue 05 - AABB Trees - Back To Playing With Blocks

    Copyright 1999-2008 (C) FLIPCODE.COM and/or the original content author(s). All rights reserved.