 
 

Techniques used for large 3D environments and limited precision coordinates
 
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
 
dragonmagi

April 10, 2005, 07:07 PM

I'd like to know what techniques people use to deal with limited precision problems. For example, if you have a virtual space that is hundreds of kilometers in size, movement and interaction in the environment can get a little shaky because of limited coordinate precision.

I'm aware that portals and tiling methods are used, but these are often employed more for managing complexity than for precision reasons. Does anyone encounter the precision/jitter problem when trying to build large continuous environments? If so, how do you deal with it?

thanks,

dm

 
Lennox

April 10, 2005, 07:12 PM

Basically, what you want to do is always use your current viewing position as the origin. Everything else is rendered relative to this, so the closer you get to an object, the more accuracy the object gets. It's explained in detail in the paper at the link below. Check out article 3.

http://home.comcast.net/~s-p-oneil/links.htm
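The key step can be sketched in a few lines: keep world positions in double precision, subtract the eye position *before* narrowing to float, so the GPU only ever sees small camera-relative coordinates. (The `Vec3` types and function name below are illustrative assumptions, not code from O'Neil's article.)

```cpp
#include <cassert>

// Double-precision world positions; float only for what the GPU sees.
struct Vec3d { double x, y, z; };
struct Vec3f { float x, y, z; };

// Camera-relative ("floating origin") conversion: subtract the eye
// position in double precision FIRST, then narrow to float. The small
// difference survives the narrowing; the huge absolute coordinates
// never reach the low-precision pipeline.
Vec3f toEyeSpace(const Vec3d& world, const Vec3d& eye) {
    return { float(world.x - eye.x),
             float(world.y - eye.y),
             float(world.z - eye.z) };
}
```

Doing the subtraction after converting to float would destroy the result: at 10,000 km from the origin, adjacent floats are a metre apart, so a millimetre offset vanishes entirely in the narrowed values.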

 
dragonmagi

April 10, 2005, 09:13 PM

Thanks, that is the sort of thing I was looking for.
There must have been plenty of others with similar problems and similar solutions, but I have had difficulty digging up references just by Google searching.

 
Lennox

April 10, 2005, 10:26 PM

No problem. This problem isn't really common in the game world (unless you're doing a space shooter); it's more of a GIS issue. Let me know if you have any more questions on it.

 
dragonmagi

April 10, 2005, 10:47 PM


Perhaps an example might explain what further questions I have:
In Morrowind, the overall game world is quite large. If I try walking from one town to another, there usually comes a point in the middle of a path when all motion stops and you see a brief message like "loading external environment". You have reached an invisible line where more detail has to be loaded before it comes into visible range. Obviously this is loading the next blocks of terrain and models in some kind of tiled system.

Now, this is a method of handling complexity/level of detail, but they must also do something with the coordinates of the viewpoint or smoothness of motion will degrade and everything will progressively become more jittery. So something like O'Neil's technique - a resetting of the reference frame, perhaps - must be done.

Similar things must apply in online games like WoW which also have a large space to play in.

Speaking of O'Neil, am I to understand that he moves both viewpoint and objects using double precision coordinates and, when performing object motion near the camera, he temporarily remaps the coordinates of object and camera by subtracting the camera position, then performs the matrix operation, then transforms them back? I'll have to go over the code.

 
Lennox

April 10, 2005, 11:05 PM

I can't recall per se, but I'll describe the method I typically use for maintaining object accuracy while still being able to use vertex buffers. I think they are similar, if not the same. I don't have sample code to show this, but I'll explain my theory a little.

Let's say you have an object way out in your world with enormous coordinates. Rather than storing this entire model in double precision, translate the model by (tx,ty,tz) so that it is local to one of its vertices (or its center point). In other words, assign (tx,ty,tz) to be the first point in your model, then subtract (tx,ty,tz) from every point in the model. This centers your model in a local coordinate system. Store the translation as double precision, and store the model in a vertex buffer as floating point geometry (its accuracy is maintained because of the earlier translation).

When it comes time to render the model, first treat your double precision viewing position (vx,vy,vz) as the origin (0,0,0). Then push a translation of (tx-vx,ty-vy,tz-vz) onto the matrix stack. This translates your model so that it is relative to your frame's origin (0,0,0), which is really (vx,vy,vz). Then you simply render your model's vertex buffer without worrying about accuracy issues.

If you think about it, as your view position (vx,vy,vz) moves toward your model's origin (tx,ty,tz), the translation you're pushing onto the stack gets closer to (0,0,0). This is the key concept behind maintaining accuracy: when you're close to an object, you want it to have the most accuracy.
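The scheme above can be sketched roughly like this (the struct and function names are made up for illustration, not Lennox's actual code): vertices are rebased around a local anchor and stored as floats, the anchor stays in double precision, and the per-frame translation is computed in double before narrowing.

```cpp
#include <cassert>
#include <vector>

struct Vec3d { double x, y, z; };
struct Vec3f { float x, y, z; };

// A mesh stored as described: float vertices local to an anchor point,
// plus the anchor itself kept in double precision.
struct Mesh {
    Vec3d anchor;               // double-precision world position
    std::vector<Vec3f> verts;   // float, relative to the anchor
};

// Rebase world-space vertices around the first vertex (the anchor).
// The differences are small, so they survive narrowing to float.
Mesh rebase(const std::vector<Vec3d>& world) {
    Mesh m;
    m.anchor = world.front();
    for (const auto& p : world)
        m.verts.push_back({ float(p.x - m.anchor.x),
                            float(p.y - m.anchor.y),
                            float(p.z - m.anchor.z) });
    return m;
}

// Per-frame translation pushed before drawing the vertex buffer:
// anchor minus eye, computed in double, narrowed to float at the end.
// It approaches (0,0,0) as the eye nears the anchor, which is exactly
// when the object needs the most accuracy.
Vec3f drawTranslation(const Mesh& m, const Vec3d& eye) {
    return { float(m.anchor.x - eye.x),
             float(m.anchor.y - eye.y),
             float(m.anchor.z - eye.z) };
}
```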

Hope this helps,

-Lennox

 
dragonmagi

April 11, 2005, 12:11 AM

Yes, that helps, thanks. So you do this translation and then render, once for each visible object, every frame - that is, N times per second, where N could be in the hundreds?

 
Erik Faye-Lund

April 11, 2005, 02:37 AM

Storing 16-bit coordinates (or even 8-bit in some cases) and doing the T&L in full 32-bit is quite common in, e.g., mobile 3D, where you have to save memory on the vertex data. Just normalize your meshes to the precision you're using (so that every mesh spans the entire range in x, y and z), and compensate in the matrix. Of course, store the translation etc. in full precision.

 
dragonmagi

April 11, 2005, 03:27 AM

"tnl" - translation?
Yes, I expect this type of problem is more of an issue with the smaller devices especially if one is trying to emulate some of the "big world" stuff.

So you just perform some scaling to fit coords into 16 bits? Or do you also do some type of coordinate shifting like in the O'Neil article - with your object and viewpoint positions kept in 32-bit floats?

 
Erik Faye-Lund

April 11, 2005, 04:42 AM

Just scale the mesh to fit in 16-bit integer coordinates while preserving as much precision as possible, yes. I guess you could try something fancy, like float packing in some way (storing a common exponent for x, y and z, and using 8/16-bit mantissas or something), but I doubt it would be effective enough size-wise to justify the extra vertex processing. I might be wrong here, though.
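For what it's worth, the shared-exponent packing being speculated about could look something like this sketch (one 8-bit exponent plus three 16-bit mantissas; all names and the exact layout are illustrative assumptions, not an established format):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// One shared exponent for x, y, z plus three 16-bit signed mantissas:
// 7 bytes per position instead of 12 for three floats.
struct PackedVec { int8_t exp; int16_t x, y, z; };

PackedVec pack(float x, float y, float z) {
    // Choose an exponent covering the largest component, then express
    // all three as 16-bit fixed point relative to 2^exp.
    float m = std::fmax(std::fabs(x), std::fmax(std::fabs(y), std::fabs(z)));
    int e = (m > 0.0f) ? std::ilogb(m) + 1 : 0;   // 2^e > m
    float s = std::ldexp(32767.0f, -e);           // scale into int16 range
    return { (int8_t)e,
             (int16_t)std::lround(x * s),
             (int16_t)std::lround(y * s),
             (int16_t)std::lround(z * s) };
}

void unpack(const PackedVec& p, float out[3]) {
    float s = std::ldexp(1.0f / 32767.0f, p.exp);
    out[0] = p.x * s; out[1] = p.y * s; out[2] = p.z * s;
}
```

Note the drawback Erik anticipates: every vertex needs the extra scale multiply on unpack, and a component much smaller than the largest one loses most of its mantissa bits.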

 
dragonmagi

April 11, 2005, 04:58 AM

Just a note on that: scaling does not actually solve the problem I was speaking of. As O'Neil pointed out, scaling down loses accuracy at the finer detail. The intent behind moving things close to the origin, in a floating point coordinate system, is to gain the high fidelity that floating point provides in the vicinity of the origin.

 
Erik Faye-Lund

April 11, 2005, 05:24 AM

uhm, i'm not sure if you understood what i meant ;)

high_precision_x = float(x) * mesh_scale_x;
high_precision_y = float(y) * mesh_scale_y;
high_precision_z = float(z) * mesh_scale_z;

You don't lose any precision worth mentioning here, but you save on the space storing them. This can be done with a matrix, using the GL_SHORT data type for vertices in OpenGL. Some precision is lost in the float -> short conversion, but scaling the mesh to the precision range keeps it at a quite tolerable level.
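The quantization being described might look like this (the `Axis` struct and helper names are assumptions for illustration): the mesh's bounding range on each axis maps onto the int16 range, and a full-precision scale/offset pair restores the coordinate, so the conversion error is bounded by about half a quantization step.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Per-axis dequantization parameters, kept in full precision:
// x = offset + q * scale, with q a 16-bit integer.
struct Axis { float offset, scale; };

int16_t quantize(float x, const Axis& a) {
    return (int16_t)std::lround((x - a.offset) / a.scale);
}

float dequantize(int16_t q, const Axis& a) {
    return a.offset + q * a.scale;
}

// Build an axis so [lo, hi] maps onto [-32767, 32767], i.e. the mesh
// spans essentially the entire int16 range as Erik suggests.
Axis fitAxis(float lo, float hi) {
    return { (lo + hi) * 0.5f, (hi - lo) / 65534.0f };
}
```

In OpenGL terms, `dequantize` is what folding `offset` and `scale` into the modelview matrix accomplishes, with the vertex buffer holding only the int16 values.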

 
dragonmagi

April 11, 2005, 07:22 AM

If I have a 2m avatar and want 1mm accuracy from (0,0,0) out to the radius of the Earth, then single precision floating point gives me 1mm resolution only out to about 16 km from the origin; at the radius of the Earth the spacing between representable values is about 0.5m. But I need 1mm everywhere. This is the problem this thread is about, and although I know how to deal with it mathematically, I wanted to know how others deal with it in games.

If I scale everything down by 10, then my avatar size is 0.2m and the accuracy I need is 0.1mm. Scaling has not changed the problem: I want the same accuracy near the viewpoint *relative to the size of the environment* (Earth dimension) throughout the environment. So scaling changes nothing with respect to my question - that's what I was trying to say :)
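These figures follow directly from the IEEE 754 single-precision format (23-bit stored mantissa): the gap between adjacent representable floats, one ulp, grows with magnitude. A small helper makes the scale of the problem concrete:

```cpp
#include <cmath>

// Gap between a float and the next representable float above it.
float spacing(float x) { return std::nextafterf(x, INFINITY) - x; }

// spacing(1000.0f)    == 2^-14  (~0.06 mm: fine near the origin)
// spacing(16000.0f)   == 2^-10  (~0.98 mm: 1 mm holds out to ~16 km)
// spacing(6378137.0f) == 0.5    (half a metre at Earth radius)
```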

 
Wernaeh

April 11, 2005, 07:46 AM

Why not add an additional integer vector to all your floating point coordinates, then slice the space into cubes, similar to an octree?
All floating point values are then held relative to the cube specified by the integer coordinates. Before doing any kind of collision detection or similar, just see that all objects are transformed into the same cube. Keep the cube size small enough that no precision issues arise within a cube cell, nor for transformed objects from any neighbouring cell. Re-insert objects into the cubes after each movement, which is very simple and not too time consuming for axis-aligned cubes (it usually boils down to three compares for each moving object's coords).
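A minimal sketch of this cube scheme (the 1 km cube size and all names here are arbitrary illustrative choices): an integer cell index plus a float offset, renormalized after movement so the offset never grows large, with positions expressed relative to one cube before collision tests.

```cpp
#include <cassert>
#include <cmath>

// Cube edge length; small enough that float offsets stay precise.
const float CUBE = 1000.0f;

struct Pos {
    int cx, cy, cz;     // which cube
    float ox, oy, oz;   // offset within the cube, kept in [0, CUBE)
};

// After movement, push overflow back into the integer part so the
// float offset stays small (this is the three-compare step, done here
// with floor so it also handles fast objects crossing several cubes).
void renormalize(Pos& p) {
    int dx = (int)std::floor(p.ox / CUBE);
    p.cx += dx; p.ox -= dx * CUBE;
    int dy = (int)std::floor(p.oy / CUBE);
    p.cy += dy; p.oy -= dy * CUBE;
    int dz = (int)std::floor(p.oz / CUBE);
    p.cz += dz; p.oz -= dz * CUBE;
}

// Express b relative to a's cube, e.g. before a collision test. The
// integer difference is exact, so nearby objects get full precision.
void relativeTo(const Pos& a, const Pos& b, float out[3]) {
    out[0] = (b.cx - a.cx) * CUBE + b.ox - a.ox;
    out[1] = (b.cy - a.cy) * CUBE + b.oy - a.oy;
    out[2] = (b.cz - a.cz) * CUBE + b.oz - a.oz;
}
```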

Hope this helps,
Cheers,
- Wernaeh

 
Erik Faye-Lund

April 11, 2005, 07:50 AM

so, you're looking for LOD-schemes? then say that ;)

Just stuff a switch node in your scene graph, and make it switch meshes based on distance. Simple, and Max has direct support for mesh LODing.

The 1mm accuracy stuff is still possible with scaling the meshes; you just need to split the world into a set of meshes that makes sense. Make 1mm one unit in your coordinate system, and subdivide your scene into meshes until you have all polygons within the range you need. Also remember that the precision of floating point depends on the size of the number: small values have high precision, while big numbers have lower precision.

 
Lennox

April 11, 2005, 08:29 AM

Yes, the translation is done every frame, and N can get that high.

If you're dealing with a static scene, then as somebody mentioned below, you can translate all models based on your spatial indexing. For example, if you use a quadtree, translate everything to be relative to its quadtree node's center point. Before rendering that node, apply the same translation I mentioned before. This way you don't need a separate translation for each of n models every frame, but rather one translation per node.

 
dragonmagi

April 11, 2005, 09:05 AM


To Wernaeh and Lennox: OK, is this what some game systems do?
My guess is that games like Morrowind probably do break things up into
chunks with local reference frames. I'll go find a Morrowind forum and ask.

 
Lennox

April 11, 2005, 09:22 AM

Don't know about games, but I use it in 3D GIS applications (which are similar to games). Most of the time we are focused on accuracy, so some of the other game tweaks might not be useful to us. I use both methods whenever possible: when I don't do any spatial subdivision, I use the first method I described; when I have spatial subdivision, I use the latter whenever possible. The concept behind both is the same, though.

 
Dr. Necessiter

April 11, 2005, 11:47 AM

In flight sims we have to deal with this all the time. Some of the suggestions here are right-on.

Basically, the problem is that the graphics subsystem works in low precision, but you want to store positional information - and ONLY positional information - in high precision.

 
Arne Rosenfeldt

April 11, 2005, 11:55 AM

>>>
Many times this simply means subtracting off the viewpoint's position from the high precision positional information.

I happen to break the world into 1 degree tiles, and then subtract off the anchor point of the tile the viewpoint is currently in.

 
Arne Rosenfeldt

April 11, 2005, 12:10 PM

Using a scene graph
nearby objects have coordinates relative to the same node as the viewer
=> high precision

 
dragonmagi

April 11, 2005, 11:30 PM


As far as managing the problem of motion jitter goes, Arne is right - you would not need to tile. But I imagine tiling serves the double purpose of keeping the number of polygons at a manageable level.

If you two don't mind, can Arne and Dr. Necessiter email me at dragonmagi at gmail.com? For my PhD research I am surveying what techniques people are using for this kind of thing, and flight sim was one of the areas I was going to look into, along with large scale games and military sim. My goal is to apply such techniques to a large scale distributed virtual environment.

 
dragonmagi

April 11, 2005, 11:41 PM



Arne Rosenfeldt wrote: Using a scene graph nearby objects have coordinates relative to the same node as the viewer => high precision


Only true if the viewpoint is close to the origin (or you perform the subtraction we are talking about first), and I think only for floating point coordinates.

Also, your scene graph would have to be constructed with this in mind, e.g. in bintree or quadtree fashion. I do this all the time with terrains. However, using a scene graph does not automatically give you this local coordinate system - two nodes adjacent in the scene graph could have parent transforms that place them kilometers apart (though I agree you would not normally do this). Therefore, the problem is independent of whether you are using a scene graph.

 
dragonmagi

April 11, 2005, 11:50 PM



Erik Faye-Lund wrote: so, you're looking for LOD-schemes? then say that ;)


Because this has nothing to do with LOD - it is entirely about positional accuracy: the gap between one representable coordinate and the next. With single precision floats for coordinates (most common on modern commodity hardware), this gap increases with coordinate size because the precision stays the same (24 bits).

...
the 1mm accuracy stuff is still possible with scaling the meshes, you just need to split the world into a set of meshes that makes sense. make 1mm become one unit in your coordinate system, and subdivide your scene into meshes until you have all polygons within the range you need. also remember that the precision of floating point depends on the size of the number. so, small values have high precision, while big numbers have lower precision.


Yep - I understand you can break it all up into a set of local reference frames. I'm just trying to find out if this is what most people are doing.

 
dragonmagi

April 11, 2005, 11:54 PM



Lennox wrote: Don't know about games, but I use it in 3D GIS applications (which is similar to games). Most of the time, we are focused on accuracy, so some of the other game tweaks we might not be useful. I use both methods whenever possible. In the case I don't do any spatial subdivision, I use the first method I told you. In the case where I have spatial subdivision, I use the latter version whenever possible. The concept in both are the same though.


Ah, that is interesting. Can you please email me at dragonmagi at gmail.com?
I'd like to get a concrete reference relating to GIS for my PhD survey. I have some already - mainly from the SRI research and work on TerraVision and GeoVRML.

 
Arne Rosenfeldt

April 12, 2005, 02:37 AM

Sorry, I have not used anything.

The float64 approach is not related to
polygon count reduction by the viewing frustum (with some octree or so) or LOD.

>scenegraph
OK, I see, it doesn't work.
An extreme scene would be two big spaceships which dock,
and a small Duke Nukem 3D character jumps from one ship to the other.
Drift between the ships is realistic; jitter is not.

 
This thread contains 26 messages.
 
 