 
 

3D Theory & Graphics / Double Precision Meshes
 
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
 
Richard French

May 07, 2005, 03:26 PM

Hi,

Is there a way to have double-precision values instead of single precision in a D3DXMesh?

Rich.

 
mentalcalculator

May 07, 2005, 04:02 PM

Who needs double-precision meshes?

 
Richard French

May 07, 2005, 05:08 PM

I do.

I have a massive amount of data that needs to be generated into meshes (coordinates range from 0,0 to 165200, 82900).

Not all of it needs to be drawn at the same time, but subsections should be pre-generated. Users can then request a bounding area and the relevant subsections are drawn. Everything needs to be in real world coordinates because all attribute data is stored in a database.

Rich.

 
Victor Widell

May 07, 2005, 05:22 PM

Divide the data into small chunks, centered around the origin. That way you keep precision, and you have a much handier data structure for your hardware to process.

And I don't think any hardware actually supports doubles anyway.
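
A minimal sketch of the chunking idea, assuming hypothetical names (CHUNK_SIZE, splitCoord): each double-precision world coordinate is split into a chunk index plus a small local offset that fits comfortably in a float.

    #include <cmath>

    // Hypothetical helper: split one double-precision world coordinate into
    // a chunk index and a float offset local to that chunk's own origin.
    const double CHUNK_SIZE = 1024.0;  // assumed chunk extent in world units

    struct ChunkCoord {
        int   chunk;   // which chunk along this axis
        float local;   // offset inside the chunk, in [0, CHUNK_SIZE)
    };

    ChunkCoord splitCoord(double world)
    {
        ChunkCoord c;
        c.chunk = (int)std::floor(world / CHUNK_SIZE);
        c.local = (float)(world - c.chunk * CHUNK_SIZE);
        return c;
    }

With the 1024-unit chunks assumed here, a local value never exceeds about 1024, where a float still resolves roughly 10^-4 units, versus roughly 0.02 units for a float holding 165200 directly.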

 
RAZORUNREAL

May 07, 2005, 05:43 PM

Whether or not the hardware supports it, OpenGL would let you pass doubles.

It probably is best to break it up. And if floating point is giving you precision in the wrong places, you can try fixed point, but that probably won't give you the range you want at the precision you want either. I personally keep the locations of my objects in fixed point, so I have a set area (65 km) where I get very good precision, and I keep the vertices in floating point relative to their object, which gives me a lot of detail because the range covered is never very significant. When you come to render, you just find the location of each object relative to the camera using the fixed-point values and draw. That gives much better precision than making the camera and object locations really huge and drawing. But I'm not sure how it would work for a mesh split up.
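
A rough sketch of that scheme, with made-up names (FIXED_SCALE, FixedVec3, relativeToCamera): object positions live in 64-bit fixed point, and only the small camera-relative difference is narrowed to float for rendering.

    #include <cstdint>

    // Hypothetical fixed-point world position: 1 fixed unit = 1/1024 of a world unit.
    const int64_t FIXED_SCALE = 1024;

    struct FixedVec3 { int64_t x, y, z; };  // absolute position: huge range, uniform precision
    struct FloatVec3 { float   x, y, z; };  // small camera-relative offset

    // Subtract in fixed point first, then convert the (small) result to float.
    FloatVec3 relativeToCamera(const FixedVec3& obj, const FixedVec3& cam)
    {
        FloatVec3 r;
        r.x = (float)(obj.x - cam.x) / FIXED_SCALE;
        r.y = (float)(obj.y - cam.y) / FIXED_SCALE;
        r.z = (float)(obj.z - cam.z) / FIXED_SCALE;
        return r;  // drives the object's world matrix; its vertices stay object-local floats
    }

The wide arithmetic only ever appears in that one subtraction on the CPU; everything the GPU sees stays small.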

 
theAntiELVIS

May 07, 2005, 06:46 PM

Not only are meshes single precision in D3D, but the world space is too. I know this is exactly the answer you DON'T want, but you need to do one of two things:

1: Change the database. Break the data up into mesh sections, and transform them to where they should be in world space as you draw the current subset of data. Remember you only have a single-precision world, so you will have to use some kind of sector or node structure that tells you where the user viewpoint is, and translate that to the single-precision space. In your case I think you may be stuck with always having the viewpoint at the origin, and then actually calculating mesh subset positions on the fly.

2: Convert your double precision mesh data to D3D single precision mesh data on the fly. But this still means the D3D data will have to be in a single precision world, and you are again faced with something like having the viewpoint at origin and calculating mesh subset positions on the fly.

If this is terrain data, your best approach is to break it into square tiles, and transform those tiles to world position. In this case I would use normalized vertex data within each tile, and a world-space offset for one corner or the center, whatever. Again you seem to be faced with reprocessing your database.
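
For the terrain-tile variant, a sketch under assumed names (LocalVertex, Tile): each tile's vertices are small floats relative to the tile's own corner, and only the corner is kept in double precision, matching the real database coordinates.

    #include <vector>

    // Hypothetical tile record for the terrain case.
    struct LocalVertex { float x, y, z; };     // relative to the tile corner, always small

    struct Tile {
        double originX, originY;               // world-space corner, in real database coordinates
        std::vector<LocalVertex> vertices;     // float precision is ample at this scale
    };

At draw time each visible tile is translated by (tile origin minus viewer position), computed in doubles and only then narrowed to float; a sketch of that step follows the later post about keeping the viewpoint at the origin.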

 
Richard French

May 07, 2005, 07:22 PM

Hi, thanks for your replies...

Razor:
I guess this is one of those things where OpenGL beats (I use the term loosely) DirectX. Just out of interest, how come you can do it in OpenGL and not DirectX?

antiElvis:
Can you explain why to keep the viewpoint at the origin?

Rich

 
Fabian 'ryg' Giesen

May 07, 2005, 08:00 PM

I guess this is one of those things where OpenGL beats (I use the term loosely) DirectX. Just out of interest, how come you can do it in OpenGL and not DirectX?


Well, in OpenGL you can pass double-precision floating point data directly to the functions. This doesn't mean that the implementation performs calculations using doubles. I don't know any implementation that does.

All this does is defer the double->float conversion to your OpenGL implementation.
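
For illustration, a minimal OpenGL 1.x fragment showing what "passing doubles" means here; the vertex data is made up, and the driver is free to narrow it to float immediately.

    #include <GL/gl.h>   // on Windows, include <windows.h> first

    // Double-precision positions are accepted by the API...
    GLdouble verts[9] = {
        165200.0, 82900.0, 0.0,
        165201.0, 82900.0, 0.0,
        165200.0, 82901.0, 0.0
    };

    void drawTriangle()
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_DOUBLE, 0, verts);  // ...but the spec does not promise
        glDrawArrays(GL_TRIANGLES, 0, 3);         // anything beyond float precision
        glDisableClientState(GL_VERTEX_ARRAY);
    }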

 
Scali

May 08, 2005, 05:05 AM

In OpenGL, using doubles probably means that it will fall back to software T&L (using double precision).
You could do the same in Direct3D of course... The result of the T&L should be easily representable in floats, because after clipping all that's left is the polygons that are inside the viewport.

Just because something works in OpenGL, doesn't mean it's hardware-accelerated... That's the catch.
In D3D, if the hardware can't do it, you can't do it.

 
Chad Austin

May 08, 2005, 05:25 AM

Nah, it'll just convert to floats before pushing them through the pipeline. It's not THAT stupid. ;)

 
Chris

May 08, 2005, 06:09 AM

Just because something works in OpenGL, doesn't mean it's hardware-accelerated... That's the catch.


OpenGL isn't THAT bad, be reasonable. Once a hardware-accelerated rendering context has been established, the vendors' drivers strive to do everything in hardware they possibly can.

And the "hardware can't do it means you can't do it" paradigm is present in OpenGL as well, namely through the exposition or non-exposition of extensions.

It's also not THAT easy to mix hardware and software rendering at will.

You'll find papers on ATI's and nVidia's sites that explain in great detail which data types their drivers support in hardware, and which ones will be converted to hardware-supported types on the fly. Doubles are amongst the latter.

 
Scali

May 08, 2005, 09:00 AM

Actually I hope you're wrong.
It would be really stupid if all these double-precision versions of the API calls were just there so you could stuff doubles into them without having the compiler cast them to floats for you.
Not to mention that it would break if you really DID put doubles in them, because you needed more precision than floats could give.

So I hope they're not THAT stupid :)

I can't be bothered to look at the OpenGL reference, but I would assume that there is at least some point to the double functionality, and there is some extended precision guarantee when using these functions.
Which would mean that it would have to do at least part of the T&L in software on most current hardware.

Else the discussion would be rather moot, because OpenGL would not offer any more precision than D3D does.

 
Scali

May 08, 2005, 09:03 AM

Chris wrote: OpenGL isn't THAT bad, be reasonable. Once a hardware-accelerated rendering context has been established, the vendors' drivers strive to do everything in hardware they possibly can.


I'm not saying OpenGL is bad, I'm just saying that most hardware only supports float input, so you have to do double precision in software.

And the "hardware can't do it means you can't do it" paradigm is present in OpenGL as well, namely through the exposition or non-exposition of extensions.


Yes, but there is a lot of hardware on the market that has OpenGL drivers, yet doesn't accelerate all core functionality, which either means it breaks, or it is emulated in software. Some of the less common texture wrapping modes come to mind, like border.

You'll find papers on ATI's and nVidia's sites that explain in great detail which data types their drivers support in hardware, and which ones will be converted to hardware-supported types on the fly. Doubles are amongst the latter.


I hope you're wrong, else the whole double support in OpenGL is useless.
But as I say, it's probably in the OpenGL specs somewhere, some kind of minimum precision requirement for processing double precision input.
Else the point is moot, because it won't help the problem of the topic starter any more than D3D does.

 
Chris

May 08, 2005, 09:04 AM

He isn't wrong. It's just an option that the interface gives both you as a user AND those who provide drivers. So far nobody on the driver side uses it.

And no, it wouldn't break, but you'd simply notice that doubles DON'T give you improved precision; after that, you'd return to floats and (hopefully) solve your precision problem differently.

This is not a stupid decision, but one in favour of speed. There are other platforms than x86 that run OpenGL, and those may well use 64-bit floating point as standard (e.g. x86-64).
It would be VERY bad if nearly all OpenGL calls had to change in order to support the double precision data types. That's why the interface always supported them, and left it up to the implementation to actually make use of the precision.

 
Chris

May 08, 2005, 09:05 AM

Yes, the point is moot. Definitely.

 
theAntiELVIS

May 08, 2005, 09:55 AM

If you keep the viewpoint at the origin, and "move" the geometry, you get the same effect as if the viewpoint were moving, except your geometry doesn't extend out to the limit of single-precision accuracy. Store a distance value to each part of your mesh data, and "pop" the parts in/out of the scene based on distance.

So when the user "moves" through space, you apply the INVERSE motion to the geometry in the scene.

The painful part here is tracking all the mesh parts' distances in double precision, and constantly updating positions (probably of the mesh part's center point or "local origin").

If you store each mesh part in "local" model space coordinates (vertices relative to the mesh part's own origin), then it's easy: just apply a translation transformation to each part that is currently in the scene. This is easily done by tracking each mesh part's origin in world space (which you are already doing to track the distance, anyway).

Otherwise, if you truly MUST have the mesh vertices in world space, then you have to update the X/Y/Z of EVERY vertex EVERY frame for each mesh part in the scene - which will eat a lot of CPU time for a lot of vertex data.
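
A sketch of that per-part translation for D3D9, with hypothetical parameter names (partOrigin*, eye*): the offset is computed in doubles on the CPU, and only the small difference is handed to the API as floats.

    #include <d3dx9.h>

    // Hypothetical per-part setup: the part origin and the eye position are
    // double-precision world coordinates tracked on the CPU; D3D only ever
    // sees their (small) difference.
    void setPartWorldMatrix(IDirect3DDevice9* device,
                            double partOriginX, double partOriginY, double partOriginZ,
                            double eyeX, double eyeY, double eyeZ)
    {
        float dx = (float)(partOriginX - eyeX);   // subtract in doubles, then narrow;
        float dy = (float)(partOriginY - eyeY);   // the result is small whenever the
        float dz = (float)(partOriginZ - eyeZ);   // part is anywhere near the viewer

        D3DXMATRIX world;
        D3DXMatrixTranslation(&world, dx, dy, dz);
        device->SetTransform(D3DTS_WORLD, &world);
    }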

 
Scali

May 08, 2005, 10:12 AM

And no, it wouldn't break, but you'd simply notice that doubles DON'T give you improved precision; after that, you'd return to floats and (hopefully) solve your precision problem differently.


It would 'break', as in it wouldn't give the expected results, like I already said.
How exactly can you tell, then, other than running on every possible PC and visually checking whether the results are okay?

This is not a stupid decision, but one in favour of speed. There are other platforms than x86 that run OpenGL, and those may well use 64-bit floating point as standard (e.g. x86-64).


It doesn't depend on the CPU running it, it depends on the hardware accelerator in the system. FYI, 64 bit precision is already the default with x86 in most OSes. All maths are processed via double precision with the FPU.
Doesn't have much to do with floats or doubles though. Just because you have a 64 bit CPU doesn't mean you stop using bytes either, does it? Sometimes you just don't need more than 32 bits to store a value.

 
Fabian 'ryg' Giesen

May 08, 2005, 10:26 AM

"The GL must perform a number of floating-point operations during the course of its operation. We do not specify how floating-point numbers are to be represented or how operations on them are to be performed. We require simply that numbers' floating-point parts contain enough bits and that their exponent fields are large enough so that individual results of floating-point operations are accurate to about 1 part in 10^5. The maximum representable magnitude of a floating-point number used to represent positional or normal coordinates must be at least 2^32; the maximum representable magnitude for colors or texture coordinates must be at least 2^10. The maximum representable magnitude for all other floating-point values must be at least 2^32. [..] Most single-precision floating-point formats meet these requirements."

(http://www.opengl.org/documentation/specs/version1.1/glspec1.1/node11.html#SECTION00510100000000000000)

In other words, the spec doesn't guarantee anything beyond what is single precision on IEEE floating point machines.

 
Scali

May 08, 2005, 10:27 AM

Yes, you're right, I was just looking it up...
Which means the double precision arguments in OpenGL are useless, so either way, you'd need to implement your own double-precision code to process such meshes.

 
Chris

May 08, 2005, 12:00 PM

FYI, 64 bit precision is already the default with x86 in most OSes. All maths are processed via double precision with the FPU.


Plain wrong. Direct3D explicitly switches the CPU to single-precision mode when it starts. And internally, the CPU may even use up to 80 bits.

Doesn't have much to do with floats or doubles though. Just because you have a 64 bit CPU doesn't mean you stop using bytes either, does it? Sometimes you just don't need more than 32 bits to store a value.


Right, but for a 64-bit oriented CPU it might mean a severe penalty in terms of memory performance if you force it to work on 32 bit data. Likewise you don't usually access single bytes on a current 32 bit machine, but strive to access entire doublewords at a time.

Thus, if the OpenGL interface artificially restricted itself to 32-bit floating point types, that would certainly not have been a smart move. Especially since 64-bit floating point has been around for many years.

 
Scali

May 08, 2005, 02:21 PM

Chris wrote: Plain wrong. Direct3D explicitly switches the CPU to single-precision mode when it starts. And internally, the CPU may even use up to 80 bits.


Yeah, it switches, because it's not the default (I said OS, and D3D is not an OS) :)
It may use 80 bits, yes, but I thought we were talking about the default, which is double precision.

Right, but for a 64-bit oriented CPU it might mean a severe penalty in terms of memory performance if you force it to work on 32 bit data. Likewise you don't usually access single bytes on a current 32 bit machine, but strive to access entire doublewords at a time.


None of this has to do with the 3d hardware though. It looks like we'll be having 32 bit float processing in GPUs for a while yet, 64 bit CPUs or no.

 
Chris

May 08, 2005, 02:26 PM

It's you who started the argument about OSes using 64 bits, not me. I'm perfectly aware that D3D isn't an OS, thank you very much.

BTW the default IS 80 bits of internal precision. It's only the fact that memory does not like 80-bit alignment that led to nobody storing floating-point values in 80-bit data types. Borland's compilers could do it, but Microsoft never supported it.

You load and store using 64 bits of precision, but calculations on ST(0) to ST(7) are performed using the internal FPU precision. This precision is controlled by the significand (mantissa) length, which can be set to 24, 53 or 64 bits, corresponding to 32-, 64- or 80-bit data lengths (-> FLDCW instruction).
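
For illustration, the same control word can be read and set from C with MSVC's _controlfp wrapper (rather than issuing FLDCW directly); the constants are the standard <float.h> ones.

    #include <float.h>

    unsigned int cw = _controlfp(0, 0);   // query only: returns the current control word
    unsigned int pc = cw & _MCW_PC;       // precision-control field: _PC_24, _PC_53 or _PC_64
    _controlfp(_PC_53, _MCW_PC);          // force a 53-bit mantissa (double precision)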

Anyway, I'll rest my case here. I argued in favour of OpenGL's interface by looking at what CPU/OS future brings us, and in favour of the current decision not to support 64 bit data even though the interface does, by looking at what current CPUs/OSes support.

 
Scali

May 08, 2005, 02:46 PM

Chris wrote: It's you who started the argument about OSes using 64 bits, not me. I'm perfectly aware that D3D isn't an OS, thank you very much.


Well, you started talking about how e.g. x86-64 would be using double precision by default... so I said that x86 was already doing that, at least in Windows... and I believe Linux does too, but I'm not 100% sure about that.

BTW the default IS 80 bits of internal precision. It's only the fact that memory does not like 80-bit alignment that led to nobody storing floating-point values in 80-bit data types. Borland's compilers could do it, but Microsoft never supported it.


    #include <windows.h>
    #include <float.h>

    void reportFpuPrecision()
    {
        // Query the FPU control word and inspect its precision-control field.
        unsigned int cw = _controlfp(0, 0);
        const char* precision = "unknown";

        if ((cw & _MCW_PC) == _PC_24)
            precision = "_PC_24";       // 24-bit mantissa (single precision)
        else if ((cw & _MCW_PC) == _PC_53)
            precision = "_PC_53";       // 53-bit mantissa (double precision)
        else                            // _PC_64 is 0, so test by equality, not by masking
            precision = "_PC_64";       // 64-bit mantissa (extended precision)

        MessageBox(NULL, precision, precision, MB_OK);
    }


Returns _PC_53 for me (Windows XP Pro, 32 bit, Athlon XP).
So unless my system is weird, Windows uses double precision by default, not extended.

You load and store using 64 bits of precision, but calculations on ST(0) to ST(7) are performed using the internal FPU precision. This precision is controlled by the significand (mantissa) length, which can be set to 24, 53 or 64 bits, corresponding to 32-, 64- or 80-bit data lengths (-> FLDCW instruction).


Duh, as I said, that is _PC_53 by default.

Anyway, I'll rest my case here. I argued in favour of OpenGL's interface by looking at what CPU/OS future brings us, and in favour of the current decision not to support 64 bit data even though the interface does, by looking at what current CPUs/OSes support.


I understand your point of supporting doubles, but it is very strange that the OpenGL specs don't guarantee anything past float precision, regardless of what arguments you use... and there seems to be no way to detect the actual precision of the current system.

 
Reedbeta

May 08, 2005, 03:12 PM

The double precision arguments aren't useless in general, only with current drivers and GPUs. The spec doesn't require more than single-precision, but it's easy to imagine (as you've mentioned in previous posts) a software reference rasterizer that uses full double precision, or that in the future GPUs might use double precision floats.

 
Fabian 'ryg' Giesen

May 08, 2005, 04:53 PM

Erm, a 64-bit CPU means a 64-bit address bus, not a 64-bit data bus!

x86-family processors have had 64-bit data buses since the Pentium MMX and 128-bit since the P3. And in any case, memory bandwidth is more of an issue than what word sizes the CPU can natively access.

 
RAZORUNREAL

May 08, 2005, 05:45 PM

Nice thread hijack, guys. I'm really sorry I ever mentioned OpenGL, I knew it didn't do what he wanted.

 
This thread contains 26 messages.
 
 