Replay system and numerical differences, madness
 
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
 
rneckelmann

May 07, 2005, 04:11 AM

Hello ppl!

Some time ago I mentioned my work-in-progress Elastomania clone, asking how best to implement an online "best times" system that was difficult to cheat. The conclusion was to let people upload their replay files to the server, which would then play them back to determine a finish time -- all automated, of course.

Fine enough. The game has come a long way since then, and I actually thought I had a working replay system, until I began the process of testing the game on my linux box (I'm a VC.NET-whore :|).

*Sigh*

I can only play replays correctly on the same platform/build-type that recorded them; i.e. replays made by the linux version won't reproduce on windows, and windows "release" replays won't work when played by windows "debug" builds, and so on. In the best cases the finish time is a couple of seconds off; in the worst cases the player bangs his head into some obstacle he didn't originally hit.

As you might already have guessed, my replay system simply stores all user input on a per-frame basis (the game runs at a fixed framerate); looking back, it was of course naive of me to believe that all calculations would produce exactly the same results on all platforms.

Now for the question: Is there a neater way to do these replays?

Some far fetched ideas of my own are:

- Make my own floating point library and don't use FPU (hehe, no way)
- Port all calculations to integers only (yeah right)
- Have some kind of "key frames" stored in the replays. But ultimately it would still lead to differences; it would just be less apparent.
- Drop the idea of supporting replays across platforms, disintegrating the neatness of an automated "best times" system :(

Any thoughts? Is it impossible?

Regards,
Rasmus Neckelmann


 
Chris

May 07, 2005, 04:26 AM

I fear that FPU calculations are inconsistent across platforms, and especially across compilers.
VC.NET alone has three different FP calculation modes (fast, precise, whatever, ...) and there's little chance that gcc matches one of them.
Even across compiler versions things could change.

Key frames won't help, because key frame data is only the result of previous FPU calculations. So you would probably notice rather sharp changes in location and orientation of the objects when playing back on another platform.

Even worse: if a replay on a different platform leads to the destruction of some object that the replay file assumes still exists (because on the original platform it didn't get hit), you would have to recreate it when the keyframe is reached. That gives all sorts of headaches, I think.

So I bet fixed-precision integer math is the only way out, yes.

 
rneckelmann

May 07, 2005, 04:35 AM

Okay, exactly as I feared.

I'm really not going to port anything to fixed-precision integer math - I'm simply not that dedicated to this project (or bored :P). I'll scrap the idea of an automated best times list, and simply make it more "ad hoc".

If people want to get on the best times list, they have to use a "well known" build of the game so the results can be verified manually.

This is turning a bit obscure, I'm afraid :(

 
fman256

May 07, 2005, 05:07 AM

You could store the position/velocity at each frame. Then, to verify the replay, you can run the physics on the previous position and compare the result with the next position; if it meets some tolerance (say, the values are within 99% of each other 99% of the time), you can consider it valid.
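This tolerance check could be sketched like this. The types, the stand-in physics step, and the thresholds are all illustrative (the post's "99% of each other 99% of the time" becomes an absolute tolerance plus a minimum pass ratio here, which is easier to make robust near zero):

```cpp
#include <cmath>
#include <cstddef>

// Hypothetical stored state; a real rigid body would also carry
// orientation, angular velocity, etc.
struct BodyState { float x, y, vx, vy; };

// True when every component of two states is within an absolute tolerance.
bool StatesMatch(const BodyState& a, const BodyState& b, float tol)
{
    return std::fabs(a.x  - b.x)  <= tol &&
           std::fabs(a.y  - b.y)  <= tol &&
           std::fabs(a.vx - b.vx) <= tol &&
           std::fabs(a.vy - b.vy) <= tol;
}

// Verifies a replay: re-simulates one step from each stored state and
// checks that at least minRatio of the transitions land within tolerance.
// Simulate stands in for one fixed step of the real physics engine.
template <typename StepFn>
bool VerifyReplay(const BodyState* stored, size_t n, StepFn Simulate,
                  float tol, float minRatio)
{
    if (n < 2) return true;
    size_t ok = 0;
    for (size_t i = 0; i + 1 < n; ++i) {
        BodyState predicted = Simulate(stored[i]);
        if (StatesMatch(predicted, stored[i + 1], tol)) ++ok;
    }
    return ok >= static_cast<size_t>(minRatio * (n - 1));
}
```

Because each step restarts from the *stored* state, cross-platform error cannot accumulate across frames, which is what makes a tolerance viable at all.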

 
rneckelmann

May 07, 2005, 05:50 AM

That COULD work. :)

The obvious problem is the size of the stored data... ALL physics states have to be saved, that is at least 140 bytes per rigid body (and compared to the 3 bodies (wheels + head) of Elastomania, I have a hell of a lot more: moving scenery, elevators, etc.). And that's per frame. Copying all these states will probably slow everything down considerably... of course it isn't necessary to do that in real time - I could simply store only the input states as I do now, and if the player decides to save the replay, the simulation is run again in the background, this time with all states saved.

And the good part? :) All this background simulation code is already up and running, so I only need to implement the "state saving/loading" part.

I'll have to try it out :)

But again, this is turning a bit too obscure for my liking.

 
Wernaeh

May 07, 2005, 08:46 AM

How about cutting all float values down to some coarser precision? Say you round all position and speed values to the nearest 0.25f after each calculation (both on the server and on the client)? I think Quake 2 did something similar in its networking code. This would probably mask the differing floating point errors.
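A minimal sketch of that rounding, applied after each calculation; the helper name and step value are illustrative:

```cpp
#include <cmath>

// Rounds v to the nearest multiple of step (e.g. 0.25f). Any
// cross-platform difference smaller than step/2 snaps to the same
// value, so it can no longer propagate into later frames.
float Quantize(float v, float step)
{
    return std::floor(v / step + 0.5f) * step;
}
```

The open question, raised later in the thread, is whether a step coarse enough to hide the errors is still fine enough for stable physics.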

Cheers,
- Wernaeh

 
Chris

May 07, 2005, 09:34 AM

I think a fixed-point number class with overloaded operators could quickly drop in and replace float as a numeric type. I don't think it'd be much of a hassle to implement it.

And effectively, using fixed-point does exactly what Wernaeh suggests: quantizing floating point values to an artificial coarse precision. E.g. when using 16:16 format, any float differences below 2^(-16) ~ 0.000015 would disappear.
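The 16:16 snapping described here could be sketched like so; the function names are illustrative, and note the casts simply truncate toward zero rather than round:

```cpp
#include <cstdint>

// Converts a float to 16.16 fixed point and back. Any difference
// smaller than 2^-16 (~0.000015) is discarded in the conversion.
int32_t ToFixed(float v)     { return static_cast<int32_t>(v * 65536.0f); }
float   FromFixed(int32_t f) { return static_cast<float>(f) / 65536.0f; }

// Snaps a float through the fixed representation, e.g. once per frame
// on every stored quantity.
float Snap(float v) { return FromFixed(ToFixed(v)); }
```

Two values that differ only below the 2^-16 granularity come back identical after snapping, which is the whole point.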

 
Fabian 'ryg' Giesen

May 07, 2005, 11:49 AM

Chris wrote: "I think a fixed-point number class with overloaded operators could quickly drop in and replace float as a numeric type. I don't think it'd be much of a hassle to implement it."


Dead wrong. Of course writing such a class is not much of a problem, but it is never a drop-in replacement for floats. Most importantly, you'll absolutely never get around rewriting your calculations completely if you want any reasonable precision. Proper fixed-point maths involves lots of different internal fixed-point formats and precision considerations.

Case in point: Vector normalization. With floats you'll probably have something like:

  void MyVector::Normalize()
  {
    float lengthSquared = x*x + y*y + z*z;

    if (lengthSquared)
    {
      float scale = 1.0f / sqrt(lengthSquared);
      x *= scale;
      y *= scale;
      z *= scale;
    }
  }


When you just naively replace that by 16.16 fixed-point arithmetic, you'll have at least the following problems:
- For |x|, |y|, |z| >= 181.019348 (0xb504f4 in 16.16 fixed point), the products x*x will overflow (assuming signed integer arithmetic).
- For |x|, |y|, |z|
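The standard mitigation for the overflow in x*x is widening the intermediate product to 64 bits; a minimal sketch (FixMul is a hypothetical helper, not from the thread):

```cpp
#include <cstdint>

// 16.16 fixed-point multiply with a 64-bit intermediate. A naive
// 32-bit (a * b) >> 16 overflows in the intermediate product long
// before the mathematical result does; going through int64_t avoids
// that. The *result* must still fit in 16.16 (magnitude < 32768),
// which is exactly the kind of per-expression range analysis ryg is
// talking about.
int32_t FixMul(int32_t a, int32_t b)
{
    return static_cast<int32_t>((static_cast<int64_t>(a) * b) >> 16);
}
```

This fixes one single operation; it does nothing for the precision loss in small values, which is why a real fixed-point port ends up juggling multiple internal formats.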

 
Chris

May 07, 2005, 02:45 PM

That is perfectly clear to me; anyway, I think it's a nice summary of the different caveats of fixed point math.

I thought it would be used a bit differently. He could still do floating point math during his in-frame calculations. I assume that significantly different results don't occur within a single frame, but arise from the accumulation of error across many frames.

Only when it comes to storing and retrieving keyframe data, he'd convert FPU data to fixed-point data, and back.

I think that would hide the small differences across varying compilers, and working on keyframe data could be made easy by operator overloading.

 
rneckelmann

May 08, 2005, 07:20 AM

There was some talk on the ODE mailing list (yeah, I use ODE) about someone trying to make a fixed-point version of the library. Unfortunately the guy ran into a large pile of problems, so he abandoned the idea altogether. As far as I can see, his main problem was maintaining precision in complicated calculations - in each and every place where something was calculated, he had to consider what happened to the precision and take appropriate action. A hell of a mess, I think. (Physics simulations are very sensitive to numerical errors; it's difficult to keep everything stable.)

Right now I'm really not up to spending a lot of time on this. I'd rather get the game into a completely playable state before I start on stuff like this. If I spend the next month working on a boring thing like math precision, I'm afraid I'll lose interest in the project and never finish anything at all. :)

I've come to the conclusion that I can't accept ANY numerical differences when working with these replays; even the TINIEST difference can cause the replay to fail with a different result than the original. So I guess the only viable solution is to store the entire state as described in an earlier post (and suggested by fman256). But the storage size scares me :(

At the moment I'll limit replays to specific compiler configurations...

Thanks for the feedback

 
Wernaeh

May 08, 2005, 07:33 AM

Hi again,

Perhaps this is kind of a stupid question, but what keeps you from just doing what I suggested - that is, rounding all position and velocity values to the nearest .25 or something?

Cheers,
- Wernaeh

 
rneckelmann

May 08, 2005, 08:26 AM

Wernaeh:

My first guess is that everything will turn extremely unstable and blow up :)

I want a fairly precise approximation of when objects collide - that is, I need good small timesteps. And small timesteps mean only small changes each frame; rounding these to the nearest .25 (or something) will make everything smaller than .125 (or something) disappear. And that is just one thing.

The rounding would have to be to something like .000001 for things to still work reasonably well - and even then I can't be sure there aren't any differences.

By the way, is anyone aware of any open source, cross-platform games that feature replays? It would be interesting to see how other people handle this.

 
Danny Chapman

May 08, 2005, 09:03 AM

If you decide to store the state - you don't need to store everything every frame. Either store things every tenth of a second or so, or else only store things when the state has changed by more than a certain amount. Then use interpolation during playback. So, I wouldn't have thought storage size would be a big deal. This doesn't address the cheating issue that you started with, though. Maybe you could store the control information as well, and use that to do some basic sanity checking of the state info...

 
rneckelmann

May 08, 2005, 09:50 AM

True, true, true.

That's the way to go. In any case it will never be impossible to cheat; it's just a matter of making it reasonably difficult without sacrificing too many resources on it.

 
Wernaeh

May 09, 2005, 07:05 AM

Ah ok, thanks for the explanation :D

On another thought, perhaps you could also compress your replay file afterwards. This might at least cut the size down by a third or so.
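A general-purpose compressor does much better on replay data after delta-encoding, since consecutive physics states differ only slightly and the resulting small deltas are highly compressible. A minimal sketch of the (lossless, reversible) delta pass, separate from whatever compressor runs afterwards:

```cpp
#include <cstdint>
#include <vector>

// Replaces each value with its difference from the previous one.
// Consecutive frames change little, so the deltas cluster near zero
// and compress far better than the raw values.
std::vector<int32_t> DeltaEncode(const std::vector<int32_t>& v)
{
    std::vector<int32_t> out;
    out.reserve(v.size());
    int32_t prev = 0;
    for (int32_t x : v) { out.push_back(x - prev); prev = x; }
    return out;
}

// Exact inverse: running sum of the deltas restores the original values.
std::vector<int32_t> DeltaDecode(const std::vector<int32_t>& v)
{
    std::vector<int32_t> out;
    out.reserve(v.size());
    int32_t acc = 0;
    for (int32_t d : v) { acc += d; out.push_back(acc); }
    return out;
}
```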

Cheers,
- Wernaeh

 
This thread contains 15 messages.
 
 