High Fidelity Visualisation and Simulation

Chris Thorne
December 2007
dragonmagi at gmail.com


Key words: Simulation, floating point, precision, spatial jitter, positional, rendering, physics, fidelity, truth, center, accuracy, 3D graphics, computation, scalability, design, architecture, software, coordinate, origin.

Raising the fidelity bar

A common issue in modelling, visualisation, simulation, games and other applications is that the resolution of floating point numbers gets worse as the numbers become larger, causing scaling and accuracy problems. Technically, this is because the gap between adjacent representable values is proportional to 2 raised to the power of the floating point exponent, so the representation becomes coarser as the exponent grows. This aspect, though important, is but one part of a much larger set of related simulation fidelity concerns.
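This coarsening can be observed directly. As an illustrative sketch (mine, not from the thesis), the following Python snippet uses the standard library's math.ulp to print the gap between adjacent representable double-precision values at increasing magnitudes:

```python
import math

# math.ulp(x) returns the gap between x and the next representable
# double: the "unit in the last place". It grows with the exponent of x.
for x in [1.0, 1e3, 1e6, 1e9]:
    print(f"spacing near {x:>12g}: {math.ulp(x):.3e}")
```

Each thousand-fold increase in magnitude costs roughly ten bits of resolution, so a coordinate near 10^9 is representable only to within about 10^-7.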

My PhD was part of my efforts to raise the world of modelling, visualisation and simulation to a new level of fidelity: improving the quality and fidelity of motion, interaction and rendering, and improving scalability, accuracy and the repeatability of physics. These benefits do not necessarily have a net performance cost, as the approach can improve performance in some areas, a rare win-win situation. For brevity, hereafter I will collectively refer to modelling, visualisation and simulation as simulation.

Limited and inconsistent fidelity, quality and scalability: The truth is not as true as it could be

To take a high level view, when looking at a result from a computer we are making an observation: observing a simulation viewed from a particular perspective as video output, or observing the result of an equation as a printed numerical value. It would be nice to be assured that these observations are as accurate as they could be. However, often the results are not as accurate as the underlying computer system is capable of producing. This poorer fidelity occurs in designs that reflect an inadequate understanding of fundamental properties of floating point positional cyber spaces, a scarcity of good guidance on what to do about problems such as spatial jitter, and the lack of a holistic approach to minimising error throughout the simulation pipeline. In many cases such error goes unnoticed because there is no obvious visible sign of something wrong, but in some cases the error can be seen in the form of jittery motion and other effects described in the following section.

Symptoms: jitter, rendering artefacts, poor scalability and inconsistent physics

Jittery motion and rendering artefacts, such as tearing, z-fighting or flashing, are often observed when large scale spaces are employed, e.g. in a full scale model of the Earth. These effects occur when using large coordinate values, that is, at large distances from the origin. Many experiments were conducted to investigate these effects and develop an understanding of their underlying characteristics and causes. Descriptions and programs are included here to show some of these effects: spatial jitter in motion, positional jitter causing break-up of models and z-buffer ordering errors, rendering artefacts, and inconsistent physics. The following describes the key characteristics and causes of these effects.

Causes

Execution environment: the precision funnel

Simulations execute in a pipeline of successive stages with progressively decreasing precision: a precision funnel. The hardware of the final graphical output stage has at most 32-bit, or single, precision for coordinates (circa 2007), and sometimes as little as 24 or 16 bits. Early stages of the pipeline are executed on a general purpose client CPU and possibly also on a server. The client and server CPUs afford higher precision, such as double precision.

The literature review revealed a common assumption that insufficient precision is the sole cause of jitter and related problems. Consequently, solutions were designed to address these problems by using higher precision variables. This approach yields little benefit because the final stage hardware remains low precision.
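The funnel is easy to demonstrate. As an illustrative sketch (not the thesis's own code), the following Python snippet simulates the final single-precision stage by round-tripping a double through a 32-bit float: detail that survives the high-precision stages is discarded at the end, no matter how precise the earlier computation was.

```python
import struct

def to_float32(x: float) -> float:
    """Round a double to the nearest single-precision value,
    as the low-precision final stage of the pipeline effectively does."""
    return struct.unpack('f', struct.pack('f', x))[0]

# A point about 6,378 km from the origin (roughly Earth's radius in
# metres), offset by 1 mm. Double precision preserves the offset; after
# the single-precision stage both positions collapse to the same value.
x = 6378137.0
print(to_float32(x + 0.001) - to_float32(x))  # 0.0: the millimetre is lost
print((x + 0.001) - x)                        # about 0.001: preserved in double
```

The single-precision spacing at this magnitude is 0.5 m, so no amount of upstream double-precision arithmetic can deliver millimetre detail to the final stage at that distance from the origin.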

An important part of the problem is a poorly understood or often overlooked property of simulation space: its nonuniform resolution.

Simulation space resolution is not uniform

Figure 1: The sfumato painting technique, used by Leonardo da Vinci on the Mona Lisa, increases in 'smokiness' outwards from the center to give an appearance of curvature. Similarly, simulation exhibits a degradation in the quality of rendering as the viewpoint moves outwards from the origin. This is because the floating point coordinates of simulation space have a decreasing resolution of representable values with increasing distance from the origin, as shown in Figure 2 below.


Figure 2: The nonuniform resolution of floating point numbers. The decreasing resolution leads to a corresponding increase in error, degrading quality outwards from the origin. The resolution induced error, which I call spatial error, can be a more significant cause of rendering artefacts than limited precision in large scale systems. In addition to affecting rendering, spatial error causes jittery motion and interaction and can introduce unacceptable error into physics and other calculations.

Time error

As with spatial position, if time values represented in floating point become large, a simulation can show perceivable changes and inconsistent behaviour. The physics experiments show an example of this effect. In fact, computation using any positional floating point number will cause the types of positional error induced problems discussed here.
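The same degradation applies to a clock accumulated in single precision. In this illustrative Python sketch (my example, with an assumed 60 Hz timestep), a frame's worth of time can no longer be added to the running total once the simulation has been running for about a week:

```python
import struct

def f32(x: float) -> float:
    """Round-trip through 32-bit float to simulate single-precision storage."""
    return struct.unpack('f', struct.pack('f', x))[0]

FRAME = 1.0 / 60.0  # simulation timestep, about 0.0167 s

# Accumulate elapsed time in single precision from two starting points:
# the start of the simulation, and one week (604800 s) in.
for start in (0.0, 604800.0):
    t = f32(start)
    advanced = f32(t + FRAME) - t
    print(f"t = {start:>8g} s: one frame advances the clock by {advanced:.6f} s")
```

At one week, the single-precision spacing is 0.0625 s, which is larger than the 0.0167 s timestep, so the addition rounds back to the old value and simulation time silently freezes.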

Relative spatial error and propagation

Relative error in calculation is traditionally determined as scalar values using the rules of numerical error math, e.g. e_{x+y} = e_x + e_y for addition and e_{xy} = y·e_x + x·e_y for multiplication. However, such methods do not take into account the error inherent in the simulation space itself, which is a geometric error calculated using Pythagoras' Theorem. Figure 3 below illustrates how coordinates are quantised to discrete locations in simulation space: the corners of the two boxes are the only representable coordinate values near the real valued coordinates (x,y,z) and (x',y',z'). In a computer, (x,y,z) and (x',y',z') will each be quantised to (or 'moved' to) one of the eight surrounding representable coordinates (the nearest red spheres). The maximum positional error for point (x,y,z) occurs if it moves along the diagonal AC to either A or C. This error is therefore equal to half the distance between A and C and is calculated by Pythagoras' Theorem.

Geometric property of relative spatial error

Figure 3: A region of space showing two boxes delineating machine representable positions for coordinates (red spheres). When estimating the maximum contribution of spatial error to error in a simulation, one must consider the relative spatial error between two points, because a simulation involves more than one object's location. Even when there is just one object, the viewpoint also has a position and hence there are at least two locations. When each location jitters in opposite directions, this is a pathological case of divergence which increases the relative error. The largest error occurs when they diverge along the diagonals of the cubes, represented by half of the (AC+AB) distances. Alternatively, a pathological convergence to point A could occur if each point was closer to the sphere common to both cubes. This is potentially worse, as calculations that divide by the distance between (x,y,z) and (x',y',z') will produce a divide-by-zero error. As distance from the origin increases, the gap between representable points increases and so also does the relative spatial error.
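The geometry can be sketched numerically. In the following Python example (a hypothetical helper of my own, assuming a cubic cell whose side is the single-precision spacing at that coordinate magnitude), the worst-case quantisation distance for a single point is half the cell's body diagonal, computed by Pythagoras in three dimensions:

```python
import math

def f32_spacing(x: float) -> float:
    """Gap between adjacent single-precision values near |x| (normal range)."""
    _, e = math.frexp(x)       # x = m * 2**e with 0.5 <= m < 1
    return 2.0 ** (e - 24)     # float32 has a 23-bit stored mantissa

def max_quantisation_error(x: float) -> float:
    """Worst-case snap distance for a point near (x, x, x): half the body
    diagonal of the cubic cell of representable coordinates around it."""
    g = f32_spacing(x)
    return 0.5 * math.sqrt(3.0) * g

for d in (1.0, 1e3, 1e6):
    print(f"coordinate magnitude {d:>8g}: "
          f"worst-case spatial error {max_quantisation_error(d):.3e}")
```

For the relative error between two points, the pathological divergence described above can roughly double this figure, since each point may snap to the far corner of its own cell.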

Relative spatial error in positional calculations, such as distance computation, can be incorporated into normal relative error estimation of algebraic equations using the usual rules of relative error calculation, as described above. The maximum relative error for a simulation propagates exponentially with the number of operations performed. This exponential propagation must also be taken into account in any estimation of maximum simulation error.

Core solution to minimising simulation error: a continuous floating origin

"The engines don't move the ship at all!
The ship stays where it is and the 
engines move the universe around it!
"
Cubert Farnsworth, in A Clone of my Own, Futurama, Season 2, Episode 5

Since decreasing resolution with distance from the origin is a major cause of positional error, and the fidelity of simulation space is highest around the origin, the solution begins with keeping the observer at the origin and moving objects in the opposite direction instead. This is called a continuous floating origin because the origin "floats" with the user as the user "navigates" a continuous space. One may say that with this approach navigation does not move one through the virtual universe, in much the same way that the Planet Express engines do not move one through the Futurama universe. Yet in both cases one can navigate to all parts of the respective universes!
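A minimal Python sketch of the idea (the function names are illustrative, not from the thesis): keep world and camera coordinates in double precision, subtract the camera position before handing positions to the single-precision stage, and the observer effectively stays at the origin where resolution is finest.

```python
import struct

def f32(x: float) -> float:
    """Round-trip through 32-bit float to simulate the single-precision stage."""
    return struct.unpack('f', struct.pack('f', x))[0]

def render_position(world_pos: float, camera_pos: float) -> float:
    """Floating origin: difference the coordinates in double precision first,
    so the value reaching the low-precision stage is small and well resolved."""
    return f32(world_pos - camera_pos)

camera = 6378137.0   # viewpoint roughly an Earth radius from the world origin
obj = camera + 2.2   # an object 2.2 m in front of the camera

# Naive pipeline: both large coordinates truncated to float32, then differenced.
naive = f32(obj) - f32(camera)
# Floating origin: difference taken in double precision, then truncated.
floating = render_position(obj, camera)
print(naive, floating)  # 2.0 versus approximately 2.2
```

The naive path loses 20 cm because the single-precision spacing at that distance is 0.5 m; the floating origin path is accurate to well under a millimetre, using exactly the same final-stage precision.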

The floating origin method, and to a lesser extent cruder approaches such as shifting the origin piecewise whenever a viewpoint is selected, lead to radical improvements in fidelity, rendering quality and scalability. This link shows a video of navigation from space to face, illustrating how the approach can greatly enhance scalability. The video is of a full scale planet Earth modelled in one continuous single precision coordinate space. Without shifting the origin it would not be possible to zoom from space down to face-level fidelity.

As Cubert Farnsworth also found, one has to think about things differently to fully comprehend the solution: to design a continuous high fidelity system, think about moving objects around you rather than moving through the environment, about resolution rather than precision, and about algorithms that exploit efficiencies resulting from keeping the observer at the center of spacetime. The floating origin is the core technique from which a suite of origin-centric techniques, process guidelines and a new simulation architecture are developed, all of which provide improvements to the fidelity and even the performance of simulation.

Summary

"Men should not travel, Brutha.
At the center there is truth.
As you travel, so error creeps in."
Small Gods by Terry Pratchett

For simulation applications, computers are incapable of providing the exact truth, but an origin-centric system will provide greater truth due to its greater accuracy. We can rephrase the advice to Brutha by saying, for computer simulation: there is greater truth at the center.

The thesis presents techniques, analytical tools, architecture and process guidelines that holistically address the problem domain with a new origin-centric paradigm for positional computing. Along with the required change in thinking about the problem and solution, this new positional computing paradigm yields many benefits in the form of a continuous rendering space, increased accuracy, smoother motion, higher fidelity rendering and interaction, greater scalability and, in some cases, better performance.

References

A good reference that brings together many of these related issues is HDI/HugesScenes.
Other resources on this subject are:
Floating Origin Unify Community
publications arising and the bibliography mirror.

Additional Information

Some additional information is also provided in this appendix.