Replication in networked games: Space/time consistency (Part 3)

Last time in this series, I talked about latency and consistency models.  I wanted to say more about the last of these, local perception filtering, but ended up running way past the word count I was shooting for. So I decided to split the post and turn that discussion into today’s topic.


Sharkey and Ryan justified local perception filters based on the limitations of human perception. In this post I will take a more physical approach derived from the causal theory of special relativity. For the sake of simplicity, we will restrict ourselves to games which meet the following criteria:

  1. The game exists within a flat space-time.
  2. All objects in the game are point-particles.
  3. All interactions are local.
  4. There is an upper bound on the speed of objects.

Many games meet these requirements. For example, in Realm of the Mad God, gameplay takes place in a flat 2D space. Objects like players or projectiles behave as point-masses moving about their center. Interactions only occur when two objects come within a finite distance of each other. And finally, no projectile, player or monster can move arbitrarily fast. The situation is pretty similar for most shooters and MMOs. But not all games fit this mold. As non-examples, consider any game with rigid body dynamics, non-local constraints or instant-hit weapons. While some of the results in this article may generalize to these games, doing so would require a more careful analysis.

Physical concepts

The basis for space-time consistency is the concept of causal precedence. But to say how this works, we need to recall some concepts from physics.

Let d>0 be the dimension of space and let \mathbb{R}^{d,1} denote a d+1 dimensional real vector space. In the standard basis we can split any vector (x,t) \in \mathbb{R}^{d,1} into a spatial component x \in \mathbb{R}^d and a time coordinate t \in \mathbb{R}. Now let c be the maximum velocity any object can travel (aka the speed of light in the game). Then define an inner product on \mathbb{R}^{d,1},

\langle (x,t), (y,s) \rangle = \langle x,y \rangle - c^2 t s

Using this bilinear form, vectors (x,t) \in \mathbb{R}^{d,1} can be classified into 3 categories:

  1. Time-like vectors: \langle (x,t), (x,t) \rangle < 0
  2. Null vectors: \langle (x,t), (x,t) \rangle = 0
  3. Space-like vectors: \langle (x,t), (x,t) \rangle > 0

As a special consideration, we will say that a vector is causal if it is either time-like or null. We will suppose that time proceeds in the direction (0, +1). In this convention, causal vectors (x,t) \in \mathbb{R}^{d,1} are further classified into 3 subtypes:

  1. Future-directed: t > 0
  2. Zero: t = 0
  3. Past-directed: t < 0
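To make these definitions concrete, here is a small Python sketch (the function names are my own, not from any game library) that computes the inner product above and classifies vectors, with events given as an (x, t) pair and c defaulting to 1:

```python
def minkowski_inner(v, w, c=1.0):
    """<(x,t),(y,s)> = <x,y> - c^2 * t * s, with v = (x, t) and w = (y, s)."""
    (x, t), (y, s) = v, w
    return sum(a * b for a, b in zip(x, y)) - c * c * t * s

def classify(v, c=1.0):
    """Classify a space-time vector as time-like, null or space-like."""
    n = minkowski_inner(v, v, c)
    if n < 0:
        return "time-like"
    elif n == 0:
        return "null"
    return "space-like"

def is_causal(v, c=1.0):
    """Causal = time-like or null."""
    return minkowski_inner(v, v, c) <= 0
```

With c = 1, the vector ((0,), 1) points purely along the time axis and is time-like, while ((1,), 1) lies exactly on the light cone.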

In the same way that a Euclidean space is constructed from a normed vector space, space-time \mathbb{E}^{d,1} is the pseudo-Euclidean space associated to the index-1 vector space \mathbb{R}^{d,1} and its inner product. We will assume that every event (e.g. a collision, a player input, etc.) is associated to a unique point in space-time and, when it is unambiguous, we will identify each event with its point in space-time. Objects in the game are points and their motions sweep out world lines (or trajectories) in space-time.

The relationship between these concepts can be visualized using a Minkowski diagram:

A Minkowski diagram showing the relationship between time-like, space-like and null vectors. Image taken from Roger Horsley's lecture notes on classical mechanics. © Roger Horsley 2011-2012.

If you prefer a more interactive environment, Kristian Evensen made a browser based Minkowski diagram demo:

K. Evensen. (2009) “An interactive Minkowski diagram”

Space-time consistency

With the above physical language, we are now ready to define space-time consistency as it applies to video games.

Given a pair of events p, q \in \mathbb{E}^{d,1}, we say that p causally precedes q (written p \preceq q) if q - p is future-directed causal or zero.

Causal precedence is a partial order on the points of space-time, and it was observed by Zeeman to uniquely determine the topology and symmetry of space-time. (Note: These results were later extended by Stephen Hawking et al. to an arbitrary curved space-time, though based on our first assumption we will not need to consider such generalizations.) Causal precedence determines the following consistency model:

An ordered sequence of events p_0, p_1, ... p_n\in \mathbb{E}^{d,1} is space-time consistent if for all 0 \leq i, j \leq n,  p_i \preceq p_j \implies i \leq j.
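This definition can be checked mechanically. The following sketch (hypothetical helper names, events given as (x, t) pairs) tests causal precedence and space-time consistency directly from the definitions:

```python
def precedes(p, q, c=1.0):
    """True if p causally precedes q: q - p is future-directed causal or zero."""
    (x, t), (y, s) = p, q
    dt = s - t
    dx2 = sum((b - a) ** 2 for a, b in zip(x, y))
    return dt >= 0 and dx2 <= (c * dt) ** 2

def is_spacetime_consistent(events, c=1.0):
    """Check that p_i precedes p_j implies i <= j for an ordered event sequence."""
    n = len(events)
    return not any(
        precedes(events[i], events[j], c)
        for j in range(n)
        for i in range(j + 1, n)
    )
```

Space-like separated events may appear in either order; only causally ordered pairs constrain the sequence.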

Space-time consistency is a special case of causal consistency. Relativistic causal precedence is stricter than game-level causality, because it does not account for game-specific constraints on interactions between objects. For example, a special effect might not influence any game objects, yet in a relativistic sense it causally precedes all events within its future light cone. Still, space-time consistency is more flexible than strict temporal consistency, and as we shall see this can be exploited to reduce latency.

Cone of uncertainty

As a first application of space-time consistency, we derive a set of sufficient conditions for dead-reckoning to correctly predict a remote event. The basis for this analysis is the geometric concept of a light cone, which we now define:

Any closed regular set S \subseteq \mathbb{E}^{d,1} determines a pair of closed regular sets called its causal future and causal past, respectively:

  • Causal future: J^+ (S)= \left \{ p \in \mathbb{E}^{d,1} : \left ( \exists q \in S : q \preceq p \right ) \right \}
  • Causal past: J^- (S)= \left \{ p \in \mathbb{E}^{d,1} : \left ( \exists q \in S : p \preceq q \right ) \right \}

According to our assumptions about the direction of time, if an event had any causal influence on an event in S, then it must be contained in J^-(S). Conversely, events in S can only influence future events in J^+(S). When the set S is a singleton, J^+(S) is called the future light cone of S, J^-(S) the past light cone, and the set J^+(S) \cup J^-(S) the light cone of S.

The causal future and past are idempotent operators,

J^+ (J^+(S)) = J^+ (S),

J^- (J^-(S)) = J^- (S).

Sets which are closed under causal past/future are called closed causal sets. If S is a closed causal set, then so is its regularized complement,

J^- \left ( \overline{J^+ (S)^c } \right ) = \overline{ J^+ (S)^c },

J^+ \left ( \overline{J^- (S)^c } \right ) = \overline{ J^- (S)^c }.

Now, to connect this back to dead-reckoning, let us suppose that there are two types of objects within the game:

  1. Active entities: objects whose motion is controlled by non-deterministic inputs from a remote machine.
  2. Passive entities: objects whose trajectories are determined completely by their interactions with other entities.

For every active entity, we track an event r_i = (x_i, t_i), which is the point in space-time at which it most recently received an input from its remote controller. Let R = \{ r_0, r_1, ... \} be the set of all these events. We call the causal future J^+(R) the cone of uncertainty. Events outside the cone of uncertainty are causally determined by past observations, since as we stated above J^-( \overline{J^+(R)^c} ) = \overline{J^+(R)^c}. Consequently, these events can be predicted by dead-reckoning. On the other hand, events inside J^+(R) could possibly be affected by the actions of remote players and therefore they cannot be rendered without using some type of optimistic prediction.
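A direct membership test for the cone of uncertainty follows from the definition of the causal future. This sketch (hypothetical names; R is the set of last-received input events) flags which events still require optimistic prediction:

```python
def in_cone_of_uncertainty(event, R, c=1.0):
    """True if some last-received-input event r_i = (x_i, t_i) causally
    precedes `event`, i.e. the event lies in the causal future J+(R)."""
    x, t = event
    for xi, ti in R:
        dt = t - ti
        dx2 = sum((a - b) ** 2 for a, b in zip(x, xi))
        if dt >= 0 and dx2 <= (c * dt) ** 2:
            return True
    return False
```

Events for which this returns False can be dead-reckoned from past observations.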

Here is a Minkowski diagram showing how the cone of uncertainty evolves in a networked game as new remote events are processed:

The cone of uncertainty illustrated. In this Minkowski diagram, the vertical axis represents time. The world lines for each player are drawn as colored paths: the red player is local, while the blue and magenta players are remote. The grey region represents the cone of uncertainty, which is updated as new remote events are processed.

Cauchy surfaces

In this section we consider the problem of turning a (d+1)-dimensional collection of world lines in space-time into a d-dimensional picture of the state of the world. The geometry of this process is encoded by a Cauchy surface. Intuitively, a Cauchy surface captures an instant in space-time as it is perceived by some observer. The rendered state of each object is determined by the point at which its world line intersects this surface. That these intersections are well-defined motivates the following definition:

A hypersurface S \subseteq \mathbb{E}^{d,1} is a Cauchy surface if every time-like curve extends to one which intersects S exactly once.

This is not the only way to define a Cauchy surface. Equivalently,

Proposition 1: Any Cauchy surface S partitions a flat space-time into 3 regions:

  • The interior of the causal future: \text{int}(J^+(S))
  • The interior of the causal past: \text{int}(J^-(S))
  • And S itself

If S is a maximal set with these properties, then S is a Cauchy surface.

In a flat space-time, Cauchy surfaces can be parameterized by spatial coordinates. Let S be a Cauchy surface and define \phi_S : \mathbb{E}^d \to \mathbb{E},

\phi_S(x) = \min_{ (x,t) \in S } t.

Then,

S = \{ (x, \phi_S(x)) \in \mathbb{E}^{d,1} : x \in \mathbb{E}^d \}.

The inverse question of when a function \phi : \mathbb{E}^d \to \mathbb{E} determines a Cauchy surface is answered by the following theorem:

Theorem 1: A function \phi : \mathbb{E}^d \to \mathbb{E} determines a Cauchy surface S = \{ (x, \phi(x) ) : x \in \mathbb{E}^d \} if and only if the subgradient of \phi is bounded by \frac{1}{c}.

Proof: To show that this condition is sufficient, it is enough to prove that any time parameterized time-like curve x : \mathbb{E} \to \mathbb{E}^d intersects the surface only once. To show that the curve crosses at least once, recall proposition 1 implies that \phi partitions \mathbb{E}^{d,1} into 3 regions, \{ (x, t) : t < \phi(x) \}, \{ (x, \phi(x)) \} and \{ (x,t) : \phi(x) < t \}, and that the curve begins in the causal past of S and ends in the causal future, so by the intermediate value theorem the curve must cross S. Now let (x,t) be a point of intersection between the curve and the surface. Then the entire future of the curve is contained in the open cone \text{int} (J^+( \{ (x,t) \} )). Similarly, because \nabla_{\dot{x}(t)} \phi (x(t)) < \frac{1}{c}, no other points on S intersect \text{int}( J^+ ( \{ (x,t) \}) ). Ditto for all the points in the causal past, and so any intersection must be unique. To show necessity, take any Cauchy surface S and construct the field \phi_S as shown above. Suppose that there is some point x \in \mathbb{E}^d and unit vector v \in \mathbb{R}^d where |\nabla_{v} \phi_S(x)| > \frac{1}{c}. Without loss of generality, assume \phi_S(x) = 0. Then there exists some \epsilon > \delta > 0 where \phi_S (x + \epsilon v) > \frac{\epsilon}{c - \delta}. Construct the time-like curve q(t) = \left \{ \begin{array}{cc} x & \text{if } t < \delta \\ x + (c - \delta) v t & \text{otherwise} \\ \end{array} \right. By the intermediate value theorem, q crosses S in two locations and so S is not Cauchy. QED
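In practice, Theorem 1 says a candidate surface is Cauchy exactly when \phi is \frac{1}{c}-Lipschitz. A sampled spot check of this condition might look like the following sketch (a necessary-condition test on sample pairs, not a proof; the function name is my own):

```python
import math

def lipschitz_ok(phi, points, c=1.0, eps=1e-12):
    """Check |phi(x) - phi(y)| <= |x - y| / c on every pair of samples."""
    for i, x in enumerate(points):
        for y in points[i + 1:]:
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
            if abs(phi(x) - phi(y)) > dist / c + eps:
                return False
    return True
```

Constant surfaces trivially pass, while any surface whose slope exceeds 1/c fails.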

Consistency revisited

Rendering a game amounts to selecting a Cauchy surface and intersecting it with the world lines of all objects on the screen. The spatial coordinates of these intersections are then used to draw objects to the screen. From this perspective one can interpret a consistency model as determining a Cauchy surface. We now apply this insight to the three consistency models which were discussed last time.

Starting with strict consistency, let t_{\min} = \min_{(x_i, t_i) \in R} t_i be the time of the oldest of the most recently received remote inputs. Then define a constant Cauchy surface,

\phi_{\text{strict}}(x) = t_{\min} .

As we did with the cone of uncertainty, we can visualize the evolution of this Cauchy surface with a Minkowski diagram:

A visualization of the Cauchy surface for strict consistency. The orange line represents the Cauchy surface viewed by the local (red) player. Note that this is some time behind the most recently acknowledged input.

The same analysis applies to optimistic consistency. Let t_{\text{current}} be the time that the most recent local input was processed. Then the optimistic Cauchy surface is,

\phi_{\text{optimistic}}(x) = t_{\text{current}} .

Which gives the following Minkowski diagram:

An optimistic Cauchy surface (shown in orange). The surface tracks the local red player closely, giving more responsive inputs. Unfortunately, the optimistic Cauchy surface intersects the cone of uncertainty and so it requires local prediction to extrapolate the position of remote objects.

Unlike in the case of strict consistency, the optimistic Cauchy surface intersects the cone of uncertainty. As a result, it requires prediction to extrapolate the state of remote entities.

Finally, here is a Minkowski diagram showing the Cauchy surface of a local perception filter:

The Cauchy surface for a local perception filter. Note that this surface closely follows the local player, yet does not intersect the cone of uncertainty.

Time dilation

Local perception filters make finer geometric tradeoffs between local responsiveness and visual consistency. They achieve this by using a curved Cauchy surface. This has a number of advantages in terms of improving responsiveness, but introduces time dilation as a side effect. In a game, this time dilation will be perceived as a virtual acceleration applied to remote objects. To explain this concept, we need to make a distinction between local time and coordinate time. Coordinate time, t, is the time component of the coordinates of an event in Minkowski space-time. Local time, \tau, is the time that an event is rendered for a local user.

Now suppose that we have a Cauchy surface, \phi(x,\tau) varying as function of local time. We want to compute the time dilation experienced by an object with a coordinate time parameterized world line, q : \mathbb{E} \to \mathbb{E}^d. In this situation, define the mapping \hat{t} : \mathbb{E} \to \mathbb{E}, that relates local time to the particle’s coordinate time,

\hat{t}(\tau) = \phi(q( \hat{t}(\tau) ), \tau).

The time dilation observed in the particle is the ratio of change in coordinate time to change in local time, or by the chain rule,

\frac{d \hat{t}}{d \tau} = \frac{ \dot{\phi} ( q, \tau) }{ 1 - \nabla \phi (q, \tau) \cdot \dot{q}(\hat{t}) }.

In general, we would like \frac{d \hat{t}}{d \tau} to be as close to 1 as possible. If the time dilation factor ever becomes \leq 0, then objects will stop or possibly move backwards in time. This condition produces jitter, and it is important that we avoid it in all circumstances. Fortunately, it is not hard to derive a set of sufficient conditions to ensure that this is the case for all time-like paths:

Theorem 2: If \dot{\phi} > 0 and | \nabla \phi | < \frac{1}{c}, then for any time-like particle the time dilation is strictly positive.

The first condition is natural, since we can assume that the Cauchy surface is strictly moving forward in time.  Similarly, according to Theorem 1 the second condition is the same as the requirement that \phi determines a Cauchy surface. If these conditions are violated, then it is possible for objects to jitter or even reverse direction. This is directly analogous to the situation in physics, where if an object travels faster than light it can move backwards in time. The easy fix in a game is to just modify the rules to ensure that this does not ever happen.
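We can check Theorem 2 numerically. The sketch below (my own construction, not the author's demo code) recovers \hat{t}(\tau) from the implicit relation \hat{t} = \phi(q(\hat{t}), \tau) by bisection and estimates the dilation by finite differences; with \dot{\phi} = 1, |\nabla \phi| = 1/2 and a particle moving at 0.9c (taking c = 1), the estimate stays strictly positive:

```python
def coordinate_time(phi, q, tau, t_lo=0.0, t_hi=100.0, iters=60):
    """Solve t = phi(q(t), tau) by bisection; t - phi(q(t), tau) is
    monotone increasing when phi satisfies the Cauchy-surface condition."""
    for _ in range(iters):
        t = 0.5 * (t_lo + t_hi)
        if t > phi(q(t), tau):
            t_hi = t
        else:
            t_lo = t
    return 0.5 * (t_lo + t_hi)

def time_dilation(phi, q, tau, h=1e-6):
    """Finite-difference estimate of d t-hat / d tau."""
    return (coordinate_time(phi, q, tau + h) - coordinate_time(phi, q, tau)) / h

# Example with c = 1: phi(x, tau) = tau + x/2 and a particle moving at 0.9c.
phi = lambda x, tau: tau + 0.5 * x[0]
q = lambda t: (0.9 * t,)
```

For this example the implicit relation solves in closed form as \hat{t} = \tau / 0.55, so the dilation should come out near 1/0.55 ≈ 1.82.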

Intersecting world lines

Another tricky issue is finding the intersection of the world lines with the Cauchy surface. For special choices of world lines and Cauchy surfaces, it is possible to compute these intersections in closed form. For example, in the original local perception filter paper Sharkey and Ryan carry out this analysis under the assumption that the world line for each particle is a polynomial curve. In general, though, it is necessary to use a numerical method to solve for the local time of each object. Fortunately, the requirements of space-time consistency ensure that this is not very expensive. Given a world line q as before, we observe the following monotonicity properties:

Theorem 3: For any Cauchy surface \phi and time-like curve q:

  • If \hat{t} > \phi( q(\hat{t}), \tau), then for all u > \hat{t}, u > \phi(q(u), \tau).
  • If \hat{t} < \phi( q(\hat{t}), \tau), then for all u < \hat{t}, u < \phi(q(u), \tau)
  • There is only one \hat{t} where \hat{t} = \phi(q(\hat{t}), \tau)

As a result, we can use bisection to find \hat{t} to O(n) bits of precision in O(n) queries of the Cauchy surface. In pseudocode, we have the following algorithm:

findWorldLineIntersection(phi, q, t0, t1, n):
   for i=0 to n:
      t = (t0 + t1) / 2
      if t > phi(q(t)):
          t1 = t
      else:
          t0 = t
   return t0

The error decreases exponentially with the number of iterations, which is about as fast as one can hope to achieve. Here phi is a function encoding the Cauchy surface as described above, q is a coordinate-time-parameterized curve representing the world line of the particle, t0 and t1 are lower and upper bounds on the intersection region, and n is the number of iterations to perform. Higher values of n give more accurate results, and depending on the initial choice of t0 and t1, no more than 20 or so iterations should be necessary for floating point precision.
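Here is a runnable Python version of the same bisection (names are mine), using the monotone test t > \phi(q(t)) guaranteed by Theorem 3:

```python
def find_world_line_intersection(phi, q, t0, t1, n=20):
    """Bisect for the coordinate time at which world line q crosses the
    Cauchy surface phi; t0 and t1 bracket the intersection."""
    for _ in range(n):
        t = 0.5 * (t0 + t1)
        if t > phi(q(t)):
            t1 = t   # t is above the surface: shrink the upper bound
        else:
            t0 = t   # t is on or below the surface: raise the lower bound
    return t0
```

For the constant surface \phi = 5 and a stationary particle, the intersection is at t = 5; for \phi(x) = 2 + x/2 with q(t) = (t/2,), it is at t = 8/3.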

Limitations of local perception filters

While local perception filters give faster response times than strict surfaces, they do not always achieve as low a latency as is possible in optimistic prediction. The reason for this is that we do not allow the local perception filter to cross into the cone of uncertainty. If the local player passes into this region, then they will necessarily experience input lag.

We can quantify the situations where this happens. As a first observation, we note that the boundary of the cone of uncertainty is a Cauchy surface and so it can be described parametrically. That is, define a function h : \mathbb{E}^d \to \mathbb{E} where,

h(x) = \min_{(x_i, t_i) \in R} \frac{|x - x_i|}{c}+t_i.

We will call h the horizon. If the last locally processed input was at (x_0, t_0) \in \mathbb{E}^{d,1}, then the minimum input delay of the local perception filter is,

\Delta = t_0 - h(x_0).

The magnitude of \Delta is a function of the last acknowledged position and ping of all remote players. If \Delta \leq 0, then we can respond to local inputs immediately. This leads to the following theorem which gives necessary conditions for a local input to be processed immediately:

Theorem 4: If the round trip ping of each remote player is \Delta_i and their last acknowledged position is x_i, then in order for a local perception filter to process a local input without lag, we must require that:

c \Delta_i < |x_i - x_0|

Under this interpretation, each remote player sweeps out a sphere of influence,

S_i = \left \{ x \in \mathbb{E}^d : |x - x_i| \leq c \Delta_i \right \}

And if the local player passes into a sphere of influence they will experience lag which is proportional to their distance to the remote player. The size of these spheres of influence is determined by the ping of each remote player and the speed of light. As a result, players with a higher ping will inflict more lag over a larger region of space. Similarly, increasing the speed of light expands the spheres of influence for all players and can thus cause the local player to experience more input latency. So, the moral of this theorem is that if a game has a lot of close quarters combat or fast moving projectiles, then local perception filters might not be much help. On the other hand, if you can ensure that players are separated by at least c \Delta distance from each other, then local perception filters completely eliminate input lag.
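These quantities are straightforward to compute. A sketch (hypothetical names; R is the set of last-received input events as before):

```python
import math

def horizon(x, R, c=1.0):
    """h(x) = min_i |x - x_i| / c + t_i, the boundary of the cone of uncertainty."""
    return min(
        math.sqrt(sum((a - b) ** 2 for a, b in zip(x, xi))) / c + ti
        for xi, ti in R
    )

def min_input_delay(x0, t0, R, c=1.0):
    """Delta = t_0 - h(x_0); positive values mean unavoidable input lag."""
    return t0 - horizon(x0, R, c)

def in_sphere_of_influence(x, xi, ping_i, c=1.0):
    """True if x lies within distance c * Delta_i of remote player i."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, xi))) <= c * ping_i
```

A local player well outside every sphere of influence gets a non-positive delay and so can be processed without lag.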

Software engineering issues

In general, it can be difficult to bolt networked replication on top of an existing game. The same is true of local perception filters. At minimum, a game engine must have the following features:

  • Decoupled rendering
  • Deterministic updates
  • Persistence

Each of these capabilities requires architectural modifications. These changes are easy to implement if incorporated early in the design process, and so if networking is a priority then it pays to deal with them up front.

Decoupled rendering

Though this is standard practice today, it is especially important in a networked game that updates are not tied to rendered frames. Not only does this allow for fast and fluid animations regardless of the update rate, but it also makes games easier to debug by removing frame-rate-dependent behavior. Glenn Fiedler popularized this idea in his writing on networked games:

G. Fiedler. (2006) “Fix your time step!”

That article covers the basics pretty well, though it glosses over some subtle issues like input handling. In a fixed time step game it is more natural to use polling to handle inputs instead of asynchronous interrupts (i.e. events). In an environment like the DOM, which defaults to the latter, it is necessary to enqueue events and process them within the main loop.
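As a sketch of this pattern (a toy class of my own, not from any engine): inputs arrive asynchronously, are appended to a queue, and are drained once per fixed update tick, so the simulation result depends only on which tick an input landed in:

```python
from collections import deque

DT = 1.0 / 60.0  # fixed update interval

class FixedStepGame:
    def __init__(self):
        self.pending = deque()  # inputs received since the last tick
        self.position = 0.0
        self.ticks = 0

    def on_input(self, impulse):
        """Event handler: just enqueue, never touch game state directly."""
        self.pending.append(impulse)

    def tick(self):
        """One fixed update step: poll queued inputs, then integrate."""
        while self.pending:
            self.position += self.pending.popleft()
        self.position += 1.0 * DT  # constant drift, purely for illustration
        self.ticks += 1
```

In a browser the `on_input` role would be played by DOM event callbacks, with `tick` driven by the main loop.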

In some applications, like virtual reality, it may be important to respond to certain user inputs immediately. For example, the current position of the user’s head must be taken into account before rendering the latest frame or else the player will experience motion sickness. In this situation it is important that these inputs have a limited effect on the state of the game, or that their effects on the state can be processed at a lower latency.

Adding local perception filters does not change the update step, but it does modify how the game is rendered. In pseudocode, this is what the drawFrame procedure might look like:

    phi = constructLocalPerceptionFilter(localTime)
    x[] = rendered positions of all objects
    for each object i:
        x[i] = q[i](findWorldLineIntersection(phi,q[i],t0,t1,20))

Deterministic updates

In order for a server to handle inputs from remote clients in any order, it is necessary that game updates are deterministic. That is, we require that given some list of events and a state, there is a function update that completely determines the successor state:

\text{update} : \text{state} \times \text{events} \to \text{state}.

Deterministic updates also simplify testing a game, since it is possible to record and play back a sequence of events exactly. The cost though is that all system state including the value of each random number generator must be passed as input to the update function.
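A minimal sketch of such an update function (toy state and event names of my own): the random number generator's state is part of the game state, so replaying the same events from the same state always yields the same result:

```python
def update(state, events):
    """Pure update: (state, events) -> successor state; never mutates its input."""
    x, rng = state["x"], state["rng"]
    for e in events:
        if e == "move":
            x += 1
        elif e == "jitter":
            rng = (1103515245 * rng + 12345) % (2 ** 31)  # LCG step; RNG state is threaded through
            x += (rng % 3) - 1                            # -1, 0 or +1
    return {"x": x, "rng": rng}
```

Because the RNG state is explicit, recording the event list is enough to reproduce any session exactly.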


Persistence

Lastly, local perception filters need to maintain the history of the game. This representation should support efficient queries of the world lines of each object at arbitrary points in coordinate time, as well as branching updates (since remote events can be processed out of order). As a sketch of how this can be implemented, refer to the following article by Forrest Smith:

F. Smith. (2013) “The Tech of Planetary Annihilation: ChronoCam”

At a high level, the ChronoCam system used by Planetary Annihilation gives a streaming persistent history of all world lines in the game. In addition to being a necessary component in realizing local perception filters, Forrest Smith observes that maintaining a persistent history gives the following incidental benefits:

  • Robust demo recordings – Since only positions are saved, demos are all upwards compatible and seeking/playback is very efficient
  • Bandwidth savings – Fewer position updates can be sent to compensate for low capacity connections and paths can be simplified to save space
  • Cheating prevention – The state of hidden units does not need to be replicated

The general problem of storing and maintaining the history of a system is known in data structures as “persistence”. Any data structure in the pointer-machine model with bounded in-degree can be made into a persistent version of the same data structure with only O(1) overhead. This transformation is described in the following paper:

J.R. Driscoll, N. Sarnak, D.D. Sleator, R.E. Tarjan. (1989) “Making data structures persistent” JCSS

While the DSST transformation is automatic, in practice it requires a certain amount of human finesse to apply correctly. One of the main sticking points is that bounded in-degree rules out certain classes of objects like iterators. Still, it is a useful idea and understanding it gives the basic tools necessary to implement persistent data structures.
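As a much simpler starting point than the full DSST transformation, here is a "fat node" style history sketch (my own toy code): each object stores time-stamped samples of its world line, out-of-order remote updates are simply inserted into place, and queries at any coordinate time take O(log n):

```python
import bisect

class WorldLineHistory:
    def __init__(self):
        self.samples = []  # (time, position), kept sorted by time

    def record(self, t, x):
        """Insert a sample; works even if remote events arrive out of order."""
        bisect.insort(self.samples, (t, x))

    def at(self, t):
        """Latest recorded position at or before coordinate time t."""
        i = bisect.bisect_right(self.samples, (t, float("inf")))
        return self.samples[i - 1][1] if i > 0 else None
```

A production version would also need branching (multiple speculative futures) and interpolation between samples, which this sketch omits.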

Functional programming

Finally, I will conclude this section by observing that we can get all of the above features automatically if we use functional programming. For example, “deterministic” in the sense we are using it is just another word for functional purity. Similarly, data structure persistence is a weaker requirement than total immutability. Thus it stands to reason that if we took the more aggressive position and functionalized everything, we would get all of the above features completely by accident. While I’m not sure how practical it is to write a game this way, it seems plausible that if functional programming becomes more popular, then local perception filters may see wider use within the game programming community.

A simplified shooter

To demonstrate these principles I made a simple interactive demonstration. You can try it out here:


One thing which I did not cover in detail was how to choose the Cauchy surface for the local perception filter in the first place. The main reason I didn’t bring this up was that the word count on this post had already spiraled out of control, and so I had to cut some material just to get it finished. I may revisit this topic in a later post and compare some different options.

In the demo, the local perception filter is implemented using the following function of the horizon:

\phi(x) = \min(t_0, h(x))

where t_0 is the time of the local player. Some other options would be to use a smoother surface or some offset of the true horizon. At any rate, this surface seems to give acceptable results, but I fully expect that it could be improved drastically with more careful consideration.
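Concretely, using the horizon h from the previous section, the demo's surface can be sketched as follows (assumed helper of my own, matching the formula above):

```python
import math

def lpf_surface(x, t0, R, c=1.0):
    """phi(x) = min(t_0, h(x)): track local time t_0, but clamp to the
    horizon so the surface never enters the cone of uncertainty."""
    h = min(
        math.sqrt(sum((a - b) ** 2 for a, b in zip(x, xi))) / c + ti
        for xi, ti in R
    )
    return min(t0, h)
```

Near a remote player the surface is held back at the horizon; far away it coincides with local time, so local inputs there are rendered without lag.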


This post elaborates on the ideas sketched out by Sharkey and Ryan. In particular, we handle the problem of intersecting world lines with the Cauchy surface in general and give precise conditions on the geometry of the Cauchy surface for it to be jitter free. I think that this is the first time that anyone has proposed using a binary search to find the coordinate time for each world line intersection, though it is kind of an obvious idea in hindsight so I would not be surprised if it is already known. Additionally the precise interpretation of special relativity as applied to video games in this post is new, though the conceptual origin is again in Sharkey and Ryan’s paper.

In closing, local perception filters are a promising approach to latency hiding in networked games. Though they cannot eliminate lag in all cases, in games where most interactions are mediated by projectiles they can drastically reduce it. Understanding space-time consistency and local perception filtering is also helpful in general, and gives insight into the problems in networked games.

Next time

In the next post I want to move past latency and analyze the problem of bandwidth. It will probably be less mathematical than this post.

Replication in networked games: Latency (Part 2)

The last post in this series surveyed replication in network games at a high level. In this article and the next, I want to go deeper into the issues surrounding replication. One of the most annoying aspects of online gaming is latency. Latency, or lag, is the amount of time between when a user pushes a button and when the state of the game updates. To quantify the effects of lag, we refer to the following experiment by Pantel and Wolf:

L. Pantel, L. Wolf. (2002) “On the impact of delay in real-time multiplayer games” NOSSDAV 2002

In that experiment, they measured the performance of players in a racing game with varying input delays. Amongst other things they conclude that,

  • Latency is negatively correlated with performance and subjective immersion
  • Input latency above 500ms is not acceptable
  • Below 50ms the effects of latency are imperceptible

And these conclusions are validated by other experiments. So given that latency is bad, we come to the topic of this post which is:

How do we get rid of lag?

Fighting lag

At first, eliminating lag may seem impossible since there is no way for a client to know what some remote player did until their input has completed a round trip over the network. Underlying this is the requirement that all events occur sequentially within a single shared frame of reference.

If we want to beat the round-trip-time limit, then the only solution is to give each player their own local frame of reference. Consequently different players will perceive the same events happening at different times. From a physical perspective this is intuitive. After all, special relativity tells us that this is how nature works, and spatially distributed systems like networked games must obey similar constraints. For example, imagine we have two players and they both shoot a particle.  It could be that player 1 observes their own particle as emitted before player 2’s and vice-versa:

Scenario 1: Red player shoots first
Scenario 2: Blue player shoots first

Because the players are not directly interacting, either scenario represents a plausible sequence of events. At the same time though, some events cannot be swapped without directly violating the rules of the game. For example, consider a player opening a door and walking through it; if played in the opposite order, the player would appear to clip through a solid wall:

A plausible ordering of events: a player opens a door and then walks through it.
An implausible ordering of events: a player walks through a door and then opens it.

While it might not matter in the first scenario who shot first, in the second situation we definitely do not want players ghosting around through doors. It seems like it should be easy to tell the difference, and so we ask the following innocuous sounding question:

How do we know if a sequence of events is consistent?

Though this question may sound trivial, in reality it is at the heart of building a concurrent distributed system. In fact, there are multiple answers and the key to eliminating lag lies in subtle details of how one chooses to define consistency.

Consistency models for games

A consistency model determines the visibility and apparent ordering of events in a distributed system. Defining a useful consistency model requires a delicate balance between parallelism and interaction. Consistency requirements can be informed by game design and vice versa.  For example, certain features like instant hit weapons or rigid body constraints may be harder to implement in some consistency models than others.

While many consistency models have been invented by distributed systems engineers, in the context of games the two most popular are strict consistency and optimistic consistency (aka client-side prediction). These two methods represent the extremes of the tradeoff space, with the first giving perfect global synchronization for all players and the latter giving the minimum possible latency. What is less well-known (and also what I want to focus on in the next article) is the large and widely underutilized collection of ideas that live somewhere in the middle.

Strict consistency

Strict consistency requires that all clients see all events in the same order. It is both the most constrained consistency model and also the easiest to understand. Unfortunately, it also causes lag.  This consequence is closely related to the famous CAP theorem, where we recall that the C in CAP stands for Consistency in the strong sense (and more specifically linearizability). If we want to beat lag, then this requirement has to be relaxed.
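To make this concrete, here is a minimal sketch of deterministic lockstep, which is how strict consistency is typically realized in games. All of the names here (`LockstepSimulation`, `receiveInput`, `advance`) are my own invention for illustration; a real engine would also need timeouts, input buffering and an input delay window.

```javascript
// Sketch of deterministic lockstep: the simulation advances only when
// inputs from *all* players for the current tick have arrived, so every
// client applies the same events in the same order.
class LockstepSimulation {
  constructor(playerIds, step) {
    this.playerIds = playerIds;   // all players in the session
    this.step = step;             // deterministic update: (state, inputs) -> state
    this.tick = 0;
    this.state = { tick: 0 };
    this.pending = new Map();     // tick -> Map(playerId -> input)
  }

  // Called when any player's input for a tick arrives over the network.
  receiveInput(tick, playerId, input) {
    if (!this.pending.has(tick)) this.pending.set(tick, new Map());
    this.pending.get(tick).set(playerId, input);
  }

  // Advance as far as complete input sets allow; everything else must wait.
  advance() {
    while (true) {
      const inputs = this.pending.get(this.tick);
      if (!inputs || inputs.size < this.playerIds.length) return; // stall
      this.state = this.step(this.state, inputs);
      this.pending.delete(this.tick);
      this.tick++;
    }
  }
}

// Usage: two players; the state just counts executed ticks.
const sim = new LockstepSimulation(["red", "blue"],
  (state, inputs) => ({ tick: state.tick + 1, last: [...inputs.keys()].sort() }));
sim.receiveInput(0, "red", "fire");
sim.advance();               // blue's input is missing: simulation stalls
const stalled = sim.tick;    // still 0
sim.receiveInput(0, "blue", "move");
sim.advance();               // now tick 0 can be executed
```

The stall in the middle of the usage example is exactly where the lag comes from: no client may advance past a tick until the slowest player's input for that tick has arrived.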

Optimistic consistency

The opposite extreme is to basically toss all consistency requirements out the window and let every client update their model however and whenever they want. This approach is charitably called optimistic consistency in the distributed systems community, or client-side prediction or dead reckoning in the game development business (depending on who you talk to). There are plenty of articles and technical documents describing how client-side prediction is used in games, though the one I most recommend is Gabriel Gambetta’s tutorial:

G. Gambetta. (2014) “Fast paced multiplayer”

The obvious problem with optimistic consistency is that local replicas diverge. To mitigate these consequences, optimistic replication schemes must implement some form of correction or reconciliation. A general strategy is the undo/redo method, which rewinds the state to the point that a particular event was executed and then replays all subsequent events. The problem with undo/redo is that some operations may conflict. To illustrate this, consider the following scenario as viewed by two different players (red and blue). Suppose that the red player fires a laser at the blue player, who tries to dodge behind a barrier.  Then it is possible with optimistic consistency for both players to see different views of events as follows:

Red player’s perspective: The red player shoots at the blue player and sees the shot hit before the blue player can get to cover.
Blue player’s perspective: The blue player runs for cover and sees the red player shoot and miss.


In optimistic consistency, there is no systematically prescribed way to determine which sequence of events is correct. It is up to the programmer to add extra logic to handle all the possibilities on a case-by-case basis. In effect, this is equivalent to defining a weak consistency model implicitly. But because optimistic consistency does not start from any principled assumptions, teasing out an acceptable conflict resolution scheme is more art than science. External factors like game balance greatly inform what sort of actions should have priority in a given scenario.
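The undo/redo method described above can be sketched in a few lines. The event format, the `makeReplica` helper and the tie-breaking rule are all illustrative assumptions, not a standard API:

```javascript
// Sketch of undo/redo reconciliation: events carry timestamps, and when a
// remote event arrives "in the past" we rewind to the initial snapshot and
// replay the entire log in timestamp order.
function makeReplica(initialState, applyEvent) {
  const log = [];  // events kept sorted by (time, id) for a deterministic order
  let state = JSON.parse(JSON.stringify(initialState));

  function insertEvent(ev) {
    log.push(ev);
    log.sort((a, b) => a.time - b.time || (a.id < b.id ? -1 : 1));
    // Undo: restore the snapshot.  Redo: replay every event in order.
    state = JSON.parse(JSON.stringify(initialState));
    for (const e of log) state = applyEvent(state, e);
  }

  return { insertEvent, getState: () => state };
}

// Usage: a toy state where each event adds a value and records its order.
const replica = makeReplica({ sum: 0, order: [] }, (s, e) => ({
  sum: s.sum + e.value,
  order: [...s.order, e.id],
}));
replica.insertEvent({ id: "b", time: 2, value: 10 });
replica.insertEvent({ id: "a", time: 1, value: 1 }); // arrives late, replayed first
```

Note that rewinding to the initial snapshot every time is wasteful; real implementations keep periodic snapshots and only replay from the most recent one before the conflicting event. The conflict problem remains either way: replaying in a different order can change the outcome, which is exactly the laser/cover scenario above.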

But the problems with optimistic consistency do not stop at conflict resolution. Unlike databases, which operate on discrete objects, games need to present a spatially and temporally continuous environment. Directly applying corrections to the state of the game causes remote objects to teleport. To illustrate this effect, consider a scenario where the blue player watches the red player walk through a serpentine corridor. From the red player’s perspective, the world looks like this:

The red player walks through a maze with no lag.

However, if the blue player is using optimistic consistency and resolving remote corrections by directly updating the state, then the rendered view of the red player’s trajectory will skip around:

Directly applying corrections to the state causes visible discontinuities in the motion of remote objects. Here the blue player observes the red player’s motion delayed by 4 frames.

A commonly proposed solution is to smooth out the discontinuities using interpolation (for example exponential damping). Here is the same trajectory rendered with a damping ratio of 2.5:

The remote player’s path is smoothed using a damping ratio of 2.5 to remove discontinuities.

Other popular interpolation strategies include using splines or path planning to hide errors. Still, interpolation (like conflict resolution) is limited with respect to the latencies and inconsistencies it can hide.  Under a larger delay, damping can cause glitches like players sliding through walls and solid objects:

The blue player observes the red player with a lag of 8 frames using the same damping ratio. Larger deviations in the red player’s trajectory are noticeable.

While it is impossible to eliminate correction errors, their severity can be reduced with great programmer effort and conscientious design choices. Choosing optimistic consistency as a networking model also increases the engineering costs of a game engine. At minimum, an optimistically replicated game must maintain at least 3 different subsystems relating to the game logic:

  1. First, there is the core game logic which describes how events translate to state updates.
  2. Second, there is the conflict resolution code which determines how remote updates are handled.
  3. Finally there is the interpolation logic, which must seamlessly hide state corrections from the player.

Each of these systems interact with one another and changes in one must be propagated to the others. This increased coupling slows down testing and makes modifications to the game logic harder.

Local perception filters

Fortunately, there is a third option that is both faster than strict consistency and simpler than optimism. That this is possible in the first place should not be too surprising. For example, causal consistency gives faster updates than strict consistency while maintaining most of its benefits. However, causal consistency – like most models of consistency in distributed systems – applies to discrete objects, not geometric structures. To apply causal consistency to games, we need to incorporate space itself. One of the pioneering works in this area is Sharkey, Ryan and Roberts’ local perception filters:

P.M. Sharkey, M.D. Ryan, D.J. Roberts. (1998) “A local perception filter for distributed virtual environments” Virtual Reality Annual International Symposium

Local perception filters hide latency by rendering remote objects at an earlier point in time. This time dilation effect spreads out from remote players to all nearby objects. To illustrate this effect, consider again a situation with two players (red and blue) where red is shooting at blue. In this case the red player sees the shot slow down as it approaches the remote player:

Red sees the bullet slow down as it approaches blue before speeding up as it passes.

Meanwhile the remote player sees the bullet shot at a higher than normal velocity and then decelerate to a normal speed:

The blue player sees the bullet start out fast and then decelerate to a normal speed.

An important property of local perception filters is that they preserve causality, assuming that all interactions are local and that no object travels faster than some fixed velocity c. As a result, they technically satisfy the requirements of causal consistency. This means that there is no need to implement any special correction or client-side prediction models: local perception filters just work.
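The core mechanism can be sketched in one dimension. This is my own simplified illustration of the idea, not the exact formulation from the paper: each object is rendered at a delayed time t − τ(x), where the delay τ interpolates from zero at the local player up to the full network delay at the remote player.

```javascript
// Sketch of a 1D local perception filter.  The delay at position x grows
// linearly with the fraction of the distance covered from the local player
// toward the remote player (clamped to [0, 1]).
function perceptionDelay(x, localPos, remotePos, remoteDelay) {
  const span = Math.abs(remotePos - localPos);
  if (span === 0) return remoteDelay;
  const f = Math.min(1, Math.max(0, Math.abs(x - localPos) / span));
  return f * remoteDelay; // 0 at the local player, remoteDelay at the remote one
}

// Usage: local player at x = 0, remote player at x = 100 with 200 ms of lag.
// A bullet is rendered further "in the past" the closer it gets to the
// remote player, which is why it appears to slow down on approach.
const nearLocal = perceptionDelay(10, 0, 100, 0.2);  // small delay
const midway = perceptionDelay(50, 0, 100, 0.2);     // half the delay
const nearRemote = perceptionDelay(95, 0, 100, 0.2); // almost the full delay
```

For this to preserve causality, the delay field cannot change faster than objects can move, which is where the speed limit c enters: the gradient of τ must be bounded so that no object ever needs to be rendered at a time it has not yet reached.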

However, they are not without their own drawbacks. The locality and speed-limit assumptions rule out “railgun”-like weapons and other instant-hit effects. More subtly, the propagation of rigid constraints violates the c speed limit, and so rigid body dynamics is out too. Finally, while local perception filters can help reduce lag, they do not eliminate it. Standing next to an extremely laggy remote player will slow down your inputs substantially. Some discussion of these consequences can be found in the follow-up paper by Ryan and Sharkey:

M.D. Ryan, P.M. Sharkey. (1998) “Distortion in distributed virtual environments” Proceedings of Virtual Worlds

Also, unlike prediction-based consistency, local perception filters make it easy to implement some fun special effects in multiplayer games. Some interesting examples include Matrix-style bullet time and Prince of Persia’s instant rewind. Of course, it is questionable how practical or fair these effects would be, since they necessarily inflict a substantial amount of lag on all players besides the one using rewind/bullet time.

Finally, it is also worth pointing out that the concept of using local time dilation to hide latency appears to have been independently discovered several times.  For example,

X. Qin. (2002) “Delayed consistency model for distributed interactive systems with real-time continuous media” Journal of Software


In this article we surveyed three different techniques for dealing with latency in networked games, though our review was by no means exhaustive. Also, some of these methods are not mutually exclusive. For example, it is possible to combine optimistic replication with local perception filters to offset some of the drawbacks of a purely optimistic approach.

In the end though, selecting a consistency model is about making the right trade-offs. In some situations, strict input-latency demands may justify the complexity and glitches that come with optimistic replication. In other situations, where interactions are mostly local, perhaps local perception filters are more appropriate. And for slower paced games where latency is not a great concern, strict consistency may be sufficient and also easier to implement.

Next time

In the next article, we will talk more about space/time consistency. I’ll also present a more rigorous formalization of local perception filters as a consistency model and prove a few theorems about networking for games. Finally, I’ll write about how to implement local perception filters in a video game.

Replication in networked games: Overview (Part 1)

It has been a while since I’ve written a post, mostly because I had to work on my thesis proposal for the last few months.  Now that it is done and I have a bit of breathing room, I can write about one of the problems that has been bouncing around in my head for a while: how to implement browser-based networked multiplayer games.

I want to write about this subject because it seems very reasonable that JavaScript-based multiplayer browser games will become a very big deal in the near future.  Now that most browsers support WebWorkers, WebGL and WebAudio, it is possible to build efficient games in JavaScript with graphical performance comparable to native applications.  With WebSockets and WebRTC it is possible to get fast realtime networked communication between multiple users.  And finally, with node.js it is possible to run a persistent distributed server for your game while keeping everything in the same programming language.

Still, despite the fact that all of the big pieces of infrastructure are finally in place, there aren’t yet a lot of success stories in the multiplayer HTML 5 space.  Part of the problem is that having all the raw pieces isn’t quite enough by itself, and there is still a lot of low level engineering work necessary to make them all fit together easily.  But even more broadly, networked games are very difficult to implement, and there are not many popular articles or tools to help with the process of creating them.  My goal in writing this series of posts is to help correct this situation.  Eventually, I will go into more detail relating to client-server game replication, but first I want to define the scope of the problem and survey some general approaches.

Overview of networked games

Creating a networked multiplayer game is a much harder task than writing a single player or a hot-seat multiplayer game.  In essence, multiplayer networked games are distributed systems, and almost everything about distributed computing is more difficult and painful than working on a single computer (though maybe it doesn’t have to be).  Deployment, administration, debugging, and testing are all substantially complicated when done across a network, making the basic workflow more complex and laborious.  There are also conceptually new sorts of problems which are unique to distributed systems, like security and replication, which one never encounters in the single-computer world.


One thing which I deliberately want to avoid discussing in this post is the choice of networking library.  It seems that many posts on game networking become mired in details like hole punching, choosing between TCP vs UDP, etc.  On the one hand these issues are crucially important, in the same way that the programming language you choose affects your productivity and the performance of your code.  But on the other hand, the nature of these abstractions is that they only shift the constants involved without changing the underlying problem.  For example, selecting UDP over TCP at best gives a constant factor improvement in latency (assuming constant network parameters). In a similar vein, the C programming language gives better realtime performance than a garbage collected language at the expense of forcing the programmer to explicitly free all used memory. However, whether one chooses to work in C or Java or use UDP instead of TCP, the problems that need to be solved are essentially the same. So to avoid getting bogged down we won’t worry about the particulars of the communication layer, leaving that choice up to the reader.  Instead, we will model the performance of our communication channels abstractly in terms of bandwidth, latency, and the network topology of the collective system.

Administration and security

Similarly, I am not going to spend much time in this series talking about security. Unlike the choice of communication library though, security is much less easily written off.  So I will say a few words about it before moving on.  In the context of games, the main security concern is to prevent cheating.  At a high level, there are three ways players cheat in a networked game:

  1. Exploits: abusing bugs in the game logic.
  2. Information leakage: reading hidden state that the player should not be able to see.
  3. Automation: augmenting player abilities with scripts or robots (for example, aimbots).

Preventing exploits is generally as “simple” as not writing any bugs.  Beyond generally applying good software development practices, there is really no way to completely rule them out.  While exploits tend to be fairly rare, they can have devastating consequences in persistent online games.  So it is often critical to support good development practices with monitoring systems allowing human administrators to identify and stop exploits before they can cause major damage.

Information leakage on the other hand is a more difficult problem to solve.  The impact of information leakage largely depends on the nature of the game and the type of data which is being leaked.  In many cases, exposing positions of occluded objects may not matter a whole lot.  On the other hand, in a real time strategy game revealing the positions and types of hidden units could jeopardize the fairness of the game.  In general, the main strategy for dealing with information leakage is to minimize the amount of state which is replicated to each client.  This is nice as a goal, since it has the added benefit of improving performance (as we shall discuss later), but it may not always be practical.
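The strategy of minimizing replicated state is usually called interest management. Here is a minimal sketch using a distance-based visibility rule; the function name and the radius test are illustrative assumptions, and a real game might use occlusion or fog-of-war rules instead:

```javascript
// Sketch of interest management: replicate to each client only the entities
// within its visibility radius.  Anything filtered out here is simply never
// sent, so it cannot be leaked by a modified client.
function visibleState(entities, viewer, radius) {
  const r2 = radius * radius;
  return entities.filter((e) => {
    const dx = e.x - viewer.x, dy = e.y - viewer.y;
    return dx * dx + dy * dy <= r2; // keep only entities inside the radius
  });
}

// Usage: the hidden unit at (100, 100) is never included in this client's
// snapshot, so its position cannot be leaked.
const entities = [
  { id: "self", x: 0, y: 0 },
  { id: "nearby", x: 3, y: 4 },
  { id: "hidden", x: 100, y: 100 },
];
const snapshot = visibleState(entities, { x: 0, y: 0 }, 10);
```

This is also where the performance benefit mentioned above comes from: the filtered snapshot is strictly smaller than the full state, so less data crosses the network.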

Finally, preventing automation is the hardest security problem of all.  For totally automated systems, one can use techniques like CAPTCHAs or human administration to try to discover which players are actually robots.  However players which use partial automation/augmentation (like aimbots) remain extremely difficult to detect.  In this situation, the only real technological option is to force users to install anti-cheating measures like DRM/spyware and audit the state of their computer for cheat programs. Unfortunately, these measures are highly intrusive and unpopular amongst users, and because they ultimately must be run on the user’s machine they are vulnerable to tampering and thus have dubious effectiveness.


Now that we’ve established a boundary by defining what this series is not about, we can move on to saying what it is actually about: namely, replication. The goal of replication is to ensure that all of the players in the game have a consistent model of the game state. Replication is the absolute minimum problem which all networked games have to solve in order to be functional, and all other problems in networked games ultimately follow from it.

The problem of replication was first studied in the distributed computing literature as a means to increase the fault tolerance of a system and improve its performance.  In this sense video games are a rather atypical distributed system, wherein replication is a necessary end in itself rather than just a means to an end.  Because it has priority and because the terminology in the video game literature is wildly inconsistent, I will try to follow the naming conventions from distributed computing where possible.  Where there are multiple or alternate names for a concept I will do my best to point them out, but I cannot guarantee that I have found all the different vocabulary for these concepts.

Solutions to the replication problem are usually classified into two basic categories, which when applied to video games can be interpreted as follows:

  1. Active replication: every player maintains a complete copy of the game state, and all events (or player inputs) are broadcast to every replica, which executes them independently.
  2. Passive replication: a single authoritative copy of the state is maintained on one machine (the server); players send it their inputs and it broadcasts updates back to them.

There are also a few intermediate types of replication like semi-active and semi-passive replication, though we won’t discuss them until later.

Active replication

Active replication is probably the easiest to understand and most obvious method for replication.  Leslie Lamport appears to have been the first to have explicitly written about this approach and gave a detailed analysis (from the perspective of fault tolerance) in 1978:

Lamport, L. (1978) “Time, clocks and the ordering of events in distributed systems” Communications of the ACM

That paper, like many of Lamport’s writings, is considered a classic in computer science and is worth reading carefully.  The concept presented there is more general, and considers arbitrary events which are communicated across a network.  While in principle there is nothing stopping video games from adopting this more general approach, in practice active replication is usually implemented by just broadcasting player inputs.

It is fair to say that active replication is kind of an obvious idea, and it was widely implemented in many of the earliest networked simulations.  Many classic video games like Doom, Starcraft and Duke Nukem 3D relied on active replication.  One of the best writings on the topic from the video game perspective is M. Terrano and P. Bettner’s teardown of Age of Empires’ networking model:

M. Terrano, P. Bettner. (2001) “1,500 archers on a 28.8” Gamasutra

While active replication is clearly a workable solution, it isn’t easy to get right. Its main drawback is that it is very fragile: all players must be initialized with an identical copy of the state and maintain a complete representation of it at all times (which causes massive information leakage), and state updates and events must be perfectly deterministic and implemented identically on all clients. Even the smallest differences in state updates are amplified, resulting in catastrophic desynchronization bugs which render the system unplayable.

Desynchronization bugs are often very subtle.  For example, different architectures and compilers may use different floating point rounding strategies, resulting in divergent calculations for position updates.  Other common problems include incorrectly initialized data and differences in algorithms like random number generation.  Recovering from desynchronization is difficult.  A common strategy is to simply end the game if the players desynchronize.  Another solution would be to employ some distributed consensus algorithm, like Paxos or Raft, though this could increase the overall latency.
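A common way to at least *detect* desynchronization is to have every client hash its state each tick and compare checksums. Here is a sketch; the FNV-1a hash and JSON serialization are illustrative choices on my part, and a real engine would hash a canonical binary encoding of the state instead:

```javascript
// Sketch of desync detection for active replication: each client computes a
// 32-bit FNV-1a hash of its serialized state every tick, and the hashes are
// periodically compared across clients.
function stateChecksum(state) {
  const bytes = JSON.stringify(state);
  let h = 0x811c9dc5;                    // FNV-1a offset basis
  for (let i = 0; i < bytes.length; i++) {
    h ^= bytes.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;  // FNV prime, kept in 32 bits
  }
  return h;
}

// Usage: two replicas agree until a tiny divergence (e.g. a different
// floating point rounding of a position update) makes the checksums differ.
const a = { tick: 42, pos: [1.5, 2.25] };
const b = { tick: 42, pos: [1.5, 2.25] };
const c = { tick: 42, pos: [1.5, 2.2500001] }; // subtly desynchronized
const match = stateChecksum(a) === stateChecksum(b);
const desync = stateChecksum(a) !== stateChecksum(c);
```

Detection is the easy half; as noted above, once a mismatch is found the usual options are to end the game or run an expensive reconciliation.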

Passive replication

Unlike active replication which tries to maintain concurrent simulations on all machines in the network, in passive replication there is a single machine (the server) which is responsible for the entire state.  Players send their inputs directly to the server, which processes them and sends out updates to all of the connected players.
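This flow can be sketched with an in-memory stand-in for the network; the helper names (`makeServer`, `receive`, `tick`) are invented for illustration:

```javascript
// Sketch of passive (client-server) replication: clients send raw inputs,
// only the server mutates the state, and everyone receives snapshots.
function makeServer(initialState, applyInput) {
  let state = { ...initialState };
  const inbox = [];  // stand-in for the network receive queue
  return {
    receive: (playerId, input) => inbox.push({ playerId, input }),
    tick() {
      while (inbox.length > 0) {
        const { playerId, input } = inbox.shift();
        state = applyInput(state, playerId, input); // server is authoritative
      }
      return { ...state }; // snapshot broadcast to every connected client
    },
  };
}

// Usage: two players move along a line; clients never touch the state
// directly, so a modified client can at worst send bogus inputs.
const server = makeServer({ red: 0, blue: 0 }, (s, id, input) =>
  ({ ...s, [id]: s[id] + input.dx }));
server.receive("red", { dx: 2 });
server.receive("blue", { dx: -1 });
const broadcast = server.tick();
```

Because all inputs funnel through `applyInput` on one machine, determinism across clients is no longer required, which is why passive replication is robust to the desynchronization bugs that plague active replication.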

The main advantage of using passive replication is that it is robust to desynchronization and that it is also possible to implement stronger anti-cheating measures.  The cost though is that an enormous burden is placed upon the server.  In a naive implementation, this server could be a single point of failure which jeopardizes the stability of the system.

One way to improve the scalability of the server is to replace it with a cluster, as is described in the following paper:

Funkhouser, T. (1995) “RING: A client-server system for multi-user virtual environments” Computer Graphics

Today, it is fair to say that the client-server model has come to dominate in online gaming at all scales, including competitive real-time strategy games like Starcraft 2, fast paced first person shooters like Unreal Tournament and even massively multiplayer games like World of Warcraft.


To compare the performance of active versus passive replication, we now analyze their behavior on various network topologies. Let n be the total number of players and let E be the edge set of a connected graph on n vertices.  To every edge (i,j) \in E we assign a weight l_{i,j}, which is the latency of the edge in seconds.  In the network we assume that players only communicate with those who are adjacent in E.  We also assume that players generate data at a rate of b bits/second and that the size of the game state is s bits.  Given these, we will now calculate the latency and bandwidth requirements of both active and passive replication under the optimal network topology with respect to minimizing latency.

In the case of active replication, the latency is proportional to the diameter of the network.  This is minimized in the case where the graph is a complete graph (peer-to-peer) giving total latency of O( \max_{(i,j) \in E} l_{ij} ).  The bandwidth required by active replication over a peer-to-peer network is \Theta(n b) per client, since each client must broadcast to every other client, or \Theta(n^2 b) total.

To analyze the performance of passive replication, let us designate player 0 as the server.  Then the latency of the network is at most twice the round trip time from the slowest player to the server.  This latency is minimized by a star topology with the server at the hub, giving a latency of O( \max_{(0,j) \in E} l_{0j}).  The total bandwidth consumed is \Theta(b + s) per client and \Theta(s n + n b) for the server.


Since each player must be represented in the state, we can conclude that s \in \Omega(n), and if we make the additional reasonable assumption that b is constant, then the total bandwidth costs are asymptotically identical.  However, if s is significantly larger than n b, then we could conclude that peer-to-peer replication is overall more efficient. In practice, though, this is not quite true for several reasons.  First, in passive replication it is not necessary to replicate the entire state each tick, which results in a lower total bandwidth cost.  Second, clients can eagerly process their own inputs locally, thus lowering the perceived latency. When applied correctly, these optimizations, combined with the fact that it is easier to secure a client-server network against cheating, make client-server in practice the preferred option over peer-to-peer networking.
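To make the asymptotic comparison tangible, here is the bandwidth model from the analysis above evaluated at some made-up numbers (the constants are purely illustrative):

```javascript
// Worked example of the bandwidth model: n players, b bits/second of input
// per player, and s bits of game state.
function bandwidth(n, b, s) {
  return {
    p2pPerClient: n * b,        // each client broadcasts to every other client
    p2pTotal: n * n * b,        // Θ(n^2 b) overall
    serverPerClient: b + s,     // upload inputs, download the state
    serverTotal: n * s + n * b, // Θ(sn + nb) at the hub
  };
}

// Usage: 100 players, 1 kbit/s of input each, 1 Mbit of state.  Here s is
// much larger than n*b, so naive peer-to-peer actually uses less total
// bandwidth -- until the optimizations above (delta updates, interest
// management) shrink the client-server cost.
const costs = bandwidth(100, 1000, 1000000);
```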

In the next few articles, we will discuss client-server replication for games in more detail and explain how some of these bandwidth and latency optimizations work.