Straight Line Curves

For the past week or so I’ve been obsessed with the following party trick that I used to use to keep my kids entertained in Italian restaurants. Flip over your paper placemat and grab a pencil. Draw a straight line from the top left-hand corner to a point at distance d down from the top of the right-hand edge, where d is a small distance like a finger-width. This is shown in Step 1 below. Then, draw a straight line from where it hits the right-hand edge to a point at distance d left from the bottom of the right-hand edge. This is shown in Step 2 below. Next, draw a straight line from where it hits the bottom edge to a point at distance d up from the bottom of the left-hand edge. This is shown in Step 3 below. Finally, draw a straight line from where it hits the left edge to a point at distance d to the right of the left edge, but this time measured along the line you drew in Step 1. This is shown in Step 4 below.

Now repeat this process, each time measuring the distance d along your old line. Keep going until you run out of space.
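
If you’d rather let a computer do the drawing, here is a minimal sketch of the construction in C++. The Point struct and the drawLine() routine are hypothetical stand-ins for whatever plotting setup you have (here drawLine just prints the segment). Each step slides one corner of the current quadrilateral a distance d toward the next corner (which, after the first pass, means measuring along a previously drawn line) and draws a line to it from the point moved on the previous step.

#include <cmath>
#include <cstdio>

// Hypothetical 2D point type.
struct Point { double x, y; };

// Stand-in for an actual plotting routine: just print the segment endpoints.
void drawLine(const Point &from, const Point &to)
{
    std::printf("(%g, %g) -> (%g, %g)\n", from.x, from.y, to.x, to.y);
}

// Corners must be given in the order they are visited:
// top-left, top-right, bottom-right, bottom-left.
void straightLineCurve(Point p[4], double d, int lineCount)
{
    int i = 1;   // the first point to move is the top-right corner
    for (int line = 0; line < lineCount; ++line)
    {
        // Slide point i a distance d toward the (current) next point...
        const Point &a = p[i];
        const Point &b = p[(i + 1) % 4];
        double len = std::hypot(b.x - a.x, b.y - a.y);
        if (len <= d) break;   // we've run out of space
        Point moved = { a.x + d * (b.x - a.x) / len,
                        a.y + d * (b.y - a.y) / len };

        // ...and draw a line to it from the previously moved point.
        drawLine(p[(i + 3) % 4], moved);

        p[i] = moved;
        i = (i + 1) % 4;   // the next line starts where this one ended
    }
}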

See my Straight Line Curves Webpage for an account of my weeklong obsession with these curves, which starts by having fun with a paper placemat and a pencil, ends with the Four Ant Problem, and has some math and programming in between. Oh, and there’s homework if you’re up to it.

Posted in Uncategorized | Leave a comment

PowerPoint Presentation Slides Posted


The PowerPoint presentation slides are now available on the download page.

Posted in Uncategorized | Tagged , | Leave a comment

Second Edition Now Available!

The second edition is finally back from the printer! Buy it from your local bookstore, or online from amazon.com.

Posted in Uncategorized | Tagged , | 3 Comments

Team Fortress 2: Halloween 2011 Update Released

Team Fortress 2 Halloween 2011 update

The very scary Halloween update has been released. A new terrifying boss, MONOCULUS, roams the new map, Eyeaduct! Collectible Halloween loot. Read the new comic, the Bombinomicon!

Control point enabled. Get going!

Posted in Uncategorized | Tagged | Leave a comment

Interpolation and Control System Tricks, Part 1: Signals and Systems

This post is the first in a series on a group of related tricks we can file under the category of “interpolation” or “control systems.” This can seem very abstract, but there’s a reason for that: the same ideas are applicable in many different areas, as this first post will attempt to show. In this post, I want to discuss some general principles of signals, systems, and interpolation, and give practical examples of where these things come up in everyday video game programming.

Signals

Much of the data we deal with in video games can be viewed as a “signal,” or, to use an equivalent term, a “function.” Quite often we deal with signals in the time domain, or more simply, we have some data that changes over time. Here are some examples:

  • The player’s input values. (Mouse movement rates, whether or not keys are depressed at that moment, etc.)
  • The player character’s velocity.
  • The health of a character.
  • The visual size of the character’s health meter.
  • The positions and orientations of player characters and other objects in the world.
  • The current lock-on target.
  • Which context-sensitive camera mode is currently active? For example, a close-up on a taunt, a hint showing a door that opened as a result of the lever you just pulled, an adjustment to zoom to highlight an acrobatic element that can be interacted with, and so forth.
  • The “ideal” camera position, given the current state of affairs in the game world: the selected camera mode, character positions, obstacles that must be avoided, etc.
  • The player’s current selection from some menu.
  • The screen-space position of the visual cursor showing which menu element is highlighted.
  • A target’s position that an AI entity is trying to aim at.
  • The authoritative position of an object, received periodically from the game server, when we are a client.
  • The proper position a character should be in so that his animation lines up with some other object or character he is interacting with.
  • The raw cursor position from the Wiimote.
  • The position of the cursor on the screen.
  • The data we get from Kinect libraries telling us where it thinks the player’s arms and legs are.
  • The actual positions and orientations of the bones used to animate the on-screen representation of the player.
  • …and I’m sure you can think of many more examples.

Continuous and discrete signals

You will notice that some of the example signals take on values from a continuous domain, such as the positions of characters and other objects in the world, or mouse movement rates. Those are the signals that we store using floating point data types, whether they be 1D scalar values or 2D or 3D vectors. Other signals may only assume values from a discrete set, for example, menu selection, discrete camera modes, lock-on targets, whether or not the ‘W’ key is currently depressed, and so forth. We use bools, ints, and enums for these.

Just because a signal comes from a continuous domain does not mean it is a continuous function. For example, a character’s health may be constant and then take a sudden jump up or down if they get shot or pick up a potion, etc. An AI entity may suddenly switch targets, so that even if each individual target moves around in a continuous fashion, the signal representing “the position that the AI wants to be aiming at” may exhibit discontinuities. Characters can warp and spawn. A client depending upon a game server for the authoritative value of some continuous variable is only going to receive that variable in bursts. One of the main reasons to understand control systems theory is to know how best to mask these discontinuities, making a smooth output signal out of a jumpy input one.
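
To give a first taste of what such a system might look like (later posts will examine the options more carefully), here is a minimal sketch of one very common smoother, an exponential moving average. The blendPerSecond rate is an assumed tuning parameter, not something defined in this post.

#include <cmath>

// A minimal sketch of a first-order smoother: each frame, move the output a
// fraction of the way toward the (possibly jumpy) input.  Using 1 - e^(-k*dt)
// for the fraction keeps the behavior roughly frame-rate independent.
// "blendPerSecond" is an assumed tuning parameter.
float smoothTowards(float currentOutput, float targetInput,
                    float blendPerSecond, float dt)
{
    float t = 1.0f - std::exp(-blendPerSecond * dt);
    return currentOutput + (targetInput - currentOutput) * t;
}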

Of course, since we are working in a digital computer, in reality none of our signals are truly continuous in time, because we only know (or care about) their values at particular times, for example, perhaps at 30 different times per second. But we still consider the underlying signal to be continuous, even if we are only sampling an input signal or updating an output signal once per frame.

Must every signal be a function of time?

All of the examples we’ll look at are time-domain signals, but before moving on, it’s worth mentioning that there are certainly signals from other domains. For example, a visual image can be thought of as a function of the continuous screen space coordinates, defined at any (x,y) coordinates, not just at the pixel “centers.” (We could use raytracing to determine the value.) Practical images are represented as a discrete grid of pixel values (a bitmap); how to best sample the continuous signal to determine the pixel colors is a very important topic, one that can only be fully understood using the tools of signal processing theory. (It is NOT simply the value of the continuous signal at the “center” of the pixel!)

Systems

Video games contain lots of code that, when viewed from the standpoint of signal processing theory, is a system. A system is basically any process, function, algorithm — or block of video game code — that takes one or more input signals, and produces one or more output signals. The list of example signals given previously was hopefully suggestive, and perhaps you have already thought of systems like these:

  • We typically do not map the player’s movement directly from the input. For example, when the player presses the ‘W’ key to move forward, he doesn’t snap into motion; likewise, when he releases it, he doesn’t instantly stop. There is some logic that decides how to nicely ramp his velocity in and out. A system is taking the raw input signal of the current status of the ‘W’ key and mapping it to the actual player velocity. (A sketch of one possible version of this system appears after this list.)
  • Console controllers are notorious for being “noisy”; if we just scaled the raw stick value and used it as our velocity, you couldn’t stand still.
  • We might want the character’s health bar to visually animate, for an extra level of polish, even if the underlying health value jumps.
  • Camera control is full of control systems. Often, there is one system that takes as input the current character position, lock-on target, special interaction, etc., and produces an “ideal” camera position. But this position cannot be used directly; it is extremely jerky. The camera would snap suddenly when you switched lock-on targets. The character movement might need to be “jerky” for gameplay reasons (be able to stop on a dime), but we might want the camera to not have this rigid, mechanical feeling. So another system takes the raw ideal position and smooths it out, to provide the transitions between different modes and smooth out jerky movement.
  • The AI cannot just snap to aim at his desired target. The assassin’s arm, the gun, the sentry turret, or whatever, would snap with infinite velocity; this is not realistic. And if the AI happens to be shooting at you, it probably won’t seem fair or fun, either.
  • We might want the visual highlight over the user’s selection to animate smoothly, rather than just snapping into place.
  • The raw data from the Wiimote and the Kinect camera is noisy. We must filter it, or else the pointer and avatar will be jittery.
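
To make the first bullet concrete, here is a minimal sketch of one possible key-to-velocity system. The raw input signal is binary, but the output speed ramps toward it at a limited rate. The speed and acceleration constants are assumed tuning values, not anything from an actual game.

// A minimal sketch of one possible 'W'-key-to-velocity system.
// The constants are assumed tuning values.
float updateForwardSpeed(bool wKeyDown, float currentSpeed, float dt)
{
    const float maxSpeed = 5.0f;    // assumed top speed, units/sec
    const float accel    = 20.0f;   // assumed ramp-up rate, units/sec^2
    const float decel    = 30.0f;   // assumed ramp-down rate, units/sec^2

    float targetSpeed = wKeyDown ? maxSpeed : 0.0f;
    float maxChange   = (targetSpeed > currentSpeed ? accel : decel) * dt;

    // Move toward the target speed, but never by more than the allowed change.
    float delta = targetSpeed - currentSpeed;
    if (delta >  maxChange) delta =  maxChange;
    if (delta < -maxChange) delta = -maxChange;
    return currentSpeed + delta;
}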

As you can see, many of these examples can be put into the category of “making things feel right.” Different types of systems will “feel” different and have different performance characteristics. It’s very helpful to have a library of systems to choose from.

In the next post, I will consider some very special signals that are able to capture common behaviour in signals: they are the impulse, step, ramp, and sinusoid. By analyzing how our system responds to these special signals, we will learn a lot about how our system behaves in general.

Posted in Uncategorized | 1 Comment

Team Fortress 2 Manniversary update released!

The Manniversary update has been released for Team Fortress 2! The TF2 store is one year old! (Which is about six months longer than I’ve been working on the game.) Read about it on the TF2 blog.

Posted in Uncategorized | Tagged | Leave a comment

Constructing a Rotation Matrix From Basis Vectors

Click Here for a YouTube Video

Posted in Uncategorized | Tagged | Leave a comment

Geometric Primitives Lectures

I’m going to be out of town on September 27-30, 2011 presenting two papers at GAMEON-NA, so here are videos of the lectures that I would have given to my Game Math and Physics class at the University of North Texas. A big shout-out to my students who are supposed to be watching them in my absence.

Lecture 1 at YouTube

Click to see a video of Lecture 1 at youtube.com

AABB Animation

Click to see an animation of Axially Aligned Bounding Boxes.

Lecture 2

Click to see a video of Lecture 2 at youtube.com.

Posted in Uncategorized | Leave a comment

Gimbal Lock

Gimbal lock video.

Click image for Gimbal Lock video at YouTube.

Posted in Uncategorized | Tagged | Leave a comment

Detecting Whether Two Convex Polygons Overlap

In a previous post I described how to tell whether two axially aligned bounding boxes (AABB’s) intersected. The solution was presented as a process of elimination: if box A is to the left of B, to the right of B, above B, or below B, then they cannot intersect.
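
In code, that process of elimination is just four comparisons. Here is a minimal sketch, assuming each box is stored by its min and max corners (the representation used in the earlier post may differ):

// A minimal sketch of the AABB rejection test, assuming each box is stored
// by its min/max corners.  Four "can't possibly overlap" checks; if none of
// them fire, the boxes intersect.
struct AABB2D { float minX, minY, maxX, maxY; };

bool aabbOverlap(const AABB2D &a, const AABB2D &b)
{
    if (a.maxX < b.minX) return false;   // A is entirely to one side of B horizontally
    if (a.minX > b.maxX) return false;   // ...or the other side
    if (a.maxY < b.minY) return false;   // A is entirely to one side of B vertically
    if (a.minY > b.maxY) return false;   // ...or the other side
    return true;
}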

This rule is a special case application of the separating axis theorem. This theorem says that if two 2D convex polygons don’t intersect, then there exists some line (known as a “separating axis”), such that when the polygons are projected onto that line, the projections of the two polygons will not overlap.

Here are two non-overlapping polygons. The black line is a separating axis for these two polygons, because the projections of the polygons onto that axis (the thick green and blue lines) do not overlap.

A separating axis is really just a direction (we would describe the axis as a vector). Translating the axis does not affect the polygon projections.

This immediately suggests an algorithm to determine if two convex polygons intersect: check all the potential separating axes; if the polygon projections overlap on all such axes, then the polygons intersect. Otherwise, as soon as a separating axis is found, we can declare the polygons to be non-intersecting.

Unfortunately, there are an infinite number of directions. Fortunately, as it turns out, we need only try a limited subset of those directions. In 2D, we need only test the directions that are perpendicular to a polygon edge. For example, in the example above, the separating axis is perpendicular to the leftmost edge of the polygon on the right. For 2D AABB’s, all of the polygon edges are horizontal or vertical, so there are only two possible separating axis directions, one vertical and one horizontal. (There were four cases to test, two per axis, because nonoverlapping polygons may be on either side of each other.)

We now have a strategy for detecting the overlap of 2D convex polygons. Scan all the edges of both polygons. For each edge, we form a candidate separating axis that is perpendicular to that edge. We project the two polygons onto the axis, and test whether their projections overlap. If not, the polygons are non-intersecting, and we are done. If they overlap, keep searching. If the polygons are overlapping on all candidate separating axes, then we can declare them to be overlapping — that is what the separating axis theorem guarantees us.

But what exactly does it mean to project the polygons onto an axis? How do we represent that projection, in such a way that we can tell if the projections overlap? The dot product is the answer. Let’s translate our axis so that it contains the origin. (We said that this does not change whether the projections overlap.) Now we can describe the projection of any given polygon vertex onto this axis by giving the signed distance from the projected vertex to the origin. This distance just so happens to be precisely what we get when we dot the vertex with a unit vector parallel to the axis, which we’ll call v.

The dot product gives us a tool to describe the projection of a polygon onto each candidate separating axis v. For each polygon vertex, we take the dot product of the vertex with v, which gives us a signed distance from the origin. We collect the minimum and maximum of these distances, which gives us the extents of the projection, labeled amin and amax in the figure above. Note that we don’t know or care ahead of time which points will be a1 and a2 as the figure above might lead you to believe; all we need are amin and amax. A simple one-dimensional overlap test of amin and amax against bmin and bmax tells us if the projections overlap.

In fact, v need not be a unit vector at all. If we scale v by some factor k, then that will scale the numeric values of amin, amax, bmin, and bmax by this same factor, and they will no longer measure the distance to the origin, but it will not affect whether the intervals overlap.

The following code snippet illustrates these ideas.

// Very simple vector class
struct Vec2D { float x,y; };
 
// Dot product operator
float dot(const Vec2D &a, const Vec2D &b)
{
    return a.x*b.x + a.y*b.y;
}
 
// Vector subtraction operator (used below to form edge vectors)
Vec2D operator-(const Vec2D &a, const Vec2D &b)
{
    Vec2D result = { a.x - b.x, a.y - b.y };
    return result;
}
 
// Forward declarations of the helper routines defined further down
bool findSeparatingAxis(int aVertCount, const Vec2D *aVertList,
                        int bVertCount, const Vec2D *bVertList);
void gatherPolygonProjectionExtents(int vertCount, const Vec2D *vertList,
                                    const Vec2D &v, float &outMin, float &outMax);
 
// Here is our high level entry point.  It tests whether two polygons intersect.  The
// polygons must be convex, and they must not be degenerate.
bool convexPolygonOverlap(
    int aVertCount, const Vec2D *aVertList,
    int bVertCount, const Vec2D *bVertList
) {
 
    // First, use all of A's edges to get candidate separating axes
    if (findSeparatingAxis(aVertCount, aVertList, bVertCount, bVertList))
        return false;
 
    // Now swap roles, and use B's edges
    if (findSeparatingAxis(bVertCount, bVertList, aVertCount, aVertList))
        return false;
 
    // No separating axis found.  They must overlap
    return true;
}
 
// Helper routine: test if two convex polygons overlap, using only the edges of
// the first polygon (polygon "a") to build the list of candidate separating axes.
bool findSeparatingAxis(
    int aVertCount, const Vec2D *aVertList,
    int bVertCount, const Vec2D *bVertList
) {
 
    // Iterate over all the edges
    int prev = aVertCount-1;
    for (int cur = 0 ; cur < aVertCount ; ++cur)
    {
 
        // Get edge vector.  (Vec2D subtraction is defined above.)
        Vec2D edge = aVertList[cur] - aVertList[prev];
 
        // Rotate vector 90 degrees (doesn't matter which way) to get
        // candidate separating axis.
        Vec2D v;
        v.x = edge.y;
        v.y = -edge.x;
 
        // Gather extents of both polygons projected onto this axis
        float aMin, aMax, bMin, bMax;
        gatherPolygonProjectionExtents(aVertCount, aVertList, v, aMin, aMax);
        gatherPolygonProjectionExtents(bVertCount, bVertList, v, bMin, bMax);
 
        // Is this a separating axis?
        if (aMax < bMin) return true;
        if (bMax < aMin) return true;
 
        // Next edge, please
        prev = cur;
    }
 
    // Failed to find a separating axis
    return false;
}
 
// Gather up one-dimensional extents of the projection of the polygon
// onto this axis.
void gatherPolygonProjectionExtents(
    int vertCount, const Vec2D *vertList, // input polygon verts
    const Vec2D &v,                       // axis to project onto
    float &outMin, float &outMax          // 1D extents are output here
) {
 
    // Initialize extents to a single point, the first vertex
    outMin = outMax = dot(v, vertList[0]);
 
    // Now scan all the rest, growing extents to include them
    for (int i = 1 ; i < vertCount ; ++i)
    {
        float d = dot(v, vertList[i]);
        if      (d < outMin) outMin = d;
        else if (d > outMax) outMax = d;
    }
}

Let’s hasten to add a few words of warning. The first is a warning concerning robustness: this code assumes valid convex polygons. If any of the edges are degenerate, or if the polygon is concave, this routine can produce unexpected results. Of course, the separating axis theorem applies only to convex polygons. Detecting the intersection of concave polygons is a trickier business. The second warning concerns performance. The algorithm above is clearly O(N²). For large values of N, different algorithms can be faster. A standard approach is to sort the vertices vertically, and then sweep a horizontal line through the polygons, stopping at “event points” to see if the polygons have horizontal overlap at the given location. With a small bit of extra work, this approach can also be made to work for concave polygons.

The separating axis theorem extends naturally into 3D, to test whether convex polyhedra overlap. It is this use of the theorem that gets the most attention in video games, since this test is needed in collision detection. The set of potential separating axes starts with vectors perpendicular to the polyhedron faces, which is analogous to what we did here in 2D. Unfortunately, in 3D things are a bit trickier and this list is not sufficient; there are cases where the polyhedra do not intersect, but a separating axis cannot be found among this simple set. We must also include the cross products of edge directions, one edge taken from each polyhedron, in the candidate set.
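
As a rough illustration (not a complete 3D collision routine), here is a sketch of how that candidate axis list might be assembled, assuming the caller has already extracted face normals and edge directions from each convex polyhedron. Vec3D and cross() are hypothetical helpers, separate from the 2D code above.

#include <vector>

// Hypothetical 3D vector type and cross product.
struct Vec3D { float x, y, z; };

Vec3D cross(const Vec3D &a, const Vec3D &b)
{
    Vec3D r;
    r.x = a.y*b.z - a.z*b.y;
    r.y = a.z*b.x - a.x*b.z;
    r.z = a.x*b.y - a.y*b.x;
    return r;
}

// Build the list of candidate separating axes for two convex polyhedra:
// the face normals of each, plus the cross product of every pair of edge
// directions, one edge taken from each polyhedron.
std::vector<Vec3D> gatherCandidateAxes(
    const std::vector<Vec3D> &aFaceNormals, const std::vector<Vec3D> &aEdgeDirs,
    const std::vector<Vec3D> &bFaceNormals, const std::vector<Vec3D> &bEdgeDirs
) {
    std::vector<Vec3D> axes;
    axes.insert(axes.end(), aFaceNormals.begin(), aFaceNormals.end());
    axes.insert(axes.end(), bFaceNormals.begin(), bFaceNormals.end());

    for (const Vec3D &ea : aEdgeDirs)
        for (const Vec3D &eb : bEdgeDirs)
            axes.push_back(cross(ea, eb));   // near-zero when edges are parallel;
                                             // a robust implementation skips those

    return axes;
}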

Posted in Uncategorized | Tagged | 2 Comments