Monday, 26 November 2012

Project "Lead" Update #4

Update #3: http://ahamnett.blogspot.co.uk/2012/11/project-lead-update-3.html

What features have you added since the last blogpost?

My focus after the last update has been to get more content into the project. As such, I've begun to design some very basic art assets for the HUD and characters, as well as taking some time recently to add in placeholder sounds.


My favourite addition so far has been that of the health bar, which I feel looks great and has good gameplay implications (namely that you won't be able to tell exactly how close you are to dying). In addition, the smaller changes like the "mannequin" sprites and weapon sound effects have added a lot to the game's feel without taking too long to implement.

What technical aspects have you explored for these features?

The health bar was a good experience for me because it allowed me to briefly dip my toes into pixel shaders, a subject which I respect but hadn't attempted to look into before. I was able to get a very basic idea of what shaders are capable of, and it may influence some of my later design decisions (e.g. a Monaco-style line-of-sight system).

In creating the health bar, I also briefly experimented with stencil buffers. However, I had to abandon this approach because it only allows 100% or 0% visibility per pixel, so it couldn't produce the gradual fades I wanted for the health bar.

What features are you planning on working on next?

At this point, I want to continue refining the overall appearance of the project. I feel that a lot of the core mechanics are in place currently and as such I want to get the project as close as possible to a releasable state in order to get a better idea of what areas need to be improved or expanded.

I have some ideas about borders for the HUD assets so they blend in better, such as a S.T.A.L.K.E.R.-style metal border. I also want to look into sprite animations, specifically for actor walking and weapon reloading. There are also more sounds that could be added, such as actor pain, background music and environmental sounds. Finally, I want to continue adding to the arena level with more objects like cars, traffic lights, bins, benches, etc.

Monday, 12 November 2012

Project "Lead" Update #3

Update #2: http://ahamnett.blogspot.com/2012/09/project-lead-update-2.html

What features have you added since the last blogpost?

Since the last update, the first things I worked on were serialising the level data and the beginnings of a level editor. Serialising the level data has greatly streamlined the content pipeline, converting what was once a chunk of hard-coded assets into a single external XML file which can be easily edited and loaded using the default XNA content manager.


The level editor is a modification of a small unfinished project I started last summer. Unfortunately, the project needs to be significantly rewritten as it contains several poor and impractical design choices, e.g. an overuse of the static keyword. In addition, a lot of its elements need to be replaced with their equivalents from the "Lead" project, leading to a lot of "square peg in a round hole" issues.

What technical aspects have you explored for these features?

In order to serialise all of the objects I have in my game world, I've had to look more extensively into XNA's content serialisation. Specifically, I've had to write my own custom content writers and readers in order to ensure an optimal file format where all the necessary data still gets loaded.

Has any progress been made in gameplay?

I've decided to develop a small, simple arena shooter in order to keep the project's scope feasible for now. As such, I've spent time creating some basic assets to form the beginnings of a typical intersection.

The debug box you can see in the screenshot is a recent test for repeated objects called "props". They have a texture and a collision polygon, and can be quickly placed in the level as entities rather than specified vertex-by-vertex as the wall and floor planes are.


In addition, I've also been working on a typical wave-based system for spawning enemies. There is currently a wave manager in place which keeps track of the wave number, the enemies spawned, the number remaining, and a timer that allows an intermission between waves. I've also added enemy spawners so that the enemies' spawn positions can be specified. Currently they're only set to spawn outside and walk to a specific point, though I plan to add more varied behaviour later.
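The wave manager described above could be sketched roughly like this (the names and the enemy-count scaling here are hypothetical, not the project's actual code):

```csharp
using System;

// Tracks the wave number, enemies remaining and an intermission timer.
class WaveManager
{
    const float IntermissionLength = 10f;   // hypothetical value

    public int Wave { get; private set; }
    public int Remaining { get; private set; }
    public float Intermission { get; private set; }

    public void Update(float elapsedSeconds)
    {
        // Only count down the intermission once the wave is cleared.
        if (Remaining > 0)
            return;

        Intermission -= elapsedSeconds;

        if (Intermission <= 0f)
        {
            Wave++;
            Remaining = Wave * 5;           // hypothetical enemy count
            Intermission = IntermissionLength;
        }
    }

    public void EnemyKilled()
    {
        Remaining--;
    }
}
```

The spawners would then consume Remaining as they place enemies at their specified positions.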

What features are you planning on working on next?

I plan to continue working on assets until a very rough vision of the final product starts to take shape. In particular, I have an animated health bar idea, similar to the early Resident Evil games, which I plan on implementing next. I've been looking into using a stencil buffer to create the effect so as to prevent having large spritesheet animations.

After that it will be more assets for both the world and characters. I plan to add typical objects like cars, crossings, traffic lights, benches, etc., as well as creating my player and enemy sprites. I also need to work on a system for spawning or dropping new weapons as a form of power up, though that will probably come later.

Monday, 15 October 2012

Funnel Algorithm

Red edge-centre path and green optimal path.
When pathfinding for an agent, the typical process comprises two steps. First, find a list of nodes leading to the goal using an algorithm like A* or Dijkstra's. Then, convert this list into a set of discrete path vertices for the agent to follow.

When using triangular nodes, such as those in a navigation mesh, the most commonly suggested method for the second step is to simply use the midpoints of each edge which connects two triangles. This path is also often smoothed by stepping through the list and removing redundant vertices.
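The midpoint method is simple enough to sketch in a few lines (System.Numerics.Vector2 is used here to keep the example self-contained; the project itself uses XNA's Vector2, and the portal list is an assumed input):

```csharp
using System.Collections.Generic;
using System.Numerics;

static class MidpointPath
{
    // Each portal is the edge shared by two consecutive triangles.
    public static List<Vector2> Build(
        List<(Vector2 A, Vector2 B)> portals, Vector2 goal)
    {
        var path = new List<Vector2>();

        // One path vertex per portal: the midpoint of the shared edge.
        foreach (var portal in portals)
        {
            path.Add((portal.A + portal.B) / 2f);
        }

        path.Add(goal);
        return path;
    }
}
```

A smoothing pass would then remove any vertices that are mutually visible, but as the next paragraph explains, even the smoothed result can cut corners poorly.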

The problem with this approach is that, depending on how large the triangles are, this can lead to paths which are inefficient and unnatural looking. Since only the centre points are considered, corners are usually not traversed very closely, causing sub-optimal zig-zag behaviour. The funnel algorithm is an alternative which produces a more efficient path solution by expanding a series of left and right edges (funnels) to find the vertices of the path.

Theory


Image courtesy of Mikko Mononen.
The idea behind the funnel algorithm is that by traversing the connecting edges (portals) in a triangle list (channel) we can find the points where a vertex is necessary (i.e. to traverse a corner).

In the figure provided, we can see that in step A our funnel is initialised with an apex (the start) and our left and right sides (the vertices of the first portal). Steps B through D then step through each of the portals, updating the left and right sides as necessary.

In step E we have a problem. If we continue stepping through the portal vertices as we've been doing previously then the funnel will actually widen and consequently intersect the edges of the channel. As such we skip updating the left edge and move onto step F.

Unfortunately there's now another problem. In updating the right side of the funnel, we've caused it to cross over the left side. As such we know that there is a corner in the channel and a new path vertex is required (the current left position). In step G we set the apex to this position, add it to our list of path vertices, and reinitialise the left and right sides. The algorithm then continues until the goal is found.

To summarise, here's the steps for updating the funnel algorithm:
  • Attempt to narrow funnel.
  • If new endpoint widens funnel, skip.
  • If new endpoint crosses over other side, add new vertex and reinitialise funnel.

Implementation


First we want to abort if the node list is not long enough for a true search. The two cases for this are if we only have one node in the list (both points are in a single triangle) or if there are no nodes (no path found).

For the former, we can simply return a new list containing just the end point (since if they're in the same triangle then there are no static obstacles in the way). For the latter, we should return some sort of error code so that the agent knows no path is available. For my implementation, I've chosen to simply return a list containing only the start point, however you could also send a message to inform the agent.
// Abort if list is empty.
if (nodeList.Count <= 0)
{
    return new List<Vector2>() { start };
}

// Return immediately if list only has one entry.
if (nodeList.Count <= 1)
{
    return new List<Vector2>() { end };
}
Next we need to initialise our variables. We want a list for our path vertices, arrays for our portal vertices, a vector for our current apex and, finally, indexes for our current left and right endpoints.

My nodes have two properties: the vertices that make up the triangle, and a list of references to connected or "local" triangles. As such, I need to loop through my node list to determine what the portal vertices are. I also store the vertices on each end of the channel which aren't part of a portal (the two loops after the main portal loop below). The reasoning for this will be explained later.
List<Vector2> path = new List<Vector2>() { start };
Vector2[] leftVertices = new Vector2[nodeList.Count + 1];
Vector2[] rightVertices = new Vector2[nodeList.Count + 1];
Vector2 apex = start;
int left = 1;
int right = 1;

// Initialise portal vertices.
for (int i = 0; i < nodeList.Count - 1; i++)
{
    for (int j = 0; j < nodeList[i].LocalNodes.Length; j++)
    {
        if (nodeList[i].LocalNodes[j] == nodeList[i + 1])
        {
            int k = j + 1 >= nodeList[i].LocalNodes.Length ? 0 : j + 1;

            leftVertices[i + 1] = ((NavTriangle)nodeList[i]).Points[j];
            rightVertices[i + 1] = ((NavTriangle)nodeList[i]).Points[k];
            break;
        }
    }
}

// Initialise portal vertices first point.
for (int j = 0; j < ((NavTriangle)nodeList[0]).Points.Length; j++)
{
    if (((NavTriangle)nodeList[0]).Points[j] != leftVertices[1]
        && ((NavTriangle)nodeList[0]).Points[j] != rightVertices[1])
    {
        leftVertices[0] = ((NavTriangle)nodeList[0]).Points[j];
        rightVertices[0] = ((NavTriangle)nodeList[0]).Points[j];
    }
}

// Initialise portal vertices last point.
for (int j = 0;
    j < ((NavTriangle)nodeList[nodeList.Count - 1]).Points.Length; j++)
{
    if (((NavTriangle)nodeList[nodeList.Count - 1]).Points[j]
        != leftVertices[nodeList.Count - 1]
        && ((NavTriangle)nodeList[nodeList.Count - 1]).Points[j]
        != rightVertices[nodeList.Count - 1])
    {
        leftVertices[nodeList.Count]
            = ((NavTriangle)nodeList[nodeList.Count - 1]).Points[j];
        rightVertices[nodeList.Count]
            = ((NavTriangle)nodeList[nodeList.Count - 1]).Points[j];
    }
}
Once initialised we begin to loop through the portals, narrowing our current funnel. First, we have to check that the new vertex isn't the same as the current one, in which case we skip. We also want to skip if the new side widens the funnel. Finally, we need to check whether the new side crosses the other, if it does then we need to update our funnel, else we can simply update the current edge and move on.
// Step through channel.
for (int i = 2; i <= nodeList.Count - 1; i++)
{
    // If new left vertex is different, process.
    if (leftVertices[i] != leftVertices[left]
        && i > left)
    {
        Vector2 newSide = leftVertices[i] - apex;

        // If new side does not widen funnel, update.
        if (GeometryHelper.VectorSign(newSide,
                leftVertices[left] - apex) > 0)
        {
            // If new side crosses other side, update apex.
            if (GeometryHelper.VectorSign(newSide,
                rightVertices[right] - apex) > 0)
            {
                int next = right;

                // Find next vertex.
                for (int j = next; j <= nodeList.Count; j++)
                {
                    if (rightVertices[j] != rightVertices[next])
                    {
                        next = j;
                        break;
                    }
                }

                path.Add(rightVertices[right]);
                apex = rightVertices[right];
                right = next;
            }
            else
            {
                left = i;
            }
        }
    }

    // If new right vertex is different, process.
    ...
}
VectorSign() is a simple method which checks which side of one vector another lies on. If the second vector is on the right of the first, it returns 1; otherwise it returns -1.
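The method itself isn't shown in the post, but a sketch of what it might look like is the sign of the 2D cross product (System.Numerics.Vector2 is used to keep the example self-contained; note that the sign convention depends on your coordinate system, and XNA's screen coordinates have Y pointing down):

```csharp
using System.Numerics;

static class GeometryHelper
{
    // Sign of the 2D cross product a.X * b.Y - a.Y * b.X.
    // In Y-down screen coordinates, a positive cross product means
    // b is clockwise of a, i.e. on its right.
    public static int VectorSign(Vector2 a, Vector2 b)
    {
        float cross = (a.X * b.Y) - (a.Y * b.X);
        return cross > 0f ? 1 : -1;
    }
}
```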

Non-Point Agents


Red normal path, green adjusted path.
Assuming your agents are not points and that they collide with the environment, you will notice a problem with the above implementation. In situations where the corners of the channel represent physical obstacles in the environment, you will see odd agent behaviour where agents travel straight towards a corner rather than moving to travel around it.

This is because the above implementation does not compensate for the size of the agents, simply using the vertices of the nodes directly. As such, you will often encounter situations where agents end up stuck against corners, sometimes until they can move on to the next vertex and sometimes permanently.

Solving this problem is relatively straightforward though. We simply need to offset the vertex positions from the corners to allow the agents to more easily get around them. You could also imagine it like adding a circle of the same radius to each of the channel vertices and navigating around those.

Implementation


In order to offset the position such that the agent has a clear path to both approach and continue from the vertex, we want to calculate its normal. This way we can simply scale by the agent's radius and add that to the position to correctly offset it.

First, we need to find the edge vertices before and after the current one. This is the reason we needed to include the end points of the channel in our vertex lists: to account for the first and last portals. In my implementation this simply involves looping backward and forward through the vertices until I find a different one (the two search loops at the top of the snippet below).

We then calculate the angles of the lines formed between the previous-current and current-next segments. We take the average angle and construct a vector perpendicular to it (either left or right depending on if the vertex is on the right or left edge respectively) to create the vertex normal. We then scale this normal by the agent's radius and add it to the position. The rest of the code is the same as before.
// If new side crosses other side, update apex.
if (GeometryHelper.VectorSign(newSide,
    rightVertices[right] - apex) > 0)
{
    int next = right;
    int prev = right;

    // Find next vertex.
    for (int j = next; j <= nodeList.Count; j++)
    {
        if (rightVertices[j] != rightVertices[next])
        {
            next = j;
            break;
        }
    }

    // Find previous vertex.
    for (int j = prev; j >= 0; j--)
    {
        if (rightVertices[j] != rightVertices[prev])
        {
            prev = j;
            break;
        }
    }

    // Calculate line angles.
    float nextAngle = (float)Math.Atan2(
        rightVertices[next].Y - rightVertices[right].Y,
        rightVertices[next].X - rightVertices[right].X);
    float prevAngle = (float)Math.Atan2(
        rightVertices[right].Y - rightVertices[prev].Y,
        rightVertices[right].X - rightVertices[prev].X);

    // Calculate minimum distance between line angles.
    float distance = nextAngle - prevAngle;

    if (Math.Abs(distance) > MathHelper.Pi)
    {
        distance -= distance > 0 ? MathHelper.TwoPi
            : -MathHelper.TwoPi;
    }

    // Calculate left perpendicular to average angle.
    float angle = prevAngle + (distance / 2)
        + MathHelper.PiOver2;
    Vector2 normal = new Vector2((float)Math.Cos(angle),
        (float)Math.Sin(angle));

    // Add new offset apex to list and update right side.
    path.Add(rightVertices[right] + (normal * size));
    apex = rightVertices[right];
    right = next;
}
else
{
    left = i;
}
Obviously it doesn't hurt to have some additional steering behaviours to further improve movement, but this implementation helps prevent agents from getting hung up on corners as easily.

Further Reading

Monday, 10 September 2012

Project "Lead" Update #2

Previous Blogpost:
http://ahamnett.blogspot.com/2012/07/project-lead.html

What features have you added since the last blogpost?

In the last update, I said that I'd be working on weapons, animations, AI and some art assets. Of those four I pursued two, the weapons and AI.

I've extended the weapon structure to allow for different types with different fire modes (assault rifles, shotguns, etc.) as well as moving some of the statistics into an external XML format. As such, I can now add new variants of existing weapon types by just creating a new XML file and including it in the WeaponManager class.
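As an example, a serialised weapon definition loaded through the XNA content pipeline might look something like this (the element names and values here are hypothetical; the post doesn't show the actual file format):

```xml
<?xml version="1.0" encoding="utf-8"?>
<XnaContent>
  <Asset Type="Lead.WeaponStats">
    <Name>Assault Rifle</Name>
    <FireMode>Automatic</FireMode>
    <Damage>20</Damage>
    <RoundsPerMinute>600</RoundsPerMinute>
    <MagazineSize>30</MagazineSize>
  </Asset>
</XnaContent>
```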

AI has been the biggest area of growth however. The previously created targets now have some rudimentary AI, which allows them to wander about the environment, investigate sources of sound (like gunfire) and chase the player if they are spotted.


What technical aspects have you explored for these features?

For these features I've had to research XML serialisation in order to make some areas of the project external. In addition, I've also had to refresh my knowledge on AI basics like Finite State Machines, Pathfinding Algorithms and Navigation Meshes. In particular, I've been looking into a technique known as the Funnel Algorithm for generating the path vertices from a list of triangle nodes.

What makes the Funnel Algorithm special?

One of the more commonly suggested methods for generating path vertices from a list of triangles is to simply take the centre point from each connecting edge and then remove any redundant vertices using a path smoothing algorithm. The problem with this approach is that, depending on how large the triangles are, this can cause odd-looking behaviour where characters will walk in a wide arc around corners.


The Funnel Algorithm instead works using the edges of the triangle corridor, or "channel", that you create from pathfinding. This way, any corners that need to be navigated are traversed as efficiently as possible. In addition, the algorithm's output does not require any post-smoothing, since it finds the furthest visible vertex position before moving on to the next.

What features are you planning on working on next?

Currently I'm working on making the level data (floors, walls, etc.) serialisable so that it's easier to edit without having to change any code. This will be somewhat more interesting than the weapon statistics and navigation mesh because it will involve custom content readers for referential entities like the player and weapons.

After that, it depends. Development will likely soon be shifting onto a more focused approach in order to develop an actual game (rather than a tech demo for various game elements). As such, I may consider taking some time out to develop a level editor so that I can more easily create and edit map data. In addition, I would likely also spend some time trying to develop a navigation mesh generation algorithm to go along with it.

Wednesday, 22 August 2012

Circular Progress/Loading Bars

Progress bars are a common feature in game UI design when representing mechanics such as character interaction, weapon reloading, power meters, etc. A circular progress bar is simply one that fills/empties radially rather than linearly. They have become more popular recently as they are more appealing to the eye than the traditional straight progress bar.

In this blog post, we'll be examining the techniques one could use for designing a circular progress bar as well as their advantages and disadvantages.

Sprite Animations

Sprite animations follow a common game design philosophy: it's often easier to simply display a sequence of images than to update segments individually in game code. The idea is that each image in the series shows the bar more filled than the last, such that when shown in sequence, the bar appears to fill. This is an extremely popular approach for the activity symbol on loading screens as it requires very little processing power.
Progress bars from Left 4 Dead 2 and Ghost Recon: Future Soldier.
The main disadvantage of this implementation is that, unless you use a very large spritesheet, the animation can end up choppy or blocky in nature. For example, in the Left 4 Dead 2 progress bar (pictured above) there are 10 very clear phases (or frames) to the animation, causing a jerky, unsmooth motion. For the purposes of a simple activity bar though, these forms of animations will suffice.

Likewise, for interaction progress bars, the size of the spritesheet affects the trade-off. If you use a minimal spritesheet, the bar will appear choppy for longer interaction times but take up less memory, whereas a large spritesheet will appear smoother but take up more memory.

Primitive Manipulation

This approach still uses a sprite, but one consisting of only a single frame, the completed circle, to avoid having to render one manually. The idea is that we divide the rendering of the sprite into several primitives and manipulate those to create the filling effect.


As you can see, we divide the top section into two (sections 0 and 4) to create a circle/bar which fills starting from the top. From here, we want to gradually expand each of the primitives in a clockwise motion to create the effect of the circle/bar filling.

To get a basic idea of how drawing using primitives works, see Microsoft's Using a Basic Effect with Texturing tutorial. The only difference is that we also use a Centre and UpperCentre position for the top 2 triangles and we will be drawing 5 primitives instead of just 2.

Initialisation

To start with, we first need to calculate how full the circle should be. Assuming that your progress bar has properties for the current, min and max values, the implementation for such might look like this:
float percent = (Value - MinValue) / (MaxValue - MinValue);
Next, we need to determine the index of the primitive we're going to manipulate. First, we want a number between 0 and 4, so we multiply our previously calculated percentage by 4 to get this range.

However, we have a problem in that the transition points (where the output changes from 0.99 to 1, for example) are at 0.25, 0.5, 0.75 and 1, which is not what we want. As such, we need to shift the result by adding 0.5 to get the output we want. Now if we input 0.25, we get a result of 1.5 (halfway through the second segment), which is perfect.
int index = (int)((percent * 4) + 0.5f);
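As a quick sanity check of the arithmetic, here is the same calculation wrapped in a standalone helper (purely for illustration):

```csharp
static class ProgressMath
{
    // Maps a fill percentage [0, 1] to a primitive index [0, 4].
    // Adding 0.5 shifts the truncation points so that, for example,
    // percent = 0.25 lands halfway through the second segment.
    public static int IndexFor(float percent)
    {
        return (int)((percent * 4f) + 0.5f);
    }
}
```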

Vertex Calculation

Now we need to calculate the vertex position for the primitive we are adjusting. If we convert the percentage value to an angle between 0 and 2π radians, we can construct a unit vector pointing from the centre of the circle to the vertex position.

To do this we apply a similar methodology as before (value between 0 and 2π radians, offset to start at the top of the circle) to calculate this angle and then construct the vector.
float angle = (percent * MathHelper.TwoPi) - MathHelper.PiOver2;
Vector2 angleVector = new Vector2((float)Math.Cos(angle),
    (float)Math.Sin(angle));
Next, we need to determine a scalar for this unit vector to get the vertex position on the edge of the sprite. This is calculated by comparing the scalars of the X and Y components to get the smallest value (the one which will intersect the edge first).

The reason we use Math.Abs(), or the absolute value, is so that the direction of the unit vector is preserved. The absolute value of a number is its magnitude, i.e. the absolute value of 2 is 2, but the absolute value of -2 is also 2. If we simply used the signed value, we would sometimes get a negative scalar, which would create a vector pointing in the opposite direction.
if (Math.Abs((effect.Texture.Width / 2) / angleVector.X)
    < Math.Abs((effect.Texture.Height / 2) / angleVector.Y))
{
    scalar = Math.Abs((effect.Texture.Width / 2) / angleVector.X);
    textureScalar = Math.Abs(0.5f / angleVector.X);
}
else
{
    scalar = Math.Abs((effect.Texture.Height / 2) / angleVector.Y);
    textureScalar = Math.Abs(0.5f / angleVector.Y);
}

Adjusting Vertices

Finally, we adjust the appropriate vertex before rendering the primitives. To start with, we reinitialise the vertex positions and texture coordinates to reset any previous adjustments we may have made. Then we simply change the appropriate vertex using the unit vector and scalars we previously calculated.

The reason I use index + 1 is because we want to adjust the right edge vertex when filling the circle clockwise, not the left. Since my vertices are defined in a clockwise pattern (where the last vertex is the centre), I simply add 1 to my previously calculated primitive index to get the right edge vertex.
vertices[index + 1].Position = Centre
    + new Vector3(angleVector.X * scalar, -angleVector.Y * scalar, 0);
vertices[index + 1].TextureCoordinate = new Vector2(0.5f, 0.5f)
    + (angleVector * textureScalar);
Finally, we render the primitives using GraphicsDevice.DrawUserIndexedPrimitives(). Remember to use index to specify how many primitives we want to draw; in the case of filling a circle, this is simply index + 1.
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();

    effect.GraphicsDevice.DrawUserIndexedPrimitives
        <VertexPositionNormalTexture>(PrimitiveType.TriangleList,
        vertices, 0, 7, indexes, 0, index + 1);
}

Unfilling a Circle

Unfilling a circle only requires some minor tweaks to the final section. First, adjust the left vertex instead of the right, which in the above example just means replacing index + 1 with index.

Then, modify the draw code to start at the appropriate primitive (the fifth argument) and to draw fewer primitives as index increases (the last argument).
vertices[index].Position = Centre
    + new Vector3(angleVector.X * scalar, -angleVector.Y * scalar, 0);
vertices[index].TextureCoordinate = new Vector2(0.5f, 0.5f)
    + (angleVector * textureScalar);

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();

    effect.GraphicsDevice.DrawUserIndexedPrimitives
        <VertexPositionNormalTexture>(PrimitiveType.TriangleList,
        vertices, 0, 7, indexes, index * 3, 5 - index);
}

Friday, 10 August 2012

Finite State Machines (FSM)

What is a State Machine?

Generally, a state machine is a model for logic systems. It takes the form of an abstract "machine" which can be in one of many different "states". The machine can "transition" between states given some condition has been met.

As an example, a light switch can be modelled using a very basic state machine. It has two states, on and off. If the current state is 'on' and the switch gets toggled then the machine transitions to the 'off' state and vice versa if the current state is 'off'.
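The light-switch example can be modelled in just a few lines (an illustrative sketch, not project code):

```csharp
// Two states and a single transition condition: the switch being toggled.
enum LightState { Off, On }

class LightSwitch
{
    public LightState State { get; private set; }

    public LightSwitch()
    {
        State = LightState.Off;
    }

    // Toggling transitions the machine to the other state.
    public void Toggle()
    {
        State = (State == LightState.On)
            ? LightState.Off
            : LightState.On;
    }
}
```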

How do they apply to games development?

When coding AI (artificial intelligence) for characters, state machines are one of the most common tools in the programmer's arsenal. They provide an intuitive, easy to debug means for creating the various AI behaviours for a character. After all, it's very easy to imagine characters switching between clear behaviours (states), such as 'alerted', 'caution' and 'unaware' in a stealth game.

How could you implement them?

A naive approach would be to simply use a series of if/else statements or a switch structure. For anything more than the most rudimentary of AI behaviour though, this usually leads to messy code that's difficult to debug and hard to follow.
public void Update(GameTime gameTime)
{
    // Change states.
    if (PlayerVisible)
    {
        state = ChaseState;
    }
    else
    {
        // Decide if we should wander or idle.
        if (random.NextDouble() < 0.5)
        {
            state = WanderState;
        }
        else
        {
            state = IdleState;
        }
    }

    // State logic.
    switch (state)
    {
        case ChaseState:
            // Chase logic.
            break;

        case WanderState:
            // Wander logic.
            break;

        // Etc.
    }
}
In this blog post, I'll be discussing a more structured approach that I feel better compartmentalises AI behaviour and keeps the Update() logic easier to follow. It revolves around a central StateMachine class, which manages instances of various State classes.

State Base Class

The base state class will be used for all state behaviour and should declare all of the core functions for any state. For this purpose we'll use a main Update() method for processing logic and Enter() and Exit() methods for initialisation and tidying up when changing states.
abstract class State<T>
{
    public abstract void Update(T owner, GameTime gameTime);

    public abstract void Enter(T owner);
    public abstract void Exit(T owner);
}
We want the state class to be generic so that it can be used for any character. However, if we only used basic object arguments, we wouldn't be able to access any of the class-specific properties and methods of a given character. As such, we use the generic type T when declaring the class and its methods so that, regardless of which character we're creating behaviour for, we can access its properties.

The abstract keyword specifies that both the class and its methods cannot be used directly; they must be inherited/overridden. This concept is similar, though not identical, to the virtual keyword, which means the class/method can be inherited/overridden but can still be used directly.
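A minimal illustration of the difference (not project code): AbstractState cannot be instantiated and forces an override, while VirtualState can be used directly and merely allows one.

```csharp
abstract class AbstractState
{
    // No body allowed: derived classes must provide one.
    public abstract string Update();
}

class VirtualState
{
    // Has a default body: derived classes may override it.
    public virtual string Update()
    {
        return "base behaviour";
    }
}

class IdleState : AbstractState
{
    public override string Update()
    {
        return "idle";
    }
}
```

Attempting `new AbstractState()` would be a compile error, whereas `new VirtualState()` is perfectly valid.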

StateMachine Class

The state machine class is the class that manages a character's state and should declare methods for the owner to update the state logic and for states to transition between each other. As such we have an Update() method, as well as a ChangeState() and RevertState() method.
class StateMachine<T>
{
    T owner;
    State<T> currentState;
    State<T> previousState;
    State<T> globalState;

    public State<T> CurrentState
    {
        get { return currentState; }
    }

    public State<T> GlobalState
    {
        get { return globalState; }
    }

    public State<T> PreviousState
    {
        get { return previousState; }
    }

    public StateMachine(T owner, State<T> global,
        State<T> current)
    {
        this.owner = owner;
        this.globalState = global;
        this.currentState = current;
        this.previousState = current;
    }

    public void Update(GameTime gameTime)
    {
        if (globalState != null)
        {
            globalState.Update(owner, gameTime);
        }

        if (currentState != null)
        {
            currentState.Update(owner, gameTime);
        }
    }

    public void ChangeState(State<T> newState)
    {
        if (newState == null)
        {
            return;
        }

        previousState = currentState;

        currentState.Exit(owner);

        currentState = newState;

        currentState.Enter(owner);
    }

    public void RevertState()
    {
        ChangeState(previousState);
    }
}
As before, we use generic type T when creating the state machine so that when states are updated, the correct class type can be passed as an argument. In addition, we also store and update a global state which can be used for passive behaviour which is not specific to one state. For example, we might have an enemy who should always change to a chasing state upon spotting the player.

Individual States

Individual states inherit from the base state class, with the desired character class passed in the angle brackets to make use of the generic type functionality we previously discussed. All of the abstract methods are then overridden, using the owner argument to access the character's properties and methods when performing logic.
class EnemyIdleState : State<Enemy>
{
    // Constants/Readonly.
    private static readonly EnemyIdleState instance = new EnemyIdleState();
    const float rethinkTime = 5;

    public static State<Enemy> Instance
    {
        get { return instance; }
    }

    public override void Update(Enemy owner, GameTime gameTime)
    {
        owner.Rethink -= gameTime.ElapsedGameTime;

        if (owner.Rethink <= TimeSpan.Zero)
        {
            owner.StateMachine.ChangeState(EnemyThinkState.Instance);
        }
    }

    public override void Enter(Enemy owner)
    {
        owner.Rethink = TimeSpan.FromSeconds(rethinkTime);
    }

    public override void Exit(Enemy owner)
    {
    }
}
In addition, we use a concept known as the singleton pattern to prevent multiple copies of identical states from being instantiated. The idea is that the class holds a single static instance of itself, the singleton, which can be accessed through a static property. This way, instead of passing newly allocated instances when changing states, the singleton can be passed using State.Instance, as seen in the ChangeState() call inside the Update() method above.
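As a usage sketch, an enemy's constructor might wire up its state machine like this; the global-state argument is shown as null here purely for brevity.

```csharp
// Hypothetical constructor snippet: states are passed as singletons
// rather than freshly allocated instances.
stateMachine = new StateMachine<Enemy>(this,
    null,                      // no global state in this sketch
    EnemyIdleState.Instance);  // starting state
```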



The concepts discussed in this blogpost are covered in far more detail in Mat Buckland's Programming Game AI by Example book, along with many other AI techniques.

Thursday, 19 July 2012

Top-Down Shooters and Recoil

Any shooter fan who's played even a couple of games knows how important recoil is for gunplay. It's one of the primary attributes when balancing weapons, for example handicapping a weapon that deals high damage per shot with lots of recoil. Additionally, it can aid immersion, making the player feel even more like they are the soldier/person holding the weapon they're firing.

In this blog post, I'm going to be discussing the potential applications of recoil for a top-down shooter, how they might be implemented and what the advantages and disadvantages of each are.

Spread

Spread is the most common system used for simulating recoil in contemporary shooters. It revolves around the concept that the point on screen the user is aiming at is the centre of a circle and that the bullets fired from the gun will land anywhere within that circle. Obviously, the larger the circle, or "spread", for that weapon, the less likely the bullets are to land exactly where the user is aiming.

Usually this circle tends to grow as the player continuously fires their weapon and shrink when they're not, encouraging users to use controlled bursts (as one would do when firing a real firearm) instead of prolonged spraying.


In 3D games, incorporating spread when firing would be accomplished by randomly selecting an angle between 0 and 2π radians and then adding a random offset less than the current amount of spread to the firing direction. For a 2D shooter, the spread is more of a line instead of a circle, but the concept is still the same. The difference is that you only pick from two directions, left and right.

Pros:
- Relatively simple to implement.
- Common mechanic that's easily understood.

Cons:
- No ability to "manage" recoil.
- Not very immersive.

For the purposes of simplicity, we'll assume that the player class keeps track of the total recoil, allowing it to persist when changing weapons or performing actions. This value will be incremented when the weapon is fired and decreased gradually over time in the player's Update() method. When we project the shots into our world, we simply adjust the firing angle by a random amount lower than the total recoil.
// Determine which direction to spread in.
float direction = random.NextDouble() > 0.5 ? 1 : -1;

// Determine amount of spread.
float spread = direction * (float)random.NextDouble() * recoil;

// Construct ray direction.
Vector2 aiming = Vector2.Transform(Vector2.UnitX,
    Matrix.CreateRotationZ(rotation + spread));

aiming.Normalize();
Now the direction is ready for raycasting into the world. It's been slightly offset from the player's rotation by an amount which is less than the total amount of spread in either direction. Obviously, you can combine several of these lines to shorten the implementation and avoid the temporary variables.
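The gradual decrease mentioned earlier might be sketched as follows, assuming the player stores recoil as a float and recoilRecovery is a tuning constant in units per second; both names are illustrative.

```csharp
// Sketch of recoil recovery in the player's Update() method.
public void Update(GameTime gameTime)
{
    float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;

    // Shrink the spread back towards zero while not firing.
    recoil = Math.Max(0f, recoil - (recoilRecovery * elapsed));
}
```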

View Kick

View kick is also relatively prevalent in modern 3D shooters, though it's usually used more as a slight immersive effect rather than the core mechanic. The idea behind it is that when the user fires their weapon, a slight offset or "kick" is applied to the player's view to mimic the gun recoiling in their hands. Larger calibre weapons would obviously add a greater offset than smaller ones.

Like spread, the view could gradually reset back to its previous position when not firing. Some players find it more difficult to engage multiple targets when the aim is resetting though, so some games simply leave the view where it ends up to avoid this issue.


Incorporating view kick in a 3D shooter is done by selecting a direction which is mostly up, using a similar approach to spread, and adjusting the player's view by an appropriate amount for the weapon held. Once again, for 2D shooters you would only select from two directions (left or right), though you would also add a small amount of noise to the offset to compensate for the lack of a second axis.

Pros:
- More immersive than other approaches.
- Allows user to manage recoil by adjusting aim.

Cons:
- Large amounts of kick can be jarring.

View kick is slightly simpler to implement than spread, as it just involves adjusting the player's rotation. However, the recoil value needs to be incremented both positively and negatively when firing to make it kick in both directions. You would then override the player's rotation property to include this value, such that every time it is updated it "kicks" the player's view.
// Determine which direction to kick in.
float direction = random.NextDouble() > 0.5 ? 1 : -1;

// Calculate random noise.
float noise = (random.NextDouble() > 0.5 ? 1 : -1)
    * (float)random.NextDouble() * MaxNoise;

// Determine amount of kick.
float kick = (direction * recoilRate) + noise;

// Add recoil.
recoil = MathHelper.Clamp(recoil + kick, -0.25f, 0.25f);
The above implementation assumes that there is a gradual reset and clamps the rotation to stop the player spinning, though you could omit both to permanently alter the player's aim and avoid the issues with resetting. Raycasting proceeds as normal since the offset is applied directly to the player's rotation value, not the aiming angle.
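Overriding the rotation property as described might look like the sketch below, assuming rotation and recoil are the same fields used elsewhere in this post.

```csharp
// Every read of Rotation includes the accumulated kick, so rendering
// and raycasting both see the offset automatically.
public float Rotation
{
    get { return rotation + recoil; }
    set { rotation = value; }
}
```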

Crosshair Kick

Crosshair kick is a slightly different take on view kick which is more common in simulator-style shooters, where head movement can be controlled separately from weapon aim. It works by applying kick to the weapon and/or crosshair, rather than the entire character. Like view kick, more offset is added for larger calibre weapons.

Unlike view kick, it would be impossible to omit resetting the offset as the crosshair could potentially get stuck at a limit or otherwise scroll off screen. As such, the crosshair must gradually reset when not firing to avoid this problem, similar to how spread reduces when idle.


Crosshair kick is performed using an offset similar to that in view kick. The difference is that rather than immediately applying it to the player's view, the offset is applied to the aiming direction when firing. It might also be necessary to update both the crosshair and the player's arms/weapon to point in the direction of the offset so that the player has a rough idea of where they're aiming.

Pros:
- Allows user to manage recoil.
- Not as jarring as view kick.

Cons:
- Slightly more complicated to implement.

Crosshair kick is implemented in a similar manner to view kick. The difference is that rather than updating the player's rotation, this value is used when adjusting the aim. I prefer to adjust the vector, rather than the angle, as that allows us to more easily determine where the crosshair is on screen (so that we can draw it later).
// Calculate firing angle.
Vector2 aiming = Vector2.Transform(new Vector2(1, recoil),
    Matrix.CreateRotationZ(rotation));

aiming.Normalize();
This way the recoil value can be directly used when drawing the crosshair to place it exactly where the bullets would go. You simply scale the value by how far away the crosshair is from the player. (i.e. If the crosshair is 480 pixels away from the player, then you use 480 * recoil.)
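As a sketch, the crosshair's screen position can be derived from the same aiming vector; here playerScreenPos is an assumed variable holding the player's position on screen, and the crosshair is fixed 480 pixels away.

```csharp
// Reuse the aiming calculation to place the crosshair on screen.
const float crosshairDistance = 480f;

Vector2 aiming = Vector2.Transform(new Vector2(1, recoil),
    Matrix.CreateRotationZ(rotation));
aiming.Normalize();

Vector2 crosshairPos = playerScreenPos + (aiming * crosshairDistance);
```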

Friday, 6 July 2012

Project "Lead"

What is Project "Lead"?

Lead is a small project I work on every so often to explore game development aspects and to keep my programming skills sharp. At the time of writing it is a very rudimentary top-down shooter platform which I hope to eventually evolve into a small game.

Which programming language do you use and why?

I use C# with the XNA framework when working on Lead. XNA provides many tools and libraries which allow me to get into core gameplay development very quickly. C# is also one of the first programming languages I was taught and as such is the one I am most familiar with.


What features does Lead have currently?

Currently Lead has a lot of the fundamental basics of a shooter in place. The character is controllable and the camera follows them correctly. They also correctly collide with walls and obstacles without passing through. The player can also fire a weapon which can trace impacts and deal damage to targets.

Why did you choose to rotate the camera with the character?

Lead is an experimental platform for my ideas. Typically, top-down shooters have the camera positioned directly over the player with no rotation. The problem with this perspective is that it makes it difficult for enemies to sneak up on and surprise the player. I'm a big fan of tactical shooters and survival/horror games; as such, I wanted to experiment with a more focused camera perspective.

I considered using a kind of "torch" mechanic to only light up the area ahead of the player, but I realised that it might be equally effective to simply position the camera in front of the player. I also settled quickly on the idea of rotating the camera with the player to get around the issue where players using widescreens would see further depending on the camera position.

Demonstrating how the player can see further in some directions with a widescreen

What technical aspects have you explored during development?

During development I've had to look into simple collision detection, including circles, boxes, etc. I've also had to look into camera transformations, specifically a format which allows both triangle primitive transformations and for use in XNA's SpriteBatch class. More recently I've also looked into raycasting for dealing with weapon mechanics, including polygon and circle intersections.

What features are you planning on working on next?

I'm planning to get more in depth with weapon mechanics including recoil/spread, animations and variable firing modes (automatic, burst, etc.). After that I'll be looking into rudimentary AI to create some enemies, which will likely grow to involve pathfinding and enemies reacting to stimulus (such as gunfire). After that I'll likely start working on incorporating some proper art assets and a minimalistic HUD.

Tuesday, 26 June 2012

Ray/Polygon Intersections

Raycasting is a particularly useful technique in games, allowing you to do things such as cast beams of light and trace bullet paths. In this opening blog post, we'll look at how to deal with casting a ray into 2D space and calculating where it intersects a primitive object.

In 3D space this would be performed by constructing a plane, typically facing the ray origin, and testing if the ray intersects this plane. If an intersection is detected then a check is performed to see if the point is inside the polygon. Further calculations would then be made to find the exact point of intersection.

In 2D, however, a much simpler approach is to perform ray/line intersection tests against each side of the polygon. The algorithm could be optimised for first-intersection tests by only checking the segments on the half of the polygon closest to the ray origin. For the purposes of simplicity, though, the approach I'm going to demonstrate simply tests all of the segments, as the number of vertices I will be dealing with is very small.

Ray/Line Intersection

The first step is the ray/line intersection test. XNA does possess a Ray structure, but this is more designed for 3D space and contains a lot of superfluous data and methods. As such, we'll represent a ray by simply passing two vectors, one representing the origin and the other representing the direction. As well as detecting whether we have an intersection, we will also want to detect where along the ray the intersection is using a float out argument.

The first part of the method eliminates the case where the two lines are parallel, as this is not as straightforward to deal with and the ray would likely be intersecting another side of the polygon anyway. We construct a vector representing the segment from its start point to its end point. If the dot product of the ray direction and the perpendicular of the line segment is (approximately) zero, then the lines are parallel and we can return false.

The remainder of the method deals with the more complex task of determining where an intersection would take place. We take the vector connecting the ray's origin to a point on the line, which we will call d. Dividing the dot product of the segment's perpendicular and d by the previously calculated perpendicular/ray dot product gives us t, our depth of intersection along the ray. The variable s is the depth of intersection along the line segment. This is calculated by dividing the dot product of the ray direction's perpendicular and d by the same perpendicular/ray dot product.

Both s and t are scalars for the direction vectors. As such, if s is less than 0 or greater than 1 then the intersection point cannot be on the line segment, as it is further away (or in the opposite direction in the case of values less than 0) than the segment connecting the two points. Likewise, t must be greater than 0 to indicate that the intersection is not in the opposite direction of the ray.
public static bool RayLineIntersect(Vector2 ro, Vector2 rd,
    Vector2 l1, Vector2 l2, out float t)
{
    Vector2 seg = l2 - l1;
    Vector2 segPerp = new Vector2(seg.Y, -seg.X);
    float perpDotd = Vector2.Dot(rd, segPerp);

    // If lines are parallel, return false.
    if (Math.Abs(perpDotd) <= float.Epsilon)
    {
        t = float.MaxValue;
        return false;
    }

    Vector2 d = l1 - ro;

    t = Vector2.Dot(segPerp, d) / perpDotd;
    float s = Vector2.Dot(new Vector2(rd.Y, -rd.X), d) / perpDotd;

    // If intersect is in right direction and in segment bounds, return true.
    return t >= 0.0f && s >= 0.0f && s <= 1.0f;  
}

Ray/Polygon Intersection

Now that we have the method for calculating whether a ray intersects a single line segment, we can determine if there is an intersection on a polygon by looping through each of the segments connecting the vertices. Of course at this stage we also need to calculate some additional information which would be useful when resolving collisions.

The first block inside the loop tracks where the ray first intersects the polygon, as this would be useful for tasks such as creating bullet impacts. This is done by comparing the t value from each positive line intersection test and continually storing the smallest value (the closest to the origin). The point of intersection is also updated by simply taking the ray origin and adding the ray direction scaled by t.

The same block then calculates the normal of the closest intersecting edge, as this would be useful for physical projectiles such as grenades, which may bounce off obstacles. This is simply a case of taking the segment connecting the current pair of vertices and storing its normalised perpendicular.
public static bool RayPolygonIntersect(Vector2 ro, Vector2 rd,
    Vector2[] polygon, out float t, out Vector2 point, out Vector2 normal)
{
    t = float.MaxValue;
    point = ro;
    normal = rd;

    // Comparison variables.
    float distance;
    int crossings = 0;

    for (int j = polygon.Length - 1, i = 0; i < polygon.Length; j = i, i++)
    {
        if (RayLineIntersect(ro, rd, polygon[j], polygon[i], out distance))
        {
            crossings++;

            // If new intersection is closer, update variables.
            if (distance < t)
            {
                t = distance;
                point = ro + (rd * t);

                Vector2 edge = polygon[j] - polygon[i];
                normal = Vector2.Normalize(new Vector2(-edge.Y, edge.X));
            }
        }
    }

    return crossings > 0 && crossings % 2 == 0;  
}
If the number of crossings is greater than zero and evenly divisible by two (i.e. the ray origin is outside the polygon), then we know we have an intersection. The GIF below demonstrates a quick sample I constructed using these two methods. The normal data has been omitted, however the point of intersection (and by extension the depth t) is visible as the small maroon square appearing on the surface of the octagon.
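As a quick sanity check of the two methods, casting a ray from the origin along the positive X axis into an axis-aligned unit square centred at (3, 0) should report a hit on the square's near face:

```csharp
// Vertices listed in order around the square, as the loop in
// RayPolygonIntersect expects.
Vector2[] square =
{
    new Vector2(2.5f, -0.5f),
    new Vector2(3.5f, -0.5f),
    new Vector2(3.5f,  0.5f),
    new Vector2(2.5f,  0.5f),
};

float t;
Vector2 point, normal;

bool hit = RayPolygonIntersect(Vector2.Zero, Vector2.UnitX,
    square, out t, out point, out normal);

// hit == true, t == 2.5, point == (2.5, 0), normal == (-1, 0):
// the ray strikes the near face, 2.5 units from the origin.
```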


The concepts discussed are a slight modification of those found in the following blogpost: http://afloatingpoint.blogspot.co.uk/2011/04/2d-polygon-raycasting.html