The last piece of coursework I submitted in the second year of my degree was a program in Java which simulated three flocking behaviours: cohesion, separation and alignment. I then chose to extend this further by adding a predator-type agent and a playable agent, which the user could control to accumulate a score.
Both the program and report were marked highly, achieving 80%. I am especially proud of this as my second year at university was affected by strikes, with the Java Programming module being affected the most heavily.
Below is the submitted report; the code can be found on my GitHub, here:
Java Programming: Report
In the program, three behaviours determine the overall flocking characteristics of the agents: cohesion, separation and alignment. These behaviours are controlled by several fields possessed by each flocking agent. The first is a constant for each behaviour, between 0 and 1 inclusive, which determines how strongly that behaviour affects the agent. Each agent also has a radius inside which it can ‘see’ other agents and access their position and angle.
To produce the flocking behaviour, a flocking agent first uses the equation of a circle to check every other agent in a given array, besides itself, testing whether each lies within a circle centred on the calling agent with a radius determined by that agent’s field. From this, an array is produced containing a copy of every agent within the circle.
If this array contains at least one agent, the mean position and angle of every agent in the array is calculated. This is done simply by cycling through the array of found agents, accumulating the X coordinates, Y coordinates and angles, then dividing each total by the size of the array. This division makes checking the array size important, as it prevents the program from dividing by zero.
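The two steps above can be sketched as follows. This is a minimal illustration of the technique, not the project's actual code: the method names and the use of a plain `double[]` of `{x, y, angle}` per agent are my own simplifications.

```java
import java.util.ArrayList;
import java.util.List;

public class FlockMath {
    // Returns every agent (other than 'self') lying within 'radius' of
    // 'self', using the equation of a circle: dx^2 + dy^2 <= r^2.
    static List<double[]> searchRadius(double[] self, List<double[]> agents, double radius) {
        List<double[]> found = new ArrayList<>();
        for (double[] a : agents) {
            if (a == self) continue; // skip the calling agent
            double dx = a[0] - self[0];
            double dy = a[1] - self[1];
            if (dx * dx + dy * dy <= radius * radius) {
                found.add(a);
            }
        }
        return found;
    }

    // Mean x, y and angle of the found agents; the caller must check the
    // list is non-empty first, to avoid dividing by zero.
    static double[] calculateAverages(List<double[]> found) {
        double sx = 0, sy = 0, sa = 0;
        for (double[] a : found) {
            sx += a[0];
            sy += a[1];
            sa += a[2];
        }
        int n = found.size();
        return new double[]{sx / n, sy / n, sa / n};
    }

    public static void main(String[] args) {
        double[] self = {0, 0, 0};
        List<double[]> agents = new ArrayList<>();
        agents.add(self);
        agents.add(new double[]{3, 4, 90});   // distance 5: inside radius 10
        agents.add(new double[]{100, 0, 0});  // outside radius 10
        List<double[]> seen = searchRadius(self, agents, 10);
        System.out.println(seen.size());                // 1
        System.out.println(calculateAverages(seen)[2]); // 90.0
    }
}
```

Comparing squared distances against the squared radius, as here, avoids an unnecessary square root per check.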
The average position returned equates to the centre of mass of all visible agents. This is used to determine the angle required for the calling agent to turn and face the centre of mass: the angle of cohesion. From this, the angle of separation can be immediately calculated by adding 180 degrees to the cohesion angle. The angle of alignment is calculated more simply still, by subtracting the agent’s own angle from the average angle of the visible flock.
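A sketch of the three angle calculations, under illustrative conventions of my own (and ignoring, for now, the canvas's inverted y-axis discussed later in the report):

```java
public class BehaviourAngles {
    // Angle needed to turn and face the centre of mass (cohesion),
    // relative to the agent's current heading, in degrees.
    static double cohesionAngle(double x, double y, double heading,
                                double comX, double comY) {
        double target = Math.toDegrees(Math.atan2(comY - y, comX - x));
        return target - heading;
    }

    // Separation is simply cohesion plus 180 degrees: turn directly away.
    static double separationAngle(double cohesion) {
        return cohesion + 180;
    }

    // Alignment: difference between the flock's average angle and our own.
    static double alignmentAngle(double heading, double averageAngle) {
        return averageAngle - heading;
    }

    public static void main(String[] args) {
        double c = cohesionAngle(0, 0, 0, 10, 0); // centre of mass dead ahead
        System.out.println(c);                      // 0.0
        System.out.println(separationAngle(c));     // 180.0
        System.out.println(alignmentAngle(10, 50)); // 40.0
    }
}
```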
The formula for implementing each of the three behaviours has the same form:

𝛳 = 𝛳 + Kc𝛳c

where 𝛳 is the angle of the agent in question, Kc is a constant associated with the strength of a flocking behaviour and 𝛳c is the angle of movement associated with that behaviour.
Because of this, each behaviour can be implemented by applying the same formula three times in sequence, substituting only the constant and angle in each use. This produces a final angle influenced by all three behaviours, with the constants dictating which behaviours are exhibited most strongly.
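Applying the update in sequence might look like the sketch below; the constant and angle values are illustrative, not taken from the project.

```java
public class FlockUpdate {
    // One application of the shared formula: theta = theta + K * thetaC.
    static double apply(double theta, double k, double behaviourAngle) {
        return theta + k * behaviourAngle;
    }

    public static void main(String[] args) {
        double theta = 0;
        // Illustrative behaviour angles (degrees) and strength constants.
        double cohesionAngle = 30, separationAngle = 210, alignmentAngle = -10;
        double kc = 0.5, ks = 0.25, ka = 0.125;
        theta = apply(theta, kc, cohesionAngle);   // 15.0
        theta = apply(theta, ks, separationAngle); // 67.5
        theta = apply(theta, ka, alignmentAngle);  // 66.25
        System.out.println(theta); // 66.25
    }
}
```

Setting one constant much higher than the others makes that behaviour dominate, which is exactly the tuning described above.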
This is the basic algorithm, though my program adds some extensions. For example, there are several other types of agent, one of which is called a ‘Predator’. As the name suggests, it acts as a predatory figure to which the flocking agents are prey. Predators create more complex movements, even though they reuse separation, a pre-existing behaviour.
After the search within the radius is made, a check is made to see whether any of the agents found are of the Predator class. If they are, their positions do not affect the centre of mass and the normal flocking characteristics are ignored. Instead, the angle of separation is calculated from the predator’s position and the separation constant is raised so the agent separates sharply.
In terms of the code, the first class to be written was the Agent class. For this base class, the goal was to achieve the functionality required by all subsequent extensions and nothing more. This is because the Agent class would never be instantiated directly, only its extensions, held inside a polymorphic ArrayList.
Much like the labs, the base class contains the essential fields: position, angle, speed and relative size. Its methods were likewise designed to be the necessities used by every extension of the class, providing functionality such as updating the position, drawing the agent, turning the agent and wrapping the agent’s movements around the canvas.
As well as these basic methods, the Agent class also contains the searchRadius method, which collates a list of agents within a circle centred on the calling agent, as described previously. Although the method is not used in every extension of Agent, it is used in both flockingAgent and playableAgent, making it worth placing in the Agent class to avoid duplicated code.
The Agent class was first extended to implement the flocking algorithm. Following the algorithm outlined previously, flockingAgent contains many short methods, each implementing one step of the algorithm, all called within a single Flock method. This improves readability, even without comments, as the method names in sequence provide a clear description of the algorithm’s steps.
As well as these, two other Agent extensions are used: randomAgent and playableAgent. The former is much like a standard Agent but overrides the method which updates the Agent’s position, adding randomised turns and no flocking. The latter has its movements dictated by player keyboard input and uses the searchRadius method to generate and update a score for the player.
Furthermore, randomAgent is extended to produce the Predator class. This class has no special methods or fields of its own; it uses the random movements of its parent class while appearing distinct to the checkForPredator method. This lets randomAgent and Predator both move randomly without duplicated code, while only randomAgent affects the flocking of other agents and only Predator can delete them.
All these extensions and variants are stored within a package called Agent, to increase modularity of the code and ease of navigation. This is exceptionally useful when copying packages across projects, much like the basic drawing classes and utility classes were copied across in the labs. With this package, if somebody were to implement their own GUI but wanted to use the Agent classes I’d written, it’s incredibly easy to do so with little searching.
A second package contains the drawing classes, primarily created with the guidance of the lab scripts to simplify the process of creating a canvas and drawing on it. The Canvas class itself was provided and not my own work; my work consisted of the LineSegment class, the CartesianCoordinate class and its extension, the CoordinateWithAngle class.
The former two classes are very simple, with few to no methods for working with Cartesian coordinates; they work simply as data structures. The latter, CoordinateWithAngle, extends CartesianCoordinate to include one extra field, a double used as an angle. The class was made for a single purpose: use in the calculateAverages method, which calculates an average position and angle from a list of agents. It allows the average X coordinate, Y coordinate and angle to be stored together in a single data structure, simplifying processing and reducing human error, since the averaging method takes one fewer parameter.
Alongside the Canvas class, the only part of the program not entirely my own is the Utilities class from the labs, which was copied across, placed into its own package and added to. Aside from the existing methods, it now includes a method which takes any number of degrees and returns the equivalent angle between -179 and 180 inclusive, and a variation of randomInt which takes an integer upper boundary and returns a random integer between that maximum and its negative, inclusive. Also included are two public static final fields, acting as constants for the frame dimensions.
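The two added utility methods might look like the sketch below. The class and field names here are assumptions of mine, not the project's actual identifiers.

```java
public class UtilsSketch {
    // Frame dimensions, held as constants for use across the program
    // (values illustrative).
    public static final int WINDOW_X = 1000;
    public static final int WINDOW_Y = 800;

    // Maps any number of degrees to the equivalent angle between
    // -179 and 180 inclusive.
    public static double wrapAngle(double degrees) {
        double a = degrees % 360;
        if (a > 180) a -= 360;
        if (a <= -180) a += 360;
        return a;
    }

    // Variation of randomInt: takes an upper boundary and returns a
    // random integer from -max to +max inclusive.
    public static int randomInt(int max) {
        return (int) Math.floor(Math.random() * (2 * max + 1)) - max;
    }

    public static void main(String[] args) {
        System.out.println(wrapAngle(190));  // -170.0
        System.out.println(wrapAngle(-180)); // 180.0
        System.out.println(randomInt(5));    // somewhere in -5..5
    }
}
```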
Entry Class and GUI
The lion’s share of the main class is taken up by setting up the GUI, with the game loop itself spanning only around 40 lines of code.
The GUI features four buttons, six labelled sliders and two standalone labels. The buttons start the game mode and initialise different types of agent, the sliders vary the speed of the simulation and the agents’ control parameters, and the labels display the current game state and player score.
Three containers are used with two layout types, BorderLayout and FlowLayout. They are combined such that there is one main frame using BorderLayout, with two panels placed in its North and South sections. These two inner panels use FlowLayout to arrange the buttons and sliders. The canvas is also placed in the main frame, in the Center.
Listeners for all the buttons and sliders also reside inside the constructor of the entry class. This includes the key listener, used to control the player’s movements. Since listeners need to be attached to objects, the key listener is attached to the game start button, as it will always hold the focus at the moment the player starts the game. Unfortunately, this causes an issue: if a player uses any other button or slider after the game has started, they lose control of the playableAgent until they click back on the game start button.
Finally, the game loop resides in its own method and is called by the main constructor. The game loop has a simple structure: it undraws every agent, checks if any are about to be ‘eaten’ by a predator, calculates the new position for every agent, removes any ‘eaten’ agents, redraws each agent, checks if the game has been lost and, finally, pauses briefly to give a frame rate of close to 1000 / deltaTime, which in my case is 1000 / 20 ms = 50 frames per second.
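The loop's shape can be sketched as below; the commented-out method names stand in for the real steps and are not the project's actual identifiers.

```java
public class GameLoopSketch {
    static final int DELTA_TIME_MS = 20; // pause per frame

    // Frame rate implied by the pause length: 1000 ms / deltaTime.
    static int fps(int deltaMs) {
        return 1000 / deltaMs;
    }

    public static void main(String[] args) throws InterruptedException {
        boolean gameLost = false;
        int frames = 0;
        while (!gameLost && frames < 3) { // a few iterations for illustration
            // undrawAgents();      -- blank each agent at its old position
            // checkForPredators(); -- flag any agent about to be eaten
            // updatePositions();   -- move every agent
            // removeEaten();       -- delete flagged agents
            // drawAgents();        -- redraw at the new positions
            // gameLost = checkGameOver();
            Thread.sleep(DELTA_TIME_MS); // hold the frame rate near 50 fps
            frames++;
        }
        System.out.println(fps(DELTA_TIME_MS)); // 50
    }
}
```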
In implementing the Agent class, there were some interesting challenges. The foremost is that, because of the way the canvas is implemented, coordinates on the canvas have their y values inverted. The Cartesian coordinate [0, 0] would ordinarily fall at the centre of a four-quadrant graph or the lower left corner of a one-quadrant graph. The canvas, however, places this point at the top left of the screen, as it works like a one-quadrant graph vertically inverted. I knew I had to use built-in trigonometric functions as part of my program, which meant this discrepancy had to be accounted for.
This was combatted in several ways. The first correction came in the move method of the Agent class. In this method, the agent’s current position, current angle, and distance to be moved are used to calculate the agent’s final position. This is done by imagining a right-angled triangle where the distance to be moved is the hypotenuse and the agent’s movements make up the adjacent and opposite sides to reach the desired distance from the start point. Ordinarily, the X-axis movement would be dictated by the adjacent side and the Y-axis dictated by the opposite side, but this is reversed for this implementation of the canvas, correcting the agent’s movements.
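A sketch of the corrected move step; the exact sign conventions here are my own assumption, the point being the swapped sin/cos roles described above.

```java
public class MoveSketch {
    // Moves a point 'distance' units along heading 'angleDeg'. Relative to
    // a standard Cartesian move (x from the adjacent side via cos, y from
    // the opposite side via sin), the roles are swapped to account for the
    // canvas's inverted y-axis.
    static double[] move(double x, double y, double angleDeg, double distance) {
        double rad = Math.toRadians(angleDeg);
        return new double[]{x + distance * Math.sin(rad),
                            y + distance * Math.cos(rad)};
    }

    public static void main(String[] args) {
        double[] p = move(0, 0, 90, 10); // 90 degrees moves along x only
        System.out.println(p[0]); // 10.0
        System.out.println(p[1]); // ~0.0
    }
}
```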
In other places, such as calculating the average position of all agents within a radius, half of each axis width is subtracted from position values, to reference their point of origin to the centre of the canvas. This was easy to do across several methods and classes due to the use of public static final fields, which I’ve placed in the Utils class. This allows for them to be used across the entire program, much like constants. The same principle is used to prepare the average position of a seen flock for the calculation of the angle of cohesion.
The method wrapPosition also makes use of the final values controlling the canvas dimensions, by using them to control the point at which an agent will have gone offscreen and, therefore, will need to be undrawn at the boundary and redrawn at the opposing side. The method doesn’t just test an agent against the canvas boundaries, however. For aesthetic purposes, to avoid the user seeing agents instantly appear and disappear, a buffer is used. The agents are allowed to leave the screen by a set number of pixels, controlled by that boundary’s buffer value, and are redrawn offscreen in the same way. The value of the buffer changes for each boundary, as the buttons and sliders occlude parts of the canvas.
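Per-axis, the wrapping with a buffer can be sketched like this (the buffer and boundary values are illustrative, not the project's actual per-edge values):

```java
public class WrapSketch {
    // Wraps a single coordinate across a boundary, allowing the agent to
    // leave the visible area by 'buffer' pixels before reappearing just
    // offscreen on the opposite side.
    static double wrap(double pos, double min, double max, double buffer) {
        if (pos > max + buffer) return min - buffer;
        if (pos < min - buffer) return max + buffer;
        return pos;
    }

    public static void main(String[] args) {
        System.out.println(wrap(1050, 0, 1000, 30)); // -30.0 (wrapped round)
        System.out.println(wrap(500, 0, 1000, 30));  // 500.0 (unchanged)
    }
}
```

Calling this once per axis, with that boundary's own buffer value, reproduces the behaviour described above.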
Another interesting piece of problem solving was having agents recognise a predator by holding an empty Predator as a field. This predator is never drawn; an empty Predator constructor was written explicitly for this purpose. The field is compared against the class of any visible agent using the getClass method; its only purpose is as a reference.
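The reference-object comparison can be demonstrated in isolation; the stripped-down classes below are illustrative stand-ins for the project's own.

```java
class Agent {
    Agent() { }
}

class Predator extends Agent {
    Predator() { } // empty constructor: reference predators are never drawn
}

public class PredatorReference {
    public static void main(String[] args) {
        Agent reference = new Predator(); // held as a field by each agent
        Agent flocker = new Agent();
        Agent hunter = new Predator();
        // getClass comparison distinguishes predators from other agents
        System.out.println(flocker.getClass() == reference.getClass()); // false
        System.out.println(hunter.getClass() == reference.getClass());  // true
    }
}
```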
The checkForPredator method which implements this resides in the flockingAgent class, but an empty method of the same name resides in the Agent class. The empty method exists so that, in the main game loop, the method can be called on all agents, even agent types like randomAgent which ignore predators anyway. This allows a simpler call, as one line can be applied to every agent type and simply has no effect on agents which do not rely on the method.
The same technique is used for the Flock method, which implements the flocking algorithm. Even though only one type of agent uses the algorithm, all agents contain a method of this name. The difference, again, is that flockingAgent overrides the empty method and implements the algorithm, allowing every agent to call the method while only the relevant agents experience any effect from it.
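The empty-method pattern in miniature; `BaseAgent` and the one-line `flock` body are illustrative stand-ins for the real classes and algorithm.

```java
import java.util.ArrayList;
import java.util.List;

class BaseAgent {
    double angle = 0;
    void flock(List<BaseAgent> all) { /* no-op: non-flocking agents ignore this */ }
}

class FlockingAgent extends BaseAgent {
    @Override
    void flock(List<BaseAgent> all) {
        angle += 1; // stand-in for the real flocking algorithm
    }
}

public class EmptyMethodDemo {
    public static void main(String[] args) {
        List<BaseAgent> agents = new ArrayList<>();
        agents.add(new BaseAgent());
        agents.add(new FlockingAgent());
        for (BaseAgent a : agents) a.flock(agents); // one call for all types
        System.out.println(agents.get(0).angle); // 0.0 (unaffected)
        System.out.println(agents.get(1).angle); // 1.0 (flocked)
    }
}
```

Dynamic dispatch picks the override for flockingAgent and the no-op for everything else, which is what keeps the game loop to a single line per step.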
The GUI uses anonymous inner classes to make the buttons, sliders and labels interactive. The sliders vary the parameters of the agents, bar one which modifies the simulation speed. All are labelled using a Hashtable, with the labels anchored to the middle value of each slider. Listeners for these sliders cycle through every agent and apply the new value to each using a for-loop.
The buttons are mostly used to add new instances of the several agent types, with their initial values assigned from the current slider values. All agents are initialised at a random point and angle. The X and Y coordinates are generated using a random integer between 0 and 400 less than the screen dimension, with 200 then added to the result. This generates a position at least 200 pixels away from every screen boundary, so agents always appear visibly, with good space to move and be seen by the user before hitting a boundary. The playable agent, however, always appears at the point [300, 300], facing downwards. This places it roughly below the game start button which, paired with its larger size, makes it immediately visible to the user.
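The spawn-coordinate trick, sketched with an assumed method name and `java.util.Random` in place of whatever random source the project uses:

```java
import java.util.Random;

public class SpawnSketch {
    // A random integer between 0 and (dimension - 400), shifted by 200,
    // keeps the spawn point at least 200 px from either boundary on
    // that axis.
    static int randomSpawn(Random rng, int dimension) {
        return rng.nextInt(dimension - 400 + 1) + 200;
    }

    public static void main(String[] args) {
        Random rng = new Random();
        int x = randomSpawn(rng, 1000);           // always in 200..800
        System.out.println(x >= 200 && x <= 800); // true
    }
}
```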
The predator class is an extension of the randomAgent class. The point of this class is to randomly move around the canvas, ignoring flocking algorithms, and act as a randomAgent until met by a flockingAgent. If the flockingAgent comes within the kill radius of a predator type, the flockingAgent will disappear.
Since an ArrayList cannot be modified while it is being iterated over, if an agent needs removing because it has been caught, a boolean flag is set to true and a reference to the agent is saved. An if statement then searches the ArrayList for the matching agent and removes it. This approach ensures that removal is never attempted on the same agent twice, which would cause an error, and that removal never takes place while the ArrayList is being cycled through.
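The flag-then-remove pattern in miniature, with strings standing in for agents and the equality check standing in for the kill-radius test:

```java
import java.util.ArrayList;
import java.util.List;

public class DeferredRemoval {
    // Flags the caught agent during iteration and removes it only after
    // the loop finishes, avoiding a ConcurrentModificationException.
    static List<String> removeCaught(List<String> agents, String target) {
        boolean eaten = false;
        String caught = null;
        for (String a : agents) {       // iterate without modifying the list
            if (a.equals(target)) {     // stand-in for the kill-radius check
                eaten = true;
                caught = a;
            }
        }
        if (eaten) {
            agents.remove(caught);      // removal happens after iteration
        }
        return agents;
    }

    public static void main(String[] args) {
        List<String> agents = new ArrayList<>(List.of("a", "b", "c"));
        System.out.println(removeCaught(agents, "b").size()); // 2
    }
}
```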
As well as this, a game mode was added. It is triggered by the player pressing the game start button, which instantiates a playableAgent and sets a boolean flag to true; this flag stops further agents being added once the game has started. The score accumulates based on how many agents are within the radius of the playable agent; at regular intervals, the score is incremented by this number. The player can use the controls to adjust the parameters of the other agents while the game is running, and the game runs until a predator removes all the flocking agents present, if there are any. If there were never any flocking agents, the game will not stop. Once the game is over, the game loop stops and a label’s text changes to ‘YOU LOST’.