Model design is one of the most important stages of the development of any agent-based model.  Get the design wrong and you might find yourself tied up in knots, battling against the structure of the project you defined months before.  Get it right and you’ll leave yourself an extensible platform to develop into, allowing yourself time to concentrate on refining your model structure.

While developing an agent-based model during my PhD, I found that although many authors write about the application and implementation of ABM, each model is approached on a seemingly ad-hoc basis, with little reference to a unifying framework or starting point.  So I decided to develop my own.

The framework is divided into four sections, intended to be approached in the order in which they are presented.  Within each section, subsections pose questions to the modeller about how they might approach each facet of the design.  Only once the modeller has developed a strong understanding of each element of the design (including deciding whether a given factor is, in fact, relevant) should they proceed with the development of the ABM.  The framework is intended to be generic, applicable across disciplines and language neutral (although elements naturally align with the structure of some of the ABM tools).

The framework is outlined below.  I’d be very interested to receive any feedback, positive or negative, or (even better) news of anyone applying this framework in the development of their agent-based model.

The Observer

The Observer refers to everything that exists outside of the simulation environment. These are the constructs within which the simulation exists: the theoretical constructs, the role of the modeller, and the environment within which the simulation will be developed. All of these aspects of design should be considered before model development continues, ensuring the modeller is taking the right steps in choosing to develop an ABM in lieu of other approaches.

In outlining the Observer, one should consider each of the following elements, answering each associated question, noting particularly where inherent uncertainty exists.

  • Mission Statement: What is the fundamental process one is seeking to model? Why is ABM being used to simulate this process? Do alternative models exist that ABM will demonstrate a significant advantage over?
  • Measuring Accuracy: How is the process under investigation ordinarily measured? Over what distribution and variance do these measurements lie? How accurate are these existing measurements? In view of existing measures of the process, what outputs will be generated during the simulation process? How will the simulation outputs be related directly to real-world measures? What statistical tests will be carried out to measure model accuracy?
  • Software: Which software package will be used for the construction of the simulation? Can the model be developed using generic terms or is a modelling framework required? If so, which specific features are sought from a modelling framework? What is the modeller’s existing programming skill set? How feasible is an extension of this skill set within the time frame allowed for the development of this model?
  • Visualisation: Who is the audience for this model? Is the real-time visualisation of results important? What purpose does the visualisation hold? How much increased computation will the visualisation require (particularly where one is considering 3D animation)?
  • Bias: From what perspective is the model being developed? What is the background of the modeller? What are the natural assumptions being built into the model? How can the influence of modeller bias be removed from the approach, to the greatest extent possible?

The World

The World refers to the modelled environment within which the simulation will take place. The World does not only encompass the physical constraints of the model, but also the global rules that define the behaviour of all objects within it. This definition should only be made in relation to the specifications made during the Observer definitions, and not with respect to Agent design.

Once more, a number of design aspects must be considered at this stage, only within the context of those design decisions made during the Observer specification. Each design aspect again presents a number of questions that must be answered.

  • External Systems: Are there any external systems that interact with the process in question, but will not be modelled explicitly? What is the nature of these interactions? How will these external systems be encapsulated and represented within the model?
  • Space: Are spatial interactions important within the process being examined? If so, within what kind of space do these interactions occur – geographic, continuous, gridded, topological? Which data sets are required to aid in the definition of the space?
  • Time: Over what time period will the process be analysed? How long will one time step represent? Will a time step correspond to a real-world description of time?
  • Physical Rules: What physical rules shaping the actions of all within the World require explicit definition?
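
As a minimal sketch of how these World-level decisions might be pinned down in code before any agents exist – all names are invented, the grid is one of several possible spaces, and one tick is arbitrarily set to a real-world minute:

```python
from dataclasses import dataclass, field

@dataclass
class World:
    """Global container for the modelled environment (illustrative only).

    Space is a simple bounded grid here; it could equally be geographic,
    continuous, or topological, as the questions above suggest.
    """
    width: int
    height: int
    seconds_per_tick: int          # one time step's real-world duration
    tick: int = 0
    occupancy: dict = field(default_factory=dict)  # (x, y) -> occupant

    def in_bounds(self, x, y):
        # A physical rule that applies to everything within the World.
        return 0 <= x < self.width and 0 <= y < self.height

    def elapsed_seconds(self):
        return self.tick * self.seconds_per_tick

    def step(self):
        self.tick += 1

# One tick representing a real-world minute, on a 100 x 100 grid.
world = World(width=100, height=100, seconds_per_tick=60)
for _ in range(30):
    world.step()
print(world.elapsed_seconds())
```

Making the tick-to-real-time mapping explicit in one place means the temporal questions above have a single, checkable answer.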

The Interactions

The specification of the Interactions does not refer to the definition of the agents themselves, but rather to how Agent-to-Agent interaction occurs. These definitions cover the physical and collective rule sets that organise interactions. It is the manifestation of these interactions within the simulation that may form the representation of the process under examination, so any definitions must be made within the contexts described during the Observer and World definitions.

The elements of Interactions design are as follows, with the relevant design questions outlined. References to higher level definitions are made where necessary consideration is required.

  • Physical Interactions: Do the agents physically interact across the space defined within the World? Under what circumstances do these interactions occur? What is the impact of these physical interactions? Is there a product of the interaction? How do such interactions impact upon the agents themselves or upon other external systems? Are there any specific social constraints governing these interactions? What are the temporal considerations of these interactions in relation to the time definitions made within the World? How are these interactions recorded?
  • Communication: Is there communication between agents? How do these communications occur? Through which medium? Are there any additional communications between agents and external systems? Are there any particular social rules governing communication? What are the temporal considerations of these interactions in relation to the time definitions made within the World? How are these interactions logged?
  • Resource Exchange: Do agents exchange resources in any way? How do these interactions occur? What is lost and what is gained through each interaction? How do these interactions manifest themselves over space and time? How are these interactions recorded?

The Agent

The design of the Agent is about capturing the characteristics, actions, and decisions that influence agents’ interactions with other agents and the wider environment. It is ultimately these behaviours that shape how the overall process is modelled, and how it evolves over space and time. It is therefore vital that the modeller starts explicitly considering Agent design only once every level within the design hierarchy is completely specified – once all higher-level entities have been fully understood, broken down and incorporated within the model. For only within the context of the prior specifications, which completely outline the environment and conditions in which the Agent exists, can an Agent be fully described.

Once one has defined this environment, the definition of the Agents involved in the simulation should proceed naturally.  During this final process, the following design considerations should be examined in detail.

  • Characteristics: What are the characteristics of the agents? Can agents be assigned to different profiles? What are the core traits that are required for specification of an agent’s actions? How do these properties vary across the population of agents?
  • Decisions: What decisions are made by the agent (or type of agent) during the course of the simulation? What information sources are used during the formation of this decision? Through which type of mechanism are these decisions formed? How long do decisions take to make? Are decisions formed in consultation with other agents over a framework of communication?
  • Actions: What actions does an agent, or type of agent, conduct during the simulation? How are these actions influenced by the agent’s characteristics? Are these actions directly relevant in simulating the process one wishes to examine? How are these manifested across the simulation space? How often do these actions take place relative to the temporal evolution of the simulation? Under what wider constraints (e.g. physical, moral framework) are these actions shaped?
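
The three elements above map naturally onto an agent skeleton: fixed characteristics, a decision rule, and an action that is recorded against the simulation clock.  The sketch below is mine, not part of the framework itself – the profile names and the threshold decision rule are invented purely for illustration:

```python
import random

class Agent:
    """Illustrative agent skeleton: characteristics, decisions, actions.

    A real model would draw the traits and the decision mechanism from
    the higher-level Observer, World, and Interactions specifications.
    """
    def __init__(self, agent_id, profile, risk_tolerance, rng=None):
        # Characteristics: core traits, which may vary by profile
        # across the population of agents.
        self.agent_id = agent_id
        self.profile = profile            # e.g. "commuter" or "tourist"
        self.risk_tolerance = risk_tolerance
        self.rng = rng or random.Random()

    def decide(self, congestion):
        # Decision: formed from available information. Risk-tolerant
        # agents push on through congestion; others divert.
        if congestion > self.risk_tolerance:
            return "divert"
        return "continue"

    def act(self, decision, world_log, tick):
        # Action: carry out the decision and record it against the
        # temporal evolution of the simulation.
        world_log.append((tick, self.agent_id, decision))

log = []
agent = Agent("a1", profile="commuter", risk_tolerance=0.6)
choice = agent.decide(congestion=0.8)
agent.act(choice, log, tick=0)
print(choice, log)
```

Separating `decide` from `act` keeps the information sources and mechanism of a decision distinct from its manifestation across the simulation space, mirroring the split between the Decisions and Actions questions.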

‘Modelling Movement in the City: The Influence of Individuals’ was the title of a talk I gave at the AGILE conference in Avignon, France last week.  For the conference I initially prepared a poster that never ended up seeing the light of day – except for now, that is.

The poster presents some recent work I carried out through agent-based simulation, demonstrating how different behavioural models influence the formation of macroscopic patterns.  As you can see from the results, even basic assumptions have a significant impact upon the unfolding network picture.

Probably now going to write this up as a journal paper, but hopefully putting the poster up here won’t mess with any copyright stuff – please let me know if it might!

Amanda Erickson put up a nice, simple visualisation of what life might be like in a future of driverless, automated cars. Check it out.

Two things sprang to mind while watching this – first, how terrifying this might be for a passenger in one of these cars, and second, haven’t I seen this sort of thing somewhere else before?

Well, yes, I showed the following video in a lecture last month as a demonstration of self-organisation.  To me, the patterns look similar – at the higher level you see chaos, but when you observe the actions of individuals there is usually a rational stream of thought behind the actions they are taking – normally to get to their exit road.  Judge for yourself.

I think the stark similarity between these two videos raises interesting questions about what we consider progress in the urban realm.  Bear with me as I attempt to explain.

The driverless or automated car is often seen as the natural future of private transportation*, with one of its main benefits being the apparent offer of optimal organisation of traffic flows (e.g. no congestion).  And indeed, when you look at the first video, everything works and works well, perhaps even optimally.  But then you look at the second video, and you essentially have the same thing, created solely through the activity of individuals.

It is strange therefore that a fully optimised technical system is generally deemed necessary and superior.  When people are left to their own devices, to ‘sort it out between them’, people invariably do.  Traffic in Hanoi is not the only example of this type of self-organisation – the Internet itself is a creation of human ingenuity.  Following Monderman’s ideas on Shared Space, perhaps all of these traffic regulations, signage and restrictions actually reduce our need to think about what we are doing.  They reduce and remove our ability or will to self-organise, to the detriment of us all.

So why don’t ‘natural’ answers to technical problems receive a better press?  I suspect it is an issue of trust in the citizen – the threat that one person may mess up, and mess it up for the rest of us.  Instead of facing the risk and accepting it as part of the solution, we surround ourselves with unnecessary and invasive mechanisms that carry out the task for us.  They may cost a lot of money and not be any better than our current solution, but they feel like progress.  It feels like things are getting better.  So, yes, perhaps automated cars are indeed a thing of the future.

As ever, very interested to hear your thoughts on this.

* I’ve personally never been so sure – mainly because of the safety element, and the fact that many people actually enjoy the process of driving…


I’ve just completed a lecture at the UCL Energy Institute on agent-based modelling and thought, hey – maybe some of my blog readership would be interested in this!

Please find the PDF below – it should be quite straightforward, although without the whizz-bang of the demonstrations and videos.  You can find the simulations I describe in the Model Library within NetLogo.

Enjoy, and please let me know if you have any questions.


At the upcoming AAG conference in New York, I’ll be presenting a recent prototype that links agent-based simulation with current traffic flow models.

The basic premise is that any cognitive decision associated with movement around cities should be modelled at the level of the individual.  However, it is not always necessary that all movement be represented individually.  Doing so potentially wastes limited computational power, especially important where modelling many complex agents.

Instead, my new simulation utilises traffic flow modelling to constrain the movement of individual agents.  Individuals choose where they move individually, but physical movement itself is modelled collectively.  The higher the traffic flow on a single route, the slower each agent on that route will travel.  This approach is more efficient and allows a much larger scale of complex agent-based simulation.
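
The flow–speed relationship described above might be sketched as follows.  The specific function – a BPR-style volume-delay curve – and its parameter values are my illustrative choices here, not necessarily those used in the prototype:

```python
def travel_time(free_flow_time, flow, capacity, alpha=0.15, beta=4):
    """Link travel time rising with flow: a BPR-style volume-delay curve.

    Every agent on a link experiences the same collective travel time,
    so individual routing decisions feed a shared flow model rather
    than each agent's movement being simulated separately.
    """
    return free_flow_time * (1 + alpha * (flow / capacity) ** beta)

# A link that takes 60 seconds when empty, with capacity 100 vehicles/step.
print(travel_time(60, flow=0, capacity=100))     # free-flow conditions
print(travel_time(60, flow=100, capacity=100))   # at capacity
print(travel_time(60, flow=150, capacity=100))   # oversaturated
```

Because the delay curve is steeply nonlinear, lightly loaded links barely slow agents down, while oversaturated ones penalise everyone on them – which is the congestion feedback the simulation relies on.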

I’ll provide more detail at AAG next Sunday, but the basic result is as above.

The simulation demonstrates traffic flows across central London.  There are 30,000 agents of varying behavioural characteristics moving around this space, and their movement decisions impact on the state of the network.

KEY:  The redder colours represent high traffic saturation (queues and congestion); the blues and greens represent quiet or free-flowing traffic conditions.


Much of my work attempts to recreate the macro from the micro – that is, the explanation of large-scale effects through the examination of small-scale behaviours.  I look at how these develop over space and time.

So, more specifically, I look at how road congestion forms in cities and how we, as travellers, all contribute towards it.

As part of my early work on this stuff, I developed a simulation looking at how traveller decisions impact on the flow of traffic in adverse situations.  This consisted of the development of an Agent-based Model (ABM) using the Java-based Repast Simphony framework.  After a fair bit of faffing with Repast (which, I should add, is great, although it has a considerable learning curve in comparison to some ABM software), I have a model that demonstrates the impact of road closures across a population of driving agents.

The video below shows how the population of individually-cognating agents move from an area of origins (in green) to an area of destinations (in red) through London.  All of the agents move through geographic space, specifically an area around UCL in Euston.  So, this first video shows the normal situation, the next video will show how that changes once we mess things up a bit. (By the way, the video takes a few seconds to get moving, just allowing me a few seconds of in-lecture explanation).

Although the model is relatively simple in traffic-simulation terms (with no traffic lights, regulations, etc.), I think it does show where concentrations of traffic form – particularly through the Euston Road/Tottenham Court Road junction.  So, what would happen if we closed this junction?  This…

I think it’s interesting to see the redistribution of traffic around the network.  With this junction closed, you get a lot more movement along other roads, suggesting that traffic would be considerably slower in these areas.  Clearly, the exact wheres and whens in this scenario are some way off what reality might show.  Not only do we not have the impact of road regulations, but each individual holds perfect knowledge of the network, proceeds towards their target along the shortest path, and has prior knowledge of the closure ahead.  These are three important aspects I address in other pieces of work that I’ll put up later.  I also realise a bit of flow data would be quite useful here, but considering the pure conjecture of this scenario, I’m not sure it’d add much!
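
The rerouting behaviour at the heart of the scenario – perfect-knowledge agents following shortest paths, diverting when a junction is removed – can be illustrated on a toy network.  The graph below is invented (it is not the London network), but the mechanism is the standard one:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a dict-of-dicts graph {node: {neighbour: cost}}."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []

# A toy road network. Closing junction B forces traffic onto the
# longer route via C and D, redistributing flow across the network.
roads = {"A": {"B": 2, "C": 5}, "B": {"E": 2}, "C": {"D": 3}, "D": {"E": 3}}
print(shortest_path(roads, "A", "E"))          # direct route via B

closed = {n: {m: w for m, w in nbrs.items() if m != "B"}
          for n, nbrs in roads.items() if n != "B"}
print(shortest_path(closed, "A", "E"))         # rerouted via C and D
```

Multiply this by thousands of agents all diverting onto the same alternatives and you get exactly the concentration of traffic on secondary roads that the second video shows.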