44. Competing models? Deconstruct into building bricks?

Models are representations of theories. I write this as a modeller – someone who works on mathematical and computer models of cities and regions but who is also seriously interested in the underlying theories I am trying to represent. My field, relative say to physics, is underdeveloped. This means that we have a number of competing models and it is interesting to explore the basis of this and how to respond. There may be implications for other fields – even physics!

A starting conjecture is that there are two classes of competing models: (i) those that represent different underlying theories (or hypotheses); and (ii) those that stem from the modellers choosing different ways of making approximations in seeking to represent very complex systems. The two categories overlap, of course. I will conjecture at the outset that most of the differences lie in the second (perhaps with one notable exception). So let’s get the first out of the way. Economists want individuals to maximise utility and firms to maximise profits – simplifying somewhat, of course. They can probably find something that public services can maximise – health outcomes, exam results – indeed a whole range of performance indicators. There is now a recognition that, for all sorts of reasons, agents do not behave perfectly, and ways have been found to handle this. There is a whole host of (usually) micro-scale economic and social theory that is inadequately incorporated into models, in some cases because of the complexity issue – the niceties are approximated away; but in principle, that can be handled and should be. There is a broader principle lurking here: for most modelling purposes, the underlying theory can be seen as maximising or minimising something. So if we are uncomfortable with utility functions, or with economics more broadly, we can still try to represent behaviour in these terms – if only to have a baseline from which behaviour deviates.

So what is the exception – another kind of dividing line, which should perhaps have been a third category? At the pure end of a spectrum: ‘letting the data speak for themselves’. It is mathematics vs statistics; or econometrics vs mathematical economics. Statistical models look very different – at least at first sight – from mathematical models, and usually demand quite stringent conditions to be in place for their legitimate application. Perhaps, in the quantification of a field of study, statistical modelling comes first, followed by the mathematical? Of course there is a limit in which the two ‘pictures’ merge: many mathematical models, including the ones I work with, can be presented as maximum likelihood models. This thread is not pursued further here, and I will focus, in my own field, on mathematical modelling.
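To make that merging of ‘pictures’ concrete, here is a sketch in standard notation (an illustration, not developed in the text above), using the doubly constrained entropy-maximising flow model as the example:

```latex
% A sketch in standard notation. The doubly constrained entropy-maximising
% model chooses flows T_ij from origins i to destinations j to solve
\[
\max_{\{T_{ij}\}} \; -\sum_{ij} T_{ij}\ln T_{ij}
\quad\text{s.t.}\quad
\sum_{j} T_{ij} = O_i,\qquad
\sum_{i} T_{ij} = D_j,\qquad
\sum_{ij} T_{ij} c_{ij} = C,
\]
% with solution
\[
T_{ij} = A_i O_i B_j D_j e^{-\beta c_{ij}},
\]
% where A_i and B_j are balancing factors and beta is the Lagrange multiplier
% on the cost constraint.
```

The same \(T_{ij}\) arise as maximum-likelihood estimates of a multinomial (log-linear) model under the same constraints – the limit in which the statistical and mathematical ‘pictures’ merge.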

There is perhaps a second high-level issue. It is sometimes argued that there are two kinds of mathematician: those that think in terms of algebra and those who think in terms of geometry. (I am in the algebra category which I am sure biases my approach.) As with many of these dichotomies, they should be removed and both perspectives fully integrated. But this is easier said than done!

How do the ‘approximations’ come about? I once tried to estimate the number of variables I would like to have for a comprehensive model of a city of 1M people at a relatively coarse grain: the answer was around 10¹³! This demonstrates the need for approximation. The first steps can be categorised in terms of scale: first, spatial – referenced by zones of location rather than continuous space – and how large should the zones be? Second, temporal: continuous time or discrete? Third, sectoral: how many characteristics of individuals or organisations should be identified, and at how fine a grain? Experience suggests that the use of discrete zones – and indeed other discrete definitions – makes the mathematics much easier to handle. Economists often use continuous space in their models, for example, and this forces them into another kind of approximation: monocentricity, which is hopelessly unrealistic. Many different models are simply based on different decisions about, and representations of, scale.
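The scale of that estimate is easy to appreciate from the way category counts multiply. The sketch below uses entirely invented dimension sizes – not the categories behind the original estimate – simply to show how a handful of modest classifications compound to the order of 10¹³:

```python
# Hypothetical dimension sizes, invented for illustration only: they are NOT
# the categories behind the original estimate, but they show how a few
# modest-looking classifications multiply up.
dimensions = {
    "origin zones": 1_000,
    "destination zones": 1_000,
    "person types (age x income x occupation x household)": 10_000,
    "trip purposes x modes": 100,
    "time slices": 10,
}

total = 1
for size in dimensions.values():
    total *= size

print(f"interaction variables: about 10^{len(str(total)) - 1}")
# -> interaction variables: about 10^13
```

Every extra classification multiplies, rather than adds to, the count – which is why coarsening the grain of one or two dimensions is the modeller's first and most consequential approximation.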

The second set of differences turns on the focus of interest. One way of approximating is to consider a subsystem such as transport and the journey to work, or retail and the flow of revenues into a store or a shopping centre. The danger here is that critical interdependencies are lost, and this always has to be borne in mind. Consider the evaluation of new transport infrastructure, for example. If this is based purely on a transport model, there is a danger that the cost-benefit analysis will be concentrated on time savings rather than the wider benefits. There is also a potentially higher-level view of focus. Lowry very perceptively once pointed out that models often focus on activities – and the distribution of activities across zones; or on the zones, in which case the focus would be on the land use mix in a particular area. The trick, of course, is to capture both perspectives simultaneously – which is what Lowry achieved himself very elegantly, but which has been achieved only rarely since.

A major bifurcation in model design turns on the time dimension and the related assumptions about dynamics. Models are much easier to handle if it is possible to make an assumption that the system being modelled is either in equilibrium or will return to a state of equilibrium quickly after a disturbance. There are many situations where the equilibrium assumption is pretty reasonable – for representing a cross-section in time or for short-run forecasting, for example, representing the way in which a transport system returns to equilibrium after a new network link or mode is introduced. But the big challenge is in the ‘slow dynamics’: modelling how cities evolve.
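One way to see the fast/slow separation is the retail (Harris-Wilson) form, in which flows equilibrate quickly for given centre sizes while the sizes themselves adjust slowly to the revenues they attract. The sketch below uses invented numbers throughout; a real application would calibrate them:

```python
import numpy as np

# A minimal sketch of 'slow dynamics' using the Harris-Wilson retail form.
# All numbers are invented for the example.
# Fast (equilibrium) flows:
#   S_ij = e P_i W_j^alpha exp(-beta c_ij) / sum_k W_k^alpha exp(-beta c_ik)
# Slow dynamics of centre sizes: dW_j/dt = eps (D_j - k W_j) W_j,
# with revenue D_j = sum_i S_ij.

rng = np.random.default_rng(0)
n = 20
P = rng.uniform(5.0, 15.0, n)             # residential populations (invented)
e = 1.0                                   # spending per person
c = rng.uniform(1.0, 10.0, (n, n))        # generalised travel costs (invented)
W = np.ones(n)                            # initial centre sizes
alpha, beta, k, eps, dt = 1.2, 0.3, 1.0, 0.005, 0.5

for _ in range(5000):
    A = W**alpha * np.exp(-beta * c)      # attractiveness of j seen from i
    S = (e * P)[:, None] * A / A.sum(axis=1, keepdims=True)
    D = S.sum(axis=0)                     # revenue attracted by each centre
    W = np.maximum(W + eps * (D - k * W) * W * dt, 0.0)

# At equilibrium, surviving centres satisfy D_j ~ k W_j, so total size
# tracks total spending / k.
print(f"total centre size {W.sum():.1f} vs total spending {(e * P).sum():.1f}")
```

The equilibrium assumption does the work in the inner flow calculation; the interesting (and hard) behaviour – centres growing, shrinking and disappearing as the cost surface changes – lives entirely in the slow outer loop.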

It is beyond the scope of this piece to review a wide range of examples. If there is a general lesson here, it is that we should be tolerant of each other’s models, and we should be prepared to deconstruct them to facilitate comparison and perhaps to remove what appears to be competition but needn’t be. The deconstructed elements can then be seen as building bricks that can be assembled in a variety of ways. For example, ‘generalised cost’ in an entropy-maximising spatial interaction model can easily be interpreted as a utility function and is therefore not in competition with economic models. Cellular automata models and agent-based models are similarly based on different ‘pictures’ – different ways of making approximations. There are usually different strengths and weaknesses in the different alternatives. In many cases, with some effort, they can be integrated. From a mathematical point of view, deconstruction can offer new insights. We have, in effect, argued that model design involves making a series of decisions about scale, focus, theory, method and so on. What will emerge from this kind of thinking is that different kinds of representations – ‘pictures’ – have different sets of mathematical tools available for the model building. Some of these are easier to use than others and so, when this is made explicit, might guide the decision process.
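As one concrete building brick: the doubly constrained entropy-maximising spatial interaction model, with generalised cost in the deterrence term, reduces to a short balancing computation. A sketch with invented numbers (beta would normally be calibrated):

```python
import numpy as np

# Doubly constrained entropy-maximising spatial interaction model:
#   T_ij = A_i O_i B_j D_j exp(-beta c_ij)
# where c_ij is generalised cost. The balancing factors A_i, B_j are found
# by iteration so that flows reproduce the origin and destination totals.

def doubly_constrained(O, D, c, beta, n_iter=100):
    F = np.exp(-beta * c)              # deterrence from generalised cost
    B = np.ones(len(D))
    for _ in range(n_iter):
        A = 1.0 / (F @ (B * D))        # A_i = 1 / sum_j B_j D_j exp(-beta c_ij)
        B = 1.0 / (F.T @ (A * O))      # B_j = 1 / sum_i A_i O_i exp(-beta c_ij)
    return (A * O)[:, None] * (B * D)[None, :] * F

O = np.array([400.0, 300.0, 300.0])    # trips produced at each origin (invented)
D = np.array([500.0, 250.0, 250.0])    # trips attracted to each destination
c = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 2.0],
              [3.0, 2.0, 1.0]])        # generalised costs (invented)

T = doubly_constrained(O, D, c, beta=0.5)
print(T.round(1))
print(T.sum(axis=1), T.sum(axis=0))    # recover the O and D totals
```

Read exp(-beta c_ij) as a utility term and the same brick slots straight into an economic model; read the whole thing as a log-linear model and it slots into a statistical one. That interchangeability is the point of deconstruction.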

Alan Wilson

August 2016
