39: Abstract modes, generalised costs and constraints: exploring future scenarios

I have spent much of the last three years working on the Government Office for Science Foresight project on The future of cities. The focus was on a time horizon of fifty years into the future. It is clearly impossible to use urban models to forecast such long-term futures but it is possible in principle to explore systematically a variety of future scenarios. A key element of such scenarios is transport and we have to assume that what is on offer – in terms of modes of travel – will be very different to today – not least to meet sustainability criteria. The present dominance of car travel in many cities is likely to disappear. How, then, can we characterise possible future transport modes?

This takes me back to ideas that emerged in papers published 50 years ago (or in one case, almost that). In 1966 Dick Quandt and William Baumol, distinguished Princeton economists, published a paper in the Journal of Regional Science on ‘abstract transport modes’. Their argument was precisely that in the future, technological change would produce new modes: how could they be modelled? Their answer was that models should be calibrated not with modal parameters, but with parameters that related to the characteristics of modes. The calibrated results could then be used to model the take-up of new modes that had new characteristics. By coincidence, Kelvin Lancaster, Columbia University economist, published a paper, also in 1966, in The Journal of Political Economy on ‘A new approach to consumer theory’ in which utility functions were defined in terms of the characteristics of goods rather than the goods themselves. He elaborated this in 1971 in his book ‘Consumer demand: a new approach’. In 1967, my ‘entropy’ paper was published in the journal Transportation Research and a concept used in this was that of ‘generalised cost’. This assumed that the cost of travelling by a mode was not just a money cost, but the weighted sum of different elements of (dis)utility: different kinds of time, comfort and so on, as well as money costs. The weights could be estimated as part of model calibration. David Boyce and Huw Williams in their magisterial history of transport modelling, ‘Forecasting urban travel’, wrote, quoting my 1967 paper, “impedance … may be measured as actual distance, as travel time, as cost, or more effectively as some weighted combination of such factors sometimes referred to as generalised cost … In later publications, ‘impedance’ fell out of use in favour of ‘generalised cost’”. (They kindly attributed the introduction of ‘generalised cost’ to me.)
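In symbols – a schematic statement, with notation assumed here rather than taken from the 1967 paper – the generalised cost of travelling from zone i to zone j by mode m is something like

$$ c^{m}_{ij} = \sum_{k} \lambda_{k} \, x^{mk}_{ij} $$

where the x’s are the component (dis)utilities – in-vehicle time, waiting time, money cost, comfort and so on – and the λ’s are the weights estimated in calibration.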

This all starts to come together. The Quandt and Baumol ‘abstract mode’ idea has always been in my mind and I was attracted to the Kelvin Lancaster argument for the same reasons – though that doesn’t seem to have taken off in a big way in economics. (I still have his 1971 book, purchased from Austicks in Leeds for £4-25.) I never quite connected ‘generalised cost’ to ‘abstract modes’. However, I certainly do now. When we have to look ahead to long-term future scenarios, it is potentially valuable to envisage new transport modes in generalised cost terms. By comparing one new mode with another, we can make an attempt – approximate, because we are transporting current calibrated weights fifty years forward – to estimate the take-up of modes by comparing generalised costs. I have not yet seen any systematic attempt to explore scenarios in this way and I think there is some straightforward work to be done – do-able in an undergraduate or master’s thesis!
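To make the idea concrete, here is a minimal sketch in Python of what such a comparison might look like. The future modes, their characteristics, the weights and the logit share formula are all invented for illustration – the weights stand in for values carried forward from a present-day calibration.

```python
# Minimal sketch: take-up of hypothetical 'abstract modes' compared via generalised cost.
# The weights stand in for values estimated in a present-day calibration; the mode
# attributes (times, money cost, discomfort) are invented scenario assumptions.
import math

weights = {"in_vehicle_time": 0.08, "wait_time": 0.16, "money_cost": 1.0, "discomfort": 0.5}

future_modes = {
    "autonomous_shuttle":   {"in_vehicle_time": 25, "wait_time": 4, "money_cost": 2.0, "discomfort": 1.0},
    "demand_responsive_pod": {"in_vehicle_time": 18, "wait_time": 8, "money_cost": 3.5, "discomfort": 0.5},
    "e_bike_network":       {"in_vehicle_time": 30, "wait_time": 0, "money_cost": 0.5, "discomfort": 2.0},
}

def generalised_cost(attrs):
    """Weighted sum of a mode's characteristics - the 'generalised cost'."""
    return sum(weights[k] * v for k, v in attrs.items())

def mode_shares(modes, beta=0.5):
    """Logit shares driven by generalised cost; beta is a calibrated dispersion parameter."""
    utilities = {m: math.exp(-beta * generalised_cost(a)) for m, a in modes.items()}
    total = sum(utilities.values())
    return {m: u / total for m, u in utilities.items()}

for mode, share in mode_shares(future_modes).items():
    print(f"{mode}: generalised cost {generalised_cost(future_modes[mode]):.2f}, share {share:.1%}")
```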

We can also look at the broader questions of scenario development. Suppose, for example, we want to explore the consequences of high-density development around public transport hubs. These kinds of policies can be represented in our comprehensive models by constraints – and I argue that the idea of representing policies – or more broadly ‘knowledge’ – as constraints within models is another powerful tool. This also has its origins in a fifty-year-old paper – Jack Lowry’s ‘Model of metropolis’. In broad terms, this represents the fixing through plans of a model’s exogenous variables – but the idea of ‘constraints’ implies that there are circumstances where we might want to fix what we usually take as endogenous variables.
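As a small illustration of a policy entering a model as a constraint, the sketch below fixes the trips attracted to each zone – including a planned hub – and balances a doubly constrained spatial interaction model to match them. All the numbers, and the choice of a simple Furness-style iteration, are assumptions for the purpose of illustration.

```python
# Sketch: a policy expressed as a constraint. Destination totals (zone 0 is a planned hub)
# are fixed exogenously; a doubly constrained spatial interaction model redistributes flows.
# All numbers are illustrative.
import numpy as np

origins = np.array([400.0, 300.0, 300.0])        # trips produced in each residential zone
destinations = np.array([500.0, 350.0, 150.0])   # trips attracted; fixed by plan
cost = np.array([[2.0, 5.0, 8.0],
                 [4.0, 3.0, 6.0],
                 [7.0, 5.0, 2.0]])                # generalised travel costs between zones
beta = 0.4                                        # calibrated cost-sensitivity parameter

def doubly_constrained(O, D, c, beta, iters=100):
    """Balance A_i and B_j so row sums match O and column sums match D (Furness/IPF)."""
    A = np.ones_like(O)
    B = np.ones_like(D)
    f = np.exp(-beta * c)
    for _ in range(iters):
        A = 1.0 / (f @ (B * D))
        B = 1.0 / (f.T @ (A * O))
    return (A * O)[:, None] * (B * D)[None, :] * f

T = doubly_constrained(origins, destinations, cost, beta)
print(np.round(T, 1))   # flows satisfying both the origin totals and the planned destination totals
```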

So we have the machinery for testing and evaluating long-term scenarios – albeit building on fifty-year-old ideas. It needs a combination of imagination – thinking through what the future might look like – and analytical capabilities – ‘big modelling’. It’s all still to play for, but there are some interesting papers waiting to be written!

Alan Wilson

April 2016

38: Truth is what we agree about?

I have always been interested in philosophy. I was interested in the big problems – the ‘What is life about?’ kind of thing with, as a special subject, ‘What is truth?’. How can we know whether something – a sentence, a theory, a mathematical formula – is true? And I guess because I was a mathematician and a physicist early in my career, I was particularly interested in the subset of this which is the philosophy of mathematics and the philosophy of science. I read a lot of Bertrand Russell – which perhaps seems rather quaint now. This had one nearer contemporary consequence. I was at the first meeting of the Vice-Chancellors of universities that became the Russell Group. There was a big argument about the name. We were meeting in the Russell Hotel and after much time had passed, I said something like ‘Why not call it the Russell Group?’ – citing not just the hotel but also Bertrand Russell as a mark of intellectual respectability. Such is the way that brands are born.

The maths and the science took me into Popper and the broader reaches of logical positivism. Time passed and I found myself a young university professor, working on mathematical models of cities, then the height of fashion. But fashions change and by the late 70s, on the back of distinguished works like David Harvey’s ‘Social justice and the city’, I found myself under sustained attack from a broadly Marxist front. ‘Positivism’ became a term of abuse and Marxism, in philosophical terms – or at least my then understanding of it – merged into the wider realms of structuralism. I was happy to come to understand that there were hidden (often power) structures to be revealed in social research that the models I was working on missed, thereby undermining the results.

This was serious stuff. I could reject some of the attacks in a straightforward way. There was a time when it was argued that anything mathematical was positivist and therefore bad and/or wrong. This could be rejected on the grounds that mathematics was a tool and that indeed there were distinguished Marxist mathematical economists such as Sraffa. But I had to dig deeper in order to understand. I read Marx, and I read a lot of structuralists, some of whom, at the time, were taking over English departments. I even gave a seminar in the Leeds English Department on structuralism!

In my reading, I stumbled on Jürgen Habermas and this provided a revelation for me. It took me back to questions about truth and provided a new way of answering them. In what follows, I am sure I oversimplify. His work is very rich in ideas, but I took a simple idea from it: truth is what we agree about. I say this to students now who are usually pretty shocked. But let’s unpick it. We can agree that 2 + 2 = 4. We can agree about the laws of physics – up to a point anyway – there are discoveries to be made that will refine these laws as has happened in the past. That also connects to another idea that I found useful in my toolkit: C. S. Peirce and the pragmatists. I will settle for the colloquial use of ‘pragmatism’: we can agree in a pragmatic sense that physics is true – and handle the refinements later. I would argue from my own experience that some social science is ‘true’ in the same way: much demography is true up to quite small errors – think of what actuaries do. But when we get to politics, we disagree. We are in a different ballpark. We can still explore and seek to analyse, and having the Habermas distinction in place helps us to understand arguments.

How does the ‘agreement’ come about? The technical term used by Habermas is ‘intersubjective communication’ and there has to be enough of it. In other words, the ‘agreement’ comes on the back of much discussion, debate and experiment. This fits very well with how science works. A sign of disagreement is when we hear that someone has an ‘opinion’ about an issue. This should be the signal for further exploration, discussion and debate rather than simply a ‘tennis match’ kind of argument.

Where does this leave us as social scientists? We are unlikely to have laws in the same way that physicists have laws but we have truths, even if they are temporary and approximate. We should recognise that research is a constant exploration in a context of mutual tolerance – our version of intersubjective communication. We should be suspicious of the newspaper article which begins ‘research shows that …’ when the ‘research’ quoted is a single sample survey. We have to tread a line between offering knowledge and truth on the one hand and recognising the uncertainty of our offerings on the other. This is not easy in an environment where policy makers want to know what the evidence is, or what the ‘solution’ is, for pressing problems and would like to be more assertive than we might feel comfortable with. The nuances of language to be deployed in our reporting of research become critical.

Alan Wilson

April 2016

37: The ‘Leicester City’ phenomenon: aspirations in academia

Followers of English football will be aware that the top tier is the Premier League and that the clubs that finish in the top four at the end of the season play in the European Champions’ League in the following year. These top four places are normally filled by four of the top half-dozen or so clubs – let’s say Manchester United, Manchester City, Arsenal, Chelsea, Tottenham Hotspur and Liverpool. There are one or two others on the fringe. This group does not include Leicester City. At Christmas 2014, Leicester were bottom of the Premier League with relegation looking inevitable. They won seven of their last nine games in that season and survived. At the beginning of the current (2015-16) season, the bookmakers’ odds on them winning the Premier League were 5000-1 against. At the time of writing, they top the league by eight points with four matches to play. The small number of people who might have bet £10 or more on them last August are now sitting on a potential fortune.

How has this been achieved? They have a very strong defence and so concede little; they can score ‘on the break’, notably through Jamie Vardy, a centre forward who not long ago was playing for Fleetwood Town in the nether reaches of English football; they have a thoughtful, experienced and cultured manager in Claudio Ranieri; and they work as a team. It is certainly a phenomenon and the bulk of the football-following population would now like to see them win the League.

What are the academic equivalents? There are university league tables and it is not difficult to identify a top half-dozen. There are tables for departments and subjects. There is a ranking of journals. I don’t think there is an official league table of research groups, but there are certainly some informal ones. As in football, it is very difficult to break into the top group from a long way below. Money follows success – as in the REF (the Research Excellence Framework) – and facilitates the transfer of the top players to the top group. So what is the ‘Leicester City’ strategy for an aspiring university, an ambitious department or research group, or a journal editor? The strong defence must be about having the basics in place – good REF ratings and so on. The goal-scoring break-out attack is about ambition and risk-taking. The ‘manager’ can inspire and aspire. And the teamwork: we are almost certainly not as good as we should be in academia, so food for thought there.

Then maybe all of the above requires at the core – and I’m sure Leicester City have these qualities – hard work, confidence, and good plans while still being creative; and a preparedness to be different – not to follow the fashion. So when The Times Higher has its ever-expanding annual awards, maybe they should add a ‘Leicester City Award’ for the university that matches their achievement in our own leagues. Meanwhile, will Leicester win the League? Almost all football followers in the country are now on their side. We will see in a month’s time!

Alan Wilson, April 2016

36: Block chains and urban analytics

I argued in an earlier piece that game-changing advances in urban analytics may well depend on technological change. One such possibility is the introduction of block chain software. A block is a set of accounts at a node. This is part of a network of many nodes, many of which – in some cases all? – are connected. Transactions are recorded in the appropriate nodal accounts with varying degrees of openness. It is this openness that guarantees veracity and verifiability. This technology is considered to be a game changer in the financial world – with potential job losses, because a block chain system excludes the ‘middlemen’ who normally record the transactions. Illustrations of the concept on the internet almost all rely on the bitcoin technology as the key example.

The ideas of ‘accounts at nodes’ and ‘transactions between nodes’ resonate very strongly with the core elements of urban analytics and models – the location of activities and spatial interaction. In the bitcoin case, the nodes are presumably account holders, but it is no great stretch to imagine the nodes as being associated with spatial addresses. There must also be a connection to the ‘big data’ agenda and the ‘internet of things’. Much of the newly available real-time data is transactional and one can imagine it being transmitted to blocks and used to update databases on a continuous basis. This would have a very obvious role in applied retail modelling, for example.
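A toy sketch of the mapping might look like the following: each block records a batch of transactions between spatial addresses and carries the hash of the previous block, so that tampering with earlier records is detectable. This is only the bare chaining idea – no consensus mechanism, no network – and the zone names and flows are invented.

```python
# Toy sketch: blocks of spatially addressed transactions, hash-chained so that any
# tampering with an earlier block is detectable. Purely illustrative - no consensus,
# no distributed network, just the core 'chain of verifiable accounts' idea.
import hashlib
import json
import time

def block_hash(block):
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(transactions, previous_hash):
    return {"timestamp": time.time(), "transactions": transactions, "previous_hash": previous_hash}

# Transactions as flows between spatial addresses (e.g. retail spending from zone to centre).
chain = [new_block([], previous_hash="0" * 64)]   # genesis block

batch = [
    {"origin": "zone_12", "destination": "centre_A", "flow": 42.0},
    {"origin": "zone_07", "destination": "centre_B", "flow": 17.5},
]
chain.append(new_block(batch, previous_hash=block_hash(chain[-1])))

def verify(chain):
    """Check each block really does reference the hash of its predecessor."""
    return all(chain[i]["previous_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

print(verify(chain))   # True - and False if any earlier transaction is altered
```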

If this proved to be a mechanism for handling urban and regional data then, because a block chain is not owned by anyone, it could become a parallel internet for our research community.

This is a shorter-than-usual blog piece because, to develop the idea, I need to do a lot more work! The core concepts are complicated and not easy to follow. I have watched two YouTube videos – an excellent one from the Khan Academy. I recommend these, but what I would really like is for someone to take on the challenge of (a) really understanding block chains and (b) thinking through possible urban analytics applications!

Alan Wilson

April 2016

35: Big data and high-speed analytics

My first experience of big data and high-speed analytics was at CERN and the Rutherford Lab over 50 years ago. I was in the Rutherford Lab, part of a large distributed team working on a CERN bubble chamber experiment. There was a proton-proton collision every second or so which, for the charged particles, produced curved tracks in the chamber that were photographed from three different angles. The data from these tracks was recorded in something called the Hough-Powell device (after its inventors) in real time. This data was then turned into geometry; this geometry was then passed to my program. I was at the end of the chain and my job was to take the geometry and work out, for each collision, which of a number of possible events it actually was – the so-called kinematics analysis. This was done by chi-squared testing, which seemed remarkably effective. The statistics of many events could then be computed, hopefully leading to the discovery of new (and anticipated) particles – in our case the Ω. In principle, the whole process for each event, through to identification, could be done in real time – though in practice, my part was done off-line. It was in the early days of big computers – in our case, the IBM 7094. I suspect now it will all be done in real time. Interestingly, in a diary I kept at the time, I recorded my immediate boss, John Burren, as remarking that ‘we could do this for the economy you know’!
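For what it is worth, the kind of chi-squared selection involved can be sketched very simply. This is schematic, not the actual analysis code, and the hypotheses, measurements and uncertainties are invented.

```python
# Schematic sketch of event identification by chi-squared: compare measured track
# quantities against each hypothesis's predictions and pick the best fit.
# The hypotheses, measured values and uncertainties are invented for illustration.

measured = {"momentum": 1.92, "opening_angle": 0.61}   # reconstructed from track geometry
sigma = {"momentum": 0.05, "opening_angle": 0.02}      # measurement uncertainties

hypotheses = {
    "event_A": {"momentum": 1.90, "opening_angle": 0.60},
    "event_B": {"momentum": 2.10, "opening_angle": 0.55},
}

def chi_squared(pred):
    """Sum of squared, uncertainty-scaled residuals between measurement and prediction."""
    return sum(((measured[k] - pred[k]) / sigma[k]) ** 2 for k in measured)

for name, pred in hypotheses.items():
    print(name, round(chi_squared(pred), 2))
print("identified as:", min(hypotheses, key=lambda h: chi_squared(hypotheses[h])))
```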

So if we could do it then for quite a complicated problem, why don’t we do it now? Even well-known and well-developed models – transport and retail for example – typically take weeks or even months to calibrate, usually from a data set that refers to a single point in time. We are progressing to a position at which, for these models, we could have the data base continually updated from data flowing from sensors. (There is an intermediate processing point of course: to convert the sensor data to what is needed for model calibration.) This should be a feasible research challenge. What would have to be done? I guess the first step would be to establish data protocols so that by the time the real data reached the model – the analytics platform – it was in some standard form. The concept of a platform is critical here. This would enable the user to select the analytical toolkit needed for a particular application. This could incorporate a whole spectrum from maps and other visualisation to the most sophisticated models – static and dynamic.
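A minimal sketch of the pipeline idea, assuming a simple record format and an ad hoc update rule (neither drawn from any existing platform): each incoming sensor record updates a running observed statistic, and a single model parameter is nudged so that the model keeps reproducing it.

```python
# Sketch of continuous calibration: each incoming sensor record updates the observed
# mean trip cost, and the model's cost-sensitivity parameter beta is nudged so that
# the model reproduces it. Record format, update rule and target statistic are all
# illustrative assumptions.
import numpy as np

cost = np.array([[2.0, 5.0], [4.0, 3.0]])   # generalised costs between two zones
attractiveness = np.array([1.0, 0.8])

def model_mean_cost(beta):
    """Mean trip cost implied by a simple production-constrained model."""
    w = attractiveness * np.exp(-beta * cost)       # destination weights for each origin
    p = w / w.sum(axis=1, keepdims=True)
    return float((p * cost).mean())

beta, observed_mean, n = 0.5, 0.0, 0
stream = [3.1, 2.4, 4.8, 2.9, 3.3]                  # trip costs arriving from sensors

for trip_cost in stream:
    n += 1
    observed_mean += (trip_cost - observed_mean) / n        # running mean of observed costs
    beta += 0.05 * (model_mean_cost(beta) - observed_mean)  # nudge beta towards agreement
    print(f"records={n}  observed mean={observed_mean:.2f}  beta={beta:.3f}")
```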

There are two possible guiding principles for the development of this kind of system: what is needed for the advance of the science, and what is needed for urban planning and policy development. In either case, we would start from an analysis of ‘need’ and thus evaluate what is available from the big data shopping list for a particular set of purposes – probably quite a small subset. There is a lesson in this alone: to think what we need data for rather than taking the items on the shopping list and asking what we can use them for.

Where do we start? The data requirements of various analytics procedures are pretty well known. There will be additions – for example incorporating new kinds of interaction from the Internet-of-Things world. This will be further developed in the upcoming blog piece on block chains.

So why don’t we do all this now? Essentially because the starting point – the first demo – is a big team job, and no funding council has been able to tackle something on this scale. Therein lies a major challenge. As I once titled a newspaper article: ‘A CERN for the social sciences’?

Alan Wilson

March 2016