Specialist maths schools – some facts

The news reports that the Government will try to promote more ‘specialist maths schools’ similar to the King’s College and Exeter schools.

The idea for these schools came when I read about Perelman, the Russian mathematician who in 2003 suddenly posted on arXiv a solution to the Poincaré Conjecture, one of the most important open problems in mathematics. Perelman went to one of the famous Russian specialist maths schools that were set up by one of the most important mathematicians of the 20th Century, Kolmogorov.

I thought – a) given the fall in standards in maths and physics because of the corruption of the curriculum and exams started by the Tories and continued by Blair, b) the way in which proper teaching of advanced maths and physics is increasingly limited to a tiny number of schools many of which are private, and c) the huge gains for our civilisation from the proper education of the small fraction of children who are unusually gifted in maths and physics, why not try to set up something similar?

Gove’s team therefore pushed the idea through the DfE. Dean Acheson, US Secretary of State, said, ‘I have long been the advocate of the heretical view that, whatever political scientists might say, policy in this country is made, as often as not, by the necessity of finding something to say for an important figure committed to speak without a prearranged subject.’ This is quite true (it also explains a lot about how Monnet created the ECSC and EEC). Many things that the Gove team did relied on this. We prepared the maths school idea and waited our chance. Sure enough, the word came through from Downing Street – ‘the Chancellor needs an announcement for the Budget, something on science’. We gave them this, he announced it, and bureaucratic resistance was largely broken.

If interested in some details, then look at pages 75ff of my 2013 essay for useful links. Other countries have successfully pursued similar ideas, including France for a couple of centuries and Singapore recently.

One of the interesting aspects of trying to get them going was the way in which a) the official ‘education world’ loathed not just the idea but also the idea about the idea – they hated thinking about ‘very high ability’ and specialist teaching; b) when I visited maths departments they all knew about these schools because university departments in the West employ a large number of people who were educated in these schools but they all said ‘we can’t help you with this even though it’s a good idea because we’d be killed politically for supporting “elitism” [fingers doing quote marks in the air], good luck I hope you succeed but we’ll probably attack you on the record.’ They mostly did.

The only reason why the King’s project happened is because Alison Wolf made it a personal crusade to defeat all the entropic forces that elsewhere killed the idea (with the exception of Exeter). Without her it would have had no chance. I found few equivalents elsewhere and where I did they were smashed by their VCs.

A few points…

1) Kolmogorov-type schools are a particular thing. They undoubtedly work. But they are aimed at a small fraction of the population. Given what the products of these schools go on to contribute to human civilisation they are extraordinarily cheap. They are also often a refuge for children who have a terrible time in normal schools. If they were as different to normal kids in a negative sense as they are in a positive sense then there would be no argument about whether they have ‘special needs’.

2) Don’t believe the rubbish in things like Gladwell’s book about maths and IQ. There is now very good data on this, particularly in the form of the unprecedented multi-decade SMPY study. Even a short, crude test at 11-13 gives very good predictions of who is likely to be very good at maths/physics. Further, there is a strong correlation between performance at the top 1% / 1:1,000 / 1:10,000 level and many outcomes in later life such as getting a doctorate, filing a patent, publishing a paper in Science or Nature, high income, health etc. The education world has been ~100% committed to rejecting the science of this subject, though this resistance is cracking.

This chart shows the SMPY results (maths ability at 13) for the top 1% of maths ability broken down into quartiles 1-4: the top quartile of the top 1% clearly outperforms on tenure, publication, and patent rates.

[Chart: SMPY top 1% broken into quartiles, compared on tenure, publication, and patent rates]

3) The arguments for Kolmogorov schools do not translate into arguments for selection in general – i.e. they are specific to the subject. It is the structure of maths and the nature of the brain that allows very young people to make rapid progress. These features are not there for English, history and so on. I am not wading into the grammar school argument on either side – I am simply pointing out that the arguments for such maths schools are clear and should not be confused with the wider arguments over selection, which involve complicated trade-offs. People on both sides of the grammar debate should, if rational, be able to support this policy.

4) These schools are not ‘maths hot houses’. Kolmogorov took the children to see Shakespeare plays, music and so on. It is important to note that teaching English and other subjects is normal – other than that you are obviously dealing with unusually bright children. If these children are not in specialist schools, then the solution is a) specialist maths teaching (including help from university-level mathematicians) and b) keeping other aspects of their education normal. Arguably the greatest mathematician in the world, Terry Tao, had wise parents and enjoyed this combination. So it is of course possible to educate such children without specialist schools, but the risks are higher that either parents or teachers cock it up.

5) Extended wisely across Britain they could have big benefits not just for those children and elite universities but they could also play an important role in raising standards generally in their area by being a focus for high quality empirical training. One of the worst aspects of the education world is the combination of low quality training and resistance to experiments. This has improved since the Gove reforms but the world of education research continues to be dominated by what Feynman called ‘cargo cult science’.

6) We also worked with a physicist at Cambridge, Professor Mark Warner, to set up a project to improve the quality of 6th form physics. This project has been a great success thanks to his extraordinary efforts and the enthusiasm of young Cambridge physicists. Thousands of questions have been answered on their online platform by pupils from many schools. This project gives kids the chance to learn proper problem solving – the core skill that the corruption of the exam system has devalued and increasingly pushed into a ghetto of private education. Needless to say, the education world was also hostile to this project. Anything that suggests that we can do much, much better is generally hated by all elements of the bureaucracy, including even elements such as the Institute of Physics that supposedly exist to support exactly this. A handful of officials helped us push through projects like this and of course most of them have since left Whitehall in disgust; thus does the system protect itself against improvement while promoting the worst people.

7) This idea connects to a broader idea. Kids anywhere in the state system should be able to apply some form of voucher to buy high quality advanced teaching from outside their school for a wide range of serious subjects from music to physics.

8) One of the few projects that the Gove team tried and failed to get going was to break the grip of GCSEs on state schools (Cameron sided with Clegg and although we cheated a huge amount through the system we hit a wall on this project). It is extremely wasteful for the system and boring for many children for them to be focused on existing exams that do not develop serious skills. Maths already has the STEP paper. There should be equivalents in other subjects at age 16. There is nothing that the bureaucracy will fight harder than this and it will probably only happen if excellent private schools decide to do it themselves and political pressure then forces the Government to allow state schools to do them.

Any journalists who want to speak to people about this should try to speak to Dan Abramson (the head of the King’s school), Alison Wolf, or Alexander Borovik (a mathematician at Manchester University who attended one of these schools in Russia).

It is hopeful that No10 is backing this idea but of course they will face determined resistance. It will only happen if at least one special adviser in the DfE makes it a priority and has the support of No10 so officials know they might as well fight about other things…


This is probably the most interesting comment ever left on this blog and it is much more interesting than the blog itself, so I have copied it below. It is by Borovik, mentioned above, who attended one of these schools in Russia and knows many who attended similar…

‘There is one more aspect of (high level) selective specialist mathematics education that is unknown outside the professional community of mathematicians.

I am not an expert on “gifted and talented” education. On the other hand, I spent my life surrounded by people who got exclusive academically selective education in mathematics and physics, whether it was in the Lavrentiev School in Siberia, or Lycée Louis-le-Grand in Paris, or Fazekas in Budapest, or Galatasaray Lisesi (aka Lycée de Galatasaray) in Istanbul — the list can be continued.

The schools have nothing in common, with the exception of being unique, each one in its own way.

I had research collaborators and co-authors from each of the schools that I listed above. Why was it so easy for us to find a common language?

Well, the explanation can be found in the words of Stanislas Dehaene, the leading researcher of neurophysiology of mathematical thinking:

“We have to do mathematics using the brain which evolved 30 000 years ago for survival in the African savanna.”

In humans, the speed of totally controlled mental operations is at most 16 bits per second. Standard school maths education trains children to work at that speed.

The visual processing module in the brain crunches 10,000,000,000 bits per second.

I offer a simple thought experiment to the readers who have some knowledge of school level geometry.

Imagine that you are given a triangle; mentally rotate it about the longest side. What is the resulting solid of revolution? Describe it. And then try to reflect: where did the answer come from?

The best kept secret of mathematics: it is done by subconsciousness.

Mathematics is a language for communication with subconsciousness.

There are four conversants in a conversation between two mathematicians: two people and their two “inner”, “intuitive” brains.

When mathematicians talk about mathematics face-to-face, they frequently use language which
* is very fluid and informal;
* is improvised on the spot;
* includes pauses (for a lay observer—very strange and awkwardly timed) for absorption of thought;
* has almost nothing in common with standardised mathematics “in print”.

A mathematician is trying to convey a message from his “intuitive brain” directly to his colleague’s “intuitive brain”.

Alumni of high level specialist mathematics schools are “birds of a feather” because they have been initiated into this mode of communication at the most susceptible age, as teenagers, at the peak of intensity of their socialisation / shaping group identity stream of self-actualisation.

In that aspect, mathematics is not much different from the arts. Part of the skills that children get in music schools, acting schools, dancing schools, and art schools is the ability to talk about music, acting, dancing, art with the intuitive, subconscious parts of their minds — and with their peers, in a secret language which is not recognised (and perhaps not even registered) by the uninitiated.

However, specialist mathematics schools form a continuous spectrum, from ordinary schools with a standard syllabus but good maths teachers to the likes of Louis-le-Grand and Fazekas. My comments apply mostly to the top end of the spectrum. I have a feeling that the Green Paper is less ambitious and does not call for setting up mathematics boarding schools using Chetham’s School of Music as a model. However, middle-tier maths schools could also be very useful — if they are set up with realistic expectations, properly supported, and have strong connections with universities.’

A Borovik


Please help: how to make a big improvement in the alignment of political parties’ incentives with the public interest?

I am interested in these questions:

1) What incentives drive good/bad behaviour for UK political parties?

2) How could they be changed (legal and non-legal) to align interests of existing parties better with the public interest?

3) If one were setting up a new party from scratch what principles could be established in order to align the party’s interests with the public interest much more effectively than is now the case anywhere in the world, and how could one attract candidates very different to those who now dominate Parliament (cleverer, quantitative problem-solving skills, experience in managing complex organisations etc)?

4) Is there a good case for banning political parties (as sometimes was attempted in ancient Greece), how to do it, what would replace them, why would this be better etc (I assume this is a bad and/or impractical idea but it’s worth asking why)?

5) In what ways do existing or plausible technologies affect these old questions?

What are the best things written on these problems?

What are the best examples around the world of how people have made big improvements?

Assume that financial resources are effectively unlimited for the entity trying to make these changes, let me worry about things like ‘would the public buy it’ etc – focus on policy not communication/PR advice.

The more specific the better: an ideal bit of help would be detailed draft legislation. I don’t expect anybody to produce this, but just to show what I mean…

The overall problem is: how to make government performance dramatically, quantifiably, and sustainably better?

Please leave ideas in comments or email dmc2.cummings@gmail.com

Thanks

D

Unrecognised simplicities of effective action #1: expertise and a quadrillion dollar business

‘The combination of physics and politics could render the surface of the earth uninhabitable.’ John von Neumann.

Introduction

This series of blogs considers:

  • the difference between fields with genuine expertise, such as fighting and physics, and fields dominated by bogus expertise, such as politics and economic forecasting;
  • the big big problem we face – the world is ‘undersized and underorganised’ because of a collision between four forces: 1) our technological civilisation is inherently fragile and vulnerable to shocks, 2) the knowledge it generates is inherently dangerous, 3) our evolved instincts predispose us to aggression and misunderstanding, and 4) there is a profound mismatch between the scale and speed of destruction our knowledge can cause and the quality of individual and institutional decision-making in ‘mission critical’ institutions – our institutions are similar to those that failed so spectacularly in summer 1914 yet they face crises moving at least ~10³ times faster and involving ~10⁶ times more destructive power able to kill ~10¹⁰ people;
  • what classic texts and case studies suggest about the unrecognised simplicities of effective action to improve the selection, education, training, and management of vital decision-makers to improve dramatically, reliably, and quantifiably the quality of individual and institutional decisions (particularly a) the ability to make accurate predictions and b) the quality of feedback);
  • how we can change incentives to aim a much bigger fraction of the most able people at the most important problems;
  • what tools and technologies can help decision-makers cope with complexity.

[I’ve tweaked a couple of things in response to this blog by physicist Steve Hsu.]

*

Summary of the big big problem

The investor Peter Thiel (founder of PayPal and Palantir, early investor in Facebook) asks people in job interviews: what billion (10⁹) dollar business is nobody building? The most successful investor in world history, Warren Buffett, illustrated what a quadrillion (10¹⁵) dollar business might look like in his 50th anniversary letter to Berkshire Hathaway investors.

‘There is, however, one clear, present and enduring danger to Berkshire against which Charlie and I are powerless. That threat to Berkshire is also the major threat our citizenry faces: a “successful” … cyber, biological, nuclear or chemical attack on the United States… The probability of such mass destruction in any given year is likely very small… Nevertheless, what’s a small probability in a short period approaches certainty in the longer run. (If there is only one chance in thirty of an event occurring in a given year, the likelihood of it occurring at least once in a century is 96.6%.) The added bad news is that there will forever be people and organizations and perhaps even nations that would like to inflict maximum damage on our country. Their means of doing so have increased exponentially during my lifetime. “Innovation” has its dark side.

‘There is no way for American corporations or their investors to shed this risk. If an event occurs in the U.S. that leads to mass devastation, the value of all equity investments will almost certainly be decimated.

‘No one knows what “the day after” will look like. I think, however, that Einstein’s 1949 appraisal remains apt: “I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.”’
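
As an aside, the arithmetic in Buffett’s parenthetical checks out if the yearly risk is treated as independent (my assumption, and presumably his):

$$P(\text{at least one event in 100 years}) = 1 - \left(\tfrac{29}{30}\right)^{100} \approx 1 - 0.034 = 0.966 \approx 96.6\%$$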

Politics is profoundly nonlinear. (I have written a series of blogs about complexity and prediction HERE which are useful background for those interested.) Changing the course of European history via the referendum only involved about 10 crucial people controlling ~£10⁷ while its effects over ten years could be on the scale of ~10⁸ – 10⁹ people and ~£10¹²: like many episodes in history the resources put into it are extremely nonlinear in relation to the potential branching histories it creates. Errors dealing with Germany in 1914 and 1939 were costly on the scale of ~100,000,000 (10⁸) lives. If we carry on with normal human history – that is, international relations defined as out-groups competing violently – and combine this with modern technology then it is extremely likely that we will have a disaster on the scale of billions (10⁹) or even all humans (~10¹⁰). The ultimate disaster would kill about 100 times more people than our failure with Germany. Our destructive power is already much more than 100 times greater than it was then: nuclear weapons increased destructiveness by roughly a factor of a million.

Even if we dodge this particular bullet there are many others lurking. New genetic engineering techniques such as CRISPR allow radical possibilities for re-engineering organisms including humans in ways thought of as science fiction only a decade ago. We will soon be able to remake human nature itself. CRISPR-enabled ‘gene drives’ enable us to make changes to the germ-line of organisms permanent such that changes spread through the entire wild population, including making species extinct on demand. Unlike nuclear weapons such technologies are not complex, expensive, and able to be kept secret for a long time. The world’s leading experts predict that people will be making them cheaply at home soon – perhaps they already are. These developments have been driven by exponential progress much faster than Moore’s Law, reducing the cost of DNA sequencing per genome from ~$10⁸ to ~$10³ in roughly 15 years.

[Chart: cost of DNA sequencing per genome over time, falling faster than Moore’s Law]
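
A minimal back-of-the-envelope sketch of what ‘much faster than Moore’s Law’ means here, using only the round numbers quoted above (a factor of ~10⁵ over ~15 years) and taking Moore’s Law’s halving time as the usual rule-of-thumb of 18-24 months:

```python
import math

# Figures taken from the paragraph above: cost per genome falling from
# ~$10^8 to ~$10^3 over roughly 15 years. Moore's Law halving time is
# assumed to be ~18-24 months (a standard rule of thumb, not a measurement).
start_cost, end_cost, years = 1e8, 1e3, 15

halvings = math.log2(start_cost / end_cost)        # ~16.6 halvings of cost
halving_time_months = years * 12 / halvings        # ~11 months per halving

print(f"{halvings:.1f} halvings in {years} years "
      f"-> one halving every ~{halving_time_months:.0f} months "
      f"(vs ~18-24 months for Moore's Law)")
```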

It is already practically possible to deploy a cheap, autonomous, and anonymous drone with facial-recognition software and a one gram shaped-charge to identify a relevant face and blow it up. Military logic is driving autonomy. For example, 1) the explosion in the volume of drone surveillance video (from 71 hours in 2004 to 300,000 hours in 2011 to millions of hours now) requires automated analysis, and 2) jamming and spoofing of drones strongly incentivise a push for autonomy. It is unlikely that promises to ‘keep humans in the loop’ will be kept. It is likely that state and non-state actors will deploy low-cost drone swarms using machine learning to automate the ‘find-fix-finish’ cycle now controlled by humans. (See HERE for a video just released for one such program and imagine the capability when they carry their own communication and logistics network with them.)

In the medium-term, many billions are being spent on finding the secrets of general intelligence. We know this secret is encoded somewhere in the roughly 125 million ‘bits’ of information that is the rough difference between the genome that produces the human brain and the genome that produces the chimp brain. This search space is remarkably small – the equivalent of just 25 million English words or 30 copies of the King James Bible. There is no fundamental barrier to decoding this information and it is possible that the ultimate secret could be described relatively simply (cf. this great essay by physicist Michael Nielsen). One of the world’s leading experts has told me they think a large proportion of this problem could be solved in about a decade with a few tens of billions and something like an Apollo programme level of determination.
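
For what it is worth, the word-count equivalences above are easy to reproduce if you assume roughly 5 bits of information per English word and roughly 780,000 words in the King James Bible (both my own rough figures, not the author’s):

```python
genome_gap_bits = 125e6   # figure quoted above
bits_per_word = 5         # rough Shannon-style estimate (assumption)
kjv_words = 780_000       # approximate length of the King James Bible (assumption)

words = genome_gap_bits / bits_per_word   # ~25 million words
copies = words / kjv_words                # ~32 copies, i.e. roughly "30 copies"
print(f"~{words / 1e6:.0f} million words, ~{copies:.0f} KJV copies")
```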

Not only is our destructive and disruptive power still getting bigger quickly – it is also getting cheaper and faster every year. The change in speed adds another dimension to the problem. In the period between the Archduke’s murder and the outbreak of World War I a month later it is striking how general failures of individuals and institutions were compounded by the way in which events moved much faster than the ‘mission critical’ institutions could cope with such that soon everyone was behind the pace, telegrams were read in the wrong order and so on. The crisis leading to World War I was about 30 days from the assassination to the start of general war – about 700 hours. The timescale for deciding what to do between receiving a warning of nuclear missile launch and deciding to launch yourself is less than half an hour and the President’s decision time is less than this, maybe just minutes. This is a speedup factor of at least 10³.

Economic crises already occur far faster than human brains can cope with. The financial system has made a transition from people shouting at each other to a system dominated by high frequency ‘algorithmic trading’ (HFT), i.e. machine intelligence applied to robot trading with vast volumes traded on a global spatial scale and a microsecond (10⁻⁶ second) temporal scale far beyond the monitoring, understanding, or control of regulators and politicians. There is even competition for computer trading bases in specific locations based on calculations of Special Relativity as the speed of light becomes a factor in minimising trade delays (cf. Relativistic statistical arbitrage, Wissner-Gross). ‘The Flash Crash’ of 6 May 2010 saw the Dow lose hundreds of points in minutes. Mini ‘flash crashes’ now blow up and die out faster than humans can notice. Given our institutions cannot cope with economic decisions made at ‘human speed’, a fortiori they cannot cope with decisions made at ‘robot speed’. There is scope for worse disasters than 2008 which would further damage the moral credibility of decentralised markets and provide huge chances for extremist political entrepreneurs to exploit. (* See endnote.)

What about the individuals and institutions that are supposed to cope with all this?

Our brains have not evolved much in thousands of years and are subject to all sorts of constraints including evolved heuristics that lead to misunderstanding, delusion, and violence particularly under pressure. There is a terrible mismatch between the sort of people that routinely dominate mission critical political institutions and the sort of people we need: high-ish IQ (we need more people >145 (+3SD) while almost everybody important is between 115-130 (+1 or 2SD)), a robust toolkit for not fooling yourself including quantitative problem-solving (almost totally absent at the apex of relevant institutions), determination, management skills, relevant experience, and ethics. While our ancestor chiefs at least had some intuitive feel for important variables like agriculture and cavalry our contemporary chiefs (and those in the media responsible for scrutiny of decisions) generally do not understand their equivalents, and are often less experienced in managing complex organisations than their predecessors.

The national institutions we have to deal with such crises are pretty similar to those that failed so spectacularly in summer 1914 yet they face crises moving at least ~10³ times faster and involving ~10⁶ times more destructive power able to kill ~10¹⁰ people. The international institutions developed post-1945 (UN, EU etc) contribute little to solving the biggest problems and in many ways make them worse. These institutions fail constantly and do not – cannot – learn much.

If we keep having crises like we have experienced over the past century then this combination of problems pushes the probability of catastrophe towards ‘overwhelmingly likely’.

*

What Is To be Done? There’s plenty of room at the top

‘In a knowledge-rich world, progress does not lie in the direction of reading information faster, writing it faster, and storing more of it. Progress lies in the direction of extracting and exploiting the patterns of the world… And that progress will depend on … our ability to devise better and more powerful thinking programs for man and machine.’ Herbert Simon, Designing Organizations for an Information-rich World, 1969.

‘Fascinating that the same problems recur time after time, in almost every program, and that the management of the program, whether it happened to be government or industry, continues to avoid reality.’ George Mueller, pioneer of ‘systems engineering’ and ‘systems management’ and the man most responsible for the success of the 1969 moon landing.

Somehow the world has to make a series of extremely traumatic and dangerous transitions over the next 20 years. The main transition needed is:

Embed reliably the unrecognised simplicities of high performance teams (HPTs), including personnel selection and training, in ‘mission critical’ institutions while simultaneously developing a focused project that radically improves the prospects for international cooperation and new forms of political organisation beyond competing nation states.

Big progress on this problem would automatically and for free bring big progress on other big problems. It could improve (even save) billions of lives and save a quadrillion dollars (~$1015). If we avoid disasters then the error-correcting institutions of markets and science will, patchily, spread peace, prosperity, and learning. We will make big improvements with public services and other aspects of ‘normal’ government. We will have a healthier political culture in which representative institutions, markets serving the public (not looters), and international cooperation are stronger.

Can a big jump in performance – ‘better and more powerful thinking programs for man and machine’ – somehow be systematised?

Feynman once gave a talk titled ‘There’s plenty of room at the bottom’ about the huge performance improvements possible if we could learn to do engineering at the atomic scale – what is now called nanotechnology. There is also ‘plenty of room at the top’ of political structures for huge improvements in performance. As I explained recently, the victory of the Leave campaign owed more to the fundamental dysfunction of the British Establishment than it did to any brilliance from Vote Leave. Despite having the support of practically every force with power and money in the world (including the main broadcasters) and controlling the timing and legal regulation of the referendum, they blew it. This was good if you support Leave but just how easily the whole system could be taken down should be frightening for everybody.

Creating high performance teams is obviously hard but in what ways is it really hard? It is not hard in the same sense that some things are hard like discovering profound new mathematical knowledge. HPTs do not require profound new knowledge. We have been able to read the basic lessons in classics for over two thousand years. We can see relevant examples all around us of individuals and teams showing huge gains in effectiveness.

The real obstacle is not financial. The financial resources needed are remarkably low and the return on small investments could be incalculably vast. We could significantly improve the decisions of the most powerful 100 people in the UK or the world for less than a million dollars (~£10⁶) and a decade-long project on a scale of just ~£10⁷ could have dramatic effects.

The real obstacle is not a huge task of public persuasion – quite the opposite. A government that tried in a disciplined way to do this would attract huge public support. (I’ve polled some ideas and am confident about this.) Political parties are locked in a game that in trying to win in conventional ways leads to the public despising them. Ironically if a party (established or new) forgets this game and makes the public the target of extreme intelligent focus then it would not only make the world better but would trounce their opponents.

The real obstacle is not a need for breakthrough technologies though technology could help. As Colonel Boyd used to shout, ‘People, ideas, machines – in that order!’

The real obstacle is that although we can all learn and study HPTs it is extremely hard to put this learning to practical use and sustain it against all the forces of entropy that constantly operate to degrade high performance once the original people have gone. HPTs are episodic. They seem to come out of nowhere, shock people, then vanish with the rare individuals. People write about them and many talk about learning from them but in fact almost nobody ever learns from them – apart, perhaps, from those very rare people who did not need to learn – and nobody has found a method to embed this learning reliably and systematically in institutions that can maintain it. The Prussian General Staff remained operationally brilliant but in other ways went badly wrong after the death of the elder Moltke. When George Mueller left NASA it reverted to what it had been before he arrived – management chaos. All the best companies quickly go downhill after the departure of people like Bill Gates – even when such very able people have tried very very hard to avoid exactly this problem.

Charlie Munger, half of the most successful investment team in world history, has a great phrase he uses to explain their success that gets to the heart of this problem:

‘There isn’t one novel thought in all of how Berkshire [Hathaway] is run. It’s all about … exploiting unrecognized simplicities… It’s a community of like-minded people, and that makes most decisions into no-brainers. Warren [Buffett] and I aren’t prodigies. We can’t play chess blindfolded or be concert pianists. But the results are prodigious, because we have a temperamental advantage that more than compensates for a lack of IQ points.’

The simplicities that bring high performance in general, not just in investing, are largely unrecognised because they conflict with many evolved instincts and are therefore psychologically very hard to implement. The principles of the Buffett-Munger success are clear – they have even gone to great pains to explain them and what the rest of us should do – and the results are clear yet still almost nobody really listens to them and above average intelligence people instead constantly put their money into active fund management that is proved to destroy wealth every year!

Most people think they are already implementing these lessons and usually strongly reject the idea that they are not. This means that just explaining things is very unlikely to work:

‘I’d say the history that Charlie [Munger] and I have had of persuading decent, intelligent people who we thought were doing unintelligent things to change their course of action has been poor.’ Buffett.

Even more worrying, it is extremely hard to take over organisations that are not run right and make them excellent.

‘We really don’t believe in buying into organisations to change them.’ Buffett.

If people won’t listen to the world’s most successful investor in history on his own subject, and even he finds it too hard to take over failing businesses and turn them around, how likely is it that politicians and officials incentivised to keep things as they are will listen to ideas about how to do things better? How likely is it that a team can take over broken government institutions and make them dramatically better in a way that outlasts the people who do it? Bureaucracies are extraordinarily resistant to learning. Even after the debacles of 9/11 and the Iraq War, costing many lives and trillions of dollars, and even after the 2008 Crash, the security and financial bureaucracies in America and Europe are essentially the same and operate on the same principles.

Buffett’s success is partly due to his discipline in sticking within what he and Munger call their ‘circle of competence’. Within this circle they have proved the wisdom of avoiding trying to persuade people to change their minds and avoiding trying to fix broken institutions.

This option is not available in politics. The Enlightenment and the scientific revolution give us no choice but to try to persuade people and try to fix or replace broken institutions. In general ‘it is better to undertake revolution than undergo it’. How might we go about it? What can people who do not have any significant power inside the system do? What international projects are most likely to spark the sort of big changes in attitude we urgently need?

This is the first of a series. I will keep it separate from the series on the EU referendum though it is connected in the sense that I spent a year on the referendum in the belief that winning it was a necessary though not sufficient condition for Britain to play a part in improving the quality of government dramatically and improving the probability of avoiding the disasters that will happen if politics follows a normal path. I intended to implement some of these ideas in Downing Street if the Boris-Gove team had not blown up. The more I study this issue the more confident I am that dramatic improvements are possible and the more pessimistic I am that they will happen soon enough.

Please leave comments and corrections…

* A new transatlantic cable recently opened for financial trading. Its cost? £300 million. Its advantage? It shaves 2.6 milliseconds off the latency of financial trades. Innovative groups are discussing the application of military laser technology, unmanned drones circling the earth acting as routers, and even the use of neutrino communication (because neutrinos can go straight through the earth just as zillions pass through your body every second without colliding with its atoms) – cf. this recent survey in Nature.
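
To put the 2.6 milliseconds in context, here is a rough sketch using illustrative numbers of my own (a New York–London great-circle distance of roughly 5,600 km and light travelling through optical fibre at roughly c/1.47), not figures from the article:

```python
# Illustrative figures (mine, not from the article): NY-London great-circle
# distance ~5,600 km; refractive index of optical fibre ~1.47.
c_km_per_s = 299_792
fibre_speed_km_per_s = c_km_per_s / 1.47            # ~204,000 km/s in fibre
distance_km = 5_600

one_way_ms = distance_km / fibre_speed_km_per_s * 1000   # ~27 ms one way
saving_ms = 2.6
print(f"One-way fibre latency ~{one_way_ms:.0f} ms; "
      f"2.6 ms is ~{saving_ms / one_way_ms:.0%} of the whole trip")
```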

Times op-ed: What Is To Be Done? An answer to Dean Acheson’s famous quip

On Tuesday 2 December, the Times ran an op-ed by me you can see HERE. It got cut slightly for space. Below is the original version that makes a few other points.

I will use this as a start of a new series on what can be done to improve the system including policy, institutions, and management.

NB1. The article is not about the election or party politics. My suggested answer to Acheson is, I think, powerful partly because it is something that could be agreed upon, in various dimensions, across the political spectrum. I left the DfE in January partly because I wanted to have nothing to do with the election and this piece should not be seen as advocating ‘something Tories should say for the election’. I do not think any of the three leaders are interested in or could usefully pursue this goal – I am suggesting something for the future when they are all gone, and they could quite easily all be gone by summer 2016.

NB2. My view is not – ‘public bad, private good’. As I explained in The Hollow Men II, a much more accurate and interesting distinction is between a) large elements of state bureaucracies, dreadful NGOs like the CBI, and many large companies (that have many of the same HR and incentive problems as bureaucracies), where very similar types rise to power because the incentives encourage political skills rather than problem-solving skills, and b) start-ups, where entrepreneurs and technically trained problem-solvers can create organisations that operate extremely differently, move extremely fast, create huge value, and so on.

(For a great insight into start-up world I recommend two books. 1. Peter Thiel’s new book ‘Zero To One’. 2. An older book telling the story of a mid-90s start-up that was embroiled in the Netscape/Microsoft battle and ended up selling itself to the much better organised Bill Gates – ‘High Stakes, No Prisoners’ by Charles Ferguson. This blog, Creators and Rulers, by physicist Steve Hsu also summarises some crucial issues excellently.)

Some parts of government can work like start-ups but the rest of the system tries to smother them. For example, DARPA (originally ARPA) was set up as part of the US panic about Sputnik. It operates on very different principles from the rest of the Pentagon’s R&D system. Because it is organised differently, it has repeatedly produced revolutionary breakthroughs (e.g. the internet) despite a relatively tiny budget. But also note – DARPA has been around for decades and its operating principles are clear but nobody else has managed to create an equivalent (openly at least). Also note that despite its track record, D.C. vultures constantly circle trying to make it conform to the normal rules or otherwise clip its wings. (Another interesting case study would be the alternative paths taken by a) the US government developing computers with one genius mathematician, von Neumann, post-1945 (a lot of ‘start-up’ culture) and b) the UK government’s awful decisions in the same field with another genius mathematician, Turing, post-1945.)

When I talk about new and different institutions below, this is one of the things I mean. I will write a separate blog just on DARPA but I think there are two clear action points:

1. We should create a civilian version of DARPA aimed at high-risk/high-impact breakthroughs in areas like energy science and other fundamental areas such as quantum information and computing that clearly have world-changing potential. For it to work, it would have to operate outside all existing Whitehall HR rules, EU procurement rules and so on – otherwise it would be as dysfunctional as the rest of the system (defence procurement is in a much worse state than the DfE, hence, for example, billions spent on aircraft carriers that in classified war-games cannot be deployed to warzones). We could easily afford this if we could prioritise – UK politicians spend far more than DARPA’s budget on gimmicks every year – and it would provide huge value with cascading effects through universities and businesses.

2. The lessons of why and how it works – such as incentivising goals, not micromanaging methods – have general application and are useful when we think about Whitehall reform.

Finally, government institutions also operate to exclude from power scientists, mathematicians, and people from the start-up world – the Creators, in Hsu’s term. We need to think very hard about how to use their very rare and valuable skills as a counterweight to the inevitable psychological type that politics will always tend to promote.

Please leave comments, corrections etc below.

DC



What Is to Be Done?

There is growing and justified contempt for Westminster. Number Ten has become a tragi-comic press office with the prime minister acting as Über Pundit. Cameron, Miliband, and Clegg see only the news’s flickering shadows on their cave wall – they cannot see the real world behind them. As they watch floundering MPs, officials know they will stay in charge regardless of an election that won’t significantly change Britain’s trajectory.

Our institutions failed pre-1914, pre-1939, and with Europe. They are now failing to deal with a combination of debts, bad public services, security threats, and profound transitions in geopolitics, economics, and technology. They fail in crises because they are programmed to fail. The public knows we need to reorient national policy and reform these institutions. How?

First, we need a new goal. In 1962, Dean Acheson quipped that Britain had failed to find a post-imperial role. The romantic pursuit of ‘the special relationship’ and the deluded pursuit of a leading EU role have failed. Our new role should be making Britain the best country for education and science. Pericles described Athens as ‘the school of Greece’: we could be the school of the world because this role depends on thought and organisation, not size.

This would give us a central role in tackling humanity’s biggest problems and shaping the new institutions, displacing the EU and UN, that will emerge as the world makes painful transitions in coming decades. It would provide a focus for financial priorities and Whitehall’s urgent organisational surgery. It’s a goal that could mobilise very large efforts across political divisions as the pursuit of knowledge is an extremely powerful motive.

Second, we must train aspirant leaders very differently so they have basic quantitative skills and experience of managing complex projects. We should stop selecting leaders from a subset of Oxbridge egomaniacs with a humanities degree and a spell as spin doctor.

In 2012, Fields Medallist Tim Gowers sketched a ‘maths for presidents’ course to teach 16-18 year-olds crucial maths skills, including probability and statistics, that can help solve real problems. It starts next year. [NB. The DfE funded MEI to turn this blog into a real course.] A version should be developed for MPs and officials. (A similar ‘Physics for Presidents’ course has been a smash hit at Berkeley.) Similarly, pioneering work by Philip Tetlock on ‘The Good Judgement Project’ has shown that training can reduce common cognitive errors and can sharply improve the quality of political predictions, hitherto characterised by great self-confidence and constant failure.

New interdisciplinary degrees such as ‘World history and maths for presidents’ would improve on PPE but theory isn’t enough. If we want leaders to make good decisions amid huge complexity, and learn how to build great teams, then we should send them to learn from people who’ve proved they can do it. Instead of long summer holidays, embed aspirant leaders with Larry Page or James Dyson so they can experience successful leadership.

Third, because better training can only do so much, we must open political institutions to people and ideas from outside SW1.

A few people prove able repeatedly to solve hard problems in theoretical and practical fields, creating important new ideas and huge value. Whitehall and Westminster operate to exclude them from influence. Instead, they tend to promote hacks and apparatchiks and incentivise psychopathic narcissism and bureaucratic infighting skills – not the pursuit of the public interest.

How to open up the system? First, a Prime Minister should be able to appoint Secretaries of State from outside Parliament. [How? A quick and dirty solution would be: a) shove them in the Lords, b) give Lords ministers ‘rights of audience’ in the Commons, c) strengthen the Select Committee system.]

Second, the 150 year experiment with a permanent civil service should end and Whitehall must open to outsiders. The role of Permanent Secretary should go and ministers should appoint departmental chief executives so they are really responsible for policy and implementation. Expertise should be brought in as needed with no restrictions from the destructive civil service ‘human resources’ system that programmes government to fail. Mass collaborations are revolutionising science [cf. Michael Nielsen’s brilliant book]; they could revolutionise policy. Real openness would bring urgent focus to Whitehall’s disastrous lack of skills in basic functions such as budgeting, contracts, procurement, legal advice, and project management.

Third, Whitehall’s functions should be amputated. The Department for Education improved as Gove shrank it. Other departments would benefit from extreme focus, simplification, and firing thousands of overpaid people. If the bureaucracy ceases to be ‘permanent’, it can adapt quickly. Instead of obsessing on process, distorting targets, and micromanaging methods, it could shift to incentivising goals and decentralising methods.

Fourth, existing legal relationships with the EU and ECHR must change. They are incompatible with democratic and effective government.

Fifth, Number Ten must be reoriented from ‘government by punditry’ to a focus on the operational planning and project management needed to convert priorities to reality over months and years.

Technological changes such as genetic engineering and machine intelligence are bringing revolution. It would be better to undertake it than undergo it.


Low quality journalism from Prospect on the sensitive subject of genes and IQ

Prospect has published a big piece on genes that also goes into the controversy surrounding my essay last year (non-paywall version HERE). The author is someone called Philip Ball.

It is not as misleading as much media coverage was. After all, Polly Toynbee wrote ‘wealth is more heritable than genes’ and the Guardian put it in the headline even though it is pure gobbledegook (the word ‘heritable’ has a technical meaning that renders Polly’s argument meaningless). Even a genuine expert, Professor Steve Jones, made the unfortunate mistake of believing what he read in the media and had to retract comments.

However, the Prospect piece is substantially misleading. It is unprofessional journalism, riddled with errors, on a subject that senior people at Prospect ought to take seriously, given the proven potential for such articles to cause trouble on such a sensitive subject.

As an actual expert on this field (@StuartJRitchie) tweeted after reading it, it’s ‘one of those articles proving that a small amount of genetics knowledge is dangerous’.

A few examples regarding me…

The author writes:

‘A real problem with Cummings’ comments was not that they attribute some of our characteristics to our genes but that they gave the impression of genetics as a fait accompli – if you don’t have the right genes, nothing much will help. This goes against the now accepted consensus that genes exert their effects in interaction with their environment. While IQ is often quoted as being about 50% inheritable, the association with genetics [is] much weaker in children from poor backgrounds: good genes won’t help you much if the circumstances are against it.’

In fact, I explicitly argued against the ‘impression’ he asserts I gave and discuss the lower heritability numbers for poorer children. The implication that I oppose the view that ‘genes exert their effects in interaction with their environment’ is simply ludicrous.

He writes, ‘But if he [Cummings] were to look a little more deeply into what it has already discovered (and sometimes un-discovered again), he might wonder what it offers education policy.’ He then discusses the issue of ‘false positives’ – which I discussed.

He then writes, ‘So it’s not clear, pace Cummings, what this kind of study adds to the conventional view that some kids are more academically able than others. It’s not clear why it should alter the goal of helping all children achieve what they can, to the best of their ability.’

I not only did not make the argument he implies I did – i.e. we should ‘alter the goal of helping all children…’ – I actually explicitly argued that this would be the WRONG conclusion!

He also makes errors in the bits that do not discuss me but I’ll leave experts to answer those.

It is hard to decide whether the author is being dishonest or incompetent. I strongly suspect that like many other journalists, Ball did not read my essay but only other media coverage of it.

Either way, Prospect should do a much better job on such sensitive subjects if it wants to brand itself as ‘the leading magazine of ideas’.

If Ball or anybody else at Prospect wants to understand the errors regarding my essay in detail, then look at THIS LINK, pages 49-51, 72-74, and 194-203.

Prospect should insist that Ball removes the factually wrong assertions regarding my essay, as they will otherwise ripple on through other pieces, as previously wrong pieces have rippled into Ball’s.

For any hacks reading this, please note – the world’s foremost expert on the subject of IQ/genes is Professor Robert Plomin and he has stated on the record that in my essay I summarised the state of our scientific knowledge in this field accurately. This knowledge is uncomfortable for many but that is all the more reason for publications such as Prospect to tread carefully – my advice to them would be ‘do not publish journalism on this subject without having it checked by a genuine expert’.

If you want to understand the cutting edge of thinking on this subject, then do not read my essay but read this recent paper by Steve Hsu, a physics professor who is also working with BGI on large scale scans of the genome to discover the genes which account for a significant fraction of the total population variation in g/IQ: ‘On the genetic architecture of intelligence and other quantitative traits’. Hsu is continuing the long tradition of mathematicians and physicists invading other spheres and bringing advanced mathematical tools that take time to percolate (cf. his recent paper ‘Applying compressed sensing to genome-wide association studies’ which applies very advanced maths used in physics to genetic studies).
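
For readers who want a feel for what ‘compressed sensing’ means in this context, here is a toy sketch (entirely my own construction, not taken from Hsu’s paper): if a trait depends on a sparse subset of many genetic variants, an L1-penalised regression can recover that subset from far fewer individuals than there are variants.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy model: 5,000 SNPs, only 20 of which affect the trait, 1,000 people.
rng = np.random.default_rng(0)
n_people, n_snps, n_causal = 1_000, 5_000, 20

X = rng.binomial(2, 0.3, size=(n_people, n_snps)).astype(float)   # genotypes 0/1/2
beta = np.zeros(n_snps)
causal = rng.choice(n_snps, size=n_causal, replace=False)
beta[causal] = rng.normal(0, 1, size=n_causal)                     # sparse true effects
y = X @ beta + rng.normal(0, 1, size=n_people)                     # trait = genetics + noise

# L1 penalty: the 'compressed sensing' style of sparse recovery
fit = Lasso(alpha=0.05, max_iter=10_000).fit(X, y)
recovered = set(np.flatnonzero(fit.coef_))
print(f"causal variants recovered: {len(recovered & set(causal))} of {n_causal}")
```

The point of the analogy is simply that sparsity can make recovery possible with far fewer samples than variants; real GWAS data are of course much messier than this toy example.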

Or call Plomin, he’s at King’s. Do not trust Prospect on such issues unless there is evidence of a more scientific attitude from them.



UPDATE. Ball has replied to this blog HERE. His blog makes clear that he actually decided to go through my essay after reading this blog, not before writing his piece. He wriggles around a semi-admission of a cockup with ‘The point here is not that Cummings doesn’t want all children to achieve what they can – I genuinely believe he does want that’  – why did you imply the opposite then? – instead of simply apologising for his wrong claim.

He also makes a reference to ‘Gove’s expenses’ – something that has zero to do with the subject in any way. It is generally fruitless to comment on people’s motives so I won’t speculate on why he chucks this in.

Overall, he doesn’t quite admit he boobed in claiming I made various arguments when I actually said the opposite. He ignores his errors or obfuscates and introduces new errors.

For example, he quotes a paper ‘by a professor of education’ (NB. Ball, this does not make it sound more authoritative) saying, ‘Social class remains the strongest predictor of educational achievement in the UK.’

Ball says this view is ‘fairly well established’. There is no doubt that this represents the conventional wisdom of MPs, civil servants, journalists, and academics in fields such as sociology and education.

It is not, however, true.

‘General cognitive ability (g) predicts key social outcomes such as educational and occupational levels far better than any other trait.’ This is from the gold standard textbook, Behavioral Genetics by Robert Plomin (p. 186). This is not exactly surprising in itself, but it is an important point given much elite debate is based on assuming the opposite.

Ball – to see the point, ask yourself this… Look at a standard family, husband / wife / two kids. One child goes on to be a professor of physics, his brother goes on to dig ditches. They have the same social class. Why the difference? Social class is useless in explaining this because the kids share social class. This does not mean that ‘class is irrelevant’ but that its predictive power is limited, and g/IQ has stronger predictive power. (NB. everything about heritability involves population statistics, not individuals – to put the point crudely, if you smash an individual over the head with a bat, the effect of genes on performance will fall to zero, hence the unsurprising but important finding that heritability estimates are lower for very deprived children.) There is a vast literature on all this and my essay has a lot of references / links. E.g. this recent Plomin paper HERE.

One of the problems in discussions of this subject is that journalists are programmed to quote sociologists and ‘professors of education’ who often have no understanding of genetics and, often, none of the mathematical training required to understand the technical concepts.

So some further free advice to Ball and his editors at Prospect – do not rely on sociologists and ‘professors of education’ when it comes to issues like ‘social mobility’ – in my experience they are almost never even aware of the established findings in genetics. As Plomin says, ‘There is a wide gap between what laypeople (including scientists in other fields) believe and what experts believe’ (p.187).

Ball then quotes from my essay: ‘Raising school performance of poorer children is an inherently worthwhile thing to try to do but it would not necessarily lower parent-offspring correlations (nor change heritability estimates). When people look at the gaps between rich and poor children that already exist at a young age (3-5), they almost universally assume that these differences are because of environmental reasons (‘privileges of wealth’) and ignore genetics.’

And Ball comments: ‘So what is Cummings implying here, if not that the differences in school performance between rich and poor children might be, at least in large part, genetic? That the poor are, in other words, a genetic underclass as far as academic achievement is concerned – that they are poor presumably because they are not very bright?…  Cummings does not say that we should give up on the poor simply because they are genetically disadvantaged in the IQ stakes – but comments like the one above surely give a message that neither better education nor less social disadvantage will make an awful lot of difference to academic outcomes.’

Ah, so after claiming that I said X when I actually said ‘not X’, Ball clutches at the old ‘you believe in a genetic underclass’ gag! He still has not read what I wrote about the ability of schools to improve radically and he misses the point of the first part of my quote. I was making the point that Plomin made to the Commons Education Committee (though I do not think they understood what he meant) – if you improve the education system such that poorer children get better schooling (as we should), you reduce the environmental causes of variation in performance, and therefore in an imagined perfect school system (other things being equal) heritability would rise, because if you remove environmental factors then the remaining genetic factors grow in importance. This is a counterintuitive conclusion and the first time Plomin explained it to me I had to ask a few dumb questions to check I had understood the point properly. I can see why Ball would miss the point and I should have expressed it better by simply quoting Plomin.
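
A toy numerical version of Plomin’s point (the numbers are made up purely for illustration): heritability is the share of trait variance attributable to genetic variance,

$$h^2 = \frac{V_G}{V_G + V_E}.$$

If, say, V_G = 50 and V_E = 50 then h² = 0.5; equalise schooling so that V_E falls to 20 and, with V_G unchanged, h² rises to 50/70 ≈ 0.71, even though nothing about anybody’s genes has changed.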

On the issue of the search for the genes accountable for the population variation in g/IQ, Ball seems unaware of various aspects of current scholarship, e.g. the search for genes associated with height. If he reads the Hsu paper linked above, he will see what I mean.

This tedious exchange is even more of a waste of time than usual because the real science has become so clear. As Plomin says, the GWAS are the ‘beginning of the end’ of the long argument about ‘nature v nurture’ because ‘it is much more difficult to dispute results based on DNA data than it is to quibble about twin and adoptee studies’ (emphasis added). In 2011, a GWAS confirmed the rough numbers from the twin/adoption studies for IQ (‘Genome-wide association studies establish that human intelligence is highly heritable and polygenic’, Nature, October 2011). This will eventually sink in, but this field is an interesting example of how educated people can be more likely than uneducated people to believe false ideas.

Contra the claims of Ball and others, I have never argued that there is some link between understanding genes/IQ and ‘writing off’ people as a ‘genetic underclass’. If these people actually read what I wrote instead of relying on other hacks’ wrong stories, they would see I made the opposite argument:

‘Far from being a reason to be pessimistic, or to think that ‘schools and education don’t matter, nature will out’, the scientific exploration of intelligence and learning is not only a good in itself but will help us design education policy more wisely (it may motivate people to spend more on the education of the less fortunate). One can both listen to basic science on genetics and regard as a priority the improvement of state schools; accepting we are evolved creatures does not mean ‘giving up on the less fortunate’ (the fear of some on the Left) or ‘giving up on personal responsibility’ (the fear of some on the Right).’ (From my essay, p. 74.)

Next time, Ball, do your research BEFORE you write your column – and leave out dumb comments about ‘Gove’s expenses’ that are more suitable for a dopey spin doctor than a ‘science writer’. And Prospect – raise your game if you’re going to brand yourself ‘the leading magazine of ideas’!


UPDATE (17/11). Interestingly, the prominent Socialist Workers Party supporter Michael Rosen has written a comment below Ball’s blog. It is bilge – totally irreconcilable with established findings in behavioural genetics. As Stuart Ritchie, an actual expert on genetics, wrote, Rosen’s comment ‘is one of the most poorly-informed things I’ve ever read on IQ.’

Ball replied to Rosen,  ‘I agree completely with your comments on traditionally limited views of what intelligence is, and how to nurture it. So thanks for that.’

So Ball takes seriously comments by Rosen that are spectacularly ill-informed. How seriously should we take Ball as ‘a science writer’ on this subject?

Hsu also points out in comments the issue about finding ‘causal variants’ for polygenic traits such as IQ or height – something it seems clear Ball did not research before writing his misconceived piece.

As S Ritchie wrote to Ball, ‘It’s a shame that you didn’t properly research this area before stating a tentative, unclear, and possibly nation-dependent finding from a single, small study as absolute fact. Perhaps this sort of sloppiness is one reason people familiar with the science get ‘touchy’ when they read your articles.’

In a further blog, HERE, Ball goes down another rabbit hole. He does not even try to answer the points I make above re his obvious errors. S Ritchie explains underneath the blog how Ball has introduced even more errors.

Prospect has no credibility in this area if it stands by such sloppy work, and Ball should reflect on the ethics of making claims about what people think that are 180 degrees off what they actually say – but it doesn’t look like he will. Time to re-read Feynman’s famous speech on ‘Cargo Cult Science’, Ball…

Complexity, ‘fog and moonlight’, prediction, and politics III – von Neumann and economics as a science

The two previous blogs in this series were:

Part I HERE.

Part II HERE.

All page references unless otherwise stated are to my essay, HERE.

Since the financial crisis, there has been a great deal of media and Westminster discussion about why so few people predicted it and what the problems are with economics and financial theory.

Absent from most of this discussion is the history of the subject and its intellectual origins. Economics is clearly a vital area of prediction for people in politics. I therefore will explore some intellectual history to provide context for contemporary discussions about ‘what is wrong with economics and what should be done about it’.

*

It has often been argued that the ‘complexity’ of human behaviour renders precise mathematical treatment of economics impossible, or that the undoubted errors of modern economics in applying the tools of mathematical physics are evidence of the irredeemable hopelessness of the goal.

For example, Kant wrote in Critique of Judgement:

‘For it is quite certain that in terms of merely mechanical principles of nature we cannot even adequately become familiar with, much less explain, organized beings and how they are internally possible. So certain is this that we may boldly state that it is absurd for human beings even to attempt it, or to hope that perhaps some day another Newton might arise who would explain to us, in terms of natural laws unordered by any intention, how even a mere blade of grass is produced. Rather, we must absolutely deny that human beings have such insight.’

In the middle of the 20th Century, one of its greatest minds turned to this question. John von Neumann was one of the leading mathematicians of the age. He was also a major contributor to the mathematisation of quantum mechanics, created the field of ‘quantum logic’ (1936), worked as a consultant to the Manhattan Project and other wartime technological projects, and was (with Turing) one of the two most important creators of modern computer science and artificial intelligence, which he developed partly for immediate problems he was working on (e.g. the hydrogen bomb and ICBMs) and partly to probe the general field of understanding complex nonlinear systems. In an Endnote of my essay I discuss some of these things.

Von Neumann was regarded as an extraordinary phenomenon even by the cleverest people in the world. The Nobel-winning physicist and mathematician Wigner said of von Neumann:

‘I have known a great many intelligent people in my life. I knew Planck, von Laue and Heisenberg. Paul Dirac was my brother in law; Leo Szilard and Edward Teller have been among my closest friends; and Albert Einstein was a good friend, too. But none of them had a mind as quick and acute as Jancsi von Neumann. I have often remarked this in the presence of those men and no one ever disputed me… Perhaps the consciousness of animals is more shadowy than ours and perhaps their perceptions are always dreamlike. On the opposite side, whenever I talked with the sharpest intellect whom I have known – with von Neumann – I always had the impression that only he was fully awake, that I was halfway in a dream.’

Von Neumann also had a big impact on economics. During breaks from pressing wartime business, he wrote ‘Theory of Games and Economic Behaviour’ (TGEB) with Morgenstern. This practically created the field of ‘game theory’, to which one now sees so many references. TGEB was one of the most influential books ever written on economics. (The movie A Beautiful Mind gave a false impression of Nash’s contribution.) In the Introduction, his explanation of some foundational issues concerning economics, mathematics, and prediction is clearer for non-specialists than anything else I have seen on the subject and cuts through a vast amount of contemporary discussion which fogs the issues.

This documentary on von Neumann is also interesting:

*

There are some snippets from pre-20th Century figures explaining concepts in terms recognisable through the prism of Game Theory. For example, Ampère wrote ‘Considérations sur la théorie mathématique du jeu’ in 1802 and credited Buffon’s 1777 essay on ‘moral arithmetic’ (Buffon figured out many elements that Darwin would later harmonise in his theory of evolution). Cournot discussed what would later be described as a specific example of a ‘Nash equilibrium’ – duopoly – in 1838. The French mathematician Émile Borel also made early contributions.

However, Game Theory really was born with von Neumann. In December 1926, he presented the paper ‘Zur Theorie der Gesellschaftsspiele’ (On the Theory of Parlour Games, published in 1928, translated version here) while working on the Hilbert Programme [cf. Endnote on Computing] and quantum mechanics. The connection between the Hilbert Programme and the intellectual origins of Game Theory can perhaps first be traced in a 1912 lecture by one of the world’s leading mathematicians and founders of modern set theory, Zermelo, titled ‘On the Application of Set Theory to Chess’ which stated of its purpose:

‘… it is not dealing with the practical method for games, but rather is simply giving an answer to the following question: can the value of a particular feasible position in a game for one of the players be mathematically and objectively decided, or can it at least be defined without resorting to more subjective psychological concepts?’

He presented a theorem that chess is strictly determined: that is, either (i) white can force a win, or (ii) black can force a win, or (iii) both sides can force at least a draw. Which of these is the actual solution to chess remains unknown. (Cf. ‘Zermelo and the Early History of Game Theory’, by Schwalbe & Walker (1997), which argues that modern scholarship is full of errors about this paper. According to Leonard (2006), Zermelo’s paper was part of a general interest in the game of chess among intellectuals in the first third of the 20th century. Lasker (world chess champion 1894–1921) knew Zermelo and both were taught by Hilbert.)

Von Neumann later wrote:

‘[I]f the theory of Chess were really fully known there would be nothing left to play.  The theory would show which of the three possibilities … actually holds, and accordingly the play would be decided before it starts…  But our proof, which guarantees the validity of one (and only one) of these three alternatives, gives no practically usable method to determine the true one. This relative, human difficulty necessitates the use of those incomplete, heuristic methods of playing, which constitute ‘good’ Chess; and without it there would be no element of ‘struggle’ and ‘surprise’ in that game.’ (p.125)

Elsewhere, he said:

‘Chess is not a game. Chess is a well-defined computation. You may not be able to work out the answers, but in theory there must be a solution, a right procedure in any position. Now, real games are not like that at all. Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory.’

Von Neumann’s 1928 paper proved that there is a rational solution to every two-person zero-sum game. That is, in a rigorously defined game with precise payoffs, there is a mathematically rational strategy for both sides – an outcome that neither party can hope to improve upon. This introduced the concept of the minimax: choose the strategy that minimises the maximum possible loss.

Zero-sum games are those where the payoffs ‘sum’ to zero. For example, chess or Go are zero-sum games because the gain (+1) and the loss (-1) sum to zero; one person’s win is another’s loss. The famous Prisoners’ Dilemma is a non-zero-sum game because the payoffs do not sum to zero: it is possible for both players to make gains. In some games the payoffs to the players are symmetrical (e.g. Prisoners’ Dilemma); in others, the payoffs are asymmetrical (e.g. the Dictator or Ultimatum games). Sometimes the strategies can be completely stated without the need for probabilities (‘pure’ strategies); sometimes, probabilities have to be assigned for particular actions (‘mixed’ strategies).

While the optimal minimax strategy might be a ‘pure’ strategy, von Neumann showed it would often have to be a ‘mixed strategy’ and this means a spontaneous return of probability, even if the game itself does not involve probability.

‘Although … chance was eliminated from the games of strategy under consideration (by introducing expected values and eliminating ‘draws’), it has now made a spontaneous reappearance. Even if the rules of the game do not contain any elements of ‘hazard’ … in specifying the rules of behaviour for the players it becomes imperative to reconsider the element of ‘hazard’. The dependence on chance (the ‘statistical’ element) is such an intrinsic part of the game itself (if not of the world) that there is no need to introduce it artificially by way of the rules of the game itself: even if the formal rules contain no trace of it, it still will assert itself.’
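To make this concrete, here is a minimal sketch (my toy example, not from TGEB) of the ‘matching pennies’ game in Python: no pure strategy is safe, and a brute-force search over mixed strategies finds the optimal minimax strategy at 50/50 – chance reappearing exactly as von Neumann describes.

```python
import numpy as np

# Payoffs to the row player; the column player receives the negative (zero-sum).
# "Matching pennies": row wins +1 on a match, loses 1 otherwise.
payoff = np.array([[+1, -1],
                   [-1, +1]])

def worst_case(p):
    """Row's guaranteed expected payoff if she plays row 0 with probability p."""
    mixed = p * payoff[0] + (1 - p) * payoff[1]  # expected payoff against each column
    return mixed.min()                           # the column player picks row's worst case

# Brute-force search over mixed strategies: the maximin sits at p = 0.5 with value 0.
ps = np.linspace(0, 1, 101)
best_p = max(ps, key=worst_case)
print(best_p, worst_case(best_p))                # -> 0.5 0.0
```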

In 1932, he gave a lecture titled ‘On Certain Equations of Economics and A Generalization of Brouwer’s Fixed-Point Theorem’. It was published in German in 1938 but not in English until 1945 when it was published as ‘A Model of General Economic Equilibrium’. This paper developed what is sometimes called von Neumann’s Expanding Economic Model and has been described as the most influential article in mathematical economics. It introduced the use of ‘fixed-point theorems’. (Brouwer’s ‘fixed point theorem’ in topology proved that, in crude terms, if you lay a map of the US on the ground anywhere in the US, one point on the map will lie precisely over the point it represents on the ground beneath.)
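As a minimal illustration of the fixed-point idea (my example, not von Neumann’s model – and note that simple iteration only finds the fixed point for contraction maps, whereas Brouwer’s theorem covers any continuous map of a closed ball to itself):

```python
# A continuous map of [0, 1] into itself; Brouwer guarantees some x* with f(x*) = x*.
f = lambda x: 0.5 * x + 0.2

x = 0.0
for _ in range(50):
    x = f(x)          # for this contraction, iterating converges to the fixed point
print(x)              # -> 0.4 (indeed 0.5 * 0.4 + 0.2 = 0.4)
```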

‘The mathematical proof is possible only by means of a generalisation of Brouwer’s Fix-Point Theorem, i.e. by the use of very fundamental topological facts… The connection with topology may be very surprising at first, but the author thinks that it is natural in problems of this kind. The immediate reason for this is the occurrence of a certain ‘minimum-maximum’ problem… It is closely related to another problem occurring in the theory of games.’

Von Neumann’s application of this topological proof to economics was very influential in post-war mathematical economics and in particular was used by Arrow and Debreu in their seminal 1954 paper on general equilibrium, perhaps the central paper in modern traditional economics.

*

In the late 1930s, von Neumann, based at the IAS in Princeton to which Gödel and Einstein also fled to escape the Nazis, met the economist Oskar Morgenstern, who was deeply dissatisfied with the state of economics. In 1940, while working on war business including the Manhattan Project and computers, von Neumann began the collaboration with Morgenstern that became The Theory of Games and Economic Behavior (TGEB). By December 1942, he had finished his work on it, though it was not published until 1944.

In the Introduction of TGEB, von Neumann explained the real problems in applying mathematics to economics and why Kant was wrong.

‘It is not that there exists any fundamental reason why mathematics should not be used in economics.  The arguments often heard that because of the human element, of the psychological factors etc., or because there is – allegedly – no measurement of important factors, mathematics will find no application, can all be dismissed as utterly mistaken.  Almost all these objections have been made, or might have been made, many centuries ago in fields where mathematics is now the chief instrument of analysis [e.g. physics in the 16th Century or chemistry and biology in the 18th]…

‘As to the lack of measurement of the most important factors, the example of the theory of heat is most instructive; before the development of the mathematical theory the possibilities of quantitative measurements were less favorable there than they are now in economics.  The precise measurements of the quantity and quality of heat (energy and temperature) were the outcome and not the antecedents of the mathematical theory…

‘The reason why mathematics has not been more successful in economics must be found elsewhere… To begin with, the economic problems were not formulated clearly and are often stated in such vague terms as to make mathematical treatment a priori appear hopeless because it is quite uncertain what the problems really are. There is no point using exact methods where there is no clarity in the concepts and issues to which they are applied. [Emphasis added] Consequently the initial task is to clarify the knowledge of the matter by further careful descriptive work. But even in those parts of economics where the descriptive problem has been handled more satisfactorily, mathematical tools have seldom been used appropriately. They were either inadequately handled … or they led to mere translations from a literary form of expression into symbols…

‘Next, the empirical background of economic science is definitely inadequate. Our knowledge of the relevant facts of economics is incomparably smaller than that commanded in physics at the time when mathematization of that subject was achieved.  Indeed, the decisive break which came in physics in the seventeenth century … was possible only because of previous developments in astronomy. It was backed by several millennia of systematic, scientific, astronomical observation, culminating in an observer of unparalleled calibre, Tycho de Brahe. Nothing of this sort has occurred in economics. It would have been absurd in physics to expect Kepler and Newton without Tycho – and there is no reason to hope for an easier development in economics…

‘Very frequently the proofs [in economics] are lacking because a mathematical treatment has been attempted in fields which are so vast and so complicated that for a long time to come – until much more empirical knowledge is acquired – there is hardly any reason at all to expect progress more mathematico. The fact that these fields have been attacked in this way … indicates how much the attendant difficulties are being underestimated. They are enormous and we are now in no way equipped for them.

‘[We will need] changes in mathematical technique – in fact, in mathematics itself…  It must not be forgotten that these changes may be very considerable. The decisive phase of the application of mathematics to physics – Newton’s creation of a rational discipline of mechanics – brought about, and can hardly be separated from, the discovery of the infinitesimal calculus…

‘The importance of the social phenomena, the wealth and multiplicity of their manifestations, and the complexity of their structure, are at least equal to those in physics.  It is therefore to be expected – or feared – that mathematical discoveries of a stature comparable to that of calculus will be needed in order to produce decisive success in this field… A fortiori, it is unlikely that a mere repetition of the tricks which served us so well in physics will do for the social phenomena too.  The probability is very slim indeed, since … we encounter in our discussions some mathematical problems which are quite different from those which occur in physical science.’

Von Neumann therefore exhorted economists to humility and the task of ‘careful, patient description’, a ‘task of vast proportions’. He stressed that economics could not attack the ‘big’ questions – much more modesty is needed to establish an exact theory for very simple problems, and build on those foundations.

‘The everyday work of the research physicist is … concerned with special problems which are “mature”… Unifications of fields which were formerly divided and far apart may alternate with this type of work. However, such fortunate occurrences are rare and happen only after each field has been thoroughly explored. Considering the fact that economics is much more difficult, much less understood, and undoubtedly in a much earlier stage of its evolution as a science than physics, one should clearly not expect more than a development of the above type in economics either…

‘The great progress in every science came when, in the study of problems which were modest as compared with ultimate aims, methods were developed which could be extended further and further. The free fall is a very trivial physical example, but it was the study of this exceedingly simple fact and its comparison with astronomical material which brought forth mechanics. It seems to us that the same standard of modesty should be applied in economics… The sound procedure is to obtain first utmost precision and mastery in a limited field, and then to proceed to another, somewhat wider one, and so on.’

Von Neumann therefore aims in TGEB at ‘the behavior of the individual and the simplest forms of exchange’ with the hope that this can be extended to more complex situations.

‘Economists frequently point to much larger, more ‘burning’ questions…  The experience of … physics indicates that this impatience merely delays progress, including that of the treatment of the ‘burning’ questions. There is no reason to assume the existence of shortcuts…

‘It is a well-known phenomenon in many branches of the exact and physical sciences that very great numbers are often easier to handle than those of medium size. An almost exact theory of a gas, containing about 10^25 freely moving particles, is incomparably easier than that of the solar system, made up of 9 major bodies… This is … due to the excellent possibility of applying the laws of statistics and probabilities in the first case.

‘This analogy, however, is far from perfect for our problem. The theory of mechanics for 2,3,4,… bodies is well known, and in its general theoretical …. form is the foundation of the statistical theory for great numbers. For the social exchange economy – i.e. for the equivalent ‘games of strategy’ – the theory of 2,3,4… participants was heretofore lacking. It is this need that … our subsequent investigations will endeavor to satisfy. In other words, only after the theory for moderate numbers of participants has been satisfactorily developed will it be possible to decide whether extremely great numbers of participants simplify the situation.’

[This last bit has changed slightly as I forgot to include a few things.]

While some of von Neumann’s ideas were extremely influential on economics, his general warning here about the right approach to the use of mathematics was not widely heeded.

Most economists initially ignored von Neumann’s ideas. The economist Martin Shubik, then at Princeton, recounted the scene he found:

‘The contrast of attitudes between the economics department and mathematics department was stamped on my mind… The former projected an atmosphere of dull-business-as-usual conservatism… The latter was electric with ideas… When von Neumann gave his seminar on his growth model, with a few exceptions, the serried ranks of Princeton economists could scarce forbear to yawn.’

However, a small but influential number, including mathematicians at the RAND Corporation (the first recognisable modern ‘think tank’) led by John Williams, applied it to nuclear strategy as well as economics. For example, Albert Wohlstetter published his Selection and Use of Strategic Air Bases (RAND, R-266, sometimes referred to as The Basing Study) in 1954. Williams persuaded the RAND Board and the infamous SAC General Curtis LeMay to develop a social science division at RAND that could include economists and psychologists to explore the practical potential of Game Theory further. He also hired von Neumann as a consultant; when the latter said he was too busy, Williams told him he only wanted the time it took von Neumann to shave in the morning. (Kubrick’s Dr Strangelove satirised RAND’s use of game theory.)

In the 1990s, the movie A Beautiful Mind brought John Nash into pop culture, giving the misleading impression that he was the principal developer of Game Theory. Nash’s fame rests principally on work he did in 1950-1 that became known as ‘the Nash Equilibrium’. In Non-Cooperative Games (1950), he wrote:

‘[TGEB] contains a theory of n-person games of a type which we would call cooperative. This theory is based on an analysis of the interrelationships of the various coalitions which can be formed by the players of the game. Our theory, in contradistinction, is based on the absence of coalitions in that it is assumed each participant acts independently, without collaboration or communication with any of the others… [I have proved] that a finite non-cooperative game always has at least one equilibrium point.’

Von Neumann remarked of Nash’s results, ‘That’s trivial, you know. It’s just a fixed point theorem.’ Nash himself said that von Neumann was a ‘European gentleman’ who was not impressed by his results.

In 1949-50, Merrill Flood, another RAND researcher, began experimenting with staff at RAND (and his own children) playing various games. Nash’s results prompted Flood to create what became known as the ‘Prisoners’ Dilemma’ game, the most famous and studied game in Game Theory. It was initially known as ‘a non-cooperative pair’; the name ‘Prisoners’ Dilemma’ was given to it later in 1950 by Tucker, who had to think of a way of explaining the concept to his psychology class at Stanford and hit on an anecdote putting the payoff matrix in the form of two prisoners in separate cells considering the pros and cons of ratting on each other.

The game was discussed and played at RAND without being published. Flood wrote up the results in 1952 as an internal RAND memo accompanied by the real-time comments of the players. In 1958, Flood published the results formally (Some Experimental Games). Flood concluded that ‘there was no tendency to seek as the final solution … the Nash equilibrium point.’ Prisoners’ Dilemma has been called ‘the E. coli of social psychology’ by Axelrod, so popular has it become in so many different fields. Many studies of Iterated Prisoners’ Dilemma games have shown that generally neither human nor evolved genetic algorithm players converge on the Nash equilibrium; they cooperate far more than Nash’s theory predicts.
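A minimal sketch of the game (standard textbook payoffs, not Flood’s original numbers): mutual defection is the only Nash equilibrium – neither player can gain by deviating alone – even though both players would do better if both cooperated, which is why the iterated results above are so striking.

```python
C, D = 0, 1
payoffs = {               # (row player's payoff, column player's payoff)
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}

def is_nash(row, col):
    """True if neither player can improve by deviating unilaterally."""
    row_ok = all(payoffs[(row, col)][0] >= payoffs[(alt, col)][0] for alt in (C, D))
    col_ok = all(payoffs[(row, col)][1] >= payoffs[(row, alt)][1] for alt in (C, D))
    return row_ok and col_ok

print([cell for cell in payoffs if is_nash(*cell)])   # -> [(1, 1)], i.e. (Defect, Defect)
```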

Section 7 of my essay discusses some recent breakthroughs, particularly the paper by Press & Dyson. This is also a good example of how mathematicians can invade fields. Dyson’s professional fields are maths and physics. He was persuaded to look at the Prisoners’ Dilemma. He very quickly saw that there was a previously unseen class of strategies that has opened up a whole new field for exploration. This article HERE is a good summary of recent developments.

Von Neumann’s brief forays into economics were very much a minor sideline for him but there is no doubt of his influence. Despite von Neumann’s reservations about neoclassical economics, Paul Samuelson admitted that ‘He darted briefly into our domain, and it has never been the same since.’

In 1987, the Santa Fe Institute, founded by Gell-Mann and others, organised a ten-day meeting to discuss economics. On one side, they invited leading economists such as Kenneth Arrow and Larry Summers; on the other, they invited physicists, biologists, and computer scientists, such as the Nobel-winning Philip Anderson and John Holland (inventor of genetic algorithms). When the economists explained their assumptions, Phil Anderson said to them, ‘You guys really believe that?’

One physicist later described the meeting as like visiting Cuba – the cars are all from the 1950s, so on one hand you admire them for keeping them going, but on the other hand they are old technology; similarly, the economists were ingeniously using 19th Century maths and physics on very out-of-date models. The physicists were shocked at how the economists were content with simplifying assumptions that were obviously contradicted by reality, and they were surprised at the way the economists seemed unconcerned about how poor their predictions were.

Twenty-seven years later, this problem is more acute. Some economists are listening to the physicists about fundamental problems with the field. Some are angrily rejecting the physicists’ incursions into their field.

Von Neumann explained the scientifically accurate approach to economics and mathematics. [Inserted later. I mean – the first part of his comments above that discusses maths, prediction, models, and economics and physics. As far as I know, nobody seriously disputes these comments – i.e. that Kant and the general argument that ‘maths cannot make inroads into economics’ are wrong. The later comments about building up economic theories from theories of 2, 3, 4 agents etc is a separate topic. See comments.] In other blogs in this series I will explore some of the history of economic thinking as part of a description of the problem for politicians and other decision-makers who need to make predictions.

Please leave corrections and comments below.

 

Complexity, ‘fog and moonlight’, prediction, and politics II: controlled skids and immune systems (UPDATED)

‘Politics is a job that can really only be compared with navigation in uncharted waters. One has no idea how the weather or the currents will be or what storms one is in for. In politics, there is the added fact that one is largely dependent on the decisions of others, decisions on which one was counting and which then do not materialise; one’s actions are never completely one’s own. And if the friends on whose support one is relying change their minds, which is something that one cannot vouch for, the whole plan miscarries… One’s enemies one can count on – but one’s friends!’ Otto von Bismarck.

‘Everything in war is very simple, but the simplest thing is difficult. The difficulties accumulate and end by producing a kind of friction that is inconceivable unless one has experienced war… Countless minor incidents – the kind you can never really foresee – combine to lower the general level of performance, so that one always falls short of the intended goal.  Iron will-power can overcome this friction … but of course it wears down the machine as well… Friction is the only concept that … corresponds to the factors that distinguish real war from war on paper.  The … army and everything else related to it is basically very simple and therefore seems easy to manage. But … each part is composed of individuals, every one of whom retains his potential of friction… This tremendous friction … is everywhere in contact with chance, and brings about effects that cannot be measured… Friction … is the force that makes the apparently easy so difficult… Finally … all action takes place … in a kind of twilight, which like fog or moonlight, often tends to make things seem grotesque and larger than they really are.  Whatever is hidden from full view in this feeble light has to be guessed at by talent, or simply left to chance.’ Clausewitz.

*

In July, I wrote a blog on complexity and prediction which you can read HERE.

I will summarise briefly its main propositions and add some others. All page references are to my essay, HERE. (Section 1 explores some of the maths and science issues below in more detail.)

Some people asked me after Part I – why is such abstract stuff important to practical politics? That is a big question but in a nutshell…

If you want to avoid the usual fate in politics of failure, you need to understand some basic principles about why people make mistakes and how some people, institutions, and systems cope with mistakes and thereby perform much better than most. The reason Whitehall is full of people failing in predictable ways on an hourly basis is that, first, there is general system-wide failure and, second, everybody keeps their heads down, focused on the particular, and ignores the system. Officials who speak out see their careers blow up. MPs are so cowed by the institutions and the scale of official failure that they generally just muddle along tinkering and hope to stay a step ahead of the media. Some understand the epic scale of institutional failure but they know that the real internal wiring of the system in the Cabinet Office has such a tight grip that significant improvement will be very hard without a combination of a) a personnel purge and b) a fundamental rewiring of power at the apex of the state. Many people in Westminster are now considering how this might happen. Such thoughts must, I think, be based on some general principles, otherwise they are likely to miss the real causes of system failure and what to do about them.

In future blogs in this series, I will explore some aspects of markets and science that throw light on the question: how can humans and their institutions cope with these problems of complexity, uncertainty, and prediction in order to limit failures?

Separately, The Hollow Men II will focus on specifics of how Whitehall and Westminster work, including Number Ten and some examples from the Department for Education.

Considering the more general questions of complexity and prediction sheds light on why government is failing so badly and how it could be improved.

*

Complexity, nonlinearity, uncertainty, and prediction

Even the simplest practical problems are often very complex. If a Prime Minister wants to line up 70 colleagues in Downing Street to blame them for his woes, there are 70! ways of lining them up, and 70! [70! = 70 x 69 x 68 … x 2 x 1] is roughly 10^100 (a ‘googol’), which is roughly ten billion times the estimated number of atoms in the universe (10^90). [See comments.]
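A quick check of the arithmetic (my snippet; Python’s integers are exact):

```python
import math
print(len(str(math.factorial(70))))   # -> 101 digits, so 70! is of the order of 10^100
```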

Even the simplest practical problems, therefore, can be so complicated that searching through the vast landscape of all possible solutions is not practical.

After Newton, many hoped that perfect prediction would be possible:

‘An intellect which at a certain moment would know all the forces that animate nature, and all positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, would condense in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes’ (Laplace).

However, most of the most interesting systems in the world – such as brains, cultures, and conflicts – are nonlinear. That is, a small change in input can have an arbitrarily large effect on output. Have you ever driven through a controlled skid then lost it? A nonlinear system is one in which you can shift from ‘it feels great on the edge’ to ‘I’m steering into the skid but I’ve lost it and might die in a few seconds’ because of one tiny input change, like your tyre catching a cat’s eye in the wet. This causes further problems for prediction. Not only is the search space so vast it cannot be searched exhaustively, however fast our computers, but in nonlinear systems one has the added problem that a tiny input change can lead to huge output changes.

Some nonlinear systems are such that no possible accuracy of measurement of the current state can eliminate this problem – there is unavoidable uncertainty about the future state. As Poincaré wrote, ‘it may happen that small differences in the initial conditions produce very great ones in the final phenomena. A small error in the former will produce an enormous error in the latter. Prediction becomes impossible, and we have the fortuitous phenomenon.’ It does not matter that the measurement error is in the 20th decimal place – the prediction will still quickly collapse.
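A minimal illustration of Poincaré’s point (my toy example): the logistic map is a textbook nonlinear system, and two starting points differing only in the tenth decimal place soon follow completely different trajectories.

```python
x, y = 0.3, 0.3 + 1e-10          # two 'measurements' differing in the 10th decimal place
for step in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)   # the logistic map x -> 4x(1-x)
    if step % 15 == 0:
        print(step, round(x, 3), round(y, 3), f"{abs(x - y):.1e}")
# After roughly 40 steps the gap between the trajectories is of order 1: prediction has collapsed.
```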

Weather systems are like this, which is why, despite the enormous progress made with predictions, we remain limited to ~10-14 days at best. Pushing the horizon forward by just one day requires exponential increases in resources. Political systems are also nonlinear. If Cohen-Blind’s aim had been very slightly different in May 1866 when he fired five bullets at Bismarck, the German states would certainly have evolved in a different way and perhaps there would have been no fearsome German army led by a General Staff into World War I, no Lenin and Hitler, and so on. Bismarck himself appreciated this very well. ‘We are poised on the tip of a lightning conductor, and if we lose the balance I have been at pains to create we shall find ourselves on the ground,’ he wrote to his wife during the 1871 peace negotiations in Versailles. Social systems are also nonlinear. Online experiments have explored how complex social networks cannot be predicted because of initial randomness combining with the interdependence of decisions.

In short, although we understand some systems well enough to make precise or statistical predictions, most interesting systems – whether physical, mental, cultural, or virtual – are complex, nonlinear, and have properties that emerge from feedback between many interactions. Exhaustive searches of all possibilities are impossible. Unfathomable and unintended consequences dominate. Problems cascade. Complex systems are hard to understand, predict and control.

Humans evolved in this complex environment amid the sometimes violent, sometimes cooperative sexual politics of small in-groups competing with usually hostile out-groups. We evolved to sense information, process it, and act. We had to make predictions amid uncertainty and update these predictions in response to feedback from our environment – we had to adapt because we have necessarily imperfect data and at best approximate models of reality. It is no coincidence that in one of the most famous speeches in history, Pericles singled out the Athenian quality of adaptation (literally ‘well-turning’) as central to its extraordinary cultural, political and economic success.

How do we make these predictions, how do we adapt? Much of how we operate depends on relatively crude evolved heuristics (rules of thumb) such as ‘sense movement >> run/freeze’. These heuristics can help. Further, our evolved nature gives us amazing pattern recognition and problem-solving abilities. However, some heuristics lead to errors, illusions, self-deception, groupthink and so on – problems that often swamp our reasoning and lead to failure.

I will look briefly at a) the success of science and mathematical models, b) the success of decentralised coordination in nature and markets, and c) the failures of political prediction and decision-making.

*

The success of science and mathematical models

Our brains evolved to solve social and practical problems, not to solve mathematical problems. This is why translating mathematical and logical problems into social problems makes them easier for people to solve (cf. Nielsen.) Nevertheless, a byproduct of our evolution was the ability to develop maths and science. Maths gives us an abstract structure of certain knowledge that we can use to build models of the world. ‘[S]ciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected … correctly to describe phenomena from a reasonably wide area’ (von Neumann).

Because the universe operates according to principles that can be approximated by these models, we can understand it approximately. ‘Why’ is a mystery. Why should ‘imaginary numbers’ based on the square root of minus 1, conceived five hundred years ago and living for hundreds of years without practical application, suddenly turn out to be necessary in the 1920s to calculate how subatomic particles behave? How could it be that in a serendipitous meeting in the IAS cafeteria in 1972, Dyson and Montgomery should realise that an equation describing the distribution of prime numbers should also describe the energy level of particles? We can see that the universe displays a lot of symmetry but we do not know why there is some connection between the universe’s operating principles and our evolved brains’ abilities to do abstract mathematics. Einstein asked, ‘How is it possible that mathematics, a product of human thought that is independent of experience, fits so excellently the objects of physical reality?’ Wigner replied to Einstein in a famous paper, ‘The Unreasonable Effectiveness of Mathematics in the Natural Sciences’ (1960) but we do not know the answer. (See ‘Is mathematics invented or discovered?’, Tim Gowers, 2011.)

The accuracy of many of our models gets better and better. In some areas such as quantum physics, the equations have been checked so delicately that, as Feynman said, ‘If you were to measure the distance from Los Angeles to New York to this accuracy, it would be exact to the thickness of a human hair’. In other areas, we have to be satisfied with statistical models. For example, many natural phenomena, such as height and intelligence, can be modelled using ‘normal distributions’. Other phenomena, such as the network structure of cells, the web, or banks in an economy, can be modelled using ‘power laws’. [* See End] Why do statistical models work? Because ‘chance phenomena, considered collectively and on a grand scale, create a non-random regularity’ (Kolmogorov). [** See End]
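A minimal sketch of Kolmogorov’s point (my example): individually random numbers, aggregated, produce a stable regularity – the averages of many uniform random draws cluster tightly and normally around 0.5.

```python
import random

# 10,000 experiments; each takes the mean of 1,000 uniform random numbers.
means = [sum(random.random() for _ in range(1000)) / 1000 for _ in range(10000)]
print(min(means), max(means))   # typically within ~0.03 of 0.5: a non-random regularity
```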

Science has also built an architecture for its processes, involving meta-rules, that help correct errors and normal human failings. For example, after Newton the system of open publishing and peer review developed. This encouraged scientists to make their knowledge public, confident that they would get credit (instead of hiding things in code like Newton). Experiments must be replicated and scientists are expected to provide their data honestly so that others can test their claims, however famous, prestigious, or powerful they are. Feynman described the process in physics as involving, at its best, ‘a kind of utter honesty … [Y]ou should report everything that you think might make [your experiment or idea] invalid… [Y]ou must also put down all the facts which disagree with it, as well as those that agree with it… The easiest way to explain this idea is to contrast it … with advertising.’

The architecture of the scientific process is not perfect. Example 1. Evaluation of contributions is hard. The physicist who invented the arXiv was sacked soon afterwards because his university’s tick box evaluation system did not have a way to value his enormous contribution. Example 2. Supposedly ‘scientific’ advice to politicians can also be very overconfident. E.g. A meta-study of 63 studies of the costs of various energy technologies reveals: ‘The discrepancies between equally authoritative, peer-reviewed studies span many orders of magnitude, and the overlapping uncertainty ranges can support almost any ranking order of technologies, justifying almost any policy decision as science based’ (Stirling, Nature, 12/2010).

This architecture and its meta-rules are now going through profound changes, brilliantly described by the author of the seminal textbook on quantum computers, Michael Nielsen, in his book Reinventing Discovery – a book that has many lessons for the future of politics too. But overall the system clearly has great advantages.

The success of decentralised information processing in solving complex problems

Complex systems and emergent properties

Many of our most interesting problems can be considered as networks. Individual nodes (atoms, molecules, genes, cells, neurons, minds, organisms, organisations, computer agents) and links (biochemical signals, synapses, internet routers, trade routes) form physical, mental, and cultural networks (molecules, cells, organisms, immune systems, minds, organisations, internet, biosphere, ‘econosphere’, cultures) at different scales.

The most interesting networks involve interdependencies (feedback and feedforward) – such as chemical signals, a price collapse, neuronal firing, an infected person gets on a plane, or an assassination – and are nonlinear. Complex networks have emergent properties including self-organisation. For example, the relative strength of a knight in the centre of the chessboard is not specified in the rules but emerges from the nodes of the network (or ‘agents’) operating according to the rules.

Even in physics, ‘The behavior of large and complex aggregates of elementary particles … is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear’ (Anderson). This is more obvious in biological and social networks.

Ant colonies and immune systems: how decentralised information processing solves complex problems

Ant colonies and the immune system are good examples of complex nonlinear systems with ‘emergent properties’ and self-organisation.

The body cannot ‘know’ in advance all the threats it will face so the immune system cannot be perfectly ‘pre-designed’. How does it solve this problem?

There is a large diverse population of individual white blood cells (millions produced per day) that sense threats. If certain cells detect that a threat has passed a threshold, then they produce large numbers of daughter cells, with mutations, that are tested on captured ‘enemy’ cells. Unsuccessful daughter cells die while successful ones are despatched to fight. These daughter cells repeat the process so a rapid evolutionary process selects and reproduces the best defenders and continually improves performance. Other specialist cells roam around looking for invaders that have been tagged by antibodies. Some of the cells remain in the bloodstream, storing information about the attack, to guard against future attacks (immunity).

There is a constant evolutionary arms race against bacteria and other invaders. Bacteria take over cells’ machinery and communications. They reprogram cells to take them over or trigger self-destruction. They disable immune cells and ‘ride’ them back into lymph nodes (Trojan horse style) where they attack. They shape-change fast so that immune cells cannot recognise them. They reprogram immune cells to commit suicide. They reduce competition by tricking immune cells into destroying other bacteria that help the body fight infection (e.g. by causing diarrhoea to flush out competition).

NB. there is no ‘plan’ and no ‘central coordination’. The system experiments probabilistically, reinforces success, and discards failure. It is messy. Such a system cannot be based on trying to ‘eliminate failure’. It is based on accepting a certain amount of failure but keeping it within certain tolerances via learning.
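A minimal sketch of that select-mutate-amplify loop (a toy, not a model of real immunology): ‘fitness’ here is just how closely a candidate’s signature matches the invader’s, the best matchers are copied with small mutations, and match quality ratchets up with no central plan.

```python
import random

target = 0.73                                        # the invader's 'shape'
population = [random.random() for _ in range(50)]    # naive cells with random receptors

for generation in range(30):
    population.sort(key=lambda cell: abs(cell - target))
    best = population[:10]                           # keep the ten closest matchers
    population = [c + random.gauss(0, 0.05) for c in best for _ in range(5)]  # mutated copies

print(min(abs(cell - target) for cell in population))   # error shrinks towards zero
```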

Looking at an individual ant, it would be hard to know that an ant colony is capable of farming, slavery, and war.

‘The activity of an ant colony is totally defined by the activities and interactions of its constituent ants. Yet the colony exhibits a flexibility that goes far beyond the capabilities of its individual constituents. It is aware of and reacts to food, enemies, floods, and many other phenomena, over a large area; it reaches out over long distances to modify its surroundings in ways that benefit the colony; and it has a life-span orders of magnitude longer than that of its constituents… To understand the ant, we must understand how this persistent, adaptive organization emerges from the interactions of its numerous constituents.’ (Hofstadter)

Ant colonies face a similar problem to the immune system: they have to forage for food in an unknown environment with an effectively infinite number of possible ways to search for a solution. They send out agents looking for food; those that succeed return to the colony leaving a pheromone trail which is picked up by others and this trail strengthens. Decentralised decisions via interchange of chemical signals drive job-allocation (the division of labour) in the colony. Individual ants respond to the rate of what others are doing: if an ant finds a lot of foragers, it is more likely to start foraging.
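A minimal sketch of that feedback loop (toy numbers, not a model of real ants): ants choose between a short and a long route in proportion to pheromone strength, shorter trips deposit relatively more pheromone, and the colony converges on the better route with no central planner.

```python
import random

length = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}

for _ in range(200):                                 # 200 ant trips
    total = sum(pheromone.values())
    route = "short" if random.random() < pheromone["short"] / total else "long"
    pheromone[route] += 1.0 / length[route]          # shorter trip -> stronger reinforcement
    for r in pheromone:
        pheromone[r] *= 0.99                         # evaporation keeps the colony exploring

print(pheromone)                                     # the 'short' trail ends up far stronger
```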

Similarities between the immune system and ant colonies in solving complex problems

Individual white blood cells cannot access the whole picture; they sample their environment via their receptors. Individual ants cannot access the whole picture; they sample their environment via their chemical processors. The molecular shape of immune cells and the chemical processing abilities of ants are affected by random mutations; the way individual cells or ants respond has a random element. The individual elements (cells / ants) are programmed to respond probabilistically to new information based on the strength of the signals they receive.

Environmental exploration by many individual agents coordinated via feedback signals allows a system to probe many different possibilities, reinforce success, ‘learn’ from failure (e.g. withdraw resources from unproductive strategies), and keep innovating (e.g. novel cells are produced even amid a battle and ants continue to look for better options even after striking gold). ‘Redundancy’ allows local failures without breaking the system. There is a balance between exploring the immediate environment for information and exploiting that information to adapt.

In such complex networks with emergent properties, unintended consequences dominate. Effects cascade: ‘they come not single spies but in battalions’. Systems defined as ‘tightly coupled’ – that is, with strong interdependencies so that the behaviour of one element is closely connected to another – are not resilient in the face of nonlinear events (picture a gust of wind knocking over one domino in a chain).

Network topology

We are learning how network topology affects these dynamics. Many networks (including cells, brains, the internet, the economy) have a topology such that nodes are distributed according to a power law (not a bell curve), which means that the network looks like a set of  hubs and spokes with a few spokes connecting hubs. This network topology makes them resilient to random failure but vulnerable to the failure of critical hubs that can cause destructive cascades (such as financial crises) – an example of the problems that come with nonlinearity.

Similar topology and dynamics can be seen in networks operating at very different scales, ranging from cellular networks and the brain to the financial system, the economy in general, and the internet. Disease networks often show the same topology, with certain patients, such as those who get on a plane from West Africa to Europe with Ebola, playing the role of critical hubs connecting different parts of the network. Terrorist networks also show the same topology. All of these complex systems with emergent properties share this network topology and are vulnerable to the failure of critical hubs.
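A minimal sketch of the hub effect (this assumes the networkx library; exact numbers vary run to run): a preferential-attachment ‘scale-free’ network barely notices the removal of random nodes but fragments badly when its top hubs are removed.

```python
import random
import networkx as nx

g = nx.barabasi_albert_graph(1000, 1)                # scale-free network: a few big hubs
hubs = sorted(g.nodes, key=g.degree, reverse=True)[:20]
randoms = random.sample(list(g.nodes), 20)

def largest_component_after_removing(nodes):
    h = g.copy()
    h.remove_nodes_from(nodes)
    return max(len(c) for c in nx.connected_components(h))

print(largest_component_after_removing(randoms))     # random failure: largest component stays large
print(largest_component_after_removing(hubs))        # hub failure: largest component collapses
```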

Many networks evolve modularity. A modular system is one in which specific modules perform specific tasks, with links between them allowing broader coordination. This provides greater effectiveness and resilience to shocks. For example, Chongqing in China saw the evolution of a new ecosystem for designing and building motorbikes in which ‘assembler’ companies assemble modular parts built by competing companies, instead of relying on high quality vertically integrated companies like Yamaha. This rapidly decimated Japanese competition. Connections between network topology, power laws and fractals can be seen in work by physicist Geoffrey West both on biology and cities, for it is clear that just as statistical tools like the Central Limit Theorem demonstrate similar structure in completely different systems and scales, so similar processes occur in biology and social systems. [See Endnote.]

Markets: how decentralised information processing solves complex problems

[Coming imminently]

A summary of the progress brought by science and markets

The combination of reasoning, reliable accumulated knowledge, and a reliable institutional architecture brings steady progress, and occasional huge breakthroughs and wrong turns, in maths and science. The combination of the power of decentralised information processing to find solutions to complex problems and an institutional architecture brings steady progress, and occasional huge breakthroughs and wrong turns, in various fields that operate via markets.

Fundamental to the institutional architecture of markets and science are mechanisms that enable adaptation to errors. The self-delusion and groupthink that are normal for humans – being side-effects of our nature as evolved beings – are partly countered by tried and tested mechanisms. These mechanisms are not based on an assumption that we can ‘eliminate failure’ (as so many in politics absurdly claim they will do). Instead, the assumption is that failure is a persistent phenomenon in a complex nonlinear world and must be learned from and adapted to as quickly as possible. Entrepreneurs and scientists can be vain, go mad, or be prone to psychopathy – like public servants – but we usually catch it quicker and it causes less trouble. Catching errors, we inch forward ‘standing on the shoulders of giants’, as Newton put it.

Science has enabled humans to make transitions from numerology to mathematics, from astrology to astronomy, from alchemy to chemistry, from witchcraft to neuroscience, from tallies to quantum computation. Markets have been central to a partial transition in a growing fraction of the world from a) small, relatively simple, hierarchical, primitive, zero-sum hunter-gatherer tribes based on superstition (almost total ignorance of complex systems), shared aims, personal exchange and widespread violence, to b) large, relatively complex, decentralised, technological, nonzero-sum market-based cultures based on science (increasingly accurate predictions and control in some fields), diverse aims, impersonal exchange, trade, private property, and (roughly) equal protection under the law.

*

The failures of politics: wrong predictions, no reliable mechanisms for fixing obvious errors

 ‘No official estimates even mentioned that the collapse of Communism was a distinct possibility until the coup of 1989.’ National Security Agency, ‘Dealing With the Future’, declassified report. 

However, the vast progress made in so many fields is clearly not matched in standards of government. In particular, it is very rare for individuals or institutions to make reliable predictions.

The failure of prediction in politics

Those in leading positions in politics and public service have to make all sorts of predictions. Faced with such complexity, politicians and others have operated mostly on heuristics (‘political philosophy’), guesswork, willpower and tactical adaptation. My own heuristics for working in politics are: focus, ‘know yourself’ (don’t fool yourself), think operationally, work extremely hard, don’t stick to the rules, and ask yourself ‘to be or to do?’.

Partly because politics is a competitive enterprise in which explicit and implicit predictions elicit countermeasures, predictions are particularly hard. This JASON report (PDF) on the prediction of rare events explains some of the technical arguments about predicting complex nonlinear systems such as disasters. Unsurprisingly, so-called ‘political experts’ are not only bad at predictions but are far worse than they realise. There are many prominent examples. Before the 2000 election, the American Political Science Association’s members unanimously predicted a Gore victory. Beyond such examples, we have reliable general data on this problem thanks to a remarkable study by Philip Tetlock. He charted political predictions made by supposed ‘experts’ (e.g. will the Soviet Union collapse, will the euro collapse) for fifteen years from 1987 and published them in 2005 (‘Expert Political Judgement’). He found that overall, ‘expert’ predictions were about as accurate as monkeys throwing darts at a board. Experts were very overconfident: ~15 percent of events that experts claimed had no chance of occurring did happen, and ~25 percent of those that they said were sure to happen did not. Further, the more media interviews an expert did, the less likely they were to be right. Specific expertise in a particular field was generally of no value; experts on Canada were about as accurate on the Soviet Union as experts on the Soviet Union were.

However, some did better than others. He identified two broad categories of predictor. The first he called ‘hedgehogs’ – fans of Big Ideas like Marxism, less likely to admit errors. The second he called ‘foxes’ – not fans of Big Ideas, more likely to admit errors and change predictions because of new evidence. (‘The fox knows many little things, but the hedgehog knows one big thing,’ Archilochus.) Foxes tended to make better predictions. They are more self-critical, adaptable, cautious, empirical, and multidisciplinary. Hedgehogs get worse as they acquire more credentials while foxes get better with experience. The former distort facts to suit their theories; the latter adjust theories to account for new facts.

Tetlock believes that the media values characteristics (such as Big Ideas, aggressive confidence, tenacity in combat and so on) that are the opposite of those prized in science (updating in response to new data, admitting errors, tenacity in pursuing the truth and so on). This means that ‘hedgehog’ qualities are more in demand than ‘fox’ qualities, so the political/media market encourages qualities that make duff predictions more likely. ‘There are some academics who are quite content to be relatively anonymous. But there are other people who aspire to be public intellectuals, to be pretty bold and to attach non-negligible probabilities to fairly dramatic change. That’s much more likely to bring you attention’ (Tetlock).

Tetlock’s book ought to be much studied in Westminster, particularly given 1) he has found reliable ways of identifying a small number of people who are very good forecasters and 2) IARPA (the intelligence community’s DARPA twin) is working with Tetlock to develop training programmes to improve forecasting skills. [See Section 6.] Tetlock says, ‘We now have a significant amount of evidence on this, and the evidence is that people can learn to become better. It’s a slow process. It requires a lot of hard work, but some of our forecasters have really risen to the challenge in a remarkable way and are generating forecasts that are far more accurate than I would have ever supposed possible from past research in this area.’ (This is part of IARPA’s ACE programme to develop aggregated forecast systems and crowdsourced prediction software. IARPA also has the SHARP programme to find ways to improve problem-solving skills for high-performing adults.)

His main advice? ‘If I had to bet on the best long-term predictor of good judgement among the observers in this book, it would be their commitment – their soul-searching Socratic commitment – to thinking about how they think’ (Tetlock). His new training programmes help people develop this ‘Socratic commitment’ and correct their mistakes in quite reliable ways.

NB. The extremely low quality of political forecasting is what allowed an outsider like Nate Silver to transform the field simply by applying some well-known basic maths.

The failure of prediction in economics

‘… the evidence from more than fifty years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than like playing poker. Typically at least two out of every three mutual funds underperform the overall market in any given year. More important, the year-to-year correlation between the outcomes of mutual funds is very small, barely higher than zero. The successful funds in any given year are mostly lucky; they have a good roll of the dice.’ Daniel Kahneman, winner of the economics ‘Nobel’ (not the same as the Nobel for physical sciences).

‘I importune students to read narrowly within economics, but widely in science… The economic literature is not the best place to find new inspiration beyond these traditional technical methods of modelling.’ Vernon Smith, winner of the economics ‘Nobel’.

I will give a few examples of problems with economic forecasting.

In the 1961 edition of his famous standard textbook used by millions of students, one of the 20th Century’s most respected economists, Paul Samuelson, predicted that respective growth rates in America and the Soviet Union meant the latter would overtake the USA between 1984 and 1997. By 1980 he had pushed the date back to 2002–2012. Even in 1989, he wrote, ‘The Soviet economy is proof that, contrary to what many skeptics had earlier believed, a socialist command economy can function and even thrive.’

Chart: Samuelson’s prediction for the Soviet economy 


The recent financial crisis also demonstrated many failed predictions. Various people, including physicists Steve Hsu and Eric Weinstein, published clear explanations of the extreme dangers in the financial markets and parallels with previous crashes such as Japan’s. However, they were almost totally ignored by politicians, officials, central banks and so on. Many of those involved were delusional. Perhaps most famously, Joe Cassano of AIG Financial said in a conference call (8/2007): ‘It’s hard for us – without being flippant – to even see a scenario within any kind of realm of reason that would see us losing one dollar in any of those transactions… We see no issues at all emerging.’

Nate Silver recently summarised some of the arguments over the crash and its aftermath. In December 2007, economists in the Wall Street Journal forecasting panel predicted only a 38 percent chance of recession in 2008. The Survey of Professional Forecasters, run by the Federal Reserve Bank, asks economists for predictions that include explicit uncertainty estimates. In November 2007, the Survey showed a net prediction that the economy would grow by 2.4% in 2008, with a less than 3% chance of any recession and a 1-in-500 chance of it shrinking by more than 2%.

Chart: the 90% ‘prediction intervals’ for the Survey of Professional Forecasters net forecast of GDP growth 1993-2010


If the economists’ predictions were well calibrated, actual growth should have fallen outside the 90% prediction interval only about one year in ten, i.e. roughly twice in 18 years. Instead, actual growth fell outside the interval six times out of 18, often by a long way. (The record back to 1968 is worse.) The data would later reveal that the economy was already in recession in the last quarter of 2007 and, of course, the ‘1-in-500’ event of the economy shrinking by more than 2% is exactly what happened.**
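How damning is six misses out of 18 for a supposed 90% interval? If the intervals were honest, the number of misses over 18 independent years would be roughly Binomial(18, 0.1). A quick sketch (assuming SciPy is available; years are not really independent, so treat this as illustrative only):

```python
# How surprising are 6 misses in 18 years if the 90% intervals were honest?
from scipy.stats import binom

n_years, miss_rate = 18, 0.10                     # an honest 90% interval misses ~10% of the time
expected_misses = n_years * miss_rate
p_six_or_more = binom.sf(5, n_years, miss_rate)   # P(misses >= 6)

print(f"expected misses: {expected_misses:.1f}")              # 1.8
print(f"P(6 or more misses by chance): {p_six_or_more:.4f}")  # ~0.006
# Well under 1%: the intervals were far too narrow, i.e. the forecasters
# were much more confident than their track record justified.
```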

Although the total volume of home sales in 2007 was only ~$2 trillion, Wall Street’s total volume of trades in mortgage-backed securities was ~$80 trillion because of the creation of ‘derivative’ financial instruments. Most people did not understand 1) how likely a house price fall was, 2) how risky mortgage-backed securities were, 3) how widespread leverage could turn a US housing crash into a major financial crash, and 4) how deep the effects of a major financial crash were likely to be. ‘The actual default rates for CDOs were more than two hundred times higher than S&P had predicted’ (Silver). In the name of ‘transparency’, S&P provided issuers with copies of its ratings software, allowing CDO issuers to experiment with how much junk they could add without losing a AAA rating. S&P even modelled a potential housing crash of 20% in 2005 and concluded its highly rated securities could ‘weather a housing downturn without suffering a credit rating downgrade.’

Unsurprisingly, Government unemployment forecasts were also wrong. Historically, the uncertainty in an unemployment rate forecast made during a recession had been about plus or minus 2 percent but Obama’s team, and economists in general, ignored this record and made much more specific predictions. In January 2009, Obama’s team argued for a large stimulus and said that, without it, unemployment, which had been 7.3% in December 2008, would peak at ~9% in early 2010, but with the stimulus it would never rise above 8% and would fall from summer 2009. However, the unemployment numbers after the stimulus was passed proved to be even worse than the ‘no stimulus’ prediction. Similarly, the UK Treasury’s forecasts about growth, debt, and unemployment from 2007 were horribly wrong but that has not stopped it making the same sort of forecasts.

Paul Krugman concluded from this episode: the stimulus was too small. Others concluded it had been a waste of money. Academic studies vary widely in predicting the ‘return’ from each $1 of stimulus. Since economists cannot even accurately predict a recession when the economy is already in recession, it seems unlikely that there will be academic consensus soon on such issues. Economics often seems like a sort of voodoo for those in power – spurious precision and delusions that there are sound mathematical foundations for the subject without a proper understanding of the conditions under which mathematics can help (cf. Von Neumann on maths and prediction in economics HERE).

Fields which do better at prediction

Daniel Kahneman, who has published some of the most important research about why humans make bad predictions, summarises the fundamental issues about when you can trust expert predictions:

‘To know whether you can trust a particular intuitive judgment, there are two questions you should ask: Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence? The answer is yes for diagnosticians, no for stock pickers. Do the professionals have an adequate opportunity to learn the cues and the regularities? The answer here depends on the professionals’ experience and on the quality and speed with which they discover their mistakes. Anesthesiologists have a better chance to develop intuitions than radiologists do. Many of the professionals we encounter easily pass both tests, and their off-the-cuff judgments deserve to be taken seriously. In general, however, you should not take assertive and confident people at their own evaluation unless you have independent reason to believe that they know what they are talking about.’ (Emphasis added.)

It is obvious that politics fulfils neither of his two criteria: unlike stock picking, it does not even have hard data and clear criteria for success.

I will explore some of the fields that do well at prediction in a future blog.

*

The consequences of the failure of politicians and other senior decision-makers and their institutions

‘When superior intellect and a psychopathic temperament coalesce …, we have the best possible conditions for the kind of effective genius that gets into the biographical dictionaries’ (William James). 

‘We’re lucky [the Unabomber] was a mathematician, not a molecular biologist’ (Bill Joy, Silicon Valley legend, author of ‘Why the future doesn’t need us’).

While our ancestor chiefs understood bows, horses, and agriculture, our contemporary chiefs (and those in the media responsible for scrutiny of decisions) generally do not understand their equivalents, and are often less experienced in managing complex organisations than their predecessors.

The consequences are increasingly dangerous as markets, science and technology disrupt all existing institutions and traditions, and enhance the dangerous potential of our evolved nature to inflict huge physical destruction and to manipulate the feelings and ideas of many people (including, sometimes particularly, the best educated) through ‘information operations’. Our fragile civilisation is vulnerable to large shocks and a continuation of traditional human politics as it was during 6 million years of hominid evolution – an attempt to secure in-group cohesion, prosperity and strength in order to dominate or destroy nearby out-groups in competition for scarce resources – could kill billions. We need big changes to schools, universities, and political and other institutions for their own sake and to help us limit harm done by those who pursue dreams of military glory, ‘that attractive rainbow that rises in showers of blood’ (Lincoln).

The global population of people with an IQ four standard deviations above the average (i.e. >160) is ~250,000. About 1% of the population are psychopaths, so there are perhaps ~2,000–3,000 psychopaths with an IQ in the range of a Nobel/Fields winner. The population of psychopaths at +3SD (IQ >145; the average science PhD is ~130) is roughly 30 times bigger. A subset will also be practically competent. Some of them may think, ‘Flectere si nequeo superos, / Acheronta movebo’ (‘If Heav’n thou can’st not bend, Hell thou shalt move’, the Aeneid). Board et al (2005) showed that high-level business executives are more likely than inmates of Broadmoor to have one of three personality disorders (PDs): histrionic PD, narcissistic PD, and obsessive-compulsive PD. Mullins-Sweatt et al (2010) showed that successful psychopaths are more conscientious than unsuccessful ones.
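The arithmetic behind these estimates is just normal-distribution tail areas multiplied by world population and the assumed 1% psychopathy rate. A rough sketch (assuming SciPy, IQ mean 100 and SD 15 so that +4 SD ≈ 160 and +3 SD ≈ 145, and a world population of ~7 billion; the figures in the paragraph differ slightly because of rounding):

```python
# Back-of-envelope for the tail populations in the paragraph above.
from scipy.stats import norm

world_pop = 7e9
psychopathy_rate = 0.01           # ~1% of the population (assumption stated above)

p_4sd = norm.sf(4)                # P(IQ > 160), i.e. more than +4 SD
p_3sd = norm.sf(3)                # P(IQ > 145), i.e. more than +3 SD

print(f"+4 SD population: ~{world_pop * p_4sd:,.0f}")                      # ~220,000
print(f"+4 SD psychopaths: ~{world_pop * p_4sd * psychopathy_rate:,.0f}")  # ~2,200
print(f"+3 SD psychopaths: ~{world_pop * p_3sd * psychopathy_rate:,.0f}")  # ~94,000
# On these assumptions the +3 SD group is ~40x the +4 SD group; the text's
# '30 times' reflects rounder numbers, but the order of magnitude is the point.
```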

A brilliant essay (here) by one of the 20th Century’s best mathematicians, John von Neumann, describes these issues connecting science, technology, and how institutions make decisions.

*

Some conclusions

When we consider why institutions are failing and how to improve them, we should consider the general issues discussed above. How to adapt quickly to new information? Does the institution’s structure incentivise effective adaptation or does it incentivise ‘fooling oneself’ and others? Is it possible to enable distributed information processing to find a ‘good enough’ solution in a vast search space? If your problem is similar to that of the immune system or ant colony, why are you trying to solve it with a centralised bureaucracy?

Further, some other obvious conclusions suggest themselves.

We could change our society profoundly by dropping the assumption that less than a tenth of the population is suitable to be taught basic concepts in maths and physics that have very wide application to our culture, such as normal distributions and conditional probability. This requires improving basic maths teaching for ages 5-16 and it also requires new courses in schools.
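Conditional probability at the level meant here is not exotic. The classic base-rate calculation below (a hypothetical screening test with invented numbers, purely for illustration) is the kind of thing almost anyone can be taught, yet people asked it cold, including well-educated decision-makers, usually get it badly wrong:

```python
# Classic base-rate example: a test that is '99% accurate' for a rare condition.
# (Hypothetical numbers, purely for illustration.)
base_rate = 0.001        # 1 in 1,000 people actually have the condition
sensitivity = 0.99       # P(test positive | condition)
false_positive = 0.01    # P(test positive | no condition)

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
p_condition_given_positive = sensitivity * base_rate / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.1%}")
# ~9%, not 99%. Intuition ignores the base rate; Bayes does not.
```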

One of the things that we did in the DfE to do this was work with Fields Medallist Tim Gowers on a sort of ‘Maths for Presidents’ course. Professor Gowers wrote a fascinating blog on this course which you can read HERE. The DfE funded MEI to develop the blog into a real course. This has happened and the course is now being developed in schools. Physics for Future Presidents already exists and is often voted the most popular course at UC Berkeley (Cf. HERE). School-age pupils, arts graduates, MPs, and many Whitehall decision-makers would greatly benefit from these two courses.

We also need new inter-disciplinary courses in universities. For example, Oxford could atone for PPE by offering Ancient and Modern History, Physics for Future Presidents, and How to Run a Start Up. Such courses should connect to the work of Tetlock on The Good Judgement Project, as described above (I will return to this subject).

Other countries have innovated successfully in elite education. For example, after the shock of the Yom Kippur War, Israel established the ‘Talpiot’ programme which  ‘aims to provide the IDF and the defense establishment with exceptional practitioners of research and development who have a combined understanding in the fields of security, the military, science, and technology. Its participants are taught to be mission-oriented problem-solvers. Each year, 50 qualified individuals are selected to participate in the program out of a pool of over 7,000 candidates. Criteria for acceptance include excellence in physical science and mathematics as well as an outstanding demonstration of leadership and character. The program’s training lasts three years, which count towards the soldiers’ three mandatory years of service. The educational period combines rigorous academic study in physics, computer science, and mathematics alongside intensive military training… During the breaks in the academic calendar, cadets undergo advanced military training… In addition to the three years of training, Talpiot cadets are required to serve an additional six years as a professional soldier. Throughout this period, they are placed in assorted elite technological units throughout the defense establishment and serve in central roles in the fields of research and development’ (IDF, 2012). The programme has also helped the Israeli hi-tech economy.****

If politicians had some basic training in mathematical reasoning, they could make better decisions amid complexity. If politicians had more exposure to the skills of a Bill Gates or Peter Thiel, they would be much better able to get things done.

I will explore the issue of training for politicians in a future blog.

Please leave corrections and comments below.


* It is very important to realise when the system one is examining is well approximated by a normal distribution and when by a power law. For example… When David Viniar (Goldman Sachs CFO) said of the 2008 financial crisis, ‘We were seeing things that were 25-standard-deviation events, several days in a row,’ he was discussing financial prices as if they can be accurately modelled by a normal distribution, and implying that events that should happen once every 10^135 years (the Universe is only ~1.4×10^10 years old) were occurring ‘several days in a row’. He was either ignorant of basic statistics (unlikely) or taking advantage of the statistical ignorance of his audience. Actually, we have known for a long time that financial prices are not well modelled using normal distributions because they greatly underestimate the likelihood of bubbles and crashes. If politicians don’t know what ‘standard deviation’ means, it is obviously impossible for them to contribute much to detailed ideas on how to improve bank regulation. It is not hard to understand standard deviation and there is no excuse for this situation to continue for another generation.
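To see where the 10^135 figure comes from: under a normal distribution, the chance of a single daily move 25 standard deviations beyond the mean is astronomically small. A sketch (assuming SciPy and ~250 trading days a year):

```python
# Probability and expected waiting time for a 25-standard-deviation daily move,
# if daily price changes really were normally distributed.
from scipy.stats import norm

p = norm.sf(25)                      # P(move > +25 SD) on any given day, ~3e-138
trading_days_per_year = 250
expected_wait_years = 1 / (p * trading_days_per_year)

print(f"P(25-sigma day) = {p:.2e}")
print(f"expected wait: ~{expected_wait_years:.1e} years")
# ~1e135 years, against a universe ~1.4e10 years old. Either the normality
# assumption is wrong, or Goldman was spectacularly unlucky. The model is wrong.
```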

** However, there is also a danger in the use of statistical models based on ‘big data’ analysis – ‘overfitting’ models and wrongly inferring a ‘signal’ from what is actually ‘noise’. We usually have a) a noisy data set and b) an inadequate theoretical understanding of the system, so we do not know how accurately the data represents the underlying structure (if there is one), yet we have to infer a structure anyway. It is easy in these circumstances to ‘overfit’ a model – to make it twist and turn to fit more of the data than we should. An overfit model can seem to explain more of the variance in the data, but it does so by fitting the noise rather than the signal (Silver, op. cit.).
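A minimal sketch of the trap, using NumPy and synthetic data: a high-degree polynomial ‘explains’ the noisy training sample better than the true straight line does, and then typically does worse on fresh data from the same process.

```python
# Overfitting in miniature: a degree-9 polynomial vs the true straight line.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(-1, 1, 12)
y_train = 2 * x_train + rng.normal(0, 0.3, x_train.size)   # true signal (slope 2) + noise

x_test = np.linspace(-1, 1, 200)                            # fresh data from the same process
y_test = 2 * x_test + rng.normal(0, 0.3, x_test.size)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.3f}, test error {test_err:.3f}")

# Training error always falls as parameters are added, so the degree-9 fit hugs
# the 12 training points; but it typically generalises worse to the 200 new
# points, because it has fitted the noise rather than the signal.
```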

This error is seen repeatedly in forecasting, and can afflict even famous scientists. For example, Freeman Dyson tells a short tale about how, in 1953, he trekked to Chicago to show Fermi the results of a new physics model for the strong nuclear force. Fermi dismissed his idea immediately as having neither ‘a clear physical picture of the process that you are calculating’ nor ‘a precise and self-consistent mathematical formalism’. When Dyson pointed to the success of his model, Fermi quoted von Neumann: ‘With four parameters I can fit an elephant, and with five I can make him wiggle his trunk’, thus saving Dyson from wasting years on a wrong theory (‘A meeting with Enrico Fermi’, Freeman Dyson). Imagine how often people who think they have a useful model in areas not nearly as well understood as nuclear physics lack a Fermi to examine it carefully.

There have been eleven recessions since 1945, but people track millions of statistics. Inevitably, some of these statistics will ‘overfit’ historical recessions and then be used to ‘predict’ future ones. A famous example is the Superbowl indicator: for 28 years out of 31, the winner of the Superbowl correctly ‘predicted’ whether the stock market rose or fell that year. A standard statistical test ‘would have implied that there was only about a 1-in-4,700,000 possibility that the relationship had emerged from chance alone.’ But just as someone will win the lottery, some arbitrary statistics will correlate with the thing you are trying to predict purely by chance (Silver).
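The ‘millions of statistics’ point is easy to demonstrate by simulation: generate a large number of purely random binary ‘indicators’ and count how many happen to match a 31-year up/down record at least 28 times, as the Superbowl indicator did. (A sketch; the exact 1-in-4,700,000 figure depends on the particular test Silver describes, which differs from the simple coin-flip model below.)

```python
# How often does pure noise 'predict' 31 years of market direction
# at least 28 times out of 31? Simulate a million random indicators.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_indicators = 31, 1_000_000

market = rng.integers(0, 2, n_years, dtype=np.int8)                      # up/down each year
indicators = rng.integers(0, 2, (n_indicators, n_years), dtype=np.int8)  # random 'statistics'

hits = (indicators == market).sum(axis=1)
lucky = int((hits >= 28).sum())

print(f"{lucky:,} of {n_indicators:,} random indicators matched at least 28 of 31 years")
# Expected number: 1,000,000 * P(Binomial(31, 0.5) >= 28), roughly 2-3. Track
# enough statistics and apparently 'significant' patterns appear by chance alone.
```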

*** Many of these forecasts failed because the events were ‘out of sample’. What does this mean? Imagine you have taken thousands of car journeys and never had a crash, and you want to predict the safety of your next journey. However, you have never before driven drunk, and this time you are drunk: your prediction is therefore out of sample. Predictions about US housing were based on past data, but the historical record contained no example of such a huge, leverage-fuelled price rise. Forecasters who looked at Japan’s experience in the 1980s better realised the danger. (Silver)

**** The old Technical Faculty of the KGB Higher School (rebaptised after 1991) ran similar courses; one of its alumni is Yevgeny Kaspersky, whose company first publicly warned of the cyberweapons Stuxnet and Flame (and who still works closely with his old colleagues). It would be interesting to collect information on elite intelligence and special forces training programmes. E.g. post-9/11, US special forces (acknowledged and covert) have changed greatly, including taking on intelligence roles that were previously others’ responsibility or regarded as illegal for DOD employees. How does what is regarded as ‘core training’ for such teams vary, how is it changing, and why are some better than others at decisions under pressure and surviving disaster?