Why do pundits get it wrong?
Why are predictions so hard in sport? The capture of the Premier League title by Leicester City has been cited by many pundits as the greatest shock in sporting history. Is this grand claim little more than evidence of a cognitive bias? ‘Who could have known?’ Certainly not the same pundits who had almost unanimously predicted Leicester’s relegation. Exaggerating the scale of the surprise doubles as a kind of cognitive – and reputational – protection: ‘We were so wrong, yes, but we were right to be.’ I am not trying to argue that the result is not a statistical anomaly, or that anyone should have predicted Leicester’s great rise. It is certainly extraordinary, no more predictable than Chelsea’s parallel collapse. But clearly occurrences like this are possible and misjudgements in punditry – most especially faulty predictions – are extremely frequent. So why do they happen?
The now-famous baseball book “Moneyball” exposes the errors of trusting purely human judgement. It shows that baseball scouts – who, much like football pundits, predict sporting performance and outcomes – tend not only to misjudge certain attributes but to value the wrong attributes entirely. Football pundits may well make some of the same errors exposed in “Moneyball”. Using past performance as an indicator of future performance is a simple predictive rule that is easy to follow. But past performance is only a useful guide when there is a large amount of data and a stable environment in which players are not moving clubs, getting injured or ageing. The football league is relatively young in data terms – only slightly over 100 completed seasons – and the teams and players change markedly across that time frame.
If you combine a club’s recent performance with perceptions of their players’ quality based on price tags, you arrive at a prediction that feels roughly justified. But a team’s combined price tag may miss the point on player quality. The free market that drives player buying and selling does not purely consider playing ability in its valuations: arguably, a large proportion of a player’s monetary value is associated with their marketability and sponsorship attractiveness. It is easy to misconstrue this “value” as evidence that they must be better players. Criticism is often levelled at Arsene Wenger for failing to spend big money, yet one of his most disappointing performers for a large proportion of the season, Alexis Sanchez, cost 36 million pounds.
Leicester have clearly found a method of recruitment that exploits a loophole in the player valuation system. Purchasing players who perfectly fit a desired system – or, indeed, shaping the system to fit their strengths – may matter more than their individual quality. This demonstrates that in making complex judgements it is tempting to resort to simple – or merely available – pieces of evidence and rudimentary rules of thumb, or heuristics: “He must be good, he cost 30 million pounds.” If the market and scouting system worked to its full potential, why did Jamie Vardy spend time in the lower divisions? It can’t be that he has improved unimaginably in just a few years. Clearly, judging players and clubs – and predicting future performance – takes considerable analysis and consideration. That depth of thought may not be viable when decisions are forced by media deadlines or on-the-spot interviews. It is also tempting to judge without such depth: the mental energy saved is considerable, and the social value of having an opinion is a strong motivation. Against these incentives, the embarrassment of failed predictions – even high-profile ones – is nowhere near a counterbalance, so the predictions keep coming.
In physiotherapy, too, we need to consider which pieces of evidence are truly crucial and which are merely misleading. Following conventional wisdom isn’t always enough.