Editor’s Notes
Most of us are fascinated with forecasting – even though, as a rule, we humans tend to over-estimate how good we are at getting it right.
There certainly seems to be no shortage of confidence when it comes to assessing our general abilities. Why else would 94% of Swedes believe their driving skills put them in the top half of Swedish drivers, whilst 84% of French men feel their love-making prowess puts them in the top half of French lovers? (Source: N. Taleb.) A far cry, it seems, from “the meek shall inherit the earth”.
Such over-confidence can also lead to some people getting carried away with their powers of prediction.
I propose to set the ball rolling by calling to account a few unwitting participants who didn’t quite get it right recently, and I look forward to your help in keeping an eye on other transgressors in the future.
Intellectual Background
Whilst this website adopts a strongly tongue-in-cheek approach to ensuring that experts remain accountable, it has its origins in rather more serious research. My own interest, after nearly forty years working within a financial services environment, developed with the dawning realisation that the procession of salespeople who visited my offices was, on the whole, relaying Head Office information that turned out to be wrong as often as it was right.
It will of course come as no surprise to the reader that financial services companies can put out information which is self-serving and not always accurate. What did surprise me, however, was how much research has been carried out into the dangers of over-confidence among experts, and into how susceptible most of us are to their perceived words of wisdom.
I suppose the most popular account of why our world is too complex for future events to be predicted accurately came in Nassim Nicholas Taleb’s excellent book, “The Black Swan”. On a personal note, this certainly represented the catalyst for further research and investigation.
For those of you who may also be interested in drilling down a bit further, I offer below a sample of work which you may consider useful as a basis for further investigation.
Dan Gardner
In his book, “Future Babble”, Gardner asks: if expert forecasts are such a disappointment, why do we seem to be so addicted to them? Although we know forecasters are often humbled, it doesn’t stop our strange fascination with forecasting: we love to be told, confidently, what will happen next.
Philip Tetlock
(See “Expert Political Judgment”.) Tetlock studied the predictions of political and economic experts; economists represented about a quarter of his sample. His study revealed that the experts’ error rates were clearly many times what they themselves had estimated. Well-published professors had no advantage over journalists. The only regularity Tetlock found was the negative effect of reputation on prediction: those who had a big reputation were worse predictors than those who had none.
Tetlock’s focus was not so much to establish the real competence (or otherwise) of experts as to investigate why the experts did not realise they were not so good at their own business – in other words, how they spun their stories. There seemed to be a logic to such incompetence, mostly in the form of belief defence, or the protection of their self-esteem.
Tim Harford
In his guise as “The Undercover Economist”, Tim Harford wrote an excellent piece in the FT Magazine (18/06/2011) in which he also asked why we don’t follow up on forecasts. We assume that forecasts are the result of brilliance rather than luck, and we fail to call people out on “forecasts-gone-wrong”. He quotes Louis Menand of “The New Yorker”, who felt the best lesson of Tetlock’s book may be the one that Tetlock himself seems most reluctant to draw: “Think for yourself.”
Harford concludes with the statement: “The problem is not the experts. It’s that the world is simply too complicated for anyone to analyse with much success. If the road ahead is unknowable, the ability to change direction should not be underrated.”
Kathryn Schulz
In her excellent book, “Being Wrong”, Kathryn Schulz (a “wrongologist” – as she likes to call herself) reinforces the point that to err is human, and makes a compelling case for not just admitting, but embracing, human fallibility.
J. Denrell and C. Fang
In June 2010, Jerker Denrell (Stanford Graduate School of Business) and Christina Fang (New York University) published a paper entitled “Predicting the Next Big Thing: Success as a Signal of Poor Judgment”, the abstract of which reads:
“Successfully predicting that something will become a big hit seems impressive. Managers and entrepreneurs who have made successful predictions and invested money on this basis are promoted, become rich, and may end up on the cover of business magazines. In this paper, we show that an accurate prediction about such an extreme event, e.g. a big hit, may in fact be an indication of poor rather than good forecasting ability. We first demonstrate how this conclusion can be derived from a formal model of forecasting. We then illustrate that the basic result is consistent with data from two lab experiments as well as field data on professional forecasts from the Wall Street Journal Survey of Economic Forecasts”.
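The intuition behind that result can be illustrated with a small simulation. What follows is a toy sketch only, not the authors’ formal model or their experimental data: it assumes that every forecaster sees a noisy signal of the outcome, that the sensible weight to put on that signal is 0.5, and that poorer forecasters are simply those who over-weight it. Over-weighting produces extreme forecasts more often, so the forecasters who happen to “call” a big hit turn out, on average, to be the less accurate ones.

    import numpy as np

    rng = np.random.default_rng(0)
    n_forecasters, n_periods = 5_000, 100

    # Judgment quality in this toy world: w = 0.5 is the sensible weight to put on
    # a noisy signal; larger w means over-reacting to noise, i.e. poorer judgment.
    w = rng.uniform(0.5, 2.0, size=n_forecasters)

    y = rng.standard_normal(n_periods)                        # true outcomes
    signals = y[None, :] + rng.standard_normal((n_forecasters, n_periods))
    forecasts = w[:, None] * signals

    mse = ((forecasts - y[None, :]) ** 2).mean(axis=1)        # each forecaster's overall accuracy

    # A "big hit" is an outcome above the 98th percentile; a forecaster "called it"
    # if their forecast for that period was also above that threshold.
    q = np.quantile(y, 0.98)
    called_a_hit = ((forecasts >= q) & (y[None, :] >= q)).any(axis=1)

    print("average error of those who called a big hit:", mse[called_a_hit].mean())
    print("average error of everyone else:             ", mse[~called_a_hit].mean())

The exact figures depend on the assumptions above, but the direction of the result – the successful callers of the extreme event having the worse overall record – is precisely the point Denrell and Fang make.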
Nassim Nicholas Taleb
Never a man to hold back on his views, Taleb feels that we are demonstrably arrogant about what we think we know. He refers to “the scandal of prediction” and wonders why we don’t talk about our record in predicting. Why don’t we see how we (almost) always miss the big events? We overestimate what we know, and underestimate uncertainty, by compressing the range of possible uncertain states (i.e. by deliberately reducing the space of the unknown) and simplifying things to suit our rationale.
Once we produce a theory, we are not likely to change our minds. Indeed, once opinions have been formed on the basis of weak evidence, it becomes difficult to interpret subsequent information that contradicts those opinions, even if the new information is obviously more accurate.
Taleb singles out professions that deal with the future and base their studies on the non-repeatable past as having an “expert” problem (e.g. economists, financial advisers, etc.). The problem with experts, with echoes of Donald Rumsfeld, is that, “they do not know what they do not know”.
When it comes to forecasting, nobody wants to be “off the wall”, and so the result is invariably herding. Economic forecasters, it is noted, tend to fall closer to one another than to the eventual outcome.
Out of close to one million papers published in politics, finance and economics, there have only been a small number of checks on the predictive quality of such knowledge. We humans attribute our successes to our skills, and our failures to external events outside our control, namely to randomness – the “almost right” defence. The problem is that random “Black Swan” events tend to alter outcomes to such an extent that original forecasts become worthless. We do not realise the full extent of the difference between near and far futures, yet the degradation in such forecasting through time is evident.
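That degradation can also be illustrated with a toy sketch, here assuming that the quantity being forecast behaves like a simple random walk – an illustrative assumption, not Taleb’s own analysis. Even when today’s value is the best available forecast, its error grows steadily the further ahead it is pushed.

    import numpy as np

    rng = np.random.default_rng(1)

    # 1,000 simulated futures, each a simple random walk starting from today's value (0).
    n_paths, horizon = 1_000, 40
    paths = rng.standard_normal((n_paths, horizon)).cumsum(axis=1)

    # In a random walk the best forecast of every future point is today's value,
    # yet the average error of that forecast still grows with the horizon.
    error = np.abs(paths - 0.0).mean(axis=0)
    for h in (1, 5, 10, 20, 40):
        print(f"horizon {h:>2}: mean absolute error {error[h - 1]:.2f}")

In this toy world the error rises roughly with the square root of the horizon: the near future is merely hard to predict, while the far future is very much harder.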
Daniel Kahneman
In his “New York Times” article, “Don’t Blink! The Hazards of Confidence”, Daniel Kahneman relates an anecdote in which he formed part of a team evaluating and selecting potential army leaders from controlled group exercises. He recalls, “When our multiple observations of each candidate converged on a coherent picture, we were completely confident in our evaluations and believed what we saw pointed directly to the future. The soldier who took over when the group was in trouble and led the team over the wall was a leader at that moment.” They rarely experienced doubt or conflicting impressions and felt no need to question their forecasts, moderate them or equivocate. However, on receiving subsequent feedback, their ability to predict performance actually turned out to be negligible – “Our forecasts were better than blind guesses, but not by much.” Kahneman was so struck by this experience that he coined a phrase for it – “the illusion of validity”.
Decades later, Kahneman still asserts that, “exaggerated expectation of consistency is a common error”. He goes on to say that we are all prone to think that the world is more regular and predictable than it really is because our memory automatically and continuously maintains a story about what is going on, and because the rules of memory tend to make that story as coherent as possible and to suppress alternatives. “The bias towards coherence favours over-confidence”.
More to the point, Kahneman applies the same principle to his experiences with a Wall Street firm and with investing in general, stating, “Many individual investors lose consistently by trading, an achievement that a dart-throwing chimp could not match.” His former student, Terry Odean, a finance professor at the University of California, Berkeley, analysed the trading records of 10,000 brokerage accounts of individual investors over a seven-year period. He found that, on average, the shares investors sold did better than those they bought, by a very substantial margin: 3.3% per annum, in addition to the significant costs of executing the trades – “the large majority of individual investors would have done better by taking a nap rather than by acting on their ideas”.
Odean subsequently (with his colleague Brad Barber) published a paper entitled “Trading Is Hazardous to Your Wealth”. In another paper, “Boys Will Be Boys”, they reported that men act on their useless ideas significantly more often than women do, and that as a result women achieve better investment returns than men. Moreover, apart from selling their “winners” in a premature effort to lock in gains, individual investors also tend to buy the wrong stocks, predictably flocking to companies that are in the news.
Kahneman’s view is that, despite the best efforts of fund managers, the evidence from more than fifty years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than like playing poker. At least two out of every three mutual funds underperform the overall market in any given year. “In highly efficient markets, educated guesses are no more accurate than blind guesses.” This excellent article (which is worth reading in its entirety) concludes with the following compelling advice:
“You should not take assertive and confident people at their own evaluation unless you have independent reason to believe that they know what they are talking about. Unfortunately, this advice is difficult to follow; overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion.”