Remember when Qassem Soleimani was assassinated by the United States at the start of 2020 and we all thought that would be the cataclysmic moment of the year? Right...

As we're still in the throes of the pandemic, it's hubristic to forecast how the world will irrevocably change after all of this is 'over'. There's just too much complexity, too many unknowns, and even too many 'unknown unknowns', as Donald Rumsfeld, former US Secretary of Defense, once said of the limits of the intelligence community's ability to predict and prevent attacks.

There are known knowns. There are things we know we know. We also know there are known unknowns. That is to say, we know there are some things we do not know. But there are also unknown unknowns, the ones we don't know we don't know.

The mathematical terminology is 'epistemic' and 'aleatory' uncertainty - the former being uncertainty due to a lack of knowledge whilst the latter is uncertainty attributable to the inherent randomness in a process. To give you an example, we're still learning how COVID-19 affects younger people with 'long-COVID', but we had no way of knowing that a considerably more transmissible strain of the virus would emerge as it has over the last few weeks.

The influential philosopher and author Nassim Taleb has written a five-volume series of philosophical essays called Incerto - Latin for uncertainty. Perhaps the most popular of these is 'The Black Swan', titled after the Black Swan Theory, which rails against the human tendency to rationalise events that were a priori extremely unlikely. Like Europeans in the 1600s who had hitherto only seen white swans and therefore couldn't fathom reports of black swans brought back from Australia, no one alive today could recall an event as profoundly disruptive and transformative as COVID-19 shutting down the world.

Yet another lens with which to appreciate the uncertainty of the world is the 'butterfly effect', proposed by meteorology professor Edward Lorenz in his provocatively titled 1972 research paper 'Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?' (and popularised by the movie of the same name, starring Ashton Kutcher). Lorenz began to form the concept when he was running computer models of the weather and noticed how a tiny change in one variable (0.506127 to 0.506) had a drastic effect on simulated weather patterns up to two months out. Taking this logic to its extreme, it's plausible that a butterfly flapping its wings triggers a set of events that ultimately culminate in a tornado. Weather is an example of a nonlinear system, where the intricate chains of causation and dependencies are incalculable, far beyond human comprehension.
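Lorenz's observation is easy to reproduce. The sketch below substitutes the logistic map for his original twelve-variable weather model (an illustrative stand-in, not his actual code); only the two starting values are taken from his story:

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map in its chaotic regime (r = 3.9). Lorenz's model was a
# 12-variable weather simulation; only the two starting values
# (0.506127 vs its rounded form 0.506) are borrowed from his account.
def trajectory(x, r=3.9, steps=60):
    xs = [x]
    for _ in range(steps):
        x = r * x * (1 - x)  # one iteration of the logistic map
        xs.append(x)
    return xs

a = trajectory(0.506127)
b = trajectory(0.506)
gap = max(abs(p - q) for p, q in zip(a, b))
print(f"initial gap: {abs(a[0] - b[0]):.6f}, largest gap: {gap:.3f}")
```

A difference of 0.000127 at the start grows until the two runs bear no resemblance to each other - the nonlinearity amplifies the gap exponentially, which is exactly why long-range prediction of such systems fails.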

I'll close this point with some words from the late Carl Sagan, the famous astronomer and certainly one of my intellectual heroes:

Long ago, when an early galaxy began to pour light out into the surrounding darkness, no witness could have known that billions of years later, some remote clumps of rock and metal, ice and organic molecules would fall together to form a place that we call Earth. And surely nobody could have imagined that life would arise, and thinking beings evolve, who would one day capture a fraction of that light and would try to puzzle out what sent it on its way.

Superforecasting

Yuval Noah Harari's account of our disposition for storytelling (in Sapiens) is perhaps the most widely-read and masterful explanation; nondeterminism and randomness can cause existential dread, that much is clear (and beyond the scope of this post). However, as it happens, we do possess some forecasting abilities.

Ever heard of that study that found that most expert predictions are no more accurate than a chimpanzee randomly throwing darts? Sure you have. Chances are that you've been regaled by your university professors on the virtues of this study, particularly in the context of investing and the active vs. passive debate.

The author of that study is Philip Tetlock, a professor of psychology at the University of Pennsylvania. The study, conducted over a 20-year period from 1984 to 2004, canvassed the forecasts of thousands of experts - academics, pundits and the like. It's true that most expert predictions were no better than random guessing (or, put more amusingly, chimps throwing darts), but that glosses over the fact that there were some outliers who were startlingly accurate over shorter ranges.

As we've discussed, nonlinear systems are complex beyond measure. That being said, we can draw some solace from the fact that we do have some ability to forecast into the future. It is to this group of people that Philip Tetlock dedicated his time and attention in the aftermath of the study - superforecasters.

Tetlock set up the Good Judgment Project and invited thousands of volunteers to partake in forecasting exercises; all bases were covered to ensure that the best, most consistent forecasters could be identified: superforecasters. Tetlock documented his results in 'Superforecasting: The Art and Science of Prediction', which provides some valuable mental models and lessons that we can apply to all facets of life, not least venture capital investing.

First, let me present you with the conclusions, and why the lessons that follow are worth reading. In the most comprehensive study of forecasting to date, Tetlock found that:

One, foresight is real. Some people have it in spades. They aren't gurus or oracles with the power to peer decades into the future, but they do have a real measurable skill at judging how high stakes events are likely to unfold three months, six months, a year or a year and a half in advance. The other conclusion is what makes these superforecasters so good. It's not really who they are. It is what they do. Foresight isn't a mysterious gift bestowed at birth. It is the product of particular ways of thinking, of gathering information, of updating beliefs.

I've tried to distil some of the key lessons we can take from the superforecasters who showed superior foresight and how they can be applied to investing, but I hope that you can take these principles and find ways of applying them in any number of settings.
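For context on how that 'real, measurable skill' was assessed: Tetlock's Good Judgment Project scored forecasters with Brier scores - the mean squared error between a probability forecast and what actually happened. A quick sketch, with made-up forecasts:

```python
# Brier score: 0 is perfect, 0.25 is what always answering 50% earns,
# and confidently wrong forecasts score worse still.
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 1, 0]                               # what actually happened
sharp = brier_score([0.9, 0.8, 0.1], outcomes)     # confident and right
hedged = brier_score([0.5, 0.5, 0.5], outcomes)    # perpetual fence-sitter
print(f"sharp: {sharp:.3f}, hedged: {hedged:.3f}")  # sharp: 0.020, hedged: 0.250
```

The scoring rewards both calibration and decisiveness: a forecaster who always says 50% is never badly wrong, but never useful either.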

Growth Mindset vs Fixed Mindset

Carol Dweck has written the seminal book on the concept of a Growth Mindset, Mindset: The New Psychology of Success.

A “fixed mindset” assumes that our character, intelligence, and creative ability are static givens which we can’t change in any meaningful way, and success is the affirmation of that inherent intelligence, an assessment of how those givens measure up against an equally fixed standard; striving for success and avoiding failure at all costs become a way of maintaining the sense of being smart or skilled.

A “growth mindset,” on the other hand, thrives on challenge and sees failure not as evidence of unintelligence but as a heartening springboard for growth and for stretching our existing abilities.

This idea has been presented in some form or other by various authors, such as Matthew Syed in 'Black Box Thinking', the key insight being that you are more likely to succeed if you see failure as an opportunity to learn rather than as an indictment of your abilities.

This becomes an issue when your identity and social standing are firmly rooted in people's perception of your professional competence, as Syed points out is the case in the healthcare system (see Leon Festinger's famous study of cognitive dissonance for further reading).

Superforecasters are unequivocally of a Growth Mindset: they learn from mistakes and believe that you can grow to the extent that you are willing to work hard and learn. In investing, doing post-mortems of decisions (to invest, to pass, to follow on) can be instructive for both teams and individuals. Close examinations can improve investment decision-making processes, sharpen methods for soliciting input from all members of the team, surface biases that impede objective evaluation, and more.

Another way of thinking about this, even as a founder, is to be in 'perpetual beta' - your product will never have a final version, but only a new version that is intended to be used, analysed and improved without end.

Fermi-ing

The Italian-American physicist Enrico Fermi proposed a brainteaser in the 1940s that forced people to adopt a new way of thinking that is commonplace today:

'How many piano tuners are there in Chicago?'

At the time, most people would respond within seconds with a number that seemed plucked right out of thin air - it was, after all, an obscure fact. The lesson was to break down this question into component parts (Fermi-ing) that allow you to gather the right information to come to an informed answer, such as 'What information would I need to answer this question?' - for example, the number of pianos in Chicago, how often pianos are tuned each year, and so forth. Eventually, the sequence of questions will help you derive an informed guess, which is often surprisingly accurate.
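The classic decomposition can be written out as plain arithmetic. Every figure below is an assumed round number chosen for illustration, not a researched fact:

```python
# A Fermi estimate for 'How many piano tuners are there in Chicago?'
# All inputs are rough, assumed round figures.
population = 9_000_000           # people in the Chicago metro area
households = population / 4      # ~4 people per household
piano_share = 1 / 20             # ~1 in 20 households owns a piano
pianos = households * piano_share
tunings_per_year = pianos * 1    # each piano tuned ~once a year

tunings_per_tuner = 4 * 5 * 50   # 4 tunings/day, 5 days/wk, 50 wks/yr
tuners = tunings_per_year / tunings_per_tuner
print(round(tuners))
```

The point is not the exact answer but that each component is far easier to estimate than the whole, and errors in the components tend to partially cancel - which is why Fermi estimates land surprisingly close.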

Another way of looking at this is in the context of marginal gains. James Clear, author of Atomic Habits, has written about how Sir Dave Brailsford transformed Britain's cycling teams through small, incremental changes that together led the team to Olympic glory. Of more interest is how Nobel Prize-winning economist Esther Duflo applied the same thinking to economic development, using randomised controlled trials (RCTs) for questions where you cannot run a single grand experiment with control groups - such as whether foreign aid is improving the economies of African nations.

In cases like this, where it's not possible to run an RCT on the whole question, you replace the larger question with smaller, component questions that are testable, such as whether aid is improving educational outcomes (you could test this by running RCTs comparing those who receive textbooks through aid with those who don't).

In the context of investing, we're often trying to predict whether the team sat in front of us is the one that will win in a competitive market (or succeed in forming a new one) and eventually create a billion-dollar company. As with all nonlinear systems, there's too much uncertainty inherent in this sort of judgment. Instead of answering that, we aim to answer smaller questions such as:

  • Is this the best team to solve this problem?
  • Will this founder be able to hire the best talent?
  • How are people using this product already?

Ultimately, we'll never be able to predict regulation, the threat of incumbents, and other longer-term headwinds with any degree of confidence. So we go with the information we have, and the questions that tease it out.

Dragonfly lenses

Like us, dragonflies have two eyes, but theirs are constructed very differently. Each eye is an enormous, bulging sphere, the surface of which is covered with tiny lenses. Depending on the species, there may be as many as thirty thousand of these lenses on a single eye. Information from these thousands of unique perspectives flows into the dragonfly's brain, where it is synthesised into vision so superb that the dragonfly can see in almost every direction simultaneously, with the clarity and precision it needs to pick off flying insects at high speed.

Being able to look at a problem from different angles and perspectives is hardly a new idea. No, but practising active open-mindedness is - do you seek out opposing views in your diet of information? Do you treat your beliefs as hypotheses to be tested? Do you pay more attention to those who agree with you or to those who disagree with you?

The dialectical method of philosophical argument involves some sort of contradictory process between opposing sides, most commonly associated with the Socratic method (at last this newsletter pays homage to its namesake). The German philosopher Hegel practised a method which becomes possible when you take a dragonfly's approach to gathering information:

a thesis-antithesis-synthesis pattern, which, when applied to the logic, means that one concept is introduced as a “thesis” or positive concept, which then develops into a second concept that negates or is opposed to the first or is its “antithesis”, which in turn leads to a third concept, the “synthesis”, that unifies the first two.

One more tool to add to your repertoire is base rates, which describe the percentage of a population that demonstrates some characteristic. Daniel Kahneman, Nobel Prize-winning psychologist and author of Thinking, Fast and Slow, calls this the 'outside view', as opposed to the 'inside view', which is the specifics of the particular case. Refer to the example of Linda the bank teller here for more context.
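A toy calculation shows why neglecting the outside view misleads. Suppose (all numbers hypothetical) that 1 in 100 startups in a sector becomes a billion-dollar company, and that your diligence process flags 80% of eventual winners but also wrongly flags 10% of non-winners. Bayes' rule gives the chance that a flagged company is actually a winner:

```python
# Base-rate neglect, made concrete with Bayes' rule.
# All inputs below are illustrative assumptions, not real data.
base_rate = 0.01              # P(winner): the outside view
p_flag_given_winner = 0.80    # diligence flags most real winners
p_flag_given_loser = 0.10     # ...but also some non-winners

p_flag = (p_flag_given_winner * base_rate
          + p_flag_given_loser * (1 - base_rate))
p_winner_given_flag = p_flag_given_winner * base_rate / p_flag
print(f"{p_winner_given_flag:.1%}")  # prints 7.5%
```

Despite the impressive-sounding 80% hit rate, fewer than one in ten flagged companies is a winner, because the base rate is so low - the inside view (your diligence signal) only makes sense once it is anchored on the outside view.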

How do we apply these lessons to investing? 'Strong convictions, loosely held', as Marc Andreessen once said. As an investor and as an investment team, it's important to hold investment theses that you constantly subject to scrutiny. Speak to those who hold different views from you on a particular market or business model, update your beliefs regularly, and actively calibrate your information diet for diversity. Take base rates for certain behaviours as proxies when market sizing - it's very tempting as an investor to anoint yourself the consumer of a product and try to imagine whether you'd buy it, but you're hardly ever the target customer.

Commander's intent

Both Philip Tetlock and Shane Parrish (the latter in Volume 2 of The Great Mental Models) dedicate some time to learning lessons from the German army and a concept called Auftragstaktik.

Despite the evil nature of the regime that it served, it must be admitted that the German Army of World War II was, man for man, one of the most effective fighting forces ever seen. - James Corum, Historian

Helmuth von Moltke was a famous German military commander who won key victories in the lead-up to German unification in the 1870s. His philosophies profoundly influenced the German military that fought both World Wars: a keen appreciation for uncertainty, a critical-thinking education for soldiers, healthy debate and criticism of higher-ranking officers, and, most importantly, a decentralised structure.

Auftragstaktik is the idea of sharing the information necessary to "empower subordinate commanders on the scene". The principle stems from the logical idea that those on the ground are the first to encounter surprises on the evolving battlefield, and can therefore respond quickest. Generals, on the other hand, can see the bigger picture and make strategic decisions. In the German Wehrmacht, captains were told what to accomplish, not how to do it.

As Shane Parrish notes, there are four key elements to commander's intent:

  • Formulate
  • Communicate
  • Interpret
  • Implement

The first two are the responsibility of the senior officer, whilst the latter two are the responsibility of the subordinate on the ground.

As an investor, if your partners have clearly formulated and communicated their goals, it is up to you to interpret them and implement steps to achieve them. This lesson is even more important for founders and employees.

I'll leave you with this most famous quote attributed to John Maynard Keynes:

When the facts change, I change my mind.

This quote is fake. There is no record of him saying this. Time to update your beliefs!

Smart Reads

Here is a collection of shareholder letters from Dan Loeb, successful manager of the hedge fund Third Point (who are raising a VC fund soon).

How to know you've got product-market fit: a list of proxies gathered by the masterful Lenny Rachitsky

API's all the way down - wonderful breakdown of business models and strategies for API-first businesses from Packy McCormick

Another piece from Packy McCormick, this time on Stripe

Why Jeff Morris Jr. invested in RoamResearch

Other stuff

Listen to how FT journalist Dan McCrum unravelled the house of cards that was Wirecard

Quote

“We forfeit three-fourths of ourselves in order to be like other people.” - Arthur Schopenhauer