I’ve been re-reading Taleb, again. In Antifragile, I found an exceptionally well-written part that I hadn’t noticed before:
People go to business school to learn how to do well while ensuring their survival—but what the economy, as a collective, wants them to do is to not survive, rather to take a lot, a lot of imprudent risks themselves and be blinded by the odds. Their respective industries improve from failure to failure. Natural and nature-like systems want some overconfidence on the part of individual economic agents, i.e., the overestimation of their chances of success and underestimation of the risks of failure in their businesses, provided their failure does not impact others. In other words, they want local, but not global, overconfidence.
This is a simple but powerful idea. The first parallel I drew was to how we build software and deal with incidents.
We want teams to be locally overconfident in the things they build: it is usually a prerequisite for building anything great. Along the way, things go wrong: bad code is deployed, cables are cut, leap years occur. When incidents occur, a natural step is to slow the rate of change, introduce new process, and the like.
When doing this, we should be wary of any steps that reduce the ‘local overconfidence’ of teams. Mediocrity will ensue. Instead, we should focus our energy on avoiding contagion and systemic failure wherever possible. Creating a culture of accepting regular, small, localised and isolated failure within teams caps their downside, whilst keeping all of the upside.
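The post doesn’t prescribe a mechanism, but one common way systems keep failure local rather than contagious is the circuit-breaker pattern: stop calling a failing dependency so its trouble doesn’t cascade into yours. A minimal sketch, with illustrative names and thresholds:

```python
import time


class CircuitBreaker:
    """Fail fast against a broken dependency so local failure stays local."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures  # consecutive failures tolerated before opening
        self.reset_after = reset_after    # seconds before a trial call is allowed again
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        # While open, reject immediately instead of letting callers block and cascade.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            # Half-open: the cool-down has passed, allow one trial call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success resets the count
        return result
```

The breaker is the ‘limited downside’ in code form: the team that owns the dependency is free to fail, and everyone downstream pays a bounded, predictable cost rather than joining the failure.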