Something about most UN and NGO security reports has always made me uneasy. Don't get me wrong. It's not that they aren't thorough. A lot of work goes into fact checking and ensuring that what they say is "correct". It's just that the typical security report is a comprehensive list of recent past incidents combined, if we are lucky, with their assessed causes. Incident statistics are then charted and "trends" are identified. This always made me a little nervous.
To be fair, I never really knew why it made me nervous until I read "The Black Swan". Nassim Nicholas Taleb raises several points that help explain my unease.
The first is that more information is not necessarily better. It's very easy to get bogged down in detail that has no real relevance to the issue at hand.
The second factor is what Taleb calls the Ludic Fallacy. In brief, this is the assumption that the unexpected can be predicted by extrapolating from statistics based on past observations. Taleb argues that while this holds true for theoretical models based on games of chance, it seldom holds true in the real world, for the following reasons:
* We don't know what we don't know. (See the Unknown Unknown.)
* Very small (perhaps imperceptible) changes in the variables can have a huge impact on the outcome. This is commonly referred to as the Butterfly Effect.
* Theories based on experience are fundamentally flawed, as events that have not occurred before (or are outside living memory) cannot be accounted for.
The Washington Post graphic below, which shows the frequency and lethality of suicide attacks since 1981, illustrates the problem. If we had examined the chart in 2000, would it have led us to predict 9/11 (a classic Black Swan)? If we had re-examined it in 2003, would it have led us to predict the sudden increase in the frequency of attacks in 2007? What does 2007 tell us about 2008? Looking at the trend from 1981 to 1989, how many researchers would have concluded that suicide attacks were in decline and opined that such attacks were ineffective in accomplishing the attackers' goals?
For a brief period, while I was an analyst, I worked for a General who was inclined to say, "tell me what you know, tell me what you think you know, and tell me what you don't know". Of course he missed a category of information: what we later came to call the "unknown unknown". Taleb refers to this category of information as silent evidence. It is the vast body of information that we are not aware of and, even worse, are not aware that we are not aware of.
Does this matter to the NGO security analyst? Of course! If we fail to acknowledge the existence of silent evidence, we fool ourselves into believing we know the world better than we really do. We track incidents and develop models to try to predict the future without thought to how incomplete our models are. Worse, if we are naïve enough to believe our models, we unknowingly leave ourselves exposed to future unknown risks.
Lesson Learned: I don't know as much as I think I do. No matter how much information I have, the vast bulk of it, the hidden silent evidence, remains below the surface. From this morass of unseen circumstance can spring forth all manner of unanticipated surprises.
Over the past few weeks I've been reading Nassim Nicholas Taleb's "The Black Swan: The Impact of the Highly Improbable". It has been a very difficult read for me. Not so much because his ideas are complicated; they are, but Taleb explains them very well. No, my difficulty has been that the book challenges, even destroys, ideas that I have long held dear.
I've learned (maybe I should say I'm trying to learn) a lot from "The Black Swan". Taleb's ideas are changing my view of the nature of knowledge, analysis, and prediction. Over the next few posts I hope to outline some of the lessons that I think NGO security officers can take from this book. It won't be easy, and I'm sure that I'll get a lot wrong.
For this post, however, I'll take the easy way out. This video clip is of Taleb himself, explaining the term "Black Swan".