First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable. (Location 433)
I stop and summarize the triplet: rarity, extreme impact, and retrospective (though not prospective) predictability. (Location 436)
Consider the Indian Ocean tsunami of December 2004. Had it been expected, it would not have caused the damage it did—the areas affected would have been less populated, an early warning system would have been put in place. What you know cannot really hurt you. (Location 478)
Note: really? we may know but not react (see for example the large numbers of people still buying coastal property in the US). poverty removes choice as well.
What I call Platonicity, after the ideas (and personality) of the philosopher Plato, is our tendency to mistake the map for the territory, to focus on pure and well-defined “forms,” whether objects, like triangles, or social notions, like utopias (societies built according to some blueprint of what “makes sense”), even nationalities. When these ideas and crisp constructs inhabit our minds, we privilege them over other less elegant objects, those with messier and less tractable structures (an idea that I will elaborate progressively throughout this book). (Location 570)
The Platonic fold is the explosive boundary where the Platonic mind-set enters in contact with messy reality, where the gap between what you know and what you think you know becomes dangerously wide. It is here that the Black Swan is produced. (Location 579)
Note that I am not relying in this book on the beastly method of collecting selective “corroborating evidence.” (Location 611)
While the cities were mercantile and mostly Hellenistic, the mountains had been settled by all manner of religious minorities who claimed to have fled both the Byzantine and Moslem orthodoxies. A mountainous terrain is an ideal refuge from the mainstream, except that your enemy is the other refugee competing for the same type of rugged real estate. (Location 674)
Note: this goes against Out of the Mountains
Contagion was the culprit. If you selected one hundred independent-minded journalists capable of seeing factors in isolation from one another, you would get one hundred different opinions. But the process of having these people report in lockstep caused the dimensionality of the opinion set to shrink considerably—they converged on opinions and used the same items as causes. (Location 885)
It is hard for us to accept that people do not fall in love with works of art only for their own sake, but also in order to feel that they belong to a community. (Location 1143)
Table 1 summarizes the differences between the two dynamics. (Location 1229)
we are not immune to trivial, logical errors, nor are professors and thinkers particularly immune to them (complicated equations do not tend to cohabit happily with clarity of mind). Unless we concentrate very hard, we are likely to unwittingly simplify the problem because our minds routinely do so without our knowing it. It is worth a deeper examination here. Many people confuse the statement “almost all terrorists are Moslems” with “almost all Moslems are terrorists.” Assume that the first statement is true, that 99 percent of terrorists are Moslems. This would mean that only about .001 percent of Moslems are terrorists, since there are more than one billion Moslems and only, say, ten thousand terrorists, one in a hundred thousand. So the logical mistake makes you (unconsciously) overestimate the odds of a randomly drawn individual Moslem person (between the age of, say, fifteen and fifty) being a terrorist by close to fifty thousand times! The reader might see in this round-trip fallacy the unfairness of stereotypes—minorities in urban areas in the United States have suffered from the same confusion: even if most criminals come from their ethnic subgroup, most of their ethnic subgroup are not criminals, but they still suffer from discrimination by people who should know better. (Location 1511)
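Note: the arithmetic here is worth making explicit. A minimal Python sketch using the passage's illustrative figures (one billion Moslems, ten thousand terrorists, 99 percent of them Moslem) as assumptions, not data:

```python
# Round-trip fallacy: P(A|B) is not P(B|A).
# Figures are the passage's illustrative assumptions, not real data.
terrorists = 10_000
muslims = 1_000_000_000
p_muslim_given_terrorist = 0.99

muslim_terrorists = terrorists * p_muslim_given_terrorist
p_terrorist_given_muslim = muslim_terrorists / muslims

print(f"P(Muslim | terrorist) = {p_muslim_given_terrorist:.0%}")
print(f"P(terrorist | Muslim) = {p_terrorist_given_muslim:.5%}")  # ~0.001%
# Confusing the two conditionals inflates the perceived risk by a factor
# on the order of 10^5; the passage's "fifty thousand times" follows if
# the confused estimate is read as roughly a coin flip (0.5 / ~1e-5).
print("overestimation factor if read as ~50%:",
      round(0.5 / p_terrorist_given_muslim))
```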
“I never meant to say that the Conservatives are generally stupid. I meant to say that stupid people are generally Conservative,” John Stuart Mill once complained. This problem is chronic: if you tell people that the key to success is not always skills, they think that you are telling them that it is never skills, always luck. Our inferential machinery, that which we use in daily life, is not made for a complicated environment in which a statement changes markedly when its wording is slightly modified. Consider that in a primitive environment there is no consequential difference between the statements most killers are wild animals and most wild animals are killers. There is an error here, but it is almost inconsequential. Our statistical intuitions have not evolved for a habitat in which these subtleties can make a big difference.

Zoogles Are Not All Boogles

All zoogles are boogles. You saw a boogle. Is it a zoogle? Not necessarily, since not all boogles are zoogles; adolescents who make a mistake in answering this kind of question on their SAT test might not make it to college. Yet another person can get very high scores on the SATs and still feel a chill of fear when someone from the wrong side of town steps into the elevator. (Location 1521)
In 1971, the psychologists Danny Kahneman and Amos Tversky plied professors of statistics with statistical questions not phrased as statistical questions. One was similar to the following (changing the example for clarity): Assume that you live in a town with two hospitals—one large, the other small. On a given day 60 percent of those born in one of the two hospitals are boys. Which hospital is it likely to be? Many statisticians made the equivalent of the mistake (during a casual conversation) of choosing the larger hospital, when in fact the very basis of statistics is that large samples are more stable and should fluctuate less from the long-term average—here, 50 percent for each of the sexes—than smaller samples. These statisticians would have flunked their own exams. (Location 1539)
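Note: a quick simulation of the hospital question. The daily birth counts (15 for the small hospital, 150 for the large) are my assumptions; the point is only that small samples stray from 50 percent far more often:

```python
import random

# Small samples fluctuate more: days with >= 60% boys are far more
# common at a small hospital than at a large one.
def share_of_skewed_days(births_per_day, days=20_000, threshold=0.6):
    skewed = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day >= threshold:
            skewed += 1
    return skewed / days

print("small hospital (15 births/day):", share_of_skewed_days(15))
print("large hospital (150 births/day):", share_of_skewed_days(150))
# Typical output: roughly 0.30 versus 0.01; a 60-percent-boys day almost
# certainly came from the small hospital.
```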
An acronym used in the medical literature is NED, which stands for No Evidence of Disease. There is no such thing as END, Evidence of No Disease. Yet my experience discussing this matter with plenty of doctors, even those who publish papers on their results, is that many slip into the round-trip fallacy during conversation. (Location 1560)
they call this vulnerability to the corroboration error the confirmation bias. (Location 1627)
The first experiment I know of concerning this phenomenon was done by the psychologist P. C. Wason. He presented subjects with the three-number sequence 2, 4, 6, and asked them to try to guess the rule generating it. Their method of guessing was to produce other three-number sequences, to which the experimenter would respond “yes” or “no” depending on whether the new sequences were consistent with the rule. Once confident with their answers, the subjects would formulate the rule. (Note the similarity of this experiment to the discussion in Chapter 1 of the way history presents itself to us: assuming history is generated according to some logic, we see only the events, never the rules, but need to guess how it works.) The correct rule was “numbers in ascending order,” nothing more. Very few subjects discovered it because in order to do so they had to offer a series in descending order (that the experimenter would say “no” to). Wason noticed that the subjects had a rule in mind, but gave him examples aimed at confirming it instead of trying to supply series that were inconsistent with their hypothesis. Subjects tenaciously kept trying to confirm the rules that they had made up. (Location 1630)
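Note: the task is easy to reproduce. A minimal sketch; the probe sequences and strategy labels are mine, only the hidden rule ("strictly ascending") is from the passage:

```python
# Wason's 2-4-6 task: the hidden rule accepts any strictly ascending triple.
def hidden_rule(a, b, c):
    return a < b < c

# Strategy 1: probe only instances of your hypothesis ("add 2 each time").
confirming_probes = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]
# Strategy 2: also probe cases your hypothesis forbids.
falsifying_probes = [(2, 4, 6), (6, 4, 2), (1, 2, 50)]

for label, probes in [("confirming", confirming_probes),
                      ("falsifying", falsifying_probes)]:
    print(label, [hidden_rule(*p) for p in probes])
# The confirming strategy hears yes, yes, yes and walks away sure of the
# wrong rule. The "no" to (6, 4, 2) and the surprising "yes" to (1, 2, 50)
# are the only answers that actually discriminate between hypotheses.
```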
that there is no such animal as corroborative evidence. (Location 1651)
The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding. (Location 1723)
To help the reader locate himself: in studying the problem of induction in the previous chapter, we examined what could be inferred about the unseen, what lies outside our information set. Here, we look at the seen, what lies within the information set, and we examine the distortions in the act of processing it. There is plenty to say on this topic, but the angle I take concerns narrativity’s simplification of the world around us and its effects on our perception of the Black Swan and wild uncertainty. (Location 1730)
Note: incorporate this into notes
Since such gambling is associated with their seeing what they believe to be clear patterns in random numbers, this illustrates the relation between knowledge and randomness. (Location 1795)
Note: not sure this is quite right, even in his own terms. entropy is the indicator of the arrow of time. causality is the perception of subsequence due to entropy.
(In a remarkable insight, the nineteenth-century Parisian poet Charles Baudelaire compared our memory to a palimpsest, a type of parchment on which old texts can be erased and new ones written over them.) (Location 1856)
We invent some of our memories—a sore point in courts of law since it has been shown that plenty of people have invented child-abuse stories by dint of listening to theories. (Location 1864)
Joey seemed happily married. He killed his wife. Joey seemed happily married. He killed his wife to get her inheritance. Clearly the second statement seems more likely at first blush, which is a pure mistake of logic, since the first, being broader, can accommodate more causes, such as he killed his wife because he went mad, because she cheated with both the postman and the ski instructor, because he entered a state of delusion and mistook her for a financial forecaster. (Location 1961)
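Note: this is the conjunction rule of probability. A tiny check with arbitrary toy numbers (pure assumptions) showing why the broader statement can never be less likely:

```python
# P(A) = P(A and B) + P(A and not-B) >= P(A and B), for any added detail B.
# Toy numbers, assumptions only:
p_killed_for_inheritance = 0.02   # killed wife AND inheritance motive
p_killed_other_motive = 0.03      # killed wife AND any other motive
p_killed = p_killed_for_inheritance + p_killed_other_motive

assert p_killed >= p_killed_for_inheritance
print(f"P(killed) = {p_killed} >= P(killed for inheritance) = "
      f"{p_killed_for_inheritance}")
# Adding the vivid detail can only remove probability mass, even though
# it makes the story feel more plausible.
```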
As a skeptical empiricist I prefer the experiments of empirical psychology to the theories-based MRI scans of neurobiologists, even if the former appear less “scientific” to the public. (Location 2089)
The world has changed too fast for our genetic makeup. We are alienated from our environment. (Location 2123)
I will repeat that linear progression, a Platonic idea, is not the norm. (Location 2189)
I mentioned earlier that to understand successes and analyze what caused them, we need to study the traits present in failures. (Location 2470)
At every level of radiation, those that are naturally stronger (and this is the key) will survive; the dead will drop out of your sample. We will progressively have a stronger and stronger collection of rats. Note the following central fact: every single rat, including the strong ones, will be weaker after the radiation than before. (Location 2511)
Note: taleb’s rats
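Note: the selection effect simulates cleanly. A sketch under stated assumptions (innate strength roughly Gaussian, a fixed dose that weakens every rat, death below a threshold); all numbers are arbitrary:

```python
import random

random.seed(1)
# Irradiate a population; only the innately strongest survive.
rats = [random.gauss(100, 15) for _ in range(100_000)]  # innate strength
DOSE = 30      # assumption: the dose weakens every rat by 30 points
LETHAL = 80    # assumption: rats ending below 80 die

survivors_innate = [s for s in rats if s - DOSE >= LETHAL]
survivors_after = [s - DOSE for s in survivors_innate]

print("population mean innate strength:", round(sum(rats) / len(rats), 1))
print("survivors' mean innate strength:",
      round(sum(survivors_innate) / len(survivors_innate), 1))
print("survivors' mean strength after dose:",
      round(sum(survivors_after) / len(survivors_after), 1))
# An observer sampling survivors sees a hardier-than-average cohort and
# may credit the radiation; yet every single rat, the strong included,
# is 30 points weaker than it was before.
```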
First, justification of overoptimism on grounds that “it brought us here” arises from a far more serious mistake about human nature: the belief that we are built to understand nature and our own nature and that our decisions are, and have been, the result of our own choices. I beg to disagree. (Location 2673)
This idea that we are here, that this is the best of all possible worlds, and that evolution did a great job seems rather bogus in the light of the silent-evidence effect. (Location 2681)
I promised not to discuss any of the details of the casino’s sophisticated surveillance system; all I am allowed to say is that I felt transported into a James Bond movie—I wondered if the casino was an imitation of the movies or if it was the other way around. Yet, in spite of such sophistication, their risks had nothing to do with what can be anticipated knowing that the business is a casino. For it turned out that the four largest losses incurred or narrowly avoided by the casino fell completely outside their sophisticated models. First, they lost around $100 million when an irreplaceable performer in their main show was maimed by a tiger (the show, Siegfried and Roy, had been a major Las Vegas attraction). The tiger had been reared by the performer and even slept in his bedroom; until then, nobody suspected that the powerful animal would turn against its master. In scenario analyses, the casino had even conceived of the animal jumping into the crowd, but nobody came near to the idea of insuring against what happened. (Location 2903)
The errors get worse with the degree of remoteness to the event. So far, we have only considered a 2 percent error rate in the game we saw earlier, but if you look at, say, situations where the odds are one in a hundred, one in a thousand, or one in a million, then the errors become monstrous. The longer the odds, the larger the epistemic arrogance. (Location 3089)
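Note: the scaling of this error is plain arithmetic. A sketch holding the passage's 2 percent slack fixed as an absolute error and watching the relative error explode in the tails:

```python
ABS_ERROR = 0.02  # the 2 percent slack from the passage, held constant
for p_claimed in [0.5, 0.01, 0.001, 0.000001]:
    p_actual = p_claimed + ABS_ERROR
    print(f"claimed odds {p_claimed:<9g} actual {p_actual:<9g}"
          f" -> off by a factor of {p_actual / p_claimed:,.1f}")
# The same sloppiness that is invisible at even odds makes a
# one-in-a-million claim wrong by a factor of twenty thousand.
```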
There is no effective difference between my guessing a variable that is not random, but for which my information is partial or deficient, such as the number of lovers who transited through the bed of Catherine II of Russia, and predicting a random one, like tomorrow’s unemployment rate or next year’s stock market. In this sense, guessing (what I don’t know, but what someone else may know) and predicting (what has not taken place yet) are the same thing. (Location 3097)
When you are employed, hence dependent on other people’s judgment, looking busy can help you claim responsibility for the results in a random environment. (Location 3113)
The bookmakers were given the ten most useful variables, then asked to predict the outcome of races. Then they were given ten more and asked to predict again. The increase in the information set did not lead to an increase in their accuracy; their confidence in their choices, on the other hand, went up markedly. (Location 3150)
What matters is not how often you are right, but how large your cumulative errors are. (Location 3235)
The unexpected has a one-sided effect with projects. (Location 3376)
Another example: Say that you send your favorite author a letter, knowing that he is busy and has a two-week turnaround. If three weeks later your mailbox is still empty, do not expect the letter to come tomorrow—it will take on average another three weeks. If three months later you still have nothing, you will have to expect to wait another year. Each day will bring you closer to your death but further from the receipt of the letter. This subtle but extremely consequential property of scalable randomness is unusually counterintuitive. We misunderstand the logic of large deviations from the norm. (Location 3438)
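Note: this is the difference between thin-tailed and scalable waiting times, and it simulates in a few lines. Assumptions are mine: a Mediocristan-style wait that is roughly Gaussian around two weeks, versus an Extremistan-style Pareto wait with tail index 1.5:

```python
import random

random.seed(42)

def expected_remaining(samples, waited):
    tail = [s - waited for s in samples if s > waited]
    return sum(tail) / len(tail) if tail else float("nan")

N = 500_000
gaussian_waits = [max(0.0, random.gauss(14, 4)) for _ in range(N)]
pareto_waits = [7 * random.random() ** (-1 / 1.5) for _ in range(N)]

for waited in [14, 21, 30]:
    print(f"waited {waited} days | Gaussian: ~"
          f"{expected_remaining(gaussian_waits, waited):5.1f} more days"
          f" | Pareto: ~{expected_remaining(pareto_waits, waited):5.1f} more days")
# In the Gaussian world, the longer you have waited, the less extra wait
# remains. In the scalable world the expected remaining wait *grows*
# with the wait: exactly the letter-from-the-author effect.
```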
The policies we need to make decisions on should depend far more on the range of possible outcomes than on the expected final number. (Location 3464)
I would go even further and, using the argument about the depth of the river, state that it is the lower bound of estimates (i.e., the worst case) that matters when engaging in a policy—the worst case is far more consequential than the forecast itself. (Location 3483)
As I was mentally writing these lines I saw a Time magazine cover at an airport stand announcing the “meaningful inventions” of the year. These inventions seemed to be meaningful as of the issue date, or perhaps for a couple of weeks after. Journalists can teach us how to not learn. (Location 3621)
If you have only two planets in a solar-style system, with nothing else affecting their course, then you may be able to indefinitely predict the behavior of these planets, no sweat. But add a third body, say a comet, ever so small, between the planets. Initially the third body will cause no drift, no impact; later, with time, its effects on the two other bodies may become explosive. Small differences in where this tiny body is located will eventually dictate the future of the behemoth planets. (Location 3730)
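Note: the three-body point is about sensitive dependence on initial conditions. A stand-in demo using the logistic map, a standard chaotic system (my choice of example, not the text's):

```python
# Two trajectories of the chaotic logistic map x -> 4x(1 - x), started
# a trillionth apart.
x, y = 0.4, 0.4 + 1e-12
for step in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:>2}: gap = {abs(x - y):.3e}")
# The gap roughly doubles each step; by around step 40 the two
# trajectories are unrelated. Perfect short-term predictability,
# hopeless long-term.
```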
This predicament is called “anticipated utility” by Danny Kahneman and “affective forecasting” by Dan Gilbert. (Location 4040)
Operation 1 (the melting ice cube): Imagine an ice cube and consider how it may melt over the next two hours while you play a few rounds of poker with your friends. Try to envision the shape of the resulting puddle.

Operation 2 (where did the water come from?): Consider a puddle of water on the floor. Now try to reconstruct in your mind’s eye the shape of the ice cube it may once have been. Note that the puddle may not have necessarily originated from an ice cube. (Location 4060)
randomness is incomplete information, what I called opacity in Chapter 1. Nonpractitioners of randomness do not understand the subtlety. Often, in conferences when they hear me talk about uncertainty and randomness, philosophers, and sometimes mathematicians, bug me about the least relevant point, namely whether the randomness I address is “true randomness” or “deterministic chaos” that masquerades as randomness. A true random system is in fact random and does not have predictable properties. A chaotic system has entirely predictable properties, but they are hard to know. So my answer to them is twofold.
a) There is no functional difference in practice between the two, since we will never get to make the distinction—the difference is mathematical, not practical. If I see a pregnant woman, the sex of her child is a purely random matter to me (a 50 percent chance for either sex)—but not to her doctor, who might have done an ultrasound. In practice, randomness is fundamentally incomplete information.
b) The mere fact that a person is talking about the difference implies that he has never made a meaningful decision under uncertainty—which is why he does not realize that they are indistinguishable in practice.
Randomness, in the end, is just unknowledge. The world is opaque and appearances fool us. (Location 4093)
What you should avoid is unnecessary dependence on large-scale harmful predictions—those and only those. (Location 4167)
Know how to rank beliefs not according to their plausibility but by the harm they may cause. (Location 4171)
This idea that in order to make a decision you need to focus on the consequences (which you can know) rather than the probability (which you can’t know) is the central idea of uncertainty. Much of my life is based on it. (Location 4298)
Note that art, because of its dependence on word of mouth, is extremely prone to these cumulative-advantage effects. I mentioned clustering in Chapter 1, and how journalism helps perpetuate these clusters. Our opinions about artistic merit are the result of arbitrary contagion even more than our political ideas are. One person writes a book review; another person reads it and writes a commentary that uses the same arguments. Soon you have several hundred reviews that actually sum up in their contents to no more than two or three because there is so much overlap. For an anecdotal example read Fire the Bastards!, whose author, Jack Green, goes systematically through the reviews of William Gaddis’s novel The Recognitions. Green shows clearly how book reviewers anchor on other reviews and reveals powerful mutual influence, even in their wording. This phenomenon is reminiscent of the herding of financial analysts I discussed in Chapter 10. (Location 4381)
Merton’s cumulative-advantage idea has a more general precursor, “preferential attachment,” (Location 4391)
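Note: preferential attachment is a few lines of code. A minimal sketch (Barabási–Albert flavor, one link per newcomer; parameters arbitrary): each new node attaches to an existing node with probability proportional to that node's current degree:

```python
import random

random.seed(7)
degrees = [1, 1]      # two founding nodes joined by one link
endpoints = [0, 1]    # each link contributes both of its endpoints;
                      # sampling this list picks a node with probability
                      # proportional to its current degree

for new_node in range(2, 20_000):
    target = random.choice(endpoints)
    degrees.append(1)
    degrees[target] += 1
    endpoints.extend([new_node, target])

degrees.sort(reverse=True)
top_share = sum(degrees[: len(degrees) // 100]) / sum(degrees)
print(f"share of all links held by the top 1% of nodes: {top_share:.0%}")
# Far above the 1% "fair" share: early luck compounds, and the
# connected get more connected.
```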
The anthropologist, cognitive scientist, and philosopher Dan Sperber has proposed the following idea on the epidemiology of representations. What people call “memes,” ideas that spread and that compete with one another using people as carriers, are not truly like genes. Ideas spread because, alas, they have for carriers self-serving agents who are interested in them, and interested in distorting them in the replication process. You do not make a cake for the sake of merely replicating a recipe—you try to make your own cake, using ideas from others to improve it. We humans are not photocopiers. So contagious mental categories must be those in which we are prepared to believe, perhaps even programmed to believe. To be contagious, a mental category must agree with our nature. (Location 4424)
The long tail’s contribution is not yet numerical; it is still confined to the Web and its small-scale online commerce. But consider how the long tail could affect the future of culture, information, and political life. (Location 4501)
Note: important point that he feels these conditions are becoming more conducive to extremistan
Financial institutions have been merging into a smaller number of very large banks. Almost all banks are now interrelated. So the financial ecology is swelling into gigantic, incestuous, bureaucratic banks (often Gaussianized in their risk measurement)—when one falls, they all fall. (Location 4518)
Concentration of this kind is not limited to the Internet; it appears in social life (a small number of people are connected to others), in electricity grids, in communications networks. This seems to make networks more robust: random insults to most parts of the network will not be consequential since they are likely to hit a poorly connected spot. But it also makes networks more vulnerable to Black Swans. Just consider what would happen if there were a problem with a major node. The electricity blackout experienced in the northeastern United States during August 2003, with its consequential mayhem, is a perfect example of what could take place if one of the big banks went under today. (Location 4533)
Note: pandemic k effect
Extremistan is here to stay, so we have to live with it, and find the tricks that make it more palatable. (Location 4564)
Note: maybe try not socialising the impact?
What to Remember

Remember this: the Gaussian–bell curve variations face a headwind that makes probabilities drop at a faster and faster rate as you move away from the mean, while “scalables,” or Mandelbrotian variations, do not have such a restriction. That’s pretty much most of what you need to know. (Location 4656)
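Note: the headwind can be put in numbers. A sketch comparing how fast the odds of doubling a deviation die off under a Gaussian versus a scalable law (Pareto tail index 1.5, my assumption):

```python
import math

def gaussian_tail(k):            # P(X > k) for a standard Gaussian
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(k, alpha=1.5):   # P(X > k) for a scalable law, k >= 1
    return k ** -alpha

print(" k   Gaussian P(>2k)/P(>k)   Pareto P(>2k)/P(>k)")
for k in [1, 2, 4, 8]:
    g = gaussian_tail(2 * k) / gaussian_tail(k)
    p = pareto_tail(2 * k) / pareto_tail(k)
    print(f"{k:>2}   {g:>20.2e}   {p:>19.2e}")
# The Gaussian odds of doubling the deviation collapse ever faster as k
# grows (the headwind); the scalable ratio is the same at every level.
```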
After the stock market crash, they rewarded two theoreticians, Harry Markowitz and William Sharpe, who built beautifully Platonic models on a Gaussian base, contributing to what is called Modern Portfolio Theory. (Location 5398)
Today, for instance, pension funds’ investment policy and choice of funds are vetted by “consultants” who rely on portfolio theory. If there is a problem, they can claim that they relied on standard scientific method. (Location 5422)
Note: safety first! may well be a decent explanation here. better to be wrong in company according to standard operating procedure.
But there is another reason for man-made structures not to get too large. The notion of “economies of scale”—that companies save money when they become large, hence more efficient—is often, apparently, behind company expansions and mergers. It is prevalent in the collective consciousness without evidence for it; in fact, the evidence would suggest the opposite. Yet, for obvious reasons, people keep doing these mergers—they are not good for companies, they are good for Wall Street bonuses; a company getting larger is good for the CEO. Well, I realized that as they become larger, companies appear to be more “efficient,” but they are also much more vulnerable to outside contingencies, those contingencies commonly known as “Black Swans” after a book of that name. All that under the illusion of more stability. Add the fact that when companies are large, they need to optimize so as to satisfy Wall Street analysts. Wall Street analysts (MBA types) will pressure companies to sell the extra kidney and ditch insurance to raise their “earnings per share” and “improve their bottom line”—hence eventually contributing to their bankruptcy. (Location 5973)
Simply, larger environments are more scalable than smaller ones—allowing the biggest to get even bigger, at the expense of the smallest, through the mechanism of preferential attachment we saw in Chapter 14. (Location 6017)
He alerted me to a phrase philosophers use: “distinction without a difference.” Then I realized the following: there are distinctions philosophers use that make sense philosophically yet do not seem to make sense in practice; they may, however, become necessary if you go deeper into the idea, and may make sense in practice under a change of environment. (Location 6085)
So the rarer the event, the less we know about its role—and the more we need to compensate for that deficiency with an extrapolative, generalizing theory. It will lack rigor in proportion to claims about the rarity of the event. Hence theoretical and model error are more consequential in the tails; and, the good news, some representations are more fragile than others. (Location 6616)
I showed that this error is more severe in Extremistan, where rare events are more consequential, because of a lack of scale, or a lack of asymptotic ceiling for the random variable. (Location 6619)
Let me provide once again an illustration of Extremistan. Less than 0.25 percent of all the companies listed in the world represent around half the market capitalization, a less than minuscule percentage of novels on the planet accounts for approximately half of fiction sales, less than 0.1 percent of drugs generate a little more than half the pharmaceutical industry’s sales—and less than 0.1 percent of risky events will cause at least half the damages and losses. (Location 6622)
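Note: these concentration numbers are what a scalable distribution produces mechanically. A sketch that samples Pareto “sizes” (tail index 1.1, my assumption for a very fat tail) and asks how small a sliver of the population carries half the total; output varies a lot from run to run, which is itself the point:

```python
import random

random.seed(3)
alpha = 1.1   # assumption: tail index just above 1, a very fat tail
sizes = sorted((random.random() ** (-1 / alpha) for _ in range(1_000_000)),
               reverse=True)

total, running = sum(sizes), 0.0
for count, s in enumerate(sizes, start=1):
    running += s
    if running >= total / 2:
        break
print(f"top {100 * count / len(sizes):.3f}% of units hold half the total")
# Typically a few hundredths of a percent, in the ballpark of the
# passage's 0.1-to-0.25 percent figures.
```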
This problem of confusion of the two arrows is very severe with probability, particularly with small probabilities. (Location 6637)
For environments that tend to produce negative Black Swans, but no positive Black Swans (these environments are called negatively skewed), the problem of small probabilities is worse. Why? Clearly, catastrophic events will necessarily be absent from the data, since the survivorship of the variable itself depends on such effects. Such distributions therefore lead the observer to overestimate stability and underestimate potential volatility and risk. (Location 6647)
The same is true of airplane rides. We have asked experimental participants: “You are on vacation in a foreign country and are considering flying a local airline to see a special island. Safety statistics show that, if you fly once a year, there will be on average one crash every thousand years on this airline. If you don’t take the trip, it is unlikely you’ll visit this part of the world again. Would you take the flight?” All the respondents said they would. But when we changed the second sentence so it read, “Safety statistics show that, on average, one in a thousand flights on this airline have crashed,” only 70 percent said they would take the flight. In both cases, the chance of a crash is 1 in 1,000; the latter formulation simply sounds more risky. (Location 6730)
What Is Complexity?

I will simplify here with a functional definition of complexity—among many more complete ones. A complex domain is characterized by a great degree of interdependence between its elements: temporal (a variable depends on its past changes), horizontal (variables depend on one another), and diagonal (variable A depends on the past history of variable B). As a result of this interdependence, mechanisms are subjected to positive, reinforcing feedback loops, which cause “fat tails.” (Location 6737)
In lay terms, moves are exacerbated over time instead of being dampened by counterbalancing forces. Finally, we have nonlinearities that accentuate the fat tails. (Location 6742)
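Note: “reinforcing feedback causes fat tails” can be shown with a toy process. Assumptions are all mine: an additive walk whose shocks ignore the current level, versus a multiplicative one whose shock size scales with the level:

```python
import random

random.seed(11)

def additive_path(steps=100):
    x = 0.0
    for _ in range(steps):
        x += random.gauss(0, 1)         # shock independent of level
    return x

def multiplicative_path(steps=100):
    x = 1.0
    for _ in range(steps):
        x *= 1 + random.gauss(0, 0.2)   # shock feeds on the level itself
    return x

N = 10_000
add = sorted(abs(additive_path()) for _ in range(N))
mult = sorted(abs(multiplicative_path()) for _ in range(N))
for q in (0.50, 0.99, 0.9999):
    i = min(int(q * N), N - 1)
    print(f"quantile {q:<6}: additive {add[i]:8.1f}   "
          f"multiplicative {mult[i]:10.1f}")
# The additive extremes sit a few multiples of the median; the
# self-reinforcing process throws outcomes orders of magnitude
# beyond its median.
```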
Let us look again, from a certain angle, at the problem of “induction.” It becomes one step beyond archaic in a modern environment, making the Black Swan problem even more severe. Simply, in a complex domain, the discussion of induction versus deduction becomes too marginal to the real problems (except for a limited subset of variables, even then); the entire Aristotelian distinction misses an important dimension (similar to the one discussed earlier concerning the atypicality of events in Extremistan). Even other notions such as “cause” take on a different meaning, particularly in the presence of circular causality and interdependence. (Location 6747)
I tried to explain the problems of errors in monetary policy under nonlinearities: you keep adding money with no result … until there is hyperinflation. Or nothing. Governments should not be given toys they do not understand. (Location 6771)
Psychologists distinguish between acts of commission (what we do) and acts of omission. Although these are economically equivalent for the bottom line (a dollar not lost is a dollar earned), they are not treated equally in our minds. (Location 6870)
I have also in the past speculated that religion saved lives by taking the patient away from the doctor. (Location 6905)
What is interesting is that the ancient Mediterraneans may have understood the trade-off very well and may have accepted religion partly as a tool to tame the illusion of control. (Location 6907)
PHRONETIC RULES: WHAT IS WISE TO DO (OR NOT DO) IN REAL LIFE TO MITIGATE THE FOURTH QUADRANT IF YOU CAN’T BARBELL? (Location 6917)
Have respect for time and nondemonstrative knowledge. (Location 6922)
Bankers get rich in spite of long-term negative returns. (Location 6926)
Avoid optimization; learn to love redundancy. (Location 6930)
Find the smart people whose hands are clean. (Location 6979)
Note: this is very “Cummings”. smart people who have expertise?
Economic life should be definancialized. We should learn not to use markets as warehouses of value: they do not harbor the certainties that normal citizens can require, in spite of “expert” opinions. (Location 6999)
The highly expected not happening is also a Black Swan. Note that, by symmetry, the occurrence of a highly improbable event is the equivalent of the nonoccurrence of a highly probable one. (Location 8894)
The Black Swan is the result of collective and individual epistemic limitations (or distortions), mostly confidence in knowledge; it is not an objective phenomenon. (Location 8897)
It is much easier to deal with the Black Swan problem if we focus on robustness to errors rather than improving predictions. (Location 8902)
Even if they had one hundred and twelve risk managers, there would be no meaningful difference; they still would have blown up. Clearly you cannot manufacture more information than the past can deliver; if you buy one hundred copies of The New York Times, I am not too certain that it would help you gain incremental knowledge of the future. We just don’t know how much information there is in the past. (Location 8965)
Clearly, weather-related and geophysical events (such as tornadoes and earthquakes) have not changed much over the past millennium, but what have changed are the socioeconomic consequences of such occurrences. Today, an earthquake or hurricane commands more and more severe economic consequences than it did in the past because of the interlocking relationships between economic entities and the intensification of the “network effects” that we will discuss in Part Three. Matters that used to have mild effects now command a high impact. Tokyo’s 1923 earthquake caused a drop of about a third in Japan’s GNP. Extrapolating from the tragedy of Kobe in 1995, we can easily infer that the consequences of another such earthquake in Tokyo would be far costlier than those of its predecessor. (Location 8987)
While looking at the past it would be a good idea to resist naïve analogies. Many people have compared the United States today to Ancient Rome, both from a military standpoint (the destruction of Carthage was often invoked as an incentive for the destruction of enemy regimes) and from a social one (the endless platitudinous warnings of the upcoming decline and fall). Alas, we need to be extremely careful in transposing knowledge from a simple environment that is closer to type 1, like the one we had in antiquity, to today’s type 2, complex system, with its intricate webs of causal links. Another error is to draw causal conclusions from the absence of nuclear war, since, invoking the Casanova argument of Chapter 8, I would repeat that we would not be here had a nuclear war taken place, and it is not a good idea for us to derive a “cause” when our survival is conditioned on that cause. (Location 9064)
One consequence of the absence of “typicality” for an event on causality is as follows: Say an event can cause a “war.” As we saw, such war will still be undefined, since it may kill three people or a billion. So even in situations where we can identify cause and effect, we will know little, since the effect will remain atypical. (Location 9276)