The 50/50 Murder
A thought experiment on how we measure risk when the fate of the world hangs in the balance
This is the first installment of “Privatizing the Apocalypse”, a four-part essay to be published throughout October.
Imagine a brilliant would-be killer arranges the following scenario:
A coin will be flipped. If it’s heads, someone dear to you will die. If it’s tails, nothing happens. The bad guy — who is very bad indeed — will eagerly cheer for heads. But whatever the outcome, he’ll accept the coin’s verdict.
Let’s add that if it’s heads, the death will be 100 percent certain. But it will also be instant, painless, and without warning. And if it’s tails, your loved one will never know about any of this. Nor will you, or anyone. In other words, no PTSD. Nor even the faint, fleeting trauma of a rollercoaster ride.
And so, the coin is flipped, and… huzzah, it comes up tails! The bad guy is pissed. But rules are rules, so your loved one lives.
Years pass. Then one day, some ingenious cops discover all of this. Being geniuses, they’re also able to establish — with full certainty — that the bad guy will never do this again. Indeed, he poses absolutely no threat to society.
Given all this, was a crime committed? And should we lock the bastard up?
If your gut is screaming YES!!! I agree, as would almost anyone. That monster put your bestie, kid, or partner in horrible jeopardy. For fun!
Now, does the nature and severity of the crime change if the odds of death shift away from 50 percent?
I would say yes, if they move a lot. For instance, if the would-be victim squeaks past a 99 percent chance of death, prosecutors would tend to view it as attempted murder. But with a 1 percent chance of death, many would question whether the villain truly wished anyone harm, and the charge might be something like reckless endangerment.
Intention and mindset matter more as the odds of a bad outcome plummet. Even imposing a one-in-ten-million chance of death feels criminal, if the bad guy’s praying like a Mega Millions ticket holder for the long shot. Whereas, if he deeply hopes that no one dies, this ceases to be a crime at some point. Particularly if there’s something in it for him other than a sadistic thrill.
This sounds strange, I know. But the perpetrator’s joy in the game is part of what makes it odious. So instead, suppose this guy would hate for anyone to die, or even stub a toe — but he desperately wants some Doritos. And to get his snack, he’s fine with making society bear a slim chance that someone croaks.
Now, that just sounds selfish — but we’ve all made similar tradeoffs. Like, a lot of them. For instance, if you’re American, your country racks up about 400 billion car rides per year, at the cost of 40,000 or so road deaths. So regardless of who’s at the wheel — be it you, Mom, or Lyft — a 10-million-to-one game of Russian Roulette kicks off whenever you cause a car to roll.
This shows us that when the odds of a calamity flirt with zero, we’ll serenely court outcomes as awful as death. Daily life would be impossible otherwise. No one likes to dwell on this reality. But it doesn’t violate our intuitions, because we realize that countless people die in the midst of truly mundane tasks.
Far more chilling and less intuitive is the fact that long-shot, all-or-nothing bets are now placed on a global level too — with humanity itself the de facto wager. Such a bet was first faced and considered in 1942. It was analyzed methodically. Then three years later, the bet was placed. The odds of a disaster were on the low side (one in 3 million, maximum). But the stakes were towering.
The gamblers were running the Manhattan Project. The risk wasn’t a nuclear war (yet), but that our atmosphere might burn up in a chain reaction triggered by their first atomic test. This prospect was first raised by Edward Teller, who later became the father of the hydrogen bomb. Robert Oppenheimer, who would soon lead the Los Alamos lab, called it a “terrible possibility”.
For his part, the head of the project’s theoretical unit “found that it was just incredibly unlikely.” But that sort of language is more comforting when, say, discussing a big softball game. So top people were convened to assess the danger. And confidence mostly reigned by the time of the test.
We now know this confidence wasn’t misplaced — so hats off to the team for getting it right! Although they were kind of right by fiat, since no one would be here to call them out if they’d blown it. It’s also worth noting that Enrico Fermi took bets on the burnt-sky scenario on the big day. Although he was joking, he scared the bejesus out of the enlisted men at the test site, none of whom could parse the reassuring math.
But not everyone put the odds at zero. The one-in-three-million estimate came from Arthur Compton, who oversaw the project’s plutonium production. And I’d say he was as smart as anyone there (Compton won the Nobel Prize for specifying light’s quantization from assumptions about the subatomic interactions of X-ray photons and their scattering angles — a sentence I don’t even understand).
As there wasn’t full consensus on the test’s utter safety, the team de facto accepted the small chance that they might incinerate the sky, and cancel the future. Could we say they had a moral basis for doing this?
To be clear, I’m asking about the decision to proceed with building, then later testing a nuclear device, and not the bomb’s subsequent wartime use. Hard as it is for us to separate the two, the scientists faced the risk of an atmospheric ignition in 1942. A working bomb was years off at the time, the war’s outcome was unknowable, and Germany was at least as menacing as Japan.
I’ll add that nuclear fission had first been discovered just a few years before — by German scientists. Allied intelligence also knew that Germany started its own atomic bomb project almost immediately thereafter. Subsequently, the Wehrmacht’s conquest of Norway put the world’s sole source of heavy water in Nazi hands. Though the German bomb project failed, there was no way to foretell this when Oppenheimer’s team assessed the atmospheric risk. Abandoning their project because of it would have therefore carried another risk: Hitler gaining a nuclear monopoly.
Some would contend the Los Alamos team was immoral to risk torching the sky despite all that. However, it would be very hard to argue that they were selfish for it. They each faced the same doom as everyone else if things went badly, and no extravagant rewards if things went well. In other words, gambling with humanity’s fate — or refraining from doing so — was a public service back then. And any upside from gambling successfully was a public good.
But how would you feel about all of this getting privatized? Both the act of gambling with humanity’s existence, and the payoff on the bet, if things work out?
Odd as that prospect may sound, 2008’s financial crisis is a good analog for it. This meltdown occurred after some of our most trusted and lionized financiers figured out how to use the global banking system as collateral for certain risky bets. Like many risky bets, these generated fat returns when things went fine. Returns which were converted into mega yachts, Picasso canvases, and other bare essentials. Then when things went badly, the gobbling clique at the trough expressed shock — shock! — then handed the $20 trillion bill to the rest of us.
Robert Oppenheimer would be less popular with historians if, for example, he’d swapped a slight chance of erasing the sky for a wildly enriching IPO. Of course, he didn’t (and he almost certainly wouldn’t have, if granted that bizarre opportunity). But what about those asshole bankers? Or Charlie Manson? Or Orlando mass murderer Omar Mateen?
We may find out. Because existential risk is being privatized. Yes, really — and the dice have already been thrown at least once. What’s uncertain is whether placing these bets will remain the exclusive domain of a narrow elite (as in the reassuring case of the financial crisis), or if it will gradually diffuse to any homicidal loser with a death wish. To analyze the bets themselves, a tool used in finance and gambling can be useful. It’s called expected value or EV.
As an example: imagine a certain bet has 75 percent odds of returning $4, and 25 percent odds of returning $1. To find its EV, we apply the first probability to the first value, and the second one to the second. Specifically:
- 75% of $4 = $3.00
- 25% of $1 = $0.25
This makes the bet’s total expected value $3.25.
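The arithmetic above can be written out in a few lines of Python (a minimal sketch; the probabilities and payoffs are just the ones from the example):

```python
# Expected value: weight each outcome's payoff by its probability, then sum.
outcomes = [
    (0.75, 4.00),  # 75% chance of returning $4
    (0.25, 1.00),  # 25% chance of returning $1
]

expected_value = sum(prob * payoff for prob, payoff in outcomes)
print(f"EV = ${expected_value:.2f}")  # 0.75 * 4 + 0.25 * 1 = 3.25
```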
It feels creepy to use this sort of math on human lives instead of dollars. But actuaries and liability courts do it all the time, so let’s apply it to the case of the 50/50 murder. By triggering a 50 percent chance of one death, and a 50 percent chance of zero deaths, our villain incurs an expected cost of half a death. And he does this for the sheer joy of it. Our intuitions scream this is a crime, and any court of law would agree.
In the Doritos scenario, we can say our hungry driver triggers a one-in-10-million chance of one death, and the complementary chance of zero deaths, for an expected cost of one ten-millionth of a death. Whether or not he’s aware of these numbers, society de facto accepts them, and the SWAT team won’t pounce when he heads out to the deli. But we would feel differently if he imposed that tiny risk on someone for kicks, rather than for a reasonable practical purpose.
If we assume Arthur Compton’s reasoning was sound, we can derive the atomic test’s EV from the 2.5 billion people alive at the time. There’s a one-in-three-million chance that everyone dies, and 2.5 billion divided by three million is 833. Plus there are huge odds that no one dies — and any probability times zero deaths equals zero. Adding the two products, the net expected death toll using Compton’s odds is 833 (though, of course, that doesn’t account for the resultant loss of future generations). Does this mean the New Mexico test was morally equivalent to condemning 833 random people to certain death?
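The same bookkeeping works for lives instead of dollars. A quick sketch of the calculations above (the probabilities and populations are the ones quoted in the text, and since the no-disaster branch contributes zero deaths, only one term survives):

```python
# Expected death toll: probability of catastrophe times the lives at stake.
# The no-disaster branch contributes (1 - p) * 0 = 0, so one term does all the work.
def expected_deaths(p_disaster: float, lives_at_stake: float) -> float:
    return p_disaster * lives_at_stake

trinity = expected_deaths(1 / 3_000_000, 2_500_000_000)  # Compton's odds, 1945 population
doritos = expected_deaths(1 / 10_000_000, 1)             # one drive, one potential victim
coin    = expected_deaths(0.5, 1)                        # the 50/50 murder

print(f"Trinity test: ~{trinity:.0f} expected deaths")   # ~833
print(f"Doritos run:  {doritos:.0e} expected deaths")    # 1e-07
print(f"Coin flip:    {coin} expected deaths")           # 0.5
```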
That may feel intuitively wrong to you. If so, your intuition is supported by the apparent lack of moral angst amongst Fermi, Teller, and Oppenheimer over the initial test (the later wartime use of the bomb was another matter). But they’re the ones who put the odds of a catastrophe at zero. Whereas Compton, who didn’t, felt quite differently. Even with the certitude of hindsight, he later declared, “Better to accept the slavery of the Nazis than to run the chance of drawing the curtain on mankind!”
As this is a matter of personal values, Compton doesn’t seem foolish to me in retrospect. Nor do his colleagues seem evil.
In his ingenious 2003 book, Britain’s Astronomer Royal Martin Rees applies a similar, and immensely provocative, analysis to a certain experiment conducted at the CERN and Brookhaven supercolliders. It incurred a non-zero chance of destroying the Earth, and perhaps the entire universe. And for those who are skimming this article, I repeat:
Destroying. The motherfucking. Universe.
Yes, really. The experiment created conditions that had no precedent in cosmic history. As for the dangers, Rees characterizes “the best theoretical guesses” as being “reassuring.” However, they hinged on “probabilities, not certainties,” and prevailing estimates put the odds of disaster at one in fifty million.
Those are mighty slim — but these guys weren’t preventing a global fascist takeover. They were running a clever experiment. A really clever one, I’m sure! But one without obvious practical benefits. In light of this, Rees turns our attention away from the slimness of the odds, to their expected value. Given the global population at the time, that EV was 120 deaths (note: I posted a 75-minute interview with Rees right here today, in which we discuss many issues connected to this article).
Imagine how the world would treat the probabilistic equivalent of this. I.e. a purely scientific experiment that’s certain to kill 120 random innocents. It would never be allowed. No — not even if it was really, really, really clever.
Personally, I don’t put the experimenters in the criminal category of the hedgetards who wrecked the world economy in 2008. But that’s partly due to my own biases. Because unlike hedgefunding, I view science as a noble calling. Yet despite being a science fanboy, I see some unsettling parallels.
For starters, the benefits from the experiment were mostly private goodies rather than public goods. The folks behind it didn’t become billionaire Wall Street parasites or anything. But you can bet their involvement benefited their careers. Plus, their brains were configured to comprehend and delight in the resulting discoveries. This had to be way more satisfying than any bag of Doritos, so it easily merited a one-in-fifty-million risk for each of them personally.
But unlike a daredevil getting a rush from skiing down a forested hill, say, they weren’t just putting themselves on the line. They were chancing it with you and all your loved ones — and maybe Pluto, the Andromeda galaxy, and anyone living out on Hubble’s faintest smudge. You don’t have to be a Luddite or creationist to find that selfish.
The precedent of a cloistered elite arrogating permission to gamble with human annihilation is also chilling. This was nothing like 1942, when that risk was accepted at the top tier of a vast democracy. And even if we give the particle physicists a pass (because, Science!), does that mean we’re okay with a second group taking a similar chance tomorrow? Or giving each of a thousand brainy groups their own shot next year? Green-light enough experiments, and the odds of catastrophe mount alarmingly.
And by the way, who operates the green light? The experimenters might say that only a tiny handful of their peers have the background knowledge necessary to assess the risk. Which would probably be correct. But follow that path far enough, and a narrow clique of insiders takes charge of assessing, taking, and benefitting from risks that imperil us all. In other words, these risks and their upside will be privatized. If that’s bad when the world economy is at stake (and boy, is it), how do we feel when humanity’s future is on the line?
From the dawn of time until the start of the Cold War, no one was in a position to risk things on this scale. For the past few decades, only a tiny handful of people have been in that position. As I’ll discuss in this essay’s second installment, posted next week, this cadre has been tiny and fairly well-vetted thus far. But without extreme care and foresight, that group’s population will soon explode and its admission standards will drop close to zero.