In Charles Baxter’s snarky comic novel Blood Test, the narrator enrolls in a clinical trial that purports to predict his future. Using biomarkers, internet browsing history, surveillance data, survey results, and more, a team of scientists claims to possess nearly perfect foresight about the narrator’s impending behaviors.
But these scientists also disclose their predictions to the narrator. This introduces an element of choice – like Kronos or Oedipus in Greek mythology, the narrator might attempt to subvert the prophecies.
Indeed, as soon as the narrator’s daughter hears about the clinical trial, she says,
“It’d be fun for you to screw it up.”
“What do you mean?” I ask her.
“Well, like, the algorithm tells you that you’re going to go to Cancun or wherever, and … I don’t know, you murder some little skank in Utah instead of going to Cancun. Surprise!”
“I’m not going to murder anybody,” I tell her.
“Wait and see. You haven’t done it yet,” she says. “But what if … what if the machine says that you’re actually going to murder somebody, and then you do it, because you say you couldn’t help it? Because it had been predicted and everything? It was, like, science? You couldn’t stop your evil bad side from taking over? Which you don’t have, but whatever. You could grow an evil side. The prediction was down in print, what you were going to do. Predicted on the readout. That’s your alibi. You had to do it, because the computer said so. And science is real.”
#
Does the narrator have a choice? Or are our future behaviors predictable based on the current state of the world?
If the Oracle at Delphi had a very precise brain scanner, could Oedipus ever escape the prophecy?
#
So, there’s an obvious rub when we consider whether it’s possible to make accurate predictions about the future – quantum mechanics. Whenever we investigate the world with sufficient precision, we find uncertainty.
This doesn’t mean merely that we lack some essential information, or that our methods of investigating the world will inevitably disrupt the world. Instead, there is uncertainty because certain properties of matter don’t even have definite values at the same time, like the position of a particle and its momentum.
If an interaction depends upon both of these properties – say, if we want to know where a particle will be in the future, which depends on whether or not it bumped into other particles (its position) and which direction it traveled after ricocheting away (its momentum) – then there’s no way to know for sure. Mathematically, the answer is expressed in the form of a “wavefunction,” a function whose squared magnitude gives the likelihood of any given outcome. There’s no single definite prediction we can make. Only likelihoods.
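For the record, that’s the standard Born rule, sketched here in the usual notation:

```latex
% The squared magnitude of the wavefunction \psi gives a probability
% density over outcomes; the densities integrate to one.
P(x) = |\psi(x)|^2, \qquad \int_{-\infty}^{\infty} |\psi(x)|^2 \, dx = 1
```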
But sometimes those underlying probabilities will not have much impact on the world. You don’t need to factor in quantum mechanics if you’re trying to catch a baseball! A baseball is made of so many particles that any strange behavior from one particle is likely to be balanced by the strange behavior of another. All told, the ensemble behavior of matter produces a huge probability peak right near one single answer. That is where the baseball is going to be!
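Here’s a minimal sketch of that averaging effect, with invented numbers: give each of N particles a random nudge, and the spread of the ensemble average collapses as N grows.

```python
import random
import statistics

# Toy demonstration (all numbers invented): each particle gets a random
# nudge; the average over the ensemble becomes sharply peaked as N grows.
def mean_of_nudges(n: int) -> float:
    return statistics.fmean(random.gauss(0.0, 1.0) for _ in range(n))

for n in (1, 100, 10_000):
    samples = [mean_of_nudges(n) for _ in range(200)]
    # The spread falls roughly like 1/sqrt(n): bigger objects, sharper peaks.
    print(f"N={n:>6}: spread of the average ≈ {statistics.stdev(samples):.3f}")
```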
Each individual particle in that baseball can potentially tunnel to the other side of a barrier. This is akin to teleportation: a particle suddenly appears far away from where it seemed to be before, and somewhere that might have seemed impossible to reach.
It’s theoretically possible for every single particle in a baseball to tunnel to the other side of a barrier simultaneously. A baseball could be seen flying toward a catcher’s mitt … but then, instead of striking the mitt, the baseball suddenly appears on the other side of the glove and continues traveling. A befuddled crowd of spectators would watch the catcher tumble to the ground after the ball crashed full-force into the catcher’s mask.
However, the probability of a single particle tunneling across a barrier is often small. The probability of many particles tunneling across that same barrier, all in the same direction, all at the same time, is infinitesimal. Theoretically, it can happen. In practice, it will not.
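To put rough numbers on it – both numbers below are invented for illustration – the joint probability shrinks exponentially with the particle count:

```python
import math

# Toy arithmetic, not a physics calculation: if each particle tunnels
# independently with probability p, all N tunneling together has probability p**N.
p = 1e-10   # assumed single-particle tunneling probability (illustrative)
N = 1e25    # rough particle count for a baseball (illustrative order of magnitude)

# p**N underflows any floating-point type, so work in log space instead.
log10_joint = N * math.log10(p)
print(f"joint probability is about 10^({log10_joint:.2g})")   # about 10^(-1e+26)
```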
Quantum mechanics tells us that the world is unpredictable … at very small scales. Statistical mechanics tells us something different: for large objects, our predictions are often going to be correct. Usually there’s not some nerd out on the baseball field with a high-speed camera and a computer, but if there were, and that nerd ran a simulation to predict the exact path along which the ball would travel, you’d better hope that’s where the catcher holds the glove.
#
But then … our brains.
Our choices.
Are we free?
Are the vital aspects of our brains small like particles, or big like baseballs?
#
Some scientists believe that neurons are big enough that the probabilistic effects of quantum mechanics will not affect consciousness, and yet also believe that we have free will.
They’re wrong.
I’m not trying to claim that I know precisely how brains work. Nobody does, not yet. But it’s logically inconsistent to claim that you’re free to make choices, yet also that outsiders could predict all of your choices as long as they had sufficient information about your brain. Sorry. If there’s no unpredictability, there’s no free will. Them’s the breaks.
#
Some scientists believe that neurons are big enough that their behaviors can be predicted using classical, non-probabilistic physics … and therefore conclude that we humans don’t have free will. This is logically consistent. If the Oracle can offer you a perfect prophecy, then you’re not really making any choices, no matter how it feels.
The only major qualm I have – besides, you know, the whole shtick about how it feels as though we have free will, since at any moment you could choose to hop up and do a silly dance, the befuddled reactions of your colleagues be damned – is that most of these scientists claim to believe that there’s no free will, yet still use language that implies that they do believe in it, so they end up sounding silly and self-deluded. Writers like Sabine Hossenfelder, whose work I’ve discussed here, or Robert Sapolsky, whose work I’ve discussed here, fall into this category. Sapolsky, in particular, espouses an argument that presupposes that the sort of people who become judges do have free will, and can choose compassion, but that the sort of people who become criminals do not, as though only the lower classes are soulless automatons who could not have acted otherwise.
#
Next, we have scientists who do believe that probabilistic uncertainty from quantum mechanics is relevant for neuronal behavior.
This is not to say that human brains work like quantum computers. They don’t. Quantum computers perform calculations while their components are held in superpositions. If you’re curious about quantum computers, I’ve written about them previously, so I’ll describe this only briefly here. For a quantum computer to work, it needs to be kept in a cold, dark room, in a vacuum, isolated from any particles that might bump into it and force it into one state or another. But our brains are warm and wet. Our brains are full of water, and water molecules bump into things constantly relative to the timescale of conscious thought.
Still, there’s no need for a whole brain or even a whole neuron to be held in a superposition for quantum uncertainty to make our thoughts unpredictable. For example, if an individual neuron is on the cusp of firing – if there has already been enough of a voltage rise from the movement of sodium ions across a membrane that the voltage-gated ion channels might open – then uncertainty in the position and momentum of a single sodium ion could be enough to cause that neuron to either fire or not.
A tiny bit of uncertainty – a single particle – and yet, a big effect. In our brains, signals are amplified. That amplified signal could propagate through the entire brain and result in very different futures.
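As a cartoon of that knife-edge, here’s a toy threshold unit in Python. Every number is invented, and real neurons are vastly messier, but the amplification logic is the same: a nudge far smaller than the signals themselves decides the outcome.

```python
# A cartoon of a neuron poised at its firing threshold (all numbers invented).
THRESHOLD_MV = -55.0              # assumed firing threshold, in millivolts
membrane_potential_mv = -55.0001  # poised a hair below threshold

def fires(potential_mv: float, nudge_mv: float) -> bool:
    """Return True if the nudged potential reaches the threshold."""
    return potential_mv + nudge_mv >= THRESHOLD_MV

print(fires(membrane_potential_mv, 0.0))      # False: the neuron stays quiet
print(fires(membrane_potential_mv, 0.0002))   # True: a tiny nudge flips the outcome
```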
Indeed, data exists showing that neurons occasionally fire even when there seems to be no cause. Many neural firings can be predicted; some cannot. Which certainly isn’t proof of the above, but is consistent with it.
Some scientists believe that the uncertainty of quantum mechanics would be sufficient for us to have free will … but “unpredictable” is pretty far from “able to make a choice.” A quantum coin flip like the direction of an electron’s spin is definitely unpredictable, but I don’t think many people would argue that this means that electron gets to make a choice.
#
So then, lastly, there’s magical thinking.
The belief that our brains are unpredictable, and that we have free will. That somehow, through some inexplicable phenomenon that goes against everything the scientific community has learned so far about the behavior of matter, the inner workings of consciousness can affect the internal probabilities of neuronal behavior.
Does consciousness make us more than unpredictable puppets? Do our thoughts arise not just from the flow of salt ions inside our brains, but also because we are making choices?
This is the sort of free will that most people intuitively believe in. It’s also what I believe, despite everything.
#
Okay, sure. Free will is fine and dandy, but the title of this essay promised a discussion of “Newcomb’s paradox,” which we still haven’t even described!
But we’re close, I swear!
#
I was volunteering with the high school running team – by which I mean, we were out running and talking about philosophy, because what else are you going to do to distract yourself from the fact that it’s quite uncomfortable to tromp along for seven or eight miles on a hot day – and we were talking about “Roko’s basilisk,” the thought experiment in which an advanced AI immediately exterminates everyone who could have helped to bring it into existence sooner but didn’t.
It’s a somewhat misguided thought experiment – as I’ve written previously, the main reason to fear a self-serving, super-powerful AI is that it would prefer an atmosphere without oxygen – but also interesting enough to occupy our discussion for a goodly number of miles.
“Sure, the threat of punishment,” I huffed out, “might induce someone to work. To create your AI. But the AI only enters existence once. By then, it’s here. It gains nothing by revenge. You’ve read Hamlet, yeah?”
As it happens, only the seniors had. But most of the others had watched The Lion King, so that’s pretty close.
“Revenge is weird, because it’s this …” and, as too often while having these discussions, I found myself wishing that I were in better shape so that I wouldn’t feel so short of breath, “… costly activity, makes your life worse. You want people to think you’d seek revenge, because that keeps them from crossing you. But it’s just the worst if you seem meek, like Hamlet, or Simba … so everyone thinks you won’t seek revenge, and they still push you around … but then you actually do devote your life to trying to get revenge.”
And that is when one of the runners asked about Newcomb’s paradox. Whether an AI that was created because people feared punishment would be free to choose to spare us.
At the time, I said just, “What’s Newcomb’s paradox?”
The runner described it.
“Well,” I puffed, “Newcomb’s paradox is wrong.”
#
Newcomb’s paradox is a thought experiment in which a person arrives at a circus tent and is immediately given a non-destructive brain scan that predicts that person’s future choices. Then the person is brought to a table where there are two boxes. One box is transparent and contains $1,000. The other box is opaque.
The person is told that they will be allowed to take one or both boxes out of the tent with them, but that the amount of money inside the opaque box was determined by the brain scan. If the person was predicted to take only a single box, then the opaque box has $1,000,000 … but if the brain scan revealed that the person would take both boxes, then the opaque box contains nothing.
Within these constraints, should the player take both boxes?
I mean, yes, the game claims that if the player takes a single box, they get a million dollars, and that if they take both, they only get a thousand. But if the prediction was that the player would take only a single box, then there would already be a million dollars inside the opaque box by the time they’re making the choice, and so there’d be no reason to forgo grabbing the extra thousand dollars.
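If you want the arithmetic, here’s a minimal sketch; the predictor’s accuracy is a knob I’m adding for illustration, since the puzzle itself treats the prediction as essentially perfect.

```python
# Expected payout for each strategy, given a predictor that is right
# with probability `accuracy` (a parameter introduced for illustration).
def newcomb_evs(accuracy: float) -> tuple[float, float]:
    # One-boxing: predicted correctly -> $1,000,000; mispredicted -> empty box.
    one_box = accuracy * 1_000_000
    # Two-boxing: predicted correctly -> $1,000; mispredicted -> $1,001,000.
    two_box = accuracy * 1_000 + (1 - accuracy) * 1_001_000
    return one_box, two_box

for acc in (0.5, 0.9, 0.999):
    one, two = newcomb_evs(acc)
    print(f"accuracy {acc}: one-box ${one:,.0f}, two-box ${two:,.0f}")
```

Once the boxes are sealed, two-boxing always gains the extra thousand; but against a reliable predictor, one-boxing has the higher expected payout. That clash is what philosophers argue about.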
#
Newcomb’s paradox has been discussed extensively by philosophers, which baffles me! I’m the sort of person who will gleefully explain the intricacies of quantum mechanics or evolutionary biology – pedantic excess clearly doesn’t bother me! And yet. I’m flabbergasted that professional philosophers have wasted so much time on this because Newcomb’s paradox is so obviously wrong.
The first way that Newcomb’s paradox is wrong is simple, and honestly rather dull. The paradox is wrong because metaphorical language lets us say nonsensical things. It’s perfectly valid to say “This statement isn’t true,” linguistically, but it has no meaning, semantically.
When the boxes in Newcomb’s paradox are prepared, the problem presupposes that you live inside a world where classical physics can be used to model neural behavior. In this world, you have no free will. But then, when you are choosing a box, the paradox presupposes that you live inside the unpredictable, magical-thinking, free will world.
It’s a bit silly to discuss a “paradox” that relies on changing the laws of physics midway through the scenario.
#
The second way that Newcomb’s paradox is wrong is more compelling – it’s better to tell it as a story.
A grim story: for the paradox to work, the stakes need to be higher.
In this story, your brother is dying. And there is a wizard who lives in a dark spire atop the mountains, looming over your village.
Oh, wait, it gets even worse: the cost to visit this wizard is that you’ll have to cut off your own left hand. You can only visit this wizard’s spire once, and it is horrible to do so.
But you love your brother, so you do it. You make the trek up to the mountains. You leave your bleeding hand on the scale outside the door. The doors creak open – apparently your sacrifice was accepted. After binding your bloody arm, you climb the staircase and enter the antechamber of the wizard.
Once there, the wizard shows you the cure to your brother’s ailment and puts it into a locked chest, then tosses the key out of the window. You lunge forward, grabbing with your one remaining hand … no use. You look down and see the key lying there, at the base of the tower, in the snow.
But then the wizard speaks.
The wizard says, “When you arrived at my spire, I petitioned the spirits to determine whether you were pure of heart. And I will not tell you what I learned, but I will tell you this: if I was told that you are pure of heart, I put both your brother’s cure and a magic gauntlet into this other chest. And if the spirits said that you were selfish, I left that other chest empty.”
“And here is the key,” the wizard says with a flourish, then flings this second key out the window. Again, you lean out to see – there, at the base of the tower, both keys glitter in the snow.
The wizard goes on, “You cannot look into the chests. You will not know what is inside them until you leave. And you may leave here with zero, one, or two of these chests. But the spirits knew that if you were pure of heart, you would take only a single chest.”
#
In this story, the problem is one of trust.
Do you trust that the wizard is telling you the truth?
Do you trust your own interpretation of yourself – are you pure of heart? – and do you trust that whatever eldritch voices emanated from the spirit realm would agree? Or will the wizard perhaps know that you are the sort of person who normally would have grabbed both chests, even if on this occasion you restrain yourself and take only the mystery chest … which might therefore be empty?
For the paradox to be interesting, the person has to need the visible prize. In the traditional phrasing of Newcomb’s paradox, with the visible prize being a thousand dollars, the paradox would only work if, for instance, the person making a choice needs a medical treatment that costs a thousand dollars, and without the money in the box they’d have to forgo the treatment.
The other problem with the traditional phrasing is that $1,000,000 and $1,001,000 are essentially the same number. They’re both ten to the sixth power. I get that it might seem a little shabby to claim that the minuscule vagaries of quantum mechanics are important for the inner workings of brains, and then also claim that adding a thousand dollars to a million dollars leaves the prize amount essentially unchanged, but so it goes. Context matters – there’s no single right way to approach problems in science or mathematics, let alone philosophy.
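A quick check of that order-of-magnitude claim:

```latex
% Both prizes sit at essentially the same point on a log scale:
\log_{10}(1{,}000{,}000) = 6, \qquad \log_{10}(1{,}001{,}000) \approx 6.00043
```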
#
In the wizard story, if I were telling it, the hero who had traveled to the spire and sacrificed a hand for the sake of entry would take only a single chest.
Unlike the traditional answer given for Newcomb’s paradox, though – you should take only the opaque box, because why would you risk losing a million dollars to gain a mere thousand dollars more? – the hero would take only the chest with the visible cure. As though the answer to Newcomb’s paradox were to take only the thousand dollars. Because even though it would be really nice to have that magic gauntlet, within the context of the story, someone who is pure of heart would take only a single chest, and the cure for the brother’s ailment is too important to leave behind.
The explanation for why the hero should take only that single chest – the chest that definitely has no magic gauntlet inside! – is that humans are irrational. Our irrationality is actually a fundamental virtue of our species. Indeed, the world is in so much trouble right now largely because it is much easier for humans to behave rationally when they are anonymous, and the modern world allows for an awful lot of anonymous action.
When we’re inside cars, it’s difficult for other people to see us. To recognize exactly who we are inside that contraption of metal and glass. And so most people are much more likely to behave rudely while driving than while walking down a sidewalk.
While we’re online, we’re anonymous. While we’re acting as a cog in the machinery of an enterprise much larger than ourselves, we’re anonymous. From a hedge fund office, you’ll have no face-to-face contact with the people getting fired if you restructure a far-off corporation.
Also, some people are inclined to behave more rationally than others. In the modern world, it’s become easier for such people to find each other and construct rational systems together. The threat of social opprobrium, which once upon a time might have dissuaded someone from strip mining a beautiful mountainside, is not going to have any sway if that person’s social circle consists entirely of other rational extractive capitalists, rather than the local community of people who will have to witness the devastation.
I often disagree with Daniel Dennett’s arguments about free will, but I really like his description of the evolutionary biology underlying human emotions and our fundamental irrationality:
“When evolution gets around to creating agents that can learn, and reflect, and consider rationally what they ought to do next, it confronts these agents with a new version of the commitment problem: how to commit to something and convince others you have done so. Wearing a cap that says ‘I’m a cooperator’ is not going to take you far in a world of other rational agents on the lookout for ploys.
“According to [Robert] Frank, over evolutionary time we ‘learned’ how to harness our emotions to the task of keeping us from being too rational, and – just as important – earning us a reputation for not being too rational. It is our unwanted excess of myopic or local rationality, Frank claims, that makes us so vulnerable to temptations and threats, vulnerable to ‘offers we can’t refuse,’ as the Godfather says. Part of becoming a truly responsible agent, a good citizen, is making oneself into a being that can be relied upon to be relatively impervious to such offers.”
Perhaps you are aware of the game theory scenario called “The Prisoner’s Dilemma.” In this thought experiment, two people have been accused of conspiracy and are being interrogated separately by a totalitarian regime. They’re innocent, but in this dystopian scenario, there’s no avoiding punishment once you’ve come under suspicion from the state. The only question is exactly how severe your punishment will be.
If they both maintain their innocence, they’ll be sent to prison for three years each. If they both confess, they’ll be sent to prison for ten years each. But if someone confesses while the other claims to be innocent, the person who confessed will be sent to prison for only a single year, while the person who claimed to be innocent will be sent to prison for twenty years.
| | First person says, “We’re innocent!” | First person says, “We’re guilty” |
| --- | --- | --- |
| Second person says, “We’re innocent!” | First person gets 3 years, Second person gets 3 years | First person gets 1 year, Second person gets 20 years |
| Second person says, “We’re guilty” | First person gets 20 years, Second person gets 1 year | First person gets 10 years, Second person gets 10 years |
In the Prisoner’s Dilemma, a rational person would confess. No matter what the other person chooses to do, your own time in prison will be shorter if you say, “We’re guilty.” It’s kind of tragic, because the shared outcome is much worse than if both people maintained their innocence (an extra seven years in prison, each!), but it’s the rational outcome.
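Here’s that dominance argument as a minimal sketch, using the sentences from the table above (fewer years is better):

```python
# Sentences from the table: (first person's years, second person's years),
# keyed by (first person's plea, second person's plea).
years = {
    ("innocent", "innocent"): (3, 3),
    ("innocent", "guilty"):   (20, 1),
    ("guilty",   "innocent"): (1, 20),
    ("guilty",   "guilty"):   (10, 10),
}

# Whatever the second person says, the first person serves less time by
# confessing -- that is what makes "guilty" the dominant (rational) choice.
for other in ("innocent", "guilty"):
    deny    = years[("innocent", other)][0]
    confess = years[("guilty", other)][0]
    print(f"other says {other!r}: deny={deny} years, confess={confess} years")
```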
In the real world, however, many humans would maintain their innocence. Many humans will make irrational choices, for emotional reasons.
And, sure, if you’re feeling especially curmudgeonly, I suppose that you could concoct arguments about how saying “We’re innocent” would secretly be rational – you could modify the chart of choices and prizes to include how people look at you afterward, or how it feels to have condemned someone else to extra time in prison – but, really, this would just be ad hoc mathematical handwaving to represent the fact that humans have emotions, and our emotions sway us to make choices that are mathematically incorrect.
This human emotional response underlies most people’s intuitive response to Newcomb’s paradox (setting aside for a moment that the problem is wrong, for the reasons described above). We imagine that the wizard would look at us funny if we grabbed both chests, even after the wizard had put both the cure and the magic gauntlet into that second chest because the spirits had sworn that we were pure of heart. And so we make an effort to appear as though we really are good, that we really do deserve the good things that have come our way, and we take only a single chest.
In Newcomb’s original story, with the money and no sick sibling, we take the million dollars and feel satisfied. Even though this is not rational.
And thank goodness for that! Since our emotions help us feel good when we act kindly towards one another – even when a hard-hearted analysis of the numbers might suggest that we could maximize our short-term gains by choosing otherwise – much of the time, we are nice to each other.
As in the Prisoner’s Dilemma, our irrationality helps us build a better world.
