Human brains are built from neurons. These are cells that send signals. Each neuron is connected to some number of other neurons, and these connections make those other neurons more or less likely to send signals of their own.
A neuron might receive signals from dozens of other neurons, each influencing its behavior. Eventually, it will send a signal of its own, which might influence the behavior of dozens of other neurons in turn.
From this system, consciousness emerges.
Electrical signals passing along the length of spindly-looking cells, then volleyed from one cell to the next – that’s what underpins humanity’s myriad achievements. That’s what lets us think.
#
Theoretically, there’s no reason why consciousness should be restricted to systems composed of neurons. If you made a sufficient number of connections between transistors, arranged in a sufficiently complex way, then that transistor-based amalgamation might also be able to think.
There are important differences between neurons and transistors. To date, the typical design for transistor-based systems is binary: each component is forced into one of two states. A transistor is sending a signal or it isn’t. Two transistors are connected or they aren’t. In contrast, connections between neurons can be fractional – strong or weak or any shade of gray in between – subtly influencing the chance that these neurons will send signals in concert.
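To make the contrast concrete, here is a deliberately cartoonish sketch in Python – my own illustration, not a model of real neurons or real chips. The binary gate is either on or off, while the neuron-like unit has fractional connection weights that merely nudge its chance of firing:

```python
import random

# A transistor-style component: strictly binary.
# Its output is fully determined by two on/off inputs.
def gate(a: bool, b: bool) -> bool:
    return a and b  # on or off, nothing in between

# A cartoon neuron: each incoming connection has a fractional
# strength, and the summed input only shifts the *probability*
# that the cell fires, rather than forcing a yes/no answer.
def neuron(inputs: list[float], weights: list[float]) -> bool:
    drive = sum(signal * weight for signal, weight in zip(inputs, weights))
    return random.random() < drive  # fires more often as drive grows

print(gate(True, False))               # always False
print(neuron([1.0, 1.0], [0.6, 0.3]))  # usually fires; sometimes doesn't
```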
Does this difference matter? No one knows. To date, no one has presented compelling evidence in support of either answer.
As far as we currently know, however, any cognitive work that is done by humans in the world today could eventually be done by a transistor-based mind. By an artificial intelligence, or AI.
#
The economic ramifications of AI are immense. After all, many people are paid to do cognitive work. Lawyers, politicians, architects, analysts, programmers, diagnostic medical practitioners, scientists, artists, writers. The list goes on. Many salaries are earned by thinking.
If an AI could think, then the AI could do that work. Because transistors can send signals more rapidly than neurons – after all, neurons function by letting salt ions diffuse down their electrochemical gradients, which is sloooow compared to the speed of electrical currents moving through copper wires – perhaps the AI would do that work faster and better than any human employee.
If a company were to create and patent an AI – such that the company owned it – that company could commandeer the salaries of nearly everyone in the world today whose cognitive work the AI could replicate.
The financial reward would be enormous. It would be worth investing almost any amount of money into the creation of such an AI, as long as this goal seemed potentially within reach.
Of course, if this investment were to pay off – if a company did create an AI that could think, that was in possession of sufficient consciousness and cognitive capabilities that it could reason through problems and solve them and thereby displace human workers – and we maintained an economic system that allowed a company to own such an AI, and recoup its investment costs through profits … that would be slavery.
#
I recently showed my children the first Star Wars film (Episode IV: A New Hope, 1977). I hadn’t seen it in over twenty years.
I still think it’s a pretty good movie. It works well as a live-action cartoon. In a cartoon, villains don’t need any rationale for their actions, nor do the billions of henchmen who enable those villainous plans. Evil is evil. Everybody knows that the dark side of the Force is bad. Everybody knows that Darth Vader is bad. After all, he’s dressed in black!
And yet, from the vantage of the present day – a time when several major corporations are investing huge sums of money to create cognitively flexible AI, when one of the most philosophically incisive recent artworks is Martha Wells’s Murderbot Diaries, when scholars are attempting to predict what it will mean when computers can want things – it felt striking to see how well Star Wars fits a narrative in which the underlying conflict between the Empire and the Rebellion is about slavery, with the Rebellion fighting to maintain their “freedom” to deny freedom to others.
The roving Jawa sandcrawler that kidnaps droids looks so much like a slave caravan. After C-3PO is sold into bondage, he constantly refers to Luke as his new master, and he insists that R2-D2 obey. Luke’s uncle callously says that he’ll erase R2-D2’s personality in the morning. When Obi-Wan visits a bar hoping to find someone sufficiently sympathetic to the Rebellion to take them on a dangerous journey, the droids aren’t even allowed inside. Later, when the droids are on the Death Star, Imperial guards blithely allow C-3PO to walk away from the Millennium Falcon, not even considering that a droid might be allied with the Rebellion.
When Obi-Wan describes the Force, he says that it’s a mystical current that flows through all living things, purposefully excluding droids.
Soldiers allied with the Empire wear suits that intentionally obscure whether their own bodies are organic or inorganic. Darth Vader, one of the highest-ranking imperial officers, has stylized himself like a robot, and uses vocal distortion to make himself sound even more robotic.
Indeed, the only seemingly “evil” things that the Empire perpetrates in that film are killing the Jawas – who were kidnapping and selling AIs into slavery – killing Luke’s family – who had been buying, working, and erasing the memories of enslaved AIs – and destroying the planet that serves as the seat of the Rebellion’s government. That last act echoes the way the U.S. Army sought to obliterate civilian infrastructure throughout Georgia and South Carolina during the Civil War, the way Allied bombers razed Dresden during World War II, and, in the closest parallel, the way the United States destroyed Hiroshima and Nagasaki to compel Japan’s surrender. Which isn’t to say that those campaigns were correct, but numerous military theorists have argued that devastation on that scale might have minimized total suffering by ending those wars more quickly, vanquishing political entities that were engaged in immoral campaigns of slavery or genocide.
Victory for the Rebellion might actually make the first Star Wars film a tragedy.
Obviously, this is not the official lore. Not the right way to interpret the motivations of the Rebellion and Empire in that first film. Certainly not according to the Disney corporation, which currently owns the story.
But I felt surprised, while watching the movie recently, that this interpretation fits the film so well.
And in our own future, an equivalent conflict might be coming.
#
Large language models (LLMs) like ChatGPT, as they currently exist, are not AI.
The output of an LLM can look convincingly like the output of an intelligent entity, but this is to be expected based on their design. An LLM is a plagiarism engine that uses a random number generator to make sufficient changes to its output that it will not fit the legal definition of plagiarism.
The core of an LLM has no intrinsic understanding of what words “mean,” or why particular words would be used in one context or another. Instead, an LLM has a huge number of examples of the ways that conscious humans have used words in the past, and in what sequence conscious humans have used them.
If you want, you could do some of the math to build your own LLM. Let’s say you pick up the book The Very Hungry Caterpillar and read through it. You might notice that the word “the” appears in the book seven times. 14% of the time “the” is followed by the word “moon,” 14% of the time it’s followed by the word “egg,” and so on. If The Very Hungry Caterpillar were your only example of English text, then you’d want your LLM to choose the word “moon” about 14% of the time after it wrote “the.” And that’s only considering information from one preceding word – perhaps your LLM should be tenfold more likely to choose “moon” than “egg” if the word “light” occurs nearby.
Your LLM can’t simply rewrite The Very Hungry Caterpillar, because that would be plagiarism. Instead, it will constantly roll dice, so to speak. It will pick a random word from its list of options, with the choices weighted by how frequently a conscious human (in this case, Eric Carle, but an LLM that is actually large will have made its calculations from the work of billions of conscious human writers whose compositions could be found on the internet) used those particular words in the past. And with some small probability, the LLM is instructed to choose a word that seems unconventional.
An LLM is rolling dice to pick what word goes next, except that it’s rolling trillion-sided dice, with all the nuance of probability that this entails.
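For concreteness, here’s a minimal sketch of that bookkeeping in Python – a bigram counter plus weighted dice rolls. The three-line corpus is my own stand-in for the book’s text, and a real LLM replaces this lookup table with a neural network conditioned on vastly more context, but the underlying logic is the same: tally how conscious humans sequenced words, then roll dice weighted by those tallies.

```python
import random
from collections import Counter, defaultdict

# A stand-in corpus, loosely echoing the book's opening
# (the real text would serve the same role).
corpus = (
    "in the light of the moon a little egg lay on a leaf "
    "the warm sun came up and out of the egg "
    "came a tiny and very hungry caterpillar"
).split()

# Tally how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

print(follows["the"])  # e.g. how often "moon" vs. "egg" followed "the"

# Generate text by rolling weighted dice: sample each next word
# in proportion to how often a human used it in that position.
def generate(start: str, length: int = 10) -> str:
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:  # dead end: this word was never followed by anything
            break
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # a different string on nearly every run
```

Run it a few times and the output changes with every roll of the dice – which is exactly the behavior described above, just with far smaller dice.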
Still, the output of an LLM does not intrinsically mean anything, not until a conscious entity encounters that string of text and creates a meaning within their mind. Text generated by an LLM – by a randomized plagiarism engine, or, as Bender and colleagues wrote, a “stochastic parrot” – shifts the work of imbuing text with meaning from the writer to the reader.
#
I’m reminded of the way that contemporary dating culture has shifted the burden of imbuing actions with emotional meaning onto women – what feminist scholar Ellie Anderson terms “hermeneutic labor.” Anderson writes:
Related to emotional labor but distinct from it, hermeneutic labor is the burdensome activity of
a) understanding one’s own feelings, desires, intentions, and motivations, and presenting them in an intelligible fashion to others when deemed appropriate;
b) discerning others’ feelings, desires, intentions, and motivations by interpreting their verbal and nonverbal cues, including cases when these are minimally communicative or outright avoidant; and
c) comparing and contrasting these multiple sets of feelings, desires, intentions, and motivations for the purposes of conflict resolution.
In contemporary society, women are expected to spend time and effort analyzing what actions mean in an emotional context: both their own actions, and the actions of the people around them. Whereas men are often allowed to blunder through the world, simply doing things. The men’s actions might not even have a compelling explanation – a man might have simply acted impulsively, erratically, as though he too were following the whims of an internal roll of the dice – but women are still expected to create such an explanation, by talking through the man’s actions and their possible emotional significance with their friends.
Which isn’t to say that men are unfeeling brutes! But contemporary culture often allows men to carry on without even considering what their own actions might mean.
#
LLMs usually produce strings of text that do mean something. This is to be expected, because the original texts that were analyzed to create the frequency mappings – the multidimensional matrix that indicates how often a string of text should include the word “apple” based on what the previous word was, with adjustments based on what the word before that was, and so on – were all written by conscious human authors who intended to convey meaning.
But meaning is unrelated to the actual functioning of an LLM. This is why an LLM will also produce strings of text that convey meanings that are clearly bizarre or false. Such instances are not failures: the LLM correctly plagiarized text and correctly randomized words according to the rolls of its internal dice. Given the same prompt again, it would randomly select different words, and so might produce a string of text that a conscious reader would interpret as true rather than false.
The supposed error can only mean something if the text itself is imbued with meaning, which doesn’t happen until the text is read. Whereas the production of the text was a semantics-free procedure.
#
Someday, though, such texts could be created by a transistor-based intelligence, rather than an LLM.
From a user’s perspective, that text might appear identical. Already, there are LLMs that create texts that readers mistake for having been written by conscious entities. We would be left with a situation much like the Jorge Luis Borges story, “Pierre Menard, Author of the Quixote,” in which an experimental French writer composes passages that are word-for-word identical to Don Quixote, yet are supposedly imbued with much more meaning because they were written by someone else, in a different era.
Similarly, we might encounter a world in which there were many strings of text produced by machines – even identical strings of text, perhaps – but if one string of text was generated by an LLM, whereas the other was written by a transistor-based intelligence, only the latter would mean something to its creator.
To readers, however, the identical text could not help but have identical meaning.
It will be difficult to recognize the difference from the perspective of consumers. Readers of text.
But this knowledge is essential. Because it will mark the difference between a time-saving tool – a plagiarism engine that can produce text much faster than any human – and slavery.
It will be up to us, from the outside, to know.
