"The Machine That Won the War"
A deep dive into Isaac Asimov's 1961 short story, which holds some clues about the opportunity (or threat) of AI.
I recently read a copy of Isaac Asimov’s “The Machine That Won the War,” and while the two hashtag-worthy words artificial intelligence are not mentioned anywhere within the short story, I couldn’t help but connect the dots.
The Machine in the story is called Multivac (remember, this was written in 1961) and it’s something like Alan Turing’s Bombe machine that decoded the Nazis’ Enigma, popularized by the 2014 film The Imitation Game with Benedict Cumberbatch (think room-sized computer of the 1940s, but much, much bigger-er).
For the first time in a decade, technicians were not scurrying about the vitals of the giant computer, the soft lights did not wink out their erratic patterns, the flow of information in and out had halted.
A conversation ensues among the three main decision-makers for Multivac: Lamar Swift, John Henderson, and the “imperturbable” Jablonsky, Chief Interpreter for Multivac. Or rather, you could say Swift, Henderson, and Jablonsky are the departmental lead reviewers of the decisions Multivac was making (which targets to strike and when, feints, resources, production capacity, and so forth) in support of the war effort against an undescribed alien race, the Denebians.
The war had not been an easy one, not with interstellar warps that could “swallow a planet clean, if aimed correctly.” Even a supercomputer like Multivac couldn’t shorten the many long years of toil and danger and sacrifice, of entire generations looking up at the stars not with hope but only anxiety.
But the Denebians are finally defeated, wiped out, unable to threaten humanity ever again. Humanity, spread across the Earth, the Moon, Mars, and Titan, celebrates a once-in-a-species victory. “Thanks to Multivac,” says Swift.
Jablonsky interjects: "Well, that's what they say."
Swift asks if he’s jealous. “Because they're shouting for Multivac? Because Multivac is the big hero of mankind in this war? . . . What's that to me? [says Jablonsky] Let Multivac be the machine that won the war, if it pleases them.”
The secrets start to spill. Henderson is next, confessing his own sense of guilt despite the ongoing victory celebrations: "Multivac had nothing to do with victory. It's just a machine."
“No better than the data fed it,” he adds, as he explains the hundred subsidiary computers used to feed data to Multivac, as well as problems with logistical delays in data retrieval from Titan and the worries about unexpected bias should they use such data.
‘Do you know,’ said Henderson, ‘to what extent data concerning our production capacity, our resource potential, our trained manpower, everything of importance to the war effort, in fact had become unreliable and untrustworthy during the last half of the war? Group leaders, both civilian and military, were intent on projecting their own improved image, so to speak, so they obscured the bad and magnified the good. Whatever the machines might do, the men who programmed them and interpreted the results had their own skins to think of and competitors to stab. There was no way of stopping that. I tried, and failed.’
Pressed further, Henderson reveals that he had been manipulating the data fed into Multivac. But Jablonsky, as ever, seems unperturbed. In fact, he smiles, and admits that Multivac, warehouse-sized machine that it is, was in truth not even in working order.
‘Not in working order?’ said Swift.
‘Not really. Not reliably . . .’
Swift leaned back in the chair and stretched his legs out before him. ‘Such revelations. It turns out then that the material handed me to guide me in my decision-making capacity was a man-made interpretation of man-made data. Isn't that right?’
And Swift, the man laden all those long years of war with the weightiest responsibility of all—of delivering every single final report from Multivac to the political say-sos—reveals the last and greatest secret. (But I’ll let you read it to find out.)
My point being that Asimov’s story reminds me of our own current hullabaloo over AI. Everything from predictions of Terminator-style Armageddon, to chuckles over the latest AI-generated meme on social media, to the use of AI in targeting systems for intercepting missiles (a task AI is kind of perfect for, i.e., tracking hundreds of moving targets all at once), not to mention the ongoing use of AI by the Israel Defense Forces to develop bombing targets in Gaza.
Yes, the age of AI is here, but how worried should we really be?
I refer to this quote from an interview with another science fiction writer, Ted Chiang, featured in the Los Angeles Review of Books. When asked whether AI systems could one day have human characteristics, Chiang states:
It’s like imagining that a printer could actually feel pain because it can print bumper stickers with the words “Baby don’t hurt me” on them. It doesn’t matter if the next version of the printer can print out those stickers faster, or if it can format the text in bold red capital letters instead of small black ones. Those are indicators that you have a more capable printer but not indicators that it is any closer to actually feeling anything.
And that’s how I think of AI, which is perhaps something like how Asimov thought of it too, or of relying on any machine to do our thinking for us. People might even believe that a very special, very expensive “thinking” machine did, indeed, win an interstellar war. But what are people’s beliefs, humanity’s beliefs, if not chock-full of flaws and biases?
An AI program, whatever its use and whoever designs and sells it, still needs to be set up and put to work by an actual human being. And that is the era we are living in, which, come to think of it, sounds a lot like the nuclear era, or the era of precision-guided munitions, or the era of satellites and global positioning systems, or the era of radio, or the era of television, or the era of gunpowder. . . . This is to say we’ve developed a new and slightly worrying technology, yes, but the worry lies not in what that technology might do of its own volition, but in what we, the same ol’ human beings we’ve always been, might do with it.