
It’s 2032 and you’re heading into town in your self-driving car. You are alone, asleep in the back seat. Unbeknownst to you, a drama looms ahead. A child runs in front of your autonomous vehicle without warning. The car brakes, but since it is going at a fair pelt, it must swerve as well—to the left or right.

On the left-hand side of the road another car is approaching, driven by Jasmine Jones, a 23-year-old computer programmer who works for Waymo, owned by Google’s parent company, Alphabet (as it happens, Waymo manufactured your car). Jones has just started her first job, having completed her PhD in Gender Truth. She and her boyfriend Ignatius Pope have just set the date for their marriage, which is to take place in six months’ time, and put down a deposit on a home. Ignatius works as an algorithm formulator for Waymo’s main competitor, Tesla, the market leader.

Walking toward your vehicle on the right-hand sidewalk is retired philosopher and pro-life activist Fred Taylor, who is 72 and has just that day received a diagnosis of prostate cancer. He is a widower and has three adult children, all living long distances away, who call him occasionally but visit rarely. Behind Prof. Taylor is a green hedge; on Jasmine Jones’s side, a high concrete wall.

The computer responsible for driving your car, using GPS, lidar, and other sensors, divines all this information in a split nanosecond and steers rightward, knocking down Fred Taylor. He will later be taken to hospital and pronounced dead on arrival. You awake to find your vehicle shuddering to a halt with its nose in a Portugal laurel hedge.

The self-driving car is coming. We anticipate it with a mixture of bemusement, disquiet, and excitement, all converging in disbelief: It’s not possible, not really. But it is not just possible but a virtual—so to speak—certainty, within perhaps a decade.

Even now we are just beginning to wrestle with the implications. Although there has been a scatter of books on the topic, it has yet to surface in the neocortex of Western society. Academic papers on the implications of artificial intelligence (AI), for example, address the implicit threat to human labor, or wrestle with questions like, “Will intelligent robots acquire human rights?” Even when papers touch on self-driving vehicles, they tend to talk about traffic-flow models or how it will affect the concept of responsibility to supplant human decision-making with cybernetically-formulated algorithms. The occasional study deals with the issue of “artificial moral agents” (AMAs), but these tend to be accompanied by a darkish innuendo. These studies hint that human beings are so morally deficient that it would be beneficial to replace them with entities with artificial morality that implement only what is “good” about humanity.

An even more challenging issue lurks downstream from the responsibility question: How will humanity cope in a world in which there is no recourse to justice, reckoning, or even satisfactory closure after algorithms cause death or serious injury? 

A fully autonomous vehicle is one capable of driving on its own, without human intervention. Humans carried in such vehicles are always passengers. Self-driving vehicles are an example of a new category of machine, in that they have access to public thoroughfares on much the same basis as humans: without constraint of track or rail—hence “autonomous.” Computer-generated movement of machines is a brave initiative for all kinds of reasons, and will necessitate radical changes in laws and cultural apprehension.

Self-driving cars use sensors, cameras, and GPS to mediate between the world and the computer. In the event of a situation such as that described above, the car will make a judgment. But how—by what criteria?—is the software to be programmed to make these judgments? How, as dictated by its programming algorithm, should the computer prioritize the information it receives? And will we be able to come to terms with these “decisions” when the outcome involves the death or serious injury of a loved one?

In the approaching dispensation, the word “moral” may need to be amended or replaced. The almost universal human experience of morality is not capable of being comprehensively codified and tabulated by the computer. If we remove moral action from the remit of human beings and vest it in computers, in what sense will we be able to go on calling this morality? Will it be sufficient to incorporate into the algorithm some formulaic version of John Stuart Mill’s principle of utility, or Immanuel Kant’s categorical imperative?

Morality as we currently understand it, for example, incorporates aspects like “respect” and “duty,” which by definition tend toward unselfishness. Can these become part of the self-driving algorithm, or will an automated car, in the manner of a loyal watchdog, be programmed to protect primarily its occupants rather than bystanders or occupants of other vehicles? How far might this be taken, and how transparent will the outcomes be? Will the algorithm take life-expectancy into account? What about employment status? Sexual orientation?
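It is worth pausing to see how crude any such codification must be. What follows is a minimal sketch, in Python, of what a “formulaic version” of the principle of utility might look like inside a swerve decision. Everything in it is hypothetical: the factors, the weights, and the function names are invented for illustration, not drawn from any manufacturer’s actual code.

# Hypothetical sketch only: a crude "utility" scoring of swerve options.
# The factors and weights are invented for illustration; no manufacturer
# has published such an algorithm.

from dataclasses import dataclass

@dataclass
class Person:
    age: int                 # used here as a proxy for remaining life-expectancy
    is_occupant: bool        # is this person inside the vehicle?

@dataclass
class SwerveOption:
    label: str                    # e.g. "left", "right", "brake only"
    people_at_risk: list          # the Person objects likely to be struck
    collision_probability: float  # estimated chance of impact, 0.0 to 1.0

def expected_harm(option: SwerveOption, occupant_weight: float = 1.0) -> float:
    """Sum a crude 'expected harm' score: probability of impact times
    remaining life-years, with occupants optionally weighted differently
    (the loyal-watchdog question raised above)."""
    harm = 0.0
    for person in option.people_at_risk:
        life_years = max(0, 80 - person.age)  # arbitrary life-expectancy proxy
        weight = occupant_weight if person.is_occupant else 1.0
        harm += option.collision_probability * life_years * weight
    return harm

def choose(options: list) -> SwerveOption:
    """Pick the option with the lowest expected-harm score."""
    return min(options, key=expected_harm)

# The scenario from the opening: swerve left toward the young driver,
# or right toward the elderly pedestrian.
left = SwerveOption("left", [Person(age=23, is_occupant=False)], 0.9)
right = SwerveOption("right", [Person(age=72, is_occupant=False)], 0.9)
print(choose([left, right]).label)  # prints "right" under these invented weights

Even this toy version makes the point: someone had to choose which factors count, how they are weighted, and where occupants rank against bystanders, and every one of those choices is a moral judgment smuggled into code.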

Jeff Brown, a “futurist” and “high-technology expert,” recently hazarded a broad-stroke answer to such questions. “We lose millions of people who die through traffic accidents on an annual basis,” he noted.

Ninety-four percent of those deaths are caused by human error. We can eliminate ninety-four percent of those worldwide deaths. So, while this is an incredibly complex ethical dilemma that we have to solve—and I have a theory about that: that we may actually leave that to the AI to decide, because it’s such a polarizing issue. And, after all, one of the extraordinary things about artificial intelligence is that you can feed it an incredible amount of information and it will make a very accurate decision, but it can’t tell you how it got there. It could consider a thousand different variables and recognize patterns that humans just can’t possibly understand, and come to the correct conclusion. But there’s no way for us to understand why or how.

Brown’s response is remarkably clear—and utterly shocking: We will leave it to the computers to decide, and won’t understand or seek to understand the underlying logic being applied. Self-driving cars, though safer in many respects, will become inscrutable to users, pedestrians, and other adjacent humans. This is in part because of the complexity of the technology and its guiding mathematics, and in part because technological systems operate differently from humans in pursuit of similar ends.

In his 2018 book New Dark Age: Technology and the End of the Future, James Bridle described nuclear fusion experiments conducted by the Californian research company Tri Alpha Energy. He related how the company had developed what it called an “Optometrist Algorithm,” combining human and machine intelligence as an optimum method of problem-solving.

“On the one hand is a problem so fiendishly complicated that the human mind cannot fully grasp it,” he writes, “but one that a computer can ingest and operate upon. On the other is the necessity of bringing a human awareness of ambiguity, unpredictability, and apparent paradox to bear on the problem—an awareness that is itself paradoxical, because it all too often exceeds our ability to consciously express it.” Two forms of inscrutability coalescing to reach clarity? I don’t think so.

Algorithmic systems designed in this way, Bridle writes, may become relevant not merely to solving technical problems, but also to solving questions of morality and justice. But there will always be aspects that remain beyond human ken. “Admitting to the indescribable,” he writes, “is one facet of a new dark age: an admission that the human mind has limits to what it can conceptualise.” This “indescribability”—or, in the parlance, “technological opacity”—must, it appears, be accepted with good grace by humanity, perhaps as religious humans accept the will of God. But who will make the choices and decisions and on what bases? How will we know if or how the dice are loaded? By what means will the world be persuaded to participate in a form of transportation that closely resembles Russian roulette?

The human input into automated cars will occur entirely at the design and building stages, and will involve humans who will be insulated from the consequences of their choices. The assertion that “neutrality” could govern such a process is reminiscent of a certain kind of religious faith, albeit absent certain key elements. Reading between the lines of Bridle’s characterization, it seems the creator-technologists are counting on humanity accepting the outcomes of their algorithms in much the way we have hitherto accepted misfortune as “the will of God.”

But the idea that some concept of “neutrality” will inoculate the self-drive algorithm from public disquiet is based on a misunderstanding of how humanity has always understood God. God was not “neutral” or dispassionate, but always merciful. When bad things happened, even when we lashed out at God for the want of someone else to blame, we understood that his role in the matter was without caprice, arbitrariness, or malice. We accepted even catastrophic outcomes, understanding that, though humans might be unable to penetrate the meaning of what had happened, it was part of a benign and mysterious plan.

This is wholly different from the “indescribability” of the algorithm, which now seeks to insinuate itself as a new eternal mysteriousness. The new deity will decide for us, opaquely; and we will come to accept its “decisions” because they are smarter, or faster, than anything we could come up with ourselves. “Smart” will be the measure of all value.

In these circumstances, of course, we will no longer be able to speak of “accidents,” but merely of “algorithmic misadventure.” The concept of “chance” will acquire a new meaning: something entirely predictable but not by those most implicated. Each incident will result from a pre-programmed response, set in code formulated on the basis of general principles. What are those principles to reflect? Economics? Social utility? Political correctness?

Can we really hand this power over to entities that have already violated public trust to such an extent that it will be impossible for most of us to believe their algorithms are constructed without bias? Will the convenience factor and—if Brown is right—life-saving capabilities of the self-driving car be sufficient to quiet any unsettling thoughts? And will those who have already turned their backs on God as an irrational superstition be prepared to enter a new age of irrationalism based on the “graces” of utility and efficiency, in which they will be even more unknowing than the most simplistic “god-botherer”?

John Waters is an Irish writer and commentator, the author of ten books, and a playwright.


