
The technologies referred to as “artificial intelligence” or “AI” are more momentous than most people realize. Their impact will be at least equal to, and may well exceed, that of electricity, the computer, and the internet. What’s more, the change will be massive and rapid, faster than what the internet has wrought in the past thirty years. Much of it will be wondrous, giving sight to the blind and enabling self-driving vehicles, for example, but AI-engendered technology may also devastate job rolls, enable an all-encompassing surveillance state, and provoke social upheavals yet unforeseen. The time we have to understand this fast-moving technology and establish principles for its governance is very short.

The term “AI” was coined by a computer scientist in 1956. At its simplest, AI refers to techniques that combine data and algorithms to produce a result. Those techniques can be as simple as Google Maps digesting traffic data to provide the fastest route, Amazon’s Alexa “understanding” the question “What time is it?,” and your iPhone “recognizing” your face as the ultimate password.

On the cutting edge of AI are applications such as Waymo’s self-driving taxis, now in operation in Phoenix. Waymo’s onboard computer system orchestrates up-to-the-second data from twenty-nine cameras as well as radar and LIDAR sensors to make potentially life-and-death decisions, not to mention keeping the vehicle headed to its destination. In October, Apple announced that its new iPhone 12 can “look” at a scene through its onboard camera and describe what it “sees” in natural language—as in, “This is a room with a sofa and two chairs.”

Examples of unexpected, remarkable AI breakthroughs surface at least monthly. In December, the U.S. Air Force announced its first successful U-2 flight with an AI-based copilot, a development that has far-reaching implications for the future of air combat. In November, Google’s DeepMind AI project stunned the medical world with AlphaFold, an AI-based tool that predicts how proteins fold far faster than previous methods, a key element in vaccine research.

The foundation for many if not all of these breakthroughs is a type of AI called “deep learning” or “neural networks,” which Geoffrey Hinton, a research scientist at Google, worked out in the mid-2000s. Enabled by extremely powerful computer processors and virtually unlimited cloud data storage, the “neural network” approach made it possible for AI to address real-world problems in affordable ways. “In reality,” writes Kai-Fu Lee, one of the world’s top AI experts and author of the best-seller AI Superpowers, “we are witnessing the application of one fundamental breakthrough—deep learning and related techniques—to many different problems.”
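
For readers curious about what “deep learning” looks like in practice, the sketch below is a minimal neural network written in Python on a toy problem: layers of weighted connections adjusted by trial and error until the outputs match the examples. Production systems differ mainly in scale, with millions or billions of such weights; nothing here reflects Hinton’s actual work or any commercial system.

```python
# A toy neural network with one hidden layer, trained to reproduce the XOR
# pattern, a task no single straight-line rule can solve. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output layer

for _ in range(20000):
    hidden = sigmoid(X @ W1 + b1)          # forward pass
    output = sigmoid(hidden @ W2 + b2)
    error = y - output                     # how far off are we?
    # Backpropagation: nudge every weight in the direction that shrinks the error.
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 += 0.5 * hidden.T @ grad_out
    b2 += 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 += 0.5 * X.T @ grad_hid
    b1 += 0.5 * grad_hid.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # after training, close to [[0], [1], [1], [0]]
```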

Lee highlights the distinction between today’s AI and something called “artificial general intelligence” (AGI), which is a radically advanced AI that can do anything a human can do, only better—perhaps vastly better. Today’s AIs are almost all designed to do one thing well, whether it is identifying and picking a ripe strawberry or defeating the world championship team in Dota 2. Neither AI system is capable of doing anything other than what it was built to do.

A super-powerful AGI, by contrast, like HAL in 2001: A Space Odyssey or Samaritan in the CBS TV series Person of Interest, in theory has the capacity to learn and act in ways that escape the bounds of today’s AI as it pursues its programmed goals and even protects itself. AGI is science fiction, for now. Experts are all over the map as to when AGI might become real—in a decade or in a hundred years—and what the arrival of AGI might mean: peak civilization or the end of humanity. 

Speculating about all that is endlessly interesting, but it distracts from a huge societal challenge already before us, which is to understand the dangers today’s AIs pose to society and to individuals. Because AI uses algorithms that are at once unseen and ubiquitous, it is quite different from the long list of technologies—from nuclear power to commercial flight to the modern automobile—whose dangerous downsides are there for all to see in the physical world. Not only are AI applications hard to discern in their execution and impact; they easily escape the legal and moral frameworks we apply to most dangers in our world. It’s one thing to require automobile manufacturers to install seat belts and airbags as a direct and clear means of saving lives; it’s another to determine the existential impact of AI systems that you cannot observe in action.

As is often the case in the Digital Age, public- and private-sector applications of AI are racing ahead of anyone’s ability to determine their consequences. The UC Berkeley computer scientist Stuart Russell, author of Human Compatible: Artificial Intelligence and the Problem of Control, says that one such system, the social media engagement algorithm, has produced a civilization-level AI catastrophe that nobody expected: the damaging polarization of society. The point of these algorithms is to maintain users’ attention, especially to promote “click-throughs.” This sounds like a customary situation in which a purveyor of goods adapts to the needs and tastes of the customer. But that’s not what really happens, Russell explains:

The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user’s preferences so that they become more predictable. A more predictable user can be fed items that they are likely to click on, thereby generating more revenue. People with more extreme political views tend to be more predictable in which items they will click on. . . . Like any rational entity, the algorithm learns how to modify the state of its environment—in this case, the user’s mind—in order to maximize its own reward.

In other words, instead of shaping the product to the customer’s needs, social media algorithms manipulate their subjects. As the process continues, small encouragements have large effects, altering tastes and interests in a slow and cunning brainwashing that reinforces “extreme” dispositions.
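
The mechanism is easier to see in a toy simulation. The sketch below, written in Python with invented numbers, hard-codes the policy Russell says such systems learn: serve content one step more extreme than the user’s current position, and let each click drag the user a little further out. It reproduces no real platform’s code; it only shows how small, repeated nudges compound.

```python
# Toy simulation of an engagement-maximizing feed. The "learned" policy is
# hard-coded for brevity: offer content slightly beyond the user's position,
# because extreme users click more reliably. Purely illustrative.
import random

random.seed(0)

position = 0.1      # user's political position: 0 = center, 1 = extreme
STEP = 0.1          # the feed offers content one step beyond the user's position
PULL = 0.05         # each click drags the user toward what was clicked

def click_probability(user_pos, content_pos):
    # Assumed behavior: clicks are likelier when content is close to the user's
    # views, and users at the extremes click their own side's content more reliably.
    closeness = 1.0 - abs(user_pos - content_pos)
    return max(0.0, min(1.0, closeness * (0.5 + 0.5 * abs(user_pos))))

for day in range(500):
    content = min(1.0, position + STEP)   # "one step further out"
    if random.random() < click_probability(position, content):
        position += PULL * (content - position)

print(f"user position after 500 recommendations: {position:.2f}")  # drifts toward 1.0
```

Run long enough, the simulated user drifts from near the center toward the extreme, without any single recommendation looking dramatic.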

Russell warns, “The consequences include . . . the dissolution of the social contract.” If people can be so slyly altered, if a computer system can change their behavior without their even realizing it, the rational-choice assumptions that underlie the modern social order collapse. The very idea that we are self-aware, rational players participating in a democratic system comes into question, and with it the basic tenets of liberal political order.

So far, most of the political challenges to AI fall under the rubric “AI bias.” Many observers have pointed to a growing body of evidence showing that certain AI systems that make judgments about individuals exhibit the same biases harbored by prejudiced people. Examples have turned up repeatedly in recruiting software, predictive policing systems, education and financial services, facial recognition, and more. The problem often stems from inadequate or unrepresentative training data: for example, interview-screening software that gives a low score to a job candidate whose attention wavers from the screen—a not unexpected behavior for a blind person or a person with any number of other disabilities.
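
A synthetic example makes the mechanism concrete. In the sketch below (Python, with wholly invented data and a hypothetical “eye contact” feature), a screening model is trained only on sighted candidates, for whom looking at the screen happened to track skill; it then penalizes an equally qualified blind candidate who rarely faces the camera.

```python
# A synthetic illustration of bias from an unrepresentative training set.
# Every number and feature here is invented; no real screening product is modeled.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500

# Training pool: sighted candidates only. Confident, higher-skill candidates
# also happened to look at the screen more, so "eye contact" tracks skill.
skill = rng.uniform(0, 1, n)
eye_contact = np.clip(0.5 + 0.5 * skill + 0.1 * rng.normal(size=n), 0, 1)
hired = (skill + 0.1 * rng.normal(size=n) > 0.5).astype(int)

X_train = np.column_stack([skill, eye_contact])
model = LogisticRegression().fit(X_train, hired)

# Two equally skilled candidates; the second is blind and rarely faces the screen.
sighted_candidate = [[0.8, 0.9]]
blind_candidate = [[0.8, 0.1]]
print("sighted candidate score:", round(model.predict_proba(sighted_candidate)[0, 1], 2))
print("blind candidate score:  ", round(model.predict_proba(blind_candidate)[0, 1], 2))
# The blind candidate typically scores lower, though the skill levels are identical.
```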

The European Union has tackled this issue in its powerful 2016 General Data Protection Regulation (GDPR). The GDPR is mostly known for its tough consumer data protections, but it also includes provisions to protect E.U. citizens from machine-driven decisions. GDPR’s Article 22 gives people the right to refuse significant decisions made about them solely by automated systems:

The data subject [a person] shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

In addition, GDPR establishes something like a “right to explanation,” which it describes as “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.” In other words, if an AI determines that a person is not right for the job or a loan, the person can demand an explanation. It is probably too early to tell how these protections will play out in practice in the E.U., but it’s encouraging to see them in a landmark piece of legislation governing the digital world.
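
What such an explanation might look like is easiest to see for a simple, transparent model. The sketch below uses Python and an invented linear credit-scoring model, with made-up weights and thresholds, to report how much each input pushed an applicant’s score up or down relative to an average applicant. As the next paragraph notes, this sort of accounting is far harder to extract from deep-learning systems.

```python
# A minimal sketch of a "right to explanation" response for a simple linear
# credit-scoring model. The model, features, and weights are invented.
import numpy as np

features = ["income", "debt_ratio", "years_employed", "late_payments"]
weights = np.array([0.4, -0.6, 0.3, -0.8])           # hypothetical coefficients
average_applicant = np.array([0.5, 0.3, 0.4, 0.1])   # population averages (scaled 0-1)
applicant = np.array([0.45, 0.7, 0.2, 0.5])          # the person who was denied

contributions = weights * (applicant - average_applicant)
score = weights @ applicant

print(f"score: {score:.2f} (approval threshold assumed at 0.10)")
for name, c in sorted(zip(features, contributions), key=lambda x: x[1]):
    print(f"  {name:15s} {'lowered' if c < 0 else 'raised'} the score by {abs(c):.2f}")
```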

Of course, there would be strong resistance to that kind of regulation in the United States. Apologists will argue that better data will cure bias, while more “explainable” or “auditable” AI will remove the “black box” effect. (Notably, however, it’s very difficult to extract explanations of decisions from AIs that use deep learning technologies.) The apologists are at least partly correct, if only because the market requires ever more reliable AI, not approaches that leave customers open to civil rights violations and inexplicable outcomes.

In fact, startups in Silicon Valley are already acutely aware that AI expert systems, such as financial trading software, medical scan readers, and even military drone weaponry, will not succeed if the real expert—the trader, doctor, or drone pilot—can’t demand explanations for what an AI-based system has recommended or decided. Some startups now specialize in extracting explanations from otherwise opaque systems. The topic is a critical one in academia and defense research as well.

The laissez-faire approach, however, will not create adequate controls on AI. After all, whatever Facebook built has been enabled by market forces, and no one really knows how to undo or arrest the resulting damage. Nor can government necessarily be trusted to take the right approach, not in the absence of clear legal frameworks. In China, the CCP has already used AI to establish an extraordinarily comprehensive surveillance state, and has in turn offered the technology to foreign regimes allied with China and eager to tighten their own political controls.

Once again, given the invisibility of these tools, it isn’t far-fetched to anticipate their slipping here and there into democratic societies. We should take steps right now, for instance, by drawing some legal lines around controversial surveillance tech such as Clearview, which is one of the leading facial recognition technologies used by police to identify perpetrators. Already in some U.S. jurisdictions, facial ID technology has been banned, and some technology companies, notably Microsoft and Amazon, have prohibited use of their own Clearview-like tools by police departments. Facial recognition is, however, only one of many hotly debated trade-offs between citizens’ privacy and more effective law enforcement. Not all such trade-offs involve AI, but many do, and their numbers will grow quickly in the coming years.

In fact, AI bias and law enforcement issues may pale in comparison to the impact AI is likely to have on employment. There is broad agreement that many blue- and white-collar jobs will disappear thanks to AI-based systems in the years ahead. The only questions are how many, and what “new” jobs will arise to replace them. In AI Superpowers, Kai-Fu Lee estimates that AI will automate 40 to 50 percent of all U.S. jobs, from truck drivers to accountants. Lee’s estimates are higher than other studies’, such as those conducted by PwC and MIT, but Lee says many studies fail to capture the speed with which AI is becoming more capable. Indeed, he believes that we’re headed for a social cataclysm in which meaningful work is scarce and economic disparities grow even wider.

A counterargument to Lee’s dark projection recently hit the top of the Wall Street Journal’s best-seller list. Microsoft CTO Kevin Scott’s Reprogramming the American Dream takes a more hopeful tack and argues that with more investment in education and infrastructure, AI could be a boon to Americans, especially people living in depressed areas outside cities, a world Scott knows well from his upbringing in rural Virginia. It’s no coincidence that J. D. Vance wrote the book’s foreword. Scott’s can-do take on how AI might invigorate rural economies is peppered with examples of how small farms might flourish, for example, by applying cheap sensors, smart algorithms, data clouds, and “edge” computing to raise crops or manage herds more efficiently. The book is encouraging, to be sure, and Scott is an accomplished technologist whose role at Microsoft gives him remarkable access as well as insights into what is emerging. What is less clear is how any combination of market or public forces—and Scott admits it will take both—can bring Scott’s vision to life on a meaningful scale.

The revolutionary implications of AI technology such as driverless cars are only a foreshadowing of more radical developments as AI evolves into AGI, which Google’s DeepMind and OpenAI are both pursuing today. Elon Musk considers unregulated AI the “biggest risk we face as a civilization,” and among those in his camp are the late Stephen Hawking and UC Berkeley’s Stuart Russell. It’s notable that sci-fi efforts to anticipate AGI are almost always dystopian: for example, the CBS series Person of Interest. In it, a tame AI trained to value all human life is pitted against an AI dubbed “Samaritan,” which, working through witting and unwitting humans, pursues a broader remit to clean up society with relentless ferocity and with no concern for life, liberty, or due process.

We’re not there yet, but it’s past time to start thinking about how to ensure there is never an AI as powerful as Samaritan in our world. More importantly, we must ensure that the AIs we coexist with today are well understood and subject to human control, and that their creators are held accountable. The place to start is with the AIs we already have on the loose in society. The past suggests a way forward.

The last century was notable for the rise of government agencies designed to protect citizens from emerging technologies through testing and regulation. Historically, alarmed electorates provoked legislatures into action. Upton Sinclair’s portrayal in The Jungle of unhygienic conditions in the Chicago stockyards led to the 1906 Pure Food and Drug Act, which in turn led to the creation of the FDA. Mid-air plane collisions in the 1950s spurred the creation of the FAA. Ralph Nader’s controversial book Unsafe at Any Speed and a National Academy of Sciences paper led to the NHTSA in the 1960s. These agencies and others have saved countless lives by establishing tests, ratings, and regulatory regimes that defined what “safe” meant and held players accountable.

Could a similar approach work to contain AI? The best answer is that we have to try something, and the regulatory agency framework is a proven one. Yet AI is far more difficult to define and contain than car and food safety. The potential dangers range from mass unemployment and socially destabilizing media manipulation at one end of the spectrum to the perpetuation of racial and other stereotyping in automated decision-making at the other. Addressing auto safety looks like child’s play by comparison. Yet containing AI will prove far more important than the great work those consumer protection agencies have accomplished. Our democracy and society depend upon it.

Professor Russell and colleagues argued the case in a 2019 Wired article:

We can expect to endure more societal disruption in the interim, as commercial and political incentives continue to lead the private sector away from the types of proactive protections we need. We will be nudged further toward extremes, and hopes for an open, fruitful, and diverse discourse in the digital town square will wither.

We have a brief window to keep the AI genie in a bottle of our choosing. If we fail to understand the threat and legislate accordingly, the day will come when AIs make choices for us, unaccountable to us, without our knowledge or consent.

Ned Desmond is senior operating partner at SOSV and former chief operating officer at TechCrunch.
