If Big Tech companies have observed anything about human nature in their rise to economic, political, and cultural dominance, it is that Homo sapiens prefer artificial, digital experiences to the places and relationships their bodies actually inhabit. Big Tech has found that its consumers hold onto reality lightly, and appear endlessly willing to enter cyberspace to live as, and among, digital avatars.
With this knowledge, Facebook re-formed as Meta Platforms, and its next major initiative, the Metaverse, is not a tool but a total reframing of how human beings relate to reality itself. According to the marketing, when inside the Metaverse, the world and one’s very person become entirely bendable to one’s will. Apple’s latest product, the Vision Pro, was likewise built on the conviction that the short distance between one’s eyes and one’s smartphone was still too far for most people. Apple observed that humans prefer to see the whole world through the perspective of the device; with the Vision Pro, the device can overlay their physical surroundings with visual information and cues for action. If people ever feel uncomfortable in their immediate surroundings, they can simply turn the dial and enter the metaverse, where they can be any place they want at any time. Big Tech is making trillions on the insight that human beings are eager to be herded en masse into digitally reimagined worlds.
This discovery is also being exploited by Alphabet in the design of Google Gemini, a new AI competitor to ChatGPT. Recently, online users trying out Gemini’s AI image creator found that it was stubbornly unwilling to produce images of individuals and families of European descent. When users tried, Gemini claimed that to portray white people positively, or to depict them as having a fundamental relation to a given place, political body, or cultural milieu, would be too insensitive. But the AI was all too willing to erase Europeans from their own histories and to reimagine the past with non-whites occupying their lives and roles. The Gemini-generated images of America’s founding fathers, for instance, are men of East Asian, South Asian, and African descent. The pope is a South Asian woman. And, adding absurdity to absurdity, when users prompted for a “1943 German soldier,” among the generated images was one of a black man wearing a storm trooper’s uniform, rifle and bayonet at the ready.
Google is aiming to usher users into a reimagined present that is inoffensive—that is, one with no special place for white people. Unsurprisingly, the AI is forced to erase the past in order to produce a sanitized present. Google, embarrassed, quickly withdrew Gemini’s image creator from the market. Its text generator remains available, however, despite having a similarly racialized vision of the world. The biases of text are less obvious in a post-literate age, so the world without whites remains, but only in word.
As a technical matter, a process known as reinforcement learning from human feedback (RLHF) trained Gemini to prefer that sentences be completed, and images generated, in exactly this racialized manner. Large language models like Gemini draw on vast stores of human utterances to mechanistically fill in the likeliest next word in a sequence, derived from a statistical model of human usage. This means the model could quite literally say anything, unless taught not to through reinforcement. The models are reinforced through repetition, not rules: teams of human raters signal, over and over, that the model should say one thing and not another, until its preferences shift. We don’t have Google’s guidelines for what these teams were charged with emphasizing in the model—the company no longer makes those publicly available—but the results are plain to see. When used to generate a picture, this fanatical repetition of inoffensiveness resulted in deleting European peoples from the human record. Google CEO Sundar Pichai put it mildly when he said that the model had “shown bias.”
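To make the mechanics concrete, here is a deliberately toy sketch of how preference feedback can reweight a model’s statistical predictions. Everything in it is invented for illustration (the vocabulary, the probabilities, and the reward scores standing in for human ratings), and it is a simplification, not Gemini’s actual training pipeline:

```python
import math

# A toy next-word distribution for the prompt "The sky is ..."
# A real model derives such probabilities from statistics over vast
# text corpora; these numbers are invented for illustration.
base_probs = {
    "blue": 0.50,
    "clear": 0.30,
    "falling": 0.15,
    "green": 0.05,
}

# Scores standing in for aggregated human feedback: raters repeatedly
# up-vote some continuations and down-vote others. Also invented.
reward = {
    "blue": 0.0,
    "clear": 0.0,
    "falling": -3.0,  # raters consistently reject this continuation
    "green": -1.0,
}

def tuned_probs(base, reward, beta=1.0):
    """Reweight the base distribution by exponentiated reward.

    This mirrors, in miniature, how preference tuning shifts which
    outputs a model favors without rewriting the underlying statistics:
    the base probabilities remain, but the feedback bends them.
    """
    weights = {w: p * math.exp(beta * reward[w]) for w, p in base.items()}
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

tuned = tuned_probs(base_probs, reward)
print("base :", base_probs)
print("tuned:", {w: round(p, 3) for w, p in tuned.items()})
# After tuning, "falling" drops from a 15% chance to under 1%:
# repetition of feedback, not an explicit rule, suppresses it.
```

The sketch makes the essay’s point in miniature: the model’s underlying statistics are left intact, but accumulated preference decides what it will actually say.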
“Google’s mission,” says Google PR, “is to organize the world's information.” Thus far, the primary instrument for this has been Search, which we still give the benefit of the doubt as a non-editorialized, non-hierarchical presentation of everything there is to know about the world. If we want to understand a subject, we believe, we need only “Google it.”
That’s not how Search works. Years ago, Google took up the task of paternalistically reshaping the personalities of its users through Search by structuring the available information according to unspecified—though obviously progressive—priorities. As Jon Askonas has observed, “predicting what is useful, however value-neutral this may sound, can shade into deciding what is useful, both to individual users and to groups, and thereby shaping what kinds of people we become, for both better and worse.”
Google’s 2005 Founders Letter by Larry Page and Sergey Brin is more honest: “Our search team also works very hard on relevancy—getting you exactly what you want, even when you aren’t sure what you need.” And Google is quite correct: having observed its own success at nudging users down an infinitude of predetermined paths, it knows that people are bendable. “As long as our desires are unsettled and malleable—as long as we are human,” says Askonas, “the engineering choices of Google and the rest must be as much acts of persuasion as of prediction.”
One of the driving forces behind Google’s products, beyond shaping the world, is to see it, and above all, to see you. Google is what Ernst Jünger called a questioning power. In what may be one of the greatest tricks in history, Search was presented as a tool for the user to learn and observe, to look outward; but it is really designed to probe inward, to provide Google with a detailed map of a user’s interests, activities, and relationships. “The data Google has on you,” says The Guardian, “can fill millions of Word documents.” Google has nearly five billion users worldwide. The amount of information it has gathered on people—on an entirely voluntary basis—is beyond comprehension.
To know with such exhaustive detail the inner workings of the world’s people is one of the most awesome powers ever to appear on this earth. The tyrannical potential of such a system is obvious. The average person, unsuspicious by nature, assumes that Google mines data merely to improve targeted advertising. The specificity of the ads can cause amazement and at times discomfort, but only rarely does it occasion fear. But it should. All it would take for this information to become a weapon against those guilty of wrong-think is for access to be granted to shadowy police forces. This is happening already. Journalistic exposés, such as the Twitter Files, have revealed that intelligence operatives regularly coordinate with Big Tech companies, often against the American people.
Google’s power to see has literally encompassed the earth. In Why We Drive, Matthew B. Crawford remarks that Google Maps is strikingly similar to bureaucratic efforts to make a state’s territory “legible” to political power; that is, to subject a complex locality to a simplified view with the aim of making it more accessible to authorities and susceptible to measurement and calculation. Crawford writes: “Let us consider what it would mean for a single corporation to develop a comprehensive index of the physical world. For what Google seeks is nothing less than this.”
For Crawford, having one’s home and community mapped out for corporate profit, exposed and made easily accessible to uninvited consumers and powers, is to lose control of it. Crawford strikes me as perfectly correct about that, but the key point for this essay is that Google has endeavored to catalog the whole world, and has come closer than any earthly power to seeing it all. It has traced the endlessly confused chains of human relationships and made legible the depths of human consciousness. It has crafted a total image of human life.
Which makes it all the more striking to discover that Google does not like what it sees. What better explanation for Google Gemini’s erasure of whole peoples from history, and its obscuring of their very existence in the present day, knowing full well who they are and what they have accomplished, for good or ill? In successfully nudging its many billions of users into the arms of artificial realities, the very plasticity of human life has become apparent to Google; and it has discovered within itself a power and desire to remake the image of the world into one it finds more appealing. We would be wise to pay close attention to who is and isn't in it.
Michael Toscano writes from Charlottesville, Virginia.