Will AI evolve and supersede human intelligence, ushering in the tyranny of machines? Is the economy careering toward an unprecedented silicon replacement of human labor, leading to mass unemployment and social uproar? These are the common worries. But there’s another peril, one less dramatic, and therefore more likely: AI will lead to intellectual stagnation.
Writing in his regular column for the Wall Street Journal, Greg Ip makes a simple observation. “Large language models (LLMs) such as ChatGPT, Google Gemini and Anthropic’s Claude excel in locating, synthesizing and connecting knowledge. They don’t add to the stock of knowledge.” Put differently, LLMs are not curious. They lack the capacity for wonder. For this reason, the architecture of AI is not set up—cannot be set up—to discover new knowledge.
To use Iain McGilchrist’s terms, AI is pure left brain. It is calculative and reductive, treating the world as a giant data set. The results can be powerful, providing us with technological leverage over existing things. But the left brain is incapable of analogy, imagination, and emotionally colored thinking.
I recall a seminar more than a decade ago during which the computer scientist David Gelernter observed that true artificial intelligence would require a mood dial, one that could shift silicon thinking from hyper-awake concentration to dreamy, half-asleep musing. The same dial would need to induce in AI a state of absorption in an unrelated pursuit, a condition of mind that often unlocks creative insight. James Watson reported that the double helix, the crucial insight into the structure of DNA, popped into his mind while he was playing tennis.
Ip reports the opinion of University of Toronto economist Joshua Gans, who thinks that AI can be an invaluable assistant, allowing scholars to focus on adventuresome research. To use McGilchrist’s terms once again, AI can serve as an emissary to its proper master, the right brain and its capacity for insight, creativity, and analogy.
As Ip recognizes, this outcome is unlikely. “Reliance on AI can cause critical thinking to atrophy, just as reliance on GPS weakens spatial memory.” Ip cites a recent study showing that cognitive function in humans declines in direct proportion to reliance on AI and internet search functions. Educators don’t need studies to know this fact. They see diminished attention spans, poor critical skills, and incuriosity in their classrooms daily.
I share Ip’s concerns. AI will certainly bring advances in some areas. But over the long term, the most likely outcome will be scientific stagnation. We may see a refinement of current knowledge and accelerated exploitation of its technological potential, but there will be less and less new knowledge.
AI will also accelerate the dumbing down of culture. Two years ago, Ben Lerner published an essay in Harper's, “The Hofmann Wobble.” He details how he secured status as an editor of crowd-sourced Wikipedia pages and then used his credentials to introduce deliberately slanted information into that trusted source of supposedly objective information. LLMs invite similar efforts at a much larger scale.
For example, here’s the first sentence of Google Gemini’s report on Matthew Shepard: “Matthew Shepard was a gay college student who was brutally beaten and left to die in a hate-motivated murder in Laramie, Wyoming, in 1998.” Careful investigation has shown that his murder was not “hate motivated.” But that’s exactly how pro-gay propaganda portrayed Shepard’s murder for years after his death. In view of the sheer volume of text published by gay activists, their allies, and those duped by the propaganda, the LLMs will invariably scoop up and regurgitate this widespread but false narrative.
I’m not the only person to notice this phenomenon. I guarantee that well-funded movements (and foreign governments) will supercharge their efforts to flood the web with ideologically motivated material in order to shape the results of LLMs, which will report their preferred “truth” as settled fact.
We should worry about ideological capture, but the more profound danger will be the pollution of data as massive quantities of AI-generated text designed to influence AI results flood the internet. The upshot will be a degradation of cultural knowledge that will make us nostalgic for the far less damaging dumbing-down caused by TV and social media.
I do not wish to be understood as denying the transformative potential of AI. It will have significant effects. My point is this: Change is not always progress. More precisely, technological progress is not necessarily linked to an increase in scientific knowledge or cultural sophistication. Historians may look back and define modernity as the era in which all three were woven together. It is not written in the stars that this should always remain so. The strands can come apart. Perhaps they already have.