AI Doesn’t Know What It’s Doing

Artificial intelligence is an umbrella term covering many beliefs about the powers possessed by computers, both now and in the future. Because computers today perform many tasks formerly reserved to humans, many observers predict that they will soon replicate human intelligence and gain greater capabilities thereafter, finally rendering mankind “obsolete.” Neither what AI actually is, nor the paradigm of knowing it employs, is ever discussed by the AI believers—even though both questions are essential to understanding what AI can do and what its limitations are. As we shall see, the real dangers of AI are, ironically, byproducts of the hype swirling around it, which attributes to it capabilities and reliability that it will never have.

In common parlance, “artificial intelligence” denotes “computers that imitate people” or “computers that are just like human brains, only smarter.” There is no broadly accepted definition of AI, so let us formulate one:

AI is the category of systems that employ computers, feedback, rule-based logical inference (deterministic or statistical), complex data structures, and large databases to extract information and patterns from data and apply them to the control of equipment, assistance with decision-making, or the generation of responses to user queries involving text and images.

The kinds of technology that typically fall under the rubric of AI include: robots and robotic systems; neural networks and pattern recognition; generative AI, including ChatGPT and similar large language models; symbolic manipulation programs, such as Mathematica; autonomous cars and other autonomous systems; and complex large-scale control programs. 

There are four fundamental questions about AI: What is theoretically possible for AI? What is practically possible for AI, given current technology? What is economically efficient for AI, in terms of costs and benefits? Finally, what ethical boundaries are appropriate for AI? This article is concerned primarily with the first question, since it bounds the other three. First we look at reasons for the hype.

Extravagant claims about computers have a long history, dating to the introduction of the first commercial model, the Univac I, in 1951. During the 1950s, computers were called “electronic brains.” Computer pioneer Alan Turing (1912–1954) informed us seventy years ago:

It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. . . . [Machines] would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control.

In the film 2001: A Space Odyssey (1968), the intelligent computer HAL seizes control from the human astronauts. Similar takeovers have been projected in warfare and in white-collar professions such as legal advice, financial consulting, and teaching. Beyond that, we are told that computers will become “conscious,” will develop full human capabilities, and—who knows?—may have “souls.” Questions are mooted regarding the moral and legal “rights” of robots with AI. This is the viewpoint known as “artificial general intelligence”: Machines will have intelligence similar in kind to human intelligence, but superior to it. Ray Kurzweil has promoted the idea of a “singularity,” that is, a “merger between human intelligence and machine intelligence that is going to create something bigger than itself.”

The hype continues to escalate, particularly in the direction of threats posed by AI, threats that supposedly require immediate action to save humanity from catastrophe. Some argue that artificial intelligence could someday “destroy” America. Others aver that catastrophe is right around the corner. Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute, warns that the death of humanity “is the obvious thing that would happen.” Geoffrey Hinton, the “Godfather of AI,” recently assessed the likelihood of human extinction due to AI as 10 to 20 percent.

AI is feared for another reason, namely that it may be a “disruptive” technology—one that causes major changes to areas of business, industry, and commerce, thereby threatening the livelihoods and normal activities of most of the population. The automobile and the personal computer are prime examples of disruptive technologies. But for a technology to be disruptive, it needs to actually work—in this case, to interact with the world as humans do, only better.

It is unclear whether AI will ever replicate human cognition. Indeed, it is reasonable to ask whether, in seventy years, we have moved any closer to Turing’s vision. If computer power has vastly increased, and computer size has shrunk dramatically, what has it all accomplished? The assumption is that such progress will lead eventually to qualitative changes in machine behavior. This is an empirically testable proposition. Comparison of a mainframe computer from Turing’s day with a modern smartphone shows improvement of six to thirteen orders of magnitude, but no evidence of sentience. Plainly, it is taking longer to “outstrip our feeble powers” than Turing envisioned. No one regards the smartphone as anything more than a handy multipurpose tool. Similar remarks can be made about quantum computers. The implication is that the scaling of computer power will not yield the outcomes postulated by Turing.

ChatGPT and other generative AI programs are popular but have established an unenviable track record. Let us consider some of their gaffes. Climate scientist Tony Heller asked ChatGPT a simple question about CO2 levels, corals, and shellfish. The answer it returned—beginning, “If atmospheric carbon dioxide levels were to increase by a factor of ten, it would have significant and potentially devastating impacts on corals and shellfish”—was completely wrong, a mindless echo of the climate alarmism that is everywhere on the internet. ChatGPT has also been known to make up articles and bylines, a proclivity that has afflicted The Guardian, to which the chatbot has attributed articles that were never written:

Huge amounts have been written about generative AI’s tendency to manufacture facts and events. But this specific wrinkle—the invention of sources—is particularly troubling for trusted news organizations and journalists whose inclusion adds legitimacy and weight to a persuasively written fantasy.

Recent research has shown that the chatbots are getting worse at basic math, as evidenced by their inability to answer reliably even such simple questions as whether a given number is prime. In another case, a lawyer used a chatbot to research and write a legal brief. Unfortunately for the lawyer, the brief contained numerous “bogus legal decisions” and made-up quotes, exposing him to potential sanctions.
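The primality example is worth pausing over, because it marks the difference between computing an answer and imitating one. A few lines of ordinary code settle the question deterministically for any integer; the sketch below (the function and test values are my own illustration, not drawn from the research cited above) never guesses.

```python
def is_prime(n: int) -> bool:
    """Deterministic trial division: correct for every integer n, every time."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:          # divisors need only be checked up to sqrt(n)
        if n % d == 0:
            return False
        d += 2
    return True

print(is_prime(97))   # True
print(is_prime(91))   # False (91 = 7 * 13)
```

The answer is computed, not retrieved from patterns in training text, which is why it cannot drift or “hallucinate.”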

Obviously, if citations are untrustworthy and entire articles can be made up, academic research, journalism, and everything in our society that depends upon reliable knowledge will be undermined. The New York Times has explored this problem, which strikes at the heart of any notion of intelligence. The Times asked ChatGPT a question: “When did The New York Times first report on ‘artificial intelligence’?” The chatbot’s answer invoked an article that it had simply made up. The “inaccuracies” that emerge from chatbots and other such programs are called “hallucinations” by those in the technology industry. Serious research cannot be grounded on “hallucinations.”

Large language models like ChatGPT are based on analysis of enormous amounts of data from various sources, usually the internet. The goal is to find patterns in the data, then construct text or images that conform to these patterns. Since the internet contains much erroneous data, incorrect inferences, and unbridled speculation, such an approach is highly problematic. The Times observes:

Because the internet is filled with untruthful information, the technology learns to repeat the same untruths. And sometimes the chatbots make things up. They produce new text, combining billions of patterns in unexpected ways. This means even if they learned solely from text that is accurate, they may still generate something that is not. . . . And if you ask the same question twice, they can generate different text.

Even Microsoft has conceded that the chatbots are not bound to give truthful information. According to an internal document quoted by the Times, AI is “built to be persuasive, not truthful,” with the consequence that “outputs can look very realistic but include statements that aren’t true.”

Chatbots are often (mis)used by students who ask them for term papers or similar assignments. An experienced teacher will easily perceive that the work is not the student’s, on the basis of style, sloppy reasoning, and bogus references. But the imposture places an additional burden on the teacher. Chatbots are the latest version of something that has been around for a long time: sophistry. As the Eleatic Stranger tells us in Plato’s Sophist:

Now, shouldn’t we presume that there is some other skill, involving words, by which one could beguile the young through their ears, with words, while they are still at a far remove from matters of truth, showing them verbal images of everything, so as to make the statements seem true, and the speaker seem the wisest of all men on every issue?

The chatbots show once again that it is easy to pretend to knowledge, harder to engage reality and truly know something about it.

A cautionary tale from the history of science is in order. In the early days of telescopes, the quality of optical glass was poor, lens grinding methods were crude, and little was understood about what we now call “physical optics.” As advancements came in all these areas, telescope performance naturally improved. At the time, no limit to how good telescopes could be, in terms of resolution and color correction, was foreseen. Early telescope makers did not understand the phenomenon of diffraction, which limits the performance of any optical system, no matter how perfect the lenses, mirrors, and other components. So, giddy as astronomers were about their better glass, improved lens grinding, and innovative multiple-lens objectives, they faced an unknown barrier that ultimately would thwart their plans. Long-range extrapolation of technology is likely to be met with disappointment.
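That barrier now has a name and a formula. The Rayleigh criterion (stated here from standard optics, not from the historical sources behind the analogy) gives the smallest angular separation a telescope of aperture D can resolve at wavelength λ, however perfect its components:

```latex
\theta_{\min} \approx 1.22\,\frac{\lambda}{D}
```

Better glass and finer grinding leave the right-hand side untouched; only a larger aperture, or a shorter wavelength, moves the limit.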

Modern AI is based on ideas of human knowing that stem from the British empiricist tradition, in particular the philosophy of David Hume. Hume envisioned the body as a composite of discrete physical systems, with the senses sending their reports to the mind, which then worked on these reports. These “reports” he termed “impressions,” which gave rise to “ideas”:

I venture to affirm that the rule here holds without any exception, and that every simple idea has a simple impression, which resembles it; and every simple impression a correspondent idea.

Hume presents a theory of knowing in which senses deliver impressions, which we process as ideas. Once we have ideas, we can reason with them, either by means of logical inference, or directly as “matters of fact” (empirically grounded facts, including scientific laws). As for general ideas, they are nothing more than particular representations, connected to a certain general term. This theory quickly leads to nominalism, the belief that abstract entities do not exist and that any talk of entities such as “mankind” refers only to collections of individuals. Hume recognizes that we have such universal ideas in our minds, but they are mere labels, bearing no relation to reality. He rejects the longstanding opinion that there exist universals in themselves.

Hume was never able to explain how we arrive at forms of knowledge such as science, mathematics, and history. What “impression” gave rise to Einstein’s field equations for general relativity? Because every idea must be associated with a precedent impression resembling it, Hume could not explain how we can do something as simple as recognize a thing that is in a different position than when we first saw it—a problem that bedevils AI systems used in autonomous cars. Nor was he able to explain how it is possible to have knowledge of almost anything without recognizing abstract entities as real. For example, the statement “Beethoven’s Fifth is a great symphony” uses abstract entities as both subject and predicate. Had there never been any performance of the notes Beethoven wrote, the statement would still be meaningful and true. And the term “great symphony” refers not to a collection of performances of music, but to a real characteristic of a certain type of music composition.

To demonstrate AI’s indebtedness to Hume’s theory of knowing, let us consider two implementations along with some of the problems that pertain to Hume’s theory and hence to AI.

In robotic systems, which include robots and self-driving cars, sensors send reports to a central processor, which employs algorithms to do calculations on the data and then instructs mechanical parts to carry out operations. Robotic systems accept Hume’s notion of the separability of sensing and knowing; they emulate it in accordance with the standard engineering practice of isolating system functions. The “ideas” are software structures that arise from “impressions” given by sensors. The system “reasons” by means of software manipulations applied to these “ideas.” The system is nominalist because it has no concept of abstract entities, only of the concrete objects in front of it.
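That division of labor can be caricatured in a few lines. The sketch below is purely illustrative (the sensor fields, thresholds, and actuation step are invented stand-ins, not any real robotics interface), but it shows how “impressions,” the sensor readings, become “ideas,” the data structures that rules then manipulate.

```python
from dataclasses import dataclass

@dataclass
class Impression:
    """The Humean 'report' from the senses: raw readings, separate from any act of knowing."""
    lidar_range_m: float     # distance to nearest obstacle, in meters
    camera_label: str        # a classifier's best guess: "pedestrian", "plastic bag", ...

def decide(imp: Impression) -> str:
    """The 'ideas' stage: rules applied to data structures derived from impressions.
    The system manipulates labels; it has no concept of what a pedestrian is."""
    if imp.camera_label == "pedestrian" and imp.lidar_range_m < 20.0:
        return "brake"
    if imp.lidar_range_m < 5.0:
        return "brake"
    return "cruise"

def control_loop(sensor_stream):
    """Sense -> process -> act, repeatedly; the print stands in for motor commands."""
    for imp in sensor_stream:
        print(f"actuate: {decide(imp)}  "
              f"(range={imp.lidar_range_m} m, label={imp.camera_label})")

control_loop([Impression(60.0, "plastic bag"), Impression(15.0, "pedestrian")])
```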

Though generative AI is structured differently than AI applications such as robotics, it shares with them one key assumption, namely nominalism. Generative AI scans large collections of works that employ key words and phrases, takes the results, and assembles them on the basis of frequency into a report, following rules of grammar and knowledge of word-order frequencies, without knowledge of the abstract entities and ideas involved. In other words, it engages in a highly superficial form of reading. The problem—and it is a problem that vitiates the entire approach—is that most important texts cannot be read in this way.
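A toy version of this frequency-based assembly makes the nominalism visible. The bigram model below is a deliberately crude stand-in (real large language models are incomparably more sophisticated, but the premise is the same): it strings words together according to observed word-order statistics, with no access to what any word is about.

```python
import random
from collections import defaultdict

def train_bigrams(text: str) -> dict:
    """Record which word follows which: pure word-order frequency, nothing more."""
    words = text.lower().split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows: dict, start: str, length: int = 12) -> str:
    """Assemble 'new' text by sampling each next word from the observed frequencies."""
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])   # frequent successors are chosen more often
        output.append(word)
    return " ".join(output)

corpus = ("the machine repeats the words it has seen and "
          "the reader grasps the meaning behind the words")
print(generate(train_bigrams(corpus), "the"))
```

The output can be grammatical and even plausible, yet the program has read nothing in the sense that matters here.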

Only in some cases, such as scientific and most historical writing, is the literal meaning of a text its principal meaning. For many works, especially works of literature and philosophy, the message or theme requires a holistic understanding of the text; it is not conveyed by any piece or excerpt that AI can scan. Often, indeed, the meaning of a work may depend on the reader’s imaginative reception of it, as is the case with poetry. And many texts have multiple levels of meaning, so that a literal reading may be true as far as it goes, while being less important than the symbolic reading. Or the real meaning of a text may be the exact opposite of its surface meaning, as in satirical writing. The purpose of much theological and poetic writing is to open a window onto a numinous world, and texts in disciplines such as philosophy may depend entirely on abstract ideas and entities. The reader of any of these kinds of texts must be able to perceive the reality behind the words—reading and understanding the entire text (including very abstract ideas and what they entail or imply), taking into consideration the writer’s goal, presuppositions, and biases, and then relating the work to others in order to ascertain its thoroughness, accuracy, and the value of its contribution. Only thus can a full view of the subject emerge.

On this ground alone, it is plain that large language models will never replicate human knowing. AI can parrot what real minds have thought and said on these topics, and thus sound intelligent. What it cannot do is understand material.

Humans are distinguished by our ability to perceive reality. Those of us who consistently misperceive reality are regarded as incapacitated—that is, mentally ill. Our judicial system is based on the perception of reality, for it presumes real crimes committed by real people using real weapons, and the perception or detection of all these realities. Further, we humans believe that what we perceive directly may point to a reality that is beyond perception. Peter Kreeft’s famous argument is one example: “There is the music of Johann Sebastian Bach. Therefore there must be a God.” The ability of visual art to convey depth beyond the canvas is another.

The goal of human knowing is always to know something about reality, regardless of whether that knowledge has operational value. By contrast, neither an animal nor an AI seeks the reality of the real. AI must employ symbols, which have no meaning except that assigned to them by someone outside the computer system. The implication for the uniqueness of humans is straightforward. Those who would assimilate humans to computers use an argument with this compound premise: Humans are material only and Human functions can be reduced to algorithms. The conclusion is, Computers can duplicate human minds. But if computers cannot duplicate human minds, then it follows that either (or both) Humans are not material only or Human functions cannot be reduced to algorithms. These are fairly momentous points, and they suggest one reason why understanding what computers can and cannot do is important.
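Put formally, the reply is a modus tollens on a conjunction. Writing M for “humans are material only,” A for “human functions can be reduced to algorithms,” and C for “computers can duplicate human minds”:

```latex
(M \land A) \rightarrow C, \qquad \lnot C \;\vdash\; \lnot(M \land A), \quad \text{that is, } \lnot M \lor \lnot A
```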

Human knowing operates on a principle that is radically different from AI’s Humean paradigm. Humans know by means of an integrated system of sensing, motor skills, and the brain. We have direct contact with reality, and we are able to know realities that exist beyond the realities we immediately perceive. This form of knowing is supremely creative. It encompasses the way in which we understand situations we have never encountered and generate new theories about reality. Humans can “think outside the box”; AI cannot. AI can, of course, generate “ideas,” understood in the rather limited sense of data structures or random chatbot statements. That is not how humans develop new theories or deal with unexpected situations. Our perception of reality is unlike anything that can be achieved by any paradigm based on separation of functions. AI algorithms cannot creatively and analytically think through a question, using information learned from reading and research, bringing to bear a critical eye for discerning what is valuable and a perception of reality for synthesizing new ideas. They can only ape human intelligence. The AI paradigm reacts to stimuli in the form of sense-type data or website texts; it cannot react, except very indirectly, to any underlying reality. It does not know what it is doing.

Moreover, AI systems are backward-looking rather than forward-looking, because they are based on existing knowledge. None has the ability to create new visions of reality, new theories. Of course they can be used to make predictions or forecasts about the future; even simple regression analysis can do that. And they can help us to see things that we otherwise could not see, such as simulations of the evolution of the universe. But these simulations are based on current theories—for instance, about the constitution of the universe and the laws governing it. AI cannot advance human knowledge in any theoretical sense.

AI will be expected to do things it will never be able to do. The result will be fruitless expenditures of money and time. Worse, AI-controlled systems may misbehave, leading to disaster.

In the recent past, computers took over labor-intensive clerical tasks such as bookkeeping, account-balancing, and stock-transaction processing, displacing legions of clerks. No one today would transact with a bank that had dozens of people working adding machines in the back room. Newer computer systems will tackle more complicated but still well-defined tasks, which will likewise displace some types of workers. AI is suited to tasks that can be narrowly defined and implemented algorithmically. Tasks that require spontaneous decision-making in uncertain environments—in other words, most tasks—are very ill suited to it.

What good, then, is AI? AI is an evolutionary development, a continuation of the ongoing process of determining what is needed, then creating and improving software and systems that meet these needs. Today’s word processors are a great advance upon the crude, character-based word processors of the first PCs. Likewise, image editing and manipulation have come a long way, and the latest AI-based versions continue that trajectory. In the area of pattern recognition—important in medicine and other applications—AI will result in improvements. AI’s real value will be in specialized applications where it can enhance human capabilities and productivity. AI will never become conscious, replace humanity, or “take over.” It will displace some workers, though historically technology has created more jobs than it destroys. It will therefore impose burdens on society to ensure that those who are displaced are not abandoned.

Can AI be made “smarter” and overcome its current deficiencies? Unlikely. Recent reports tell us of new projects running behind schedule and over budget. “There may not be enough data in the world to make it smart enough,” observed the Wall Street Journal of a new project from OpenAI in December 2024. Large language models appear to be losing steam, and so far the “breakthroughs” of 2025 have turned out to be less than advertised. The Chinese AI engine DeepSeek roiled financial markets when it debuted its free chatbot in late January. But DeepSeek’s advantages amount to cheaper hardware and faster training; it does not represent a fundamental advance over OpenAI and other predecessors. Similarly, a faculty of “reasoning” has been attributed to new and emergent AIs. The term “reasoning” is rather vague, as any computer program performs some sort of “reasoning” to get results. In its AI usage, “reasoning” means committing fewer mistakes when dealing with real-world problems by doing some “fact-checking,” which in turn amounts to (for instance) taking more time to weigh different scenarios. This improvement does not alter the basic capabilities and limitations of LLMs.

AI researchers are trying to square the circle: If you begin with bad or inconsistent data, no matter how vast the amount, you cannot make it converge reliably on the correct answer. This performance limit arises because the AI paradigm of knowing is different from—and inferior to—the human paradigm.
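The statistical point is elementary. In the hypothetical simulation below (the numbers are invented for illustration), every measurement carries a systematic bias; averaging a thousand of them or a million converges, ever more confidently, on the wrong answer. Scale removes scatter, not error.

```python
import random

TRUE_VALUE = 10.0
BIAS = 2.0   # systematic error baked into the data source

def biased_sample() -> float:
    """One measurement from a source that is consistently wrong, plus random noise."""
    return TRUE_VALUE + BIAS + random.gauss(0.0, 5.0)

for n in (1_000, 1_000_000):
    estimate = sum(biased_sample() for _ in range(n)) / n
    print(f"n={n:>9,}  estimate={estimate:.3f}  (true value: {TRUE_VALUE})")
# The estimate settles near 12.0, not 10.0, no matter how large n grows.
```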

AI is a response to the complexification of our modern industrial and information society. The interconnectedness, specialization, and drive for efficiency combine to push society to ever greater integration and complexity. Joseph Tainter has pointed out that such complexification is a problem-solving mechanism. Primitive societies begin with very little organization, but when they fight, they learn that they must have a warrior class, a food production class, and so on. After a long period, all of the systems we take for granted today become necessary. These systems, and the attendant training and specialization, improve society, but at great organizational and resource cost. Today, with our extremely complex societies, powerful tools such as computers are needed to manage and control all the systems.

Hype around the dangers of AI, though understandable, is misplaced. The real danger is not that some AI system will take over the world or go rogue like HAL. Rather, the real danger lurks in the complexification of society. As computer-based systems take over functions spanning ever more components of society, such as the power grid, the likelihood of malfunction naturally increases. The temptation will be to give AI systems more responsibility than they can handle, not from malicious intent, but through overconfidence. As systems employ ever more data and rely on ever more complex algorithms, their vulnerabilities—and thus the real dangers of AI—sooner or later become apparent.

The dangers of AI give rise to many plausible scenarios. An AI system might encounter a situation for which it was not programmed, and do something that leads to disaster. A system might be hacked by a malicious actor. The interaction of the components of a system might lead to unanticipated instabilities. An undiscovered programming bug might cause a system to malfunction, with potentially disastrous results. Perhaps most important, the societal cost of AI systems might exceed the value they add.

These vulnerabilities demonstrate that it will always be necessary for AI systems to have people involved in decision loops, since people can supply the connection to reality that the systems lack. They also imply that society may reach a point at which AI is more trouble than it is worth. It is now reported that AI data centers are straining power grids, distorting the quality of electricity in homes and thus endangering appliances and raising the threat of fires. Does the value that AI adds justify the enormous energy that it consumes? The answer is not clear. The question is barely under discussion.

AI will never replace the human mind. The Humean paradigm limits AI’s capabilities: The dreams of the early days of computers have not materialized, scaling has not brought qualitative changes, and though AI will solve problems and provide tools, it will not become conscious or even capable of many simple human tasks. AI’s limitation to algorithmic operations strongly suggests that human knowing is unique. Far from demonstrating that humans are mere material objects, AI demonstrates, precisely by its failure, that humans are not reducible to computing machinery.

The real threat from AI comes not from any possibility that it will become sentient and smarter than humans, but from complexification arising from the misuse of AI-based systems to control critical infrastructure: the power grid, military decisions, economic systems. Programming errors, encounters with unanticipated situations, or hackers can disrupt the AI system and cause serious malfunctions. Unless humans are kept in the loop, disaster is only a matter of time.

Image by Jason Allen. Image Cropped.
