In these early days of his pontificate, Pope Leo XIV has made one thing clear: The responsible use of AI will be one of his central themes. It has me thinking about landscaping.
Ten years ago, I lived with my wife and children in a two-bedroom house with a small yard. My job every weekend was to cut the grass and trim the bushes. Done right, it would take an hour. And though it wasn’t back-breaking work, I usually did it in thick humidity, and there was much sweating. Afterward I would take a shower, put on fresh clothes, and grab a cold beer, taking the first sip while admiring the lawn, low and neat and striped. It would be hard to overstate how satisfying that moment was.
Ours wasn’t the finest yard on the block—there was a lot of crabgrass, and the lines weren’t flawless. But when all was said and done, I could stare at this small patch of manicured land and say, “You know what? I did that.”
Eventually we moved, and our family and our yard grew larger. I was needed for other things on Saturdays. So we outsourced the mowing. It wouldn’t be practical for me to keep doing it, my wife said, and I agreed. Today, I can still look out over the lawn on Saturday evenings with a beer in hand. And to be honest, the lawn looks better than when I was cutting it. But I can’t shake the thought that Saturdays are somehow thinner and smaller and less complete. Something has been lost.
In his 1981 encyclical Laborem Exercens, Pope John Paul II highlighted the two ends of human work: the objective and the subjective. The objective end, the object of work, is to make things that improve the world, like inventing a sewing machine or building a house or teaching double-entry accounting. When I mow the lawn, I produce something of value: a cleaner, more walkable, more aesthetically pleasing patch of land. Work is for others, for society.
The other end of work is its subjective value. As a person works, John Paul wrote, “these actions must all serve to realize his humanity, to fulfill the calling to be a person.” In other words, work is undertaken not just for the sake of the thing produced, but for the sake of the person producing it. The creation of something new doesn’t merely transform raw materials; it changes the person who produces it. When I mow the lawn, it moves something in me. It brings about a sense of learning or accomplishment or humility that makes me more human. Work is for the worker.
Ideally, societies are built and economies are run with both the objective and subjective ends in mind. In practice, the two are often at odds. New machines destroy jobs. They create jobs, too, but the old job, that thing that once existed, is destroyed. There are no more musket manufacturers.
Of course, human life has always been about disruption and its tradeoffs. You have a new sibling (good!), but now you get less undivided attention (sad!). It’s beautiful and sunny outside (good!), but now the beach is crowded (sad!). Your single topped the charts (good!), but now you can’t go to a restaurant in peace (sad!). We always hope that new technologies bring about real progress, that the good outweighs the bad. But that’s not always the case. Electric blankets kept us warm (good!), but they caused house fires and leukemia (bad!).
Our great task, when it comes to markets and the economy, is to weigh the true costs and benefits of things. We gain a more complete and nuanced view as we learn more. This is in the nature of negative externalities—things whose true cost is hidden or not immediately apparent. Dumping a factory’s garbage into the river may boost profit margins in the short term, but it exacts a terrible cost from society over the long term. The hope, then, is that over time people or governments recognize this hidden toll and correct it.
What is striking about the debate over artificial intelligence is how haphazardly we’ve weighed the negatives. The powers of AI are mind-blowing and immediately apparent. In twelve seconds, you can write a press release, code a website, or analyze the use of foreshadowing in Hamlet. Artificial intelligence clearly aids the objective ends of work. It mows a lawn much better than I can.
But as a society, we have overemphasized AI’s progress toward work’s objective goals and underemphasized what it does to work’s subjective ends. Pope Leo stressed this point at the Vatican’s recent AI conference, saying that any judgment of artificial intelligence “entails taking into account the well-being of the human person not only materially, but also intellectually and spiritually. . . . The benefits or risks of AI must be evaluated precisely according to this superior ethical criterion.”
This “superior ethical criterion,” the subjective end of work, is immediately evident to parents. When your daughter is dangling from the monkey bars, if your only concern were the objective end of the work—namely, getting her body from one end of the apparatus to the other—you would just carry her to the other end.
But what a stupid idea! We all know that getting across the monkey bars is worthwhile precisely because of the time and difficulty and failure—the inefficiencies, if you will—involved in accomplishing it. As it turns out, time and difficulty and failure are the only way to achieve the subjective end of work—which is also called character.
Great managers, great businesses, and great economies produce both objects of value and people of character. Artificial intelligence thus far has produced only the former. Consider a recent study by Microsoft and Carnegie Mellon that tracked 319 knowledge workers who used AI tools. It found two things: Generative AI improves the efficiency of workers, and it makes them lazier thinkers. A similar MIT study found that prolonged use of ChatGPT produces an “accumulation of cognitive debt”—one of the more creative euphemisms for brain rot. Study after study confirms what many of us already knew: AI makes us both more efficient and worse versions of ourselves.
It’s easy to criticize AI for making us dumber. It’s harder to prescribe how to deal with it. What guidelines should we follow in determining how—and whether—we should use AI tools?
One answer is prudential judgment. In some cases, the deliberation is easy: It’s obvious that I should use a knife to cut vegetables and that I shouldn’t use a robot to read my kids’ bedtime stories. In the in-between cases, we have to make judgment calls.
If you need to decide how or whether to use an AI tool—in writing an essay, charting data, analyzing survey results, creating a song, editing a video, writing a thank-you card, or deciding where to live—here are a few questions to aid your judgment call.
Does AI stimulate critical thinking or outsource it? If it generates time savings, what are you doing with the surplus time? If the primary gain is efficiency, how much have you learned in life from doing things inefficiently? Since you’ve begun using AI tools, have you become more fulfilled or less? If you were teaching your son to do this task, would you have him use the tool or not? What do you, the worker, see as the purpose of work? Does this tool help you fulfill that purpose? If you were presenting this work to God, how would he view the process by which you created it?
Henry David Thoreau wrote, “The cost of a thing is the amount of . . . life which is required to be exchanged for it.” The cost of AI must be assessed by a similar question: How much of my humanity must I exchange for the privilege of using this tool?