Consider the obituary column in your local newspaper—not the obituary of anyone famous but just an ordinary obituary of an ordinary person from an ordinary place.
Consider it first as a surviving family member or friend, the one who has to gather the information for the obituary and select the appropriate facts. Many of us have had this experience and know how difficult and unsatisfying the process can be. The right balance between the competing demands of tenderness, respect, justice, loyalty, and candor is hard to find.
Consider, as well, what photograph of the deceased should be used to accompany the text. Rightly or wrongly, we grant enormous weight to the image in forming our enduring sense of the person. What image, in what setting and at what age, should be provided? Should the photograph be formal and artful, soberly and self-consciously posed? Or casual and spontaneous, a blithe and impromptu slice of life? Should it record youth, or maturity, or old age?
The sophisticated response to this string of questions is to assert that there is no such thing as a human totality or essence, and leave it at that. There is no unitary self, only—as Emerson once put it—a series of surfaces upon which we skate. But such a glib and weightless jibe, while it may be good enough for the seminar room, is no help. Obituaries must still be written, and photographs provided, because one’s respects must be paid, and there are better and worse ways of going about it. The roughness of the judgments that life forces upon us does not excuse us from responsibility for making them.
Life rarely resembles a Hollywood movie, in which the hero goes out in a blaze of glory, leaving a trail of smiles, admiration, harmony, and contentment in his wake. The actual end of most human lives is sad, painful, sometimes grueling, profoundly embarrassing, and pathetic, often leaving emptiness, loss, regret, relief, and other contradictory and disturbing emotions in its wake. And so one would never pick a death-bed photograph to accompany an obituary. (Indeed, it probably reveals something important about our age’s sensibilities that we have come to find unthinkably ghoulish and unpleasant the once-venerable idea of creating a death mask as a memento of the departed.)
But why would one then feel bound to use an old-age photograph instead? Why not make the photograph stand as an idealized representation of the departed, not in a state of decline or debility, but at the pinnacle of physical strength and beauty—even if the face being rendered thereby is a face from thirty or forty years ago, a face much more attractive and cheerful, but one that almost no one still living would recognize?
We can see the logic in that choice and yet ask: What is really being memorialized by such a photograph? Doesn’t such a choice imply that the aged are merely living on the downhill slope of life and that our essential nature is actually realized in youth or early adulthood, and that nothing worth representing or acknowledging is gained by subsequent experience—as though there were some brief and shining moment, at around the age of thirty-five, when one was at one’s peak, was fully oneself, even if it hardly seemed thus at the time? One sometimes notices that women who die in their seventies and eighties are nevertheless represented in their obituaries by pictures taken in their twenties or early thirties. Is this merely an example of all-too-human vanity at work? Or is there an implicit judgment about when a woman is most fully herself? When we judge that a man is “past his prime,” does that statement carry a larger significance about the way we understand the relative dignity of life’s passages?
A picture may be worth a thousand words—but there are many things a picture can simply never do. No single moment is ever going to capture the essence of a human being. No one can translate four dimensions into two—or imagine what individuals look like sub specie aeternitatis, as they must appear in the eyes of God. Great portrait painting may gesture toward the larger frame within which an individual life moves and has its being. But plain obituary photographs claim no such ingenuity. Indeed, what makes them so poignant is precisely their artlessness. It is as if their subjects make a silent plea to us, a modern restatement of Thomas Gray’s “Elegy Written in a Country Churchyard,” asking us to keep faith with them by remembering the unrendered reality of their lives, filling in the silences, completing their stories in our hearts.
An obituary always falls short of what we would like it to be—and yet, it is better than nothing. When we write one, we believe we are remembering something discrete and doing something more than recording a list of items on a curriculum vitae. We are, in some sense, striving for a God’s-eye view: not just people at particular moments in their lives, but people in their distilled essence, both as they were known and as they knew, or tried to know, themselves. We are attempting to represent a soul, something whose nature is greater and deeper than any particular instance can adequately show.
Our age, of course, prefers to speak of selves. “Souls” seems a term too laden with metaphysical implications to pass through customs. But it is striking to note how poorly the word “self,” even though it is one of the cardinal terms of our discourse, serves us as a marker for that thread of essential continuity in the individual life that we acknowledge and commemorate in the obituary. An obituary is not, or not only, about a self. The self is too changeable, too contingent, and too interior a thing for that, and too tied to a romantic view of the isolated and autonomous individual, to give an adequate account of a life. The self is a movable and malleable target, one that adapts to changing circumstances, revising its constitution repeatedly over the course of an individual life, taking on strikingly different colorations at different times.
And it is, in some fundamental way, unreachable. Indeed, the self can even be thought of as something that doesn’t match our lives exactly, coming into being after birth, as in the psychological development of a very young child, and ceasing to exist before death, as in cases of severe dementia or mental impairment. Yet even when a sense of self seems to have departed entirely from an individual we know—and this disappearance itself is often hard to ascertain, since the self is so irreducibly a subjective and interior phenomenon, and is so remarkably protean and resilient—there is something else that remains. What is one to call it?
That something else, I would contend, is better described by the term “person.” It is the person, not merely the self, that we attempt to capture in the obituary. It is the person, not the self, that is not only the home address of our consciousness, but the nexus of our social relations, the chief object of our society’s legal protections, the bearer of its political rights, and the communicant in its religious life. To put it another way, it is the person, not the self, whose nature is inextricably bound up in the web of obligations and duties that characterize our actual lives in history, in human society—child, parent, sibling, spouse, associate, friend, and citizen—the positions in which we find ourselves functioning both as agents and acted-upon.
The concept of the self, so steeped for us in romantic individualism, once seemed the most stable thing of all, the resting place of the Cartesian cogito and the seat of conscience. The young Emerson could still believe that introspection was the royal road to the universe’s secrets, so that the commands to “know thyself” and “study nature” were different ways of saying the same thing. But in the long years since Emerson, the link between self and nature has broken down, leaving us to soldier on alone in exploring the dim and misty marshlands of ungrounded subjectivity. The self has proven a highly unstable concept, having a tendency to dissolve on closer examination into a kaleidoscopic whirl of unrelated colors and moods, an ensemble of social roles, a play of lights undirected by any integrative force standing behind them all.
The concept of person, however, extending all the way back to its Latin roots (persona), accepts the social nature of the human individual, and the necessity of social recognition, without ever regarding the individual as reducible to these things. In a word, it stands nearer to the facts of social existence. It is a less vivid but more fundamental concept. A self is what I experience. A person is what I am.
The person has not fared especially well at the hands of modern attempts to write about history, which have generally sought to locate historical explanations in the workings of large structures, impersonal forces, and social groups rather than the vagaries and razor-edged contingencies of individual character and agency.
Some of this has to do with the enduring quest to make history resemble a science, a vision that took hold in earnest in the nineteenth century and has never entirely lost its appeal among academic historians. And as Alexis de Tocqueville observed, some of this has to do with the profound intellectual changes characteristic of a democratizing world, changes that alter both the subject matter of history and the historian’s manner of approach to it. Historians who write in an aristocratic age tend, Tocqueville believed, “to refer all occurrences to the particular will and character of certain individuals.” But those historians who write in a large modern democracy tend to make “general facts serve to explain more things,” so that “fewer things are then assignable to individual influences.” Ironically, the same democratic age that exalted the value of the individual also rendered that individual a prisoner of aggregate forces, with little or no power to control or affect the events of his day.
Tocqueville’s comments seem especially germane in the present context, for he was very concerned that the democratic historian’s emphasis on “general facts,” while not entirely wrong, was prone to exaggeration, and therefore could become dangerously misleading. Although the change in historical circumstances made necessary a change in the manner of historical analysis, the extent of that change could easily be overstated. The historians who devoted all their attention to general causes were, he thought, wrong to deny the special influence of individuals, merely because such influence was more difficult to track than had been the case in earlier times.
He saw this not only as an explanatory problem but as a psychological and spiritual one. By promoting such distortions, these writers were creating false images of the human situation in both past and present, which could have the disastrous effect of depriving “the people themselves of the power of modifying their condition,” thereby encouraging a kind of fatalism and paralysis of will, and a steady contraction of the human prospect. While the historians of antiquity “taught how to command,” those in our own time, he complained, “teach only how to obey.” They produce works in which “the author often appears great, but humanity is always diminutive.” The belief that individual human agency should be factored out of history was, he feared, a self-fulfilling dictum, since the factored-out would come to believe it themselves.
While this generalizing tendency Tocqueville described was gathering strength, the main currents of modern Western thought in other disciplines, such as philosophy and literary studies, were moving in the opposite direction, toward an ever-tighter embrace of antinomianism and radical subjectivity. Yet the paradox was only a seeming one, for the two opposites went together. What, after all, could be more logical, in a sense, than the impulse to resist the coercive, domineering force of the increasingly organized and mechanized external world of the nineteenth century—ordered and disciplined and measured on every side by clocks, factories, whistles, telephones, maps, telegraphs, railroads, state bureaucracies, large business corporations, and Standard Time Zones—by withdrawing into a zone of inner freedom, by cultivating and furnishing a large and luxuriant interior realm, insulated from the world’s tightening grip? It is no paradox that we use the term “modern” to refer both to the external material and social forces that transformed the world, and to the internal intellectual and expressive movements that wrestled with, and often deplored, the human costs of that same transformation. They were two facets of the same phenomenon.
And yet, the seeming gulf of this opposition helps explain much about the ways in which the dualism of “individualism” and “collectivism” came to be understood for much of the twentieth century. It helps explain why so much of the energy of American social and cultural criticism, past and present, has been devoted to the sustained critique of nearly any perceived force of political power or social conformism—power elites, corporate magnates, hidden persuaders, would-be traditions, and other suspect cultural hegemonies—that might inhibit the full expression of the self as an autonomous or relatively unconditioned historical actor.
Our age has lost none of its appetite for fables of personal liberation, and it tends to side with the rebels Roger Williams and Anne Hutchinson, or with the precepts of Emersonian self-reliance, or with the antinomian moral fables offered repeatedly by movies. We are asked to side with the put-upon individual, cast as an unjustly thwarted soul yearning to breathe free, and we are instructed to hiss at the figures of social or political authority, whose efforts to maintain order establish them as monsters and enemies of humanity.
Yet autonomy, like the self that purports to exercise it, turns out to be elusive and unreliable, never quite delivering what it promises. Even the most energetically “unencumbered self” is always already a “person” enmeshed in social, intellectual, and institutional frameworks that structure and enable the ideal of autonomy. But the liberatory preoccupations of so much modern scholarship and thought have made it difficult to move beyond the amorphous and untenable concept of self to the sturdier and more tenable concept of person. Indeed, one could argue, following the historian Christopher Shannon, that the agenda of modern cultural criticism, relentlessly intent as it has been upon “the destabilization of received social meanings,” has served only to further the social trends it deplores, including the reduction of an ever-widening range of human activities and relations to the status of commodities and instruments, rather than ends in themselves.
One would be a bold prophet indeed to predict that this tendency will exhaust itself anytime soon. But things will start to change when we insist upon seeing the human person as the focal point of historical inquiry, the cynosure of historical meaning, the fleetingly visible figure to be sought in history’s lavish carpet. The study of history is arid and incomplete unless it is understood as a work about (and by) individual human beings—and, moreover, a story whose substance and manner of telling are matters of moral significance. A shift from “self” to “person” can strengthen that endeavor. It can rescue the individual from being smothered by giant structural explanations, the prospect that Tocqueville feared. But it also can rescue the individual from being let loose into a whirling centrifuge of subjectivism and indeterminacy, a prospect even more inimical to historical understanding.
The recovery of the idea of the human person will necessarily entail a reappropriation of neglected religious and other longstanding moral and spiritual normative traditions. As Charles Taylor has argued in his magisterial study Sources of the Self, the coherence and integrity of the human person rests upon a moral foundation, on a set of presuppositions about the structure and teleology of the moral universe. A moral disposition toward one’s world, and a prior assent to certain moral criteria, are the preconditions of there being any psychological order and consistency at all in a human personality. Health is built upon morality, and not vice versa. The concept of moral responsibility, which therapy would seek to banish or marginalize, turns out to be essential and inescapable. There is no value-neutral way of being happy and whole.
Martin Buber’s 1938 essay “What is Man?” is still remarkably fresh and clarifying, stressing the “dialogic” and relational qualities for which Buber had become famous with his great 1923 book I and Thou. For Buber, the human person was reducible neither to the discrete features of individualism nor the collective ones of social aggregates, let alone the vagaries of language and discourse. The study of man, Buber asserted instead, must start with the consideration of “man with man.” If you begin there, “you see human life, dynamic, twofold, the giver and the receiver, he who does and he who endures, the attacking force and the defending force, the nature which investigates and the nature which supplies information, the request begged and granted—and always both together, completing one another in mutual contribution, together showing forth man.”
Or, as the French Neo-Thomist Jacques Maritain put it nearly a decade later, “There is nothing more illusory than to pose the problem of the person and the common good in terms of opposition,” for in reality, it is “in the nature of things that man, as part of society, should be ordained to the common good.” The problem of the person “is posed in terms of reciprocal subordination and mutual implication.” Here one finds no concession either to the romance of the heroic atomic self or to the historiography of vast impersonal forces, or to the putative opposition between them.
One might go further and point out that the concept of “person” helps us understand human dignity as something deriving from the fact of one’s intrinsic being—rather than from the extent of freestanding autonomy, the “quality of life,” that a person might demonstrate. Such a view would stand in the longer Western tradition of individualism, affirming the diversity of legitimate human roles and ranks in society as we find it. At the same time, it would be in direct competition with the increasingly influential view that the dignity of any individual life depends upon the competency of the individual, as though a self with a poor quality of life has a life not worth living.
One thing seems clear, however. We need to rescue the idea of individual dignity from its captivity in individual psychology and postmodernist subjectivity. And this is what the word “person” begins to accomplish. It reaffirms the core meaning of individualism with its insistence upon the ultimate value of the individual human being. But it also embraces the core insight of communitarianism: the recognition that the self is made in relationship and culture, and the richest forms of individuality cannot be achieved without the sustained company of others. And it would build upon Tocqueville’s further insight that it is in the school of public life, and in the embrace and exercise of the title of “citizen,” that the selves of men and women become most meaningfully equal, individuated, and free—not in those fleeting, and often illusory, moments when they escape the constraints of society, and retreat into a zone of privacy, subjectivity, and endlessly reconstructed narratives of the “self.”
Henry James’ celebrated story “The Figure in the Carpet,” a characteristically multilayered and inscrutable tale, relates the quest of an earnest young literary critic for the hidden meaning, “the undiscovered, not to say undiscoverable, secret,” animating the voluminous work of an eminent novelist named Hugh Vereker. What was sought by all critics, but grasped by none, was “the general intention” behind all of Vereker’s books, their unifying meaning. The critics and Vereker’s other readers, though consistently mystified, were yet certain that “the thing we were all so blank about was vividly there. It was something . . . in the primal plan, something like a complex figure in a Persian carpet.”
The story is a masterpiece of ambiguity, impossible to reduce to a single stable interpretation. Indeed, it can even be read as a mockery of the whole literary enterprise, pairing dull and uncomprehending readers who ploddingly manage to miss the obvious, with clever authors (both the fictional Vereker and the actual James) who feel compelled to play the trickster, taunting their readers with the hint that there is something—indeed, the whole point of it all—that they don’t get.
Predictably, many recent critics, brilliantly managing to combine indeterminacy with reductionism, think the story is all about homosexuality and the epistemology of the closet. There are even Oz-like moments when one suspects that all the elaborate concealments exist to conceal precisely nothing, and the story is an exercise in mockery, put forward by an author who himself felt perpetually misunderstood and undervalued. Yet such a bitter and nihilistic interpretation is very hard to sustain in the end, and one is left instead with the sense that the search for a unifying idea—for the figure in the carpet—is a quest too compelling to refuse. The name “Vereker,” after all, suggests some relationship to the truth.
And so the story teaches us something about how to look for such things—if not necessarily to know when we have found them—and to know what kinds of secrets are worth pursuing. The great literary critic Frank Kermode wrote of “The Figure in the Carpet” that “Vereker’s secret—‘the thing for the critic to find’—is not, we infer, the sort of thing the celibate and impotent may look for when they speculate about sex. It is a triumph of patience, a quality pervading the life of the subject, like marriage. It is not the subject but the treatment, which is why it is a suffusing presence in all Vereker’s work, and not a nugget hidden here or there. It is a matter of life and death and a matter of jokes and games. The error of criticism is a ludicrous one; it is also tragic.”
We should avoid such errors in searching for the figure, for the human person. The suffusing presence will not be disclosed in a single fact-nugget, or by a dark secret, pulled from a personal diary or a police file or a divorce testimony or a how-to manual. Instead, it is the sort of complex secret that reposes in plain view, an abiding condition that can only be seen, if at all, by standing still and looking, until the pattern emerges and makes meaningful the life of the subject.
It may be as hard to detect as the atmosphere. Or the enveloping climate of marriage. Or the light that rings the ordinary images of ordinary faces on the local paper’s obituary page—or the shadowy depths beneath their smiles. Seeing it may well be a gift of grace. Yet such are the lights and shadows, and figures in the carpet, for which we should search.
Wilfred M. McClay holds the SunTrust Chair of Humanities at the University of Tennessee at Chattanooga.