The Trust Game
From Short Cons to the Wealth of Nations
The scene of the crime was an ARCO station, in a sketchy neighborhood on the outskirts of Santa Barbara, where I had an after-school job pumping gas.
One day I was standing in the doorway to the office, feeling the breeze and waiting for the next customer to pull up to the pump, when a well-dressed but slightly worried-looking guy walked around from the side of the building.
“Maybe you can help me,” he said. “I’ve got a job interview up in Goleta and I don’t know what to do.”
“What’s up?” I said.
“Well, look . . .” He held out a small gift box from a fancy jewelry store in town. Then he opened the box, and inside was a pearl necklace, glimmering in the California sun.
“I just used your men’s room and I found this on the floor. Amazing, huh? Has anybody called?”
“Man, that’s a nice piece of jewelry. Somebody’s really going to be upset they lost this. What do you think we should do? I can’t just keep it.”
We both stood there for a moment, studying the pearls, which to my eighteen-year-old eyes looked very expensive indeed.
Then, as if on cue, the phone rang. I reached over to the desk and answered, and a man on the other end said, “I was just at your station. I had this necklace I bought for my wife and I think maybe it fell out while—”
“Hey!” I said. “I can’t believe it . . . the guy’s right here. He just found it in the men’s room.”
“That’s incredible,” the man on the phone said. “Look. Tell him to stay put and hang on to it. I can be there in half an hour.”
“Let me give you a phone number,” he said. Which he proceeded to do. “And listen . . . tell him I’m bringing $200 for his troubles. He really saved my life. Or at least my marriage!”
I put down the phone and excitedly explained to my new friend that the owner would be here in half an hour with a $200 reward. But the guy in the station with me didn’t appear too excited.
“Oh, man . . . it’s not like I can wait. I gotta be in Goleta by then, and I really need to land this job.” Then he looked at me and asked again, “What should we do?”
I thought about it for a moment, and he watched me think.
“I’ll be here till closing,” I said. “I guess I can just hold on to it till he comes.”
“Would you?” He smiled brightly, then heaved a big sigh. “Man, that’d be great. So then we should split the reward.”
“Really?” I said, expressing amazement, even as the wheels in my head were already churning up ways to dispose of that cash.
But then he bit his lip, seemingly troubled once again.
“Only problem is . . . I’m not coming back this way.”
“That’s okay,” I said. “We can divvy it up in advance. Here . . . I can give you your half right now.”
Which is what I did, actually “borrowing” $100 from the gas station’s cash register, and handing it over to this guy I’d known for all of five minutes.
As I’m sure you figured out long before this point, the “pearl” necklace was paste, a cheap string of beads in an expensive-looking box, and of course the guy on the phone was in cahoots with the guy who showed up at the station.
So how could anyone be so dumb as to go along with this scam, forking over what to me was real money on the basis of such a lame story and cheesy coincidence?
Was I simply overwhelmed by greed?
Well, no doubt about it, I had dollar signs in my eyes as I looked at the jewelry and heard the magic word reward. But I was a reasonably smart kid.
It also wasn’t as if I’d never been schooled in right and wrong. You think your parents were strict? Mine took me out of Catholic school because it wasn’t strict enough! And although it sounds more like a punch line than the truth, before my mother was my mother, she was a nun. She had spent four years as a member of the Sisters of Loretto at the Foot of the Cross, and my upbringing, complete with Latin mass, years of breathing incense as an altar boy, and white-glove inspections of my room for dust, left no doubt that we are all born in sin and driven by base passions that have to be tightly constrained and relentlessly monitored to keep us from behaving badly. My mom’s view was the classic approach to governing human nature, the top-down approach filled with “thou shalts” and “thou shalt nots” that’s held sway throughout Western history. She based her child-rearing on the assumption that unselfish, moral behavior was impossible without the ever-present threat of punishment, and the more terrifying the better. So cue those images of hell from Hieronymus Bosch.
But when I think back to the incident at the ARCO station, it’s not greed that I remember, or any of the other deadly sins that the philosophers and theologians (and my mother) worried so much about. I think I was motivated by a genuine desire to be of assistance. This poor guy had an important interview, and he looked flustered, down on his luck, almost desperate. With the first words out of his mouth he asked for my help, and he really looked like he needed it. But more than that, in everything he said and did, he appeared to put an amazing amount of trust in me, relying on a high school kid to get the necklace back to its rightful owner. Several times he asked me, “What should we do?” And then he left me in charge of doing it. After a show of faith like that—helping him just felt like the right thing to do.
When I went on to college, I majored in mathematical biology and economics, but questions about how we know the right thing to do stayed with me. I read a lot of moral philosophy and even theology along the way, and then after grad school, the math, the biology, the economics, and the moral concerns all came together in my early work connecting trustworthiness to prosperity.
So now let’s flash-forward to November 2001.
I’m up at two in the morning lugging equipment across town and into a lab I’ve borrowed at UCLA by convincing a UCLA post-doc named Rob Kurzban to collaborate with me. I’ve commandeered a couple of graduate students to serve as Sherpas, as well as to be official passengers so I can qualify for the carpool lane on the freeway. I’m a tenured professor of economics at Claremont Graduate University, but I’m starting a very atypical research program, stretching the boundaries of my field, which means I’m now having to do science the way indie filmmakers make movies—borrowing space, begging for funding, and hauling equipment around Los Angeles in my car. We’ve made maybe four trips back and forth between Claremont and Westwood today, and it’s at least an hour and a half each way.
I didn’t know it yet, but I was about to invent a new field called neuroeconomics, and I was going to do it by running the first vampire version of something called the Trust Game.
How the Trust Game Works
The Trust Game is a classic research tool in experimental economics, and we’re going to spend quite a bit of time with it, so here’s how it works. Let’s say you’re an undergraduate, and you need some extra cash, so you agree to take part in what’s described as a study of monetary decisions. You come to a big room, like the one I’d borrowed at UCLA, along with maybe fifteen or sixteen other people you don’t know, and you sit down in a small cubicle with a computer. You read the online instructions, which confirm that, just for showing up, you now have $10 on account, which is yours to keep. But soon you may receive more. That’s because the computer is going to ask some other randomly chosen and anonymous player—let’s call him Fred—if he would like to transfer some or all of his $10 to another anonymous player, which happens to be you.
But why would he do that? Because, according to the rules that you and Fred both just spent a few minutes reading, any amount he gives you will triple in value the moment it hits your account. But increasing your wealth wouldn’t be entirely altruistic on Fred’s part. The rules also say that if he transfers money to you, you then will be asked if you want to give some of your multiplied-by-three bonus back to him. The question is, will you? Can you be trusted to reciprocate?
The beauty of this test is that there’s no social pressure to be on your best behavior because the computers mask who is doing what. Even the experimenters know the individuals only by code numbers. So Master of the Universe or Mother Teresa—the moral model you choose to follow in giving something (or nothing) back is entirely up to you. Even when you’re paid at the end, no one else will know how much you made unless you tell them.
Let’s say that Fred takes $2 from the initial $10 bankroll he received just for showing up, and he transfers it to you. His $2 transfer triples to $6 as soon as it hits your account, which means that you’ve now got $16 (10 + 6) and Fred is down to $8 (10 – 2). So you’re doing pretty well. You don’t know exactly who you have to thank, but you do know that you’ve picked up an additional $6 and that an anonymous benefactor at one of the other computer terminals in the room is responsible. You also know that your benefactor’s decision was based on an expectation that you would be decent about it and share at least some of the wealth. After all, it’s really no skin off your nose to flip back a couple of bucks. It seems only decent—like tipping the waitress who brings you your coffee. That’s just what decent people do, right?
Let’s say you decide to give $3 back to Fred. That leaves you with $13, and brings Fred up to $11—a gain of $3 for you and $1 for him, which isn’t much, but still better than where you both started. Then again, you’re perfectly within your rights, if you so choose, to walk away with your original $10, plus the $6 bonus Fred made possible, without so much as a Thanks, chump.
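For readers who like the arithmetic laid out explicitly, the payoff rules described above can be sketched in a few lines of Python. The dollar amounts and the times-three multiplier come straight from the example in the text; the function name is just an illustrative label.

```python
ENDOWMENT = 10   # each player starts with $10 just for showing up
MULTIPLIER = 3   # every dollar A sends triples when it hits B's account

def trust_game(sent, returned):
    """Final payoffs for A (the truster, Fred) and B (the trustee, you).

    `sent` is what A transfers out of the $10 endowment;
    `returned` is what B gives back out of the tripled transfer.
    """
    assert 0 <= sent <= ENDOWMENT
    tripled = sent * MULTIPLIER
    assert 0 <= returned <= tripled
    a_payoff = ENDOWMENT - sent + returned
    b_payoff = ENDOWMENT + tripled - returned
    return a_payoff, b_payoff

# The worked example from the text: Fred sends $2, you return $3.
print(trust_game(sent=2, returned=3))   # (11, 13): Fred ends with $11, you with $13
```

Running the worked example confirms the numbers in the text: Fred’s $2 becomes your $6, and after you return $3 he holds $11 and you hold $13.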
As the amount being transferred increases, so does the potential payoff for both players: the more that gets sent, the larger the tripled pie there is to share.
But here’s the $64,000 question: If you’re under no obligation to be trustworthy, and nobody knows whether you are or not, why would you ever reward trust from a stranger with a reciprocal gesture that takes real money out of your pocket? If no one’s ever going to know, what’s the problem with being a greedy bastard and screwing the other guy? Well, according to the economic theory that held sway over most of the twentieth century, that’s exactly what you should do.
Economists had fallen in love with a concept called “rational self-interest,” which assumes that each individual makes decisions on the basis of personal advantage, and also on the basis of a rational calculation as to exactly where that advantage lies. Economic theorists had been inspired by the ideas of theoretical physics, mostly in the area of thermodynamics, with its systems of inputs and outputs moving toward equilibrium. The beauty of rational self-interest as an organizing principle was that it allowed economists to vastly simplify the math in their models. If humans always make decisions (a) rationally and (b) on the basis of self-interest, then model builders don’t have to take into consideration emotions, personality quirks, or sudden flights of lunacy. Each person—or at least the theoretical person who lives inside the models—always sizes up her options and makes a logical choice based on what’s best for her.
A fellow named John Nash, the subject of Ron Howard’s film A Beautiful Mind, actually won the 1994 Nobel Prize in economics for his work refining rational self-interest into an even more elegant and hugely influential formula called the Nash Equilibrium. According to Nash’s theorem, your response in the Trust Game should be simply to keep whatever comes to you, even though you know some other person increased your wealth partly in the hope that you’d reciprocate. In the same fashion, the Nash Equilibrium says that this other person should have enough sense to expect self-interested behavior from you and not trust you with a dime. After all, you’ve never so much as said hello. Of course, the unintended consequence of such “rational” behavior—that is, looking out for number one—is for both of you to miss the opportunity to gain by creating a larger pie, then sharing it.
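The gap between the Nash prediction and cooperation is easy to see with the same payoff rules. In the sketch below, the 50/50 split of the tripled transfer is my own illustration of full reciprocity, not a rule of the game:

```python
ENDOWMENT, MULTIPLIER = 10, 3

def payoffs(sent, returned):
    # A gives up `sent`; B receives it tripled and sends `returned` back
    tripled = sent * MULTIPLIER
    return ENDOWMENT - sent + returned, ENDOWMENT + tripled - returned

# Nash prediction: A trusts nothing, B returns nothing.
nash = payoffs(sent=0, returned=0)      # (10, 10): total pie $20

# Full trust, with an even split of the tripled transfer.
coop = payoffs(sent=10, returned=15)    # (15, 25): total pie $40

print(nash, sum(nash))   # (10, 10) 20
print(coop, sum(coop))   # (15, 25) 40
```

The “rational” pair walks away with $20 between them; the trusting pair creates, and shares, a $40 pie—exactly the missed opportunity described above.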
For more than a century, the idea that human behavior is fundamentally both rational and self-interested was presented as gospel to millions of students, including many of those who have gone on to run our most powerful businesses and government institutions. These are the people who often set the standards for behavior on Wall Street, in government, and in the boardrooms of global corporations. Yet with all deference to John Nash and his Nobel Prize, the Trust Game shows that rational self-interest is bupkis when it comes to real people.
In the United States the stakes in the game have been as high as $1,000, and in developing countries as high as three months’ average salary. With large sums or small, in dollars or dinars, participants almost always behave with more trust and trustworthiness than the established theories predict that they will. In my own experiments with the game, 90 percent of those in the A-position (the trusters, like Fred) send some money to the B-player (the recipients, like you), and about 95 percent of the B-players send some money back, based on . . . what? Gratitude? An innate sense of what’s right and wrong?
Or could the behavior possibly have something to do with a reproductive hormone with curious properties involving trust and reciprocal trustworthiness?
A Crackpot Notion?
One of my colleagues told me that this was “the stupidest idea in the world,” but to me it made perfect sense. At least it made enough sense that I wanted to check it out before I dismissed it as a crackpot notion.
Our human guinea pigs—the UCLA students who’d agreed to be tested in exchange for pizza money—began to drift in and take their seats around nine thirty in the morning. At ten o’clock I got up in front of them in my spiffy new lab coat to make a few opening remarks. I thanked them for agreeing to participate, and then I reminded them—we’d explained all this in a recruiting email—that they’d already earned $10 just for showing up.
I then gave a rough overview of what we were going to do—the same story about player A and player B that I related to you a couple of pages ago—but with an added feature. Just after the decision-making, we were going to strap tourniquets around the players’ arms and take their blood.
There was no visible reaction. They hardly seemed aware of me. They hardly seemed awake.
I told everyone to log into the computers in their booths using their identity-masking code, and to read the instructions. The protocol described in greater detail how their decisions could turn the $10 they’d already earned into more money, or how their decisions could cost them money.
Now I began to see some raised eyebrows and slightly more animated expressions. Everybody seemed to be waking up. It was as if they were thinking, So what is this? Who Wants to Be a Millionaire on a budget? Or maybe Who Wants to Be a Millionaire on a budget meets General Hospital?
I had to keep everyone occupied while we focused on each individual participant’s decision and blood draw, so I asked the larger group to start filling in a personality survey.
Then I started calling out the code numbers for various players, selected in random order. “Number Six, please make your decision. And as soon as you’re done, please raise your hand.”
The question at this point—a question to which we thought we knew the answer—was whether or not any given A-player would choose to transfer some or all of his money to a randomly designated and anonymous B-player. Would player A trust enough to give money, counting on player B to reciprocate by giving something back?
When one of my graduate students saw a hand go up, she would immediately escort the A-player, the decider, to the smaller room off to the side that we’d set up for the blood draws. It seemed unlikely that the kind of decision put before the A-player, which was a pretty cold calculation, would affect oxytocin, but we took their blood anyway because we didn’t know—no one had ever done this experiment before. What we did know was that any hormonal change in either player would be transient. Animal studies had shown that oxytocin surges in response to the right kind of stimulation, then fades after about three minutes. Which meant that the blood had to be drawn right away.
On hand to do the honors was an internist from Van Nuys named Bill Matzner. In mid-career Bill had decided to do graduate work with me, focused on health care economics. I talked him into vampire economics instead, and now he had been dragooned into being my blood tech.
As a medical doctor, Bill was invaluable to my improvised research operation. One problem, though, was that the centrifuges Bill had been nice enough to contribute were not the $7,000 refrigerated kind. Oxytocin not only fades fast in the body but also degrades rapidly at room temperature, so you have to grab it fast and keep it cold. Luckily, I’d been planning this new venture for a long time, and while roaming the campus at the end of the spring semester I’d stumbled upon some undergraduates packing all their stuff into their cars to go home for the summer. Without too much trouble I’d been able to talk them into donating their mini-fridges to the cause of science.
With our less-than-cutting-edge technology, we developed a protocol that involved spinning down the samples inside the cube refrigerators, transferring the separated blood products into microtubes, flash-freezing them to –100 degrees Centigrade using dry ice, then storing everything in Bill’s ultracold freezer twenty minutes from UCLA until we had a sufficient number of samples for analysis.
Once all the A-participants had made their decisions and we’d taken their blood, we allowed the computer to release the results to the B-players. A few might have been stiffed, but based on the Trust Game’s history, we knew that most would have the pleasant surprise of a few extra bucks added to their bankroll.
Now it was time to see how many would be willing to split the difference and give back a portion of their newly acquired wealth.
“Number Nine, please make your decision. As soon as you’re done, please raise your hand.”
Once again, if being trusted by an A caused oxytocin in a B to spike, we had only a few minutes to capture the surge.
Participant 9 sat down and rolled up his sleeve; Bill applied the tourniquet. Then Bill jabbed the needle. Then 9 howled in pain. Bill jabbed again, and again our participant shrieked. I glanced into the main room where I could see all our test subjects turning back to look toward the sound. Apparently Bill could have used even more practice than all the sessions we’d put him through.
Another volunteer fainted, which put us on the horns of a dilemma. We didn’t know how many good samples we were going to get, and with each person we had to move fast before the faint trace of oxytocin returned to baseline.
We hovered over the poor guy, Bill with his syringe, a graduate student holding our unconscious test subject as he slumped in the blood-draw chair.
“What do you want to do?” Bill asked me.
I was desperate for data. “Let’s get his blood,” I said. “Then we’ll revive him.”
But even with orange juice and cookies we still couldn’t get him up and running again. I told the other participants that we’d had a glitch and that they should just surf the Web while they waited for us to resolve it. It took fifteen minutes, but we finally put our fallen comrade back on his feet.
Walking back through the room to resume the experiment, I noticed that one of our subjects had some racy images on his computer screen—not porn, exactly, but a music site where the videos were pretty steamy. Worrying about outside-the-lab influences on him, I noted his code number when he went for his blood draw, and when I checked later sure enough, his oxytocin levels—it’s a reproductive hormone, right?—were through the roof. Given the “external stimulus” he’d been receiving, we had to toss out his data.
Over the next year and a half, we repeated this vampire version of the Trust Game fourteen times. Again, this was makeshift science.
This is what we found.
First, we saw the high levels of trust and trustworthiness we anticipated, the morally benign behavior that defies rational self-interest and the Nash Equilibrium. We also found significant economic rewards for virtue—which, given my work on the factors that make societies prosperous, came as no surprise. A-players who decided to trust their anonymous partners walked away with an average of $14, which was a 40 percent gain over the $10 they started out with. The B-players who received money from a partner who trusted them to reciprocate left the lab with an average of $17, which was a 70 percent increase. So positive social behavior was increasing the prosperity of our little population of undergraduates, even if the benefits were not distributed with perfect equality.
But what was going on at the level of blood and brain? In this first vampire Trust Game we were truly winging it, so we had to be cautious about over-interpreting and drawing unwarranted conclusions. (Plus, I was an economist! What did I know about blood values?) That’s why we kept doing the study again and again until we had a ridiculously large sample on which to base our conclusions. And what we found was a dramatic and direct correlation between a person’s level of oxytocin and her willingness to respond to a sign of trust by giving back real money.
Then again, multiple factors can feed into almost any biological or behavioral response. So to pinpoint what was—and what was not—causing the virtuous behavior, we measured nine other hormones that interact with oxytocin to see if they were having any influence. These included the male hormone testosterone, as well as the female hormones estradiol and progesterone. Then we correlated all the physiological data with personality survey questions such as “Do you look through your roommate’s stuff when they’re gone?” and “How much do you drink?” and “How often do you go to church?”
After enough analysis to make your forehead bleed, we found no link between any of these other factors and the reciprocal generosity we were seeing. The only factor that could explain the behavior was the increase in oxytocin. But how did we know it was trust that was driving the oxytocin response? How could we be sure it wasn’t just the receipt of money?
To check this out, we ran a control experiment in which all the circumstances were the same, except for the element of one human being’s faith in another human being. Rather than have the A-player decide on his own whether or not to transfer money to B, we set up a way to make the allocation random. In keeping with my low-budget, indie-filmmaker way of doing science, I drove over to Walmart and got a clear plastic container, covered the outside with duct tape, and filled it with Ping-Pong balls numbered from zero to ten. For this randomized, it’s-not-about-being-trusted version of the game, I would call out an ID number, and one of our A-participants would publicly (and randomly) pull out a numbered ball. The amount on it would then be subtracted from his account, then tripled in the account of a randomly selected B-participant. The transfer of money was still taking place, but there was no human bond at the root of it.
When participants received transfers of money based on something as impersonal as a random draw, rather than on another person’s deliberate decision to trust them, their oxytocin levels did not rise.
As icing on the cake, when the original transfer was based on trust, there was also a directly calibrated correspondence between the size of the transfer and the size of the recipient’s response. The more money sent, the higher the oxytocin level; the higher the oxytocin level, the more money given back to player A. When the money came from a random transfer, there was no correlation at all between the level of oxytocin and how generous (or not) the B-player chose to be.
We had just discovered the first non-reproductive stimulus for oxytocin release in humans. Which made me very happy for a variety of reasons, some of which involved my frustrations with the profession in which I’d been working.
The Forgotten Bond
In its “physics envy,” mainstream economics had embraced mathematics to the neglect of any real interest in human nature. This, despite the fact that economics actually came into being as an offshoot of moral philosophy. And the central question of moral philosophy—whether human beings are fundamentally good or evil—has to be the longest-running debate since debates began.
Not too long after Moses picked up the Ten Commandments on Mount Sinai, the Psalms described humanity as being “a little lower than the angels.” Arguing for the other side, the Roman playwright Plautus declared that “man is wolf to man.” Philosophers, preachers, and politicians have been going at it ever since, offering theories to pin down our moral core that range from the medieval idea of original sin, to the seventeenth-century idea that our natural state is “the war of all against all,” to the Romantic idea that we are born a blank slate upon which all manner of goodness might be written if only we have the right environment in early childhood.
And this is not merely some academic dispute. This is a debate with consequences because each contending theory competes for influence in our laws, our cultural norms, and our social policies.
Two hundred and fifty years ago, an obscure professor at the obscure University of Glasgow published a book called The Theory of Moral Sentiments, arguing that benign and generous behavior arises from our feelings of attachment to others. He said that seeing others in distress creates a bond that he called “mutual sympathy.”
In hindsight, this seems almost self-evident. We know that seeing others in distress can have such an immediate force that it makes soldiers throw themselves onto a grenade to shield their buddies from the blast. Sometimes it compels ordinary people to jump down onto the subway tracks to save a complete stranger from being crushed by an oncoming train.
Yet The Theory of Moral Sentiments created such a stir that students from all over Europe suddenly flocked to Glasgow to study with its author. Overnight the obscure professor became one of the intellectual rock stars of the eighteenth century, even though, with bulging eyes and neurotic twitches, he hardly fit the part. He lived with his mother, and he was so absentminded that he often got lost in the woods, talking to himself, dressed only in his underwear. Still, the concept of mutual sympathy was such a bolt from the blue, and his book such a hit, that he was able to travel grandly and lecture and hobnob with the likes of Voltaire and Benjamin Franklin for the rest of his days.
So what was all the fuss about? Well, for centuries, most moral thinking was like my mother’s, bound up in original sin and the fall of Adam. But here was a theory to explain moral behavior that was not all about reining in our “natural” depravity. This theory did not assume, like the seventeenth-century philosopher Thomas Hobbes, that our natural state was “the war of all against all”; nor did it rely on a higher authority, or on a mystical sixth sense, or on rational calculation and restraint to help us overcome our dog-eat-dog proclivities. Instead, The Theory of Moral Sentiments suggested that concern for others comes naturally to us, welling up through the bond of mutual sympathy.
Most secular philosophers had maintained something very much akin to the church’s dismal view of our natural inclinations as well as a similarly top-down approach to getting us to shape up. The only difference was that instead of the God of Wrath threatening us into submission, the top-down force that philosophers saw struggling to impose control was human reason. Plato described the mind as a charioteer trying to rein in the body’s wild, animal impulses, which he characterized as spirited horses. A couple of thousand years later, Pure Reason picked up an even more zealous advocate in the person of German philosopher Immanuel Kant.
In Kant’s view, the only thing that makes us human and free is to act in accordance with the rules we give ourselves, devised through reason. The most fundamental of these rules, what he called the Categorical Imperative, says that to arrive at the good, you must always act as you would if your action were to become a universal law. But where the purity of Kant’s Pure Reason may have jumped the rails was in saying that for any action to be truly moral, it must be done entirely for the sake of the moral law. If we act morally because it feels good to be virtuous—that doesn’t count. And no exceptions, regardless of outcome. If lying violates the universal law, then you absolutely must never lie, even if a psycho killer is after your friend and telling the truth about his whereabouts will lead to his death.
If this line of pure reasoning seems a bit cold and impractical, that’s only one of the many problems with top-down approaches in general. The ones that, like my mother’s, rely on religious teachings bump into the obvious fact that there are somewhere around four thousand different religions in the world, each adding its own special rules to the basic guidelines for pro-social behavior. Throughout history, nothing has led to more bloodshed and ruthless brutality than conflicts among these differing approaches to God. Which is precisely why the secular philosophers tried to rise above all that discord and find universal answers through reason. But in that effort, the philosophers carried over the same contempt for our biology that often characterizes religion. The effort to leave “mere flesh” behind depends on the notion that the mind—and the will, and the soul, and the indomitable human spirit—somehow stands apart from the body. Which is a view that modern science has proved—sorry, Mr. Kant—to be just plain wrong.
We are biological creatures, so everything we are emerges from a biological process. Biology, through natural selection, rewards and encourages behaviors that are adaptive, meaning that they contribute to health and survival in a way that produces the greatest number of descendants going forward. Oddly enough, by following that survival-of- the-fittest directive, nature arrives at many of the same moral conclusions offered by religion, namely, that it is often best to behave in a way that is cooperative and, for want of a better word, moral. Nature simply gets to the same place by following a different, and perhaps more universal, path.
The notion of mutual sympathy was much more human-centric than anything that had come before, just the kind of moral philosophy that the budding Romantic Movement, poised to give the world the Noble Savage and The Rights of Man, could get behind in a big way. If much of human history appeared driven by the ruthlessness that obsessed thinkers like Hobbes, perhaps it was because of specific influences on the system. Alter the nature and extent of those influences, and you might alter the moral response.
The eighteenth century was still a very long time before science could contribute much to a discussion of behavior, so our nerdy professor from Glasgow was understandably a little vague about how this system of mutual sympathy operated. Still, we see something very much like it—we call it empathy—driving moral conduct in thousands of little kindnesses every day. Every day, all over the world, it compels billions of people to share what they have with others whom they care about.
And yet, after the initial surge of enthusiasm, mutual sympathy lost the battle of the big ideas in moral philosophy. In part, it was overshadowed by Kant’s ideas about Pure Reason, offered at about the same time. But there was another intellectual hammer coming on the scene with even greater impact.
Romanticism may have captured the arts and, to some extent, the politics of the late eighteenth century, but in the workaday world, the real spirit of the age was a new idea called Capitalism. Enterprise was on the rise, and tradition was in decline. Men of wealth and power were forming trading companies and building factories, meanwhile dispensing with medieval ideas like the fair price and noblesse oblige. Once their big machines were ready to roll, they closed the open grazing land so that tenant farmers would have no choice but to go to work in the mills.
The man that Capitalism turned to for hardheaded, unsentimental moral guidance to this new age of enterprise was Adam Smith, author of The Wealth of Nations. The irony is that Adam Smith is also the same head-in-the-clouds professor whose first book had put human feeling at the center of moral discourse. It was, in fact, the leisure he earned by way of Moral Sentiments’ success that enabled him to write The Wealth of Nations, which had an impact that, by comparison, makes Moral Sentiments look like a dud.
Many factors account for its electric effect, but one sentence, quoted time and time again over the past two centuries, conveys the basic wallop:
It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.
At a time when the West was moving beyond ideas of sin and the limits those ideas imposed, here was a real game changer. In the medieval world, pursuing personal gain fell under the rubric of pride or envy or greed. But now, according to the already rock-star-famous Adam Smith, personal gain could be filed under a new linguistic category called one’s “interest,” and it was not a vice at all but a virtue! Getting ahead was no longer seen as a result of unruly passions. Now in the Age of Reason, getting ahead was simply the reasonable thing to do. And best of all, the rational and reasonable pursuit of personal gain made the wheels go around that put more food on the table for everyone.
Ever since those fateful words of Smith’s first appeared, what’s been lost in the general enthusiasm for the self-interested butcher and baker is how that line of text fits within the context of Smith’s larger intellectual enterprise, which had far more to do with the virtue of individual initiative than any endorsement of self-serving behavior. All the same, Smith was embraced and venerated as the founder of a new science called economics. At the same time, his status as a moral thinker fell into decline.
For economists, Smith’s line about “their own interest” represented not only a shift in values, but the possibility of a new and comprehensive way of explaining behavior. And that’s how Economic Man (also known as Homo economicus) was born, the highly rational, self-serving human who lives in economics textbooks and in economic models, and who—at least for theoretical purposes—is driven by anything but mutual sympathy.
As an economist who went on to study moral behavior, I’ve always had a soft spot for Smith, the misunderstood moralist who founded economics. Like him, I’ve always preferred to study actual Homo sapiens rather than theoretical Homo economicus. I was always drawn to the real-world underpinnings of economic issues—things like rates of childbirth, generational demographics, and the amount of resources parents invest in each child. Don’t all parents love their children? Then why don’t all parents express that love by trying to give their kids the best possible preparation for life? Usually it’s because they don’t have the time and resources, and that’s usually because they have more kids than they can handle. It turns out that fertility and parental investment—biological issues—profoundly affect economic outcomes.
It was this kind of work on fertility and demographics that prompted me to investigate other interpersonal factors affecting prosperity, the most compelling of which was trust. I spent more than a year developing my model demonstrating that the level of trust in a society is the single most powerful determinant of whether that society prospers or remains mired in poverty. Being able to enforce contracts, being able to rely on others to deliver what they promise and not cheat or steal, is a more powerful factor in a country’s economic development than education, access to resources—anything.
In 2000 I attended a conference on economics and law held by the Gruter Institute for Law and Behavioral Research. This was in the summer off-season up at a ski resort in the Sierra Nevada, and on the long shuttle ride from the Reno airport I found myself seated next to the only other passenger not decked out for a mountain biking trip. We got to talking—this other passenger was indeed heading to the same conference—and that is how I met anthropologist Helen Fisher, author of such books as The Anatomy of Love and Why We Love. We started comparing notes on our research, and I mentioned my studies of parental investment, and after a while she asked me, “Have you ever thought about studying oxytocin as a factor in all this?”
Oxytocin? I’d never heard of it. But when she described it as a bonding chemical, I took the bait.
Later, back in my hotel room, I went on PubMed and soon learned that oxytocin is a small molecule, or “peptide,” that serves as both a neurotransmitter, sending signals within the brain, and a hormone, carrying messages in the bloodstream. In 1906, when Sir Henry Dale first identified it in the pituitary gland, he gave it a name by combining the Greek words for “quick” and “childbirth.” Obstetricians and gynecologists came to know it well because it controlled the onset of labor and the flow of milk for breast-feeding. But beyond the realm of reproduction, researchers in medicine apparently never gave it a thought.
I was intrigued, though, especially when I found a large body of oxytocin research done by the kind of biologists who studied small, furry animals. Injected directly into the brain of some species (not allowed with humans, by the way), oxytocin worked like a mythical love potion, creating an instant and powerful monogamous attachment. In the highly social world of moles, voles, and prairie dogs, it was shown to regulate all forms of attachment, including bonding to a mate, tolerance of neighbors in the cage or colony, even tolerance of one’s own offspring. By inhibiting oxytocin, researchers had induced mothers to shun their offspring; when other scientists induced the release of oxytocin, it caused mothers to nurture offspring not their own, just as nursing dogs occasionally adopt orphaned kittens.
And it was the off-and-on quality of the hormone that intrigued me even more. In nature, oxytocin surges when signals from the environment indicate that it’s safe to relax and nuzzle. When those signals wear off, or when they’re countermanded by some other signal—such as danger—it’s time to get back in the game of gnashing teeth and competing over resources.
Reading about all this research in the biology journals, I couldn’t help thinking that the oxytocin signal—a calm but transient feeling, heavily dependent on an assessment of safety in the moment—sounded a lot like trust. And that’s when the really interesting possibilities began to tumble out. Bonding . . . trust . . . parental investment . . . These seemed like entirely different concepts, until you thought about the underlying mechanism.
What if bonding in voles and trust in humans were actually based on the same chemistry? What if oxytocin was, in fact, the chemical signature for that elusive bonding force Smith had called mutual sympathy? Then, thinking back to my research on the prosperity-enhancing power of trust, I had to laugh. What if this “Moral Molecule”—if that’s what oxytocin was—is also an essential element in what Smith called the wealth of nations?
This was a eureka moment for me, where the possibility of so many ideas coming together made me a little giddy. If I could demonstrate a direct link in humans between oxytocin and concern for others, then it would mean that this notion of mutual sympathy was not just an abstraction or a pre-scientific metaphor like “the four humors.” I could well imagine that, with a few million years of evolutionary refinement added on, the same basic system that allowed primitive creatures to let down their guard and mingle, then resume wariness when it was time, could help modern humans walk the line between competition and cooperation, benevolence and hostility, maybe even what we call good and evil. And given that trust was the number-one factor in helping societies move toward greater prosperity . . .
Well, it was one hell of a theory, but a theory doesn’t get you much unless you can prove it. So that’s when I started retooling to add blood and brain work to my portfolio of research techniques. Spending time in my father’s engineering lab as a kid, I’d learned the value of tinkering and exploring outside the usual boundaries. So I went back for training in neuroscience at Massachusetts General Hospital. I started hanging out in the neurology department at the nearby medical school, attending lectures and grand rounds. I was already a full professor—but in economics, not neuroscience. So this new interest of mine meant starting over.
This was about the time that I mentioned my new research program to a friend who was an ob-gyn. “That’s the stupidest idea in the world,” he told me. “It’s a female hormone.”
“So what? And by the way . . . men make it, too.”
“But it’s trivial. Parturition. Lactation. That’s all it does.”
I simply had to trust my instincts. If I was wrong, I knew that at the very least I was testably wrong, which meant we’d get an answer, yes or no.
Eventually I wound up with a huge freezer filled with blood in my academic office in the economics department. This prompted a dean of mine to refer to what I was doing as “vampire economics,” but I didn’t mind taking some razzing. I was determined to find out if this idea of mutual sympathy had any real substance, and the only way to do that was to get down under the skin. Which is what we did, beginning with those Trust Games at UCLA.
What had once seemed like such a stupid idea—benign, pro-social behavior triggered by a reproductive hormone responding to trust—now seemed too good to be true, almost like some scientific version of a parable you’d learn in Sunday school. Then again, if oxytocin allowed voles to get along better with one another in their colonies, why not humans? For social species, if moral behavior is more adaptive than ruthless behavior, then it only made sense that there would be a biological basis for it. And where would it be more likely to originate than in reproduction, where all bonds and attachments begin?
I thought about this biological imperative as I watched our volunteers leave the room that first morning at UCLA. They had to stop by a cashier to pick up the money they had earned, and—this being a population of young, single undergraduates—there was a fair amount of heat being generated as the kids checked out one another.
I listened in on their conversations, which included a lot of “Which one were you? What’d you do? How’d you make out?”
Not surprisingly, I didn’t hear a single guy say, I was a total bastard. I took everything I could get and gave absolutely nothing back.
Nor did I hear a single girl say, Yeah. I tend to be cold and withholding, and I don’t really trust anybody. So I just kept my ten bucks. Screw ’em.
Based on the self-reports I heard, you’d think everyone in the room was applying to Teach For America, helping out in soup kitchens, and reading to the blind. Every student I heard claimed to have been a shining example of moral probity, either magnanimously trusting or generously trustworthy.
Which prompted two additional observations.
The first is that pro-social behavior is a sexual come-on. In fact, gift-giving—the display of generosity—is rule number one for courtship in all human societies, and in many animal ones. Who wants a mate who’s going to be selfish and self-interested?
The second observation is that people will lie like crazy to impress a potential mate. Then again, human beings are extraordinarily good at identifying liars. To make a claim of being trustworthy credible over time—as opposed to, say, a con man’s brief encounter—one really has to be trustworthy.
So it makes sense that nature would go with that old Russian adage: “Trust, but verify.” Oxytocin is the touch-and-go molecule that makes it possible to walk such a fine line: Trust and bond with someone when the right stimuli are present, but be prepared to return to wariness once the stimulus fades. How oxytocin came to be that carefully modulated governor of trusting behavior, and how trust led the way to more complex social behaviors such as empathy, is a much richer story—one that takes us back in time, and into the deep blue sea.