A non-theist, NT, has asked Science about any moral guidance that it can offer. NT has explained that he isn’t expecting hard rules, but he is hoping for some moral guidelines that are grounded in the scientific view of the world. The topic is not a comfortable one for Science, though, and his response is crotchety.
Science: If you’re looking for me to tell you the meaning of life, as if I were some guru sitting on a mountaintop, you’ve come to the wrong place. And believe me, you don’t want scientists insisting that people should live this way or that way, because that can take you places you don’t want to go. All I can give you is what I know so far about objects like molecules and forces like gravity and how they impact each other, stuff that I can make educated guesses about and then confirm or deny with experiments.
NT: I hear you. But let me ask a question about living things. Isn’t it true that all living things try their best to stay alive under nearly all circumstances? I mean, that’s basic and obvious but it’s important.
Science: Don’t get me started. It drives me nuts when people talk about the “survival instinct” like it was something that kicks in when you’re about to drown or someone is coming at you with a knife, as if we are working on our survival only in an emergency. I’m telling you, almost everything you do is closely related to survival—well, I like the phrase “surviving and thriving” better—the “thriving” part covers having kids, which is how life keeps going. You sleep, right? You eat and drink? You stay warm, stay close to friends and family, worry about enemies or dangerous situations, like losing your job? That’s all survival. You’re creative and like good wine? That’s your brain amped-up over how to survive looking for new material to work with. Every cell in your body is working all the time on staying alive and getting all the energy it needs and staying safe. Believe me, surviving and thriving is a full-time job for you and the squirrel and the gnat and the weeds in your back yard.
NT: Thank you. I like all that. Let me show you where I think it might lead. We humans agonize and speculate a lot about purpose. It’s a gnawing question. Why are we here? What are we supposed to do with all our talent and energy, knowing our time is limited? I’m suggesting that our purpose is to survive and thrive and to help other living things do the same. I think our big brains sometimes lose touch with that. And if our purpose is to survive and thrive, that is a statement about values and morality too: being alive, sustaining life, enhancing life—any and all life, not just humans—has positive value, and losing or taking life is negative, more or less bad. That’s broad and crude, but I think it’s a foundation of sorts.
Science: What you’re talking about reminds me of Aristotle, who argued that what is good is what something aims at, according to its nature, and that what is positive or negative for the thing derives from that. For example, the aim of a scientist is to contribute to the accurate description of the natural world, so a good scientist is one who does that, and what is good for that scientist are things that contribute to his achieving that goal. I should add, by the way, that I’m not an expert on Aristotle as a philosopher, but he himself was a hell of a scientist, for his time. You should look him up.
NT: I’ll do that, thanks. And yes, he thought that the goal of humans is to flourish, to be happy, and that happiness involves the unique human function of being rational. I’m expanding all that, I suppose. Not just human nature but all of nature. Aristotle meets Darwin. But we need to talk more about the moral guidance part.
Science: Yep. And I think you’re already up to your elbows in alligators.
Hi, enjoyed your “conversation” with NT. I’d like to add my two cents’ worth. I think one important moral example that science sets for humans can be found in the workings of the scientific method. As scientists, we look very hard to discover what is “really” there and how things “really” work. We speculate about mechanisms, set up experiments to test our speculations, develop theories from the results we can reproduce, and then see how those theories might illuminate how other things work, related or unrelated, at which point the process starts all over again. This alone seems like a sterling way to live one’s life: looking for reproducible evidence of the outcomes of our choices in order to improve the outcomes of future choices.
But to me, the scientific method really shines when things go wrong. In science, we acknowledge that we probably can’t know anything with perfect certainty, and that much of what we think we know is transient and approximate. When one of our tests fails miserably to reproduce the results we expected, scientists react in a way that rarely occurs in most normal human interactions: they change their minds about the explanations they had developed! Most people (including scientists—they *are* human) do just the opposite when they are flummoxed by some event they thought they understood. They *fabricate* an explanation that preserves the perceived validity of their original belief, despite clear evidence to the contrary.
And this, then, is what science can do for us on the topic of morality: it stands as a shining example of how to avoid self-delusion. It predicates morality on actual outcomes, rather than on wished-for outcomes. I’m not saying that individual scientists don’t delude themselves, but they know the *principle* even if they fail to follow it in all things, and for the most part, they *aspire* to follow it even knowing they might sometimes fail. They even build special tools to detect this very event.
How does self-delusion or its antidote apply to morality? I see it as a matter of rationality. How can we, as individuals or as societies, claim to know “what is right” (and demand a particular behavior of others) if we ignore the evidence of the outcomes of our own experience? In the past, some societies believed it *moral* to label as “evil” people who behaved outside the norm of society, even after science began to understand the evidence both for wide ranges of “normal” behavior and for the organic roots of behavior. And it is currently “moral” in some places to teach young teens to “just say no” to premarital sex, despite clear evidence that those who “take the pledge” are far more likely to indulge in earlier and riskier sex, and to pay the consequences of unplanned pregnancy and disease. Whatever you believe about premarital sex, the *evidence* shows that “abstinence only” sex ed simply does not produce the desired effect. How can it be “moral” to impose a rule that doesn’t even accomplish what you claim it will? These and a hundred other examples show that we can behave more morally if we use evidence to illuminate our decisions, and if we embrace the possibility (or more accurately, the *likelihood*) of error and develop the habit of willingly changing our minds when presented with legitimate evidence.
Only then, in my view, can we dare to impose rules on others, with the clear understanding that we, as a society, will likely have to go back some day and redefine those rules in light of new evidence.
Thank you for your response. I agree that the combination of evidence-based approaches and a willingness to change one’s mind are important components of efforts that would be described as moral. I hadn’t thought of the scientific method in quite that light before. It brought to mind an article I was reading yesterday, Malcolm Gladwell’s “The Treatment,” on the frustrations of drug trials in oncology. At the close of the article, after yet another failure and in contemplating the next trial, the researcher says, “I am a scientist. I just hope that I would be so romantic that I become deluded enough to keep hoping.” He sounds like, in the best sense, a scientist of the kind you mention, one who fully appreciates the difference between delusion and reason.
I have a couple of reservations, though. One is that the emphasis on rational method leaves open the matter of whether the choice of the goal itself is moral or not. A person whose goal is to make lots of money may be empirical and flexible to a fault but the question of morality would still be open.
Also, you describe the role of the scientific approach as useful to the morality of both individual and social efforts. The examples of the latter—the judgment of other people as evil and the downsides of the abstinence message—are convincing. But I don’t see as clear a fit to a person’s individual and private life. An individual’s inclination to ignore results might lead more to lack of success or to unhappiness than to immorality. A person who believes privately and rigidly that he is inferior or superior to his friends in some way will experience the discontent of off-kilter expectations, but that result wouldn’t be immoral. A girl who decides on her own to be abstinent and then becomes pregnant and drops out has fallen into a sad situation, but her choice and effort toward abstinence were not immoral. If a counselor pressured her about abstinence as part of a program, then that is immoral on the counselor’s part—perhaps for more than one reason. Perhaps a person’s actions must have consequences, potential or real, for someone else for them to be moral or immoral.
The issue in general that you raise is important to me because in my retirement I have been active in a variety of poverty-reduction efforts, both in my community and, through online and classroom advocacy, in Africa. One thing for sure about trying to reduce poverty is that it is devilishly difficult to figure out whether you are succeeding. Some efforts succeed, sometimes money sent or policies changed have either no impact or serve to increase corruption, sometimes the short-term and long-term consequences contradict each other, and on and on. There is helpful, rigorous, control-group research from organizations like Innovations for Poverty Action to find out what works and what doesn’t, but sometimes successful strategies become political footballs and go awry. As a donor and advocate, I would like to have more evidence than I do, but in the morass that is poverty, that evidence is often difficult to come by—and an improved approach difficult to implement. It can be tempting to think that one’s efforts are going nowhere or are even doing more harm than good. But you do your best to stay informed, you’re cautious about what you recommend to others, and you go ahead anyway with what you know.
Thanks again so much. More to think about. Hope to hear from you again.
Hi again. I appreciate your reservations. I share them at least to some extent. I certainly acknowledge that our neurology sets us up to make poor judgments if we don’t pay close attention. We can greatly benefit by educating ourselves on how our brains work.
I think the difficulty you see may be at least partly addressed by recognizing and breaking through the acquired limitations of our evaluations. I think we tend to make most judgments in what amounts to a narrow knowledge vacuum, relatively speaking, while the *outcomes* of our judgments operate in the larger real-life context of society. If you choose to use societal benefits as the measure of morality, you use a different (larger?) set of outcomes to evaluate your choices.
For example, you say:
“A person whose goal is to make lots of money may be empirical and flexible to a fault but the question of morality would still be open.”
I would suggest that if one attempts to view the goal of making lots of money in the larger context of the society in which one lives, one encounters important information not typically factored into personal decisions. For instance, we might note that money is made *from other people*. An *empirical* evaluation of the accumulation of wealth might reveal that a lack of balance leads to social discontent, reductions in the quality of life for many, lower educational achievement, disease and starvation, and so on. There’s plenty of evidence that more people benefit in societies with a lower ratio of highest to lowest income or wealth levels.
Furthermore, an *empirical* evaluation of the quest for wealth would suggest an upper limit beyond which additional accumulation adds little or no real benefit (except perhaps an abstract sense of “security,” which can be addressed in the evaluation process as well), especially when balanced against the cost in time and work required, not to mention the additional burden on others from whom the wealth is accumulated. Reason argues against the quest for “lots of money.”
As for the issue of abstinence only sex-ed, I was indeed referring to the morality of the educators and administrators who choose to ignore evidence and operate on wishful thinking. The girl who becomes pregnant because she never learned about contraception is a victim, in my view, of both their moral lapse and their choice to keep information from her.
On the subject of how to apply this kind of thing to your efforts in poverty reduction (a great thing you are doing, by the way), you might find something of interest in the book The Logic of Failure by Dietrich Dörner. The subtitle is “Recognizing and Avoiding Error in Complex Situations.” I’m not sure whether he offers solutions (I haven’t quite gotten to the last chapter, “Now What Do We Do?”), but I have been quite knocked out by his examples of how people fail utterly to consider what after the fact seems so obvious. Which reminds me of another book you might find interesting: Everything Is Obvious (Once You Know the Answer) by Duncan Watts.
Thank you again. I agree about the importance of the gap between the outcomes that we anticipate and those that, for whatever reasons, we don’t or can’t take into account. And thanks for the book suggestions, which I will certainly check out.