The Winter of Our Discontent and our identity

For the past several years I have required students in my applied ethics course to read John Steinbeck’s book, The Winter of Our Discontent. The story, which takes place in the early 1960s in a fictional East Coast town called New Baytown, is about the moral decline of a man named Ethan. At the beginning of the book Ethan is content and has a reputation for integrity. By the end of the book Ethan has engaged in a number of morally corrupt activities in order to obtain wealth and status.

I like the book because it is one of the best novels showing the dilemma people face when tempted to do bad things and the heartache people inevitably feel because of their actions. Steinbeck takes us into the mind of Ethan as he rationalizes what he does. “What are morals? … Is there a check in men, deep in them, that stops or punishes? There doesn’t seem to be,” says Ethan. That’s a scary way to justify one’s behavior.

The class discussions of this book are always interesting. Students generally agree that the things Ethan did were wrong and that Steinbeck did a good job in showing that happiness in life does not come from doing bad things to get ahead or merely from the acquisition of wealth. However, students differ in the lessons they learned from the book.

Some students drew connections with teachings from their parents and churches about the importance of doing good and being good, even when others around us are not.

Some students felt that experience is the best teacher in life. That is, we learn right and wrong by choosing wrong and then seeing that it does not get us what we expected. Only rarely is being “taught” that something is wrong sufficient to keep us on the moral high ground. In other words, the only way we can learn that stealing is wrong is to steal, get caught and be punished. Having someone who claims to know better tell us that stealing is wrong and that we can never “prosper” by it is not good enough.

Some students believed in the power of example and of a good role model. If there is someone we admire who behaves ethically, then we might be more inclined to avoid the temptations to lie, cheat and steal. But what if we associate with people who do not value integrity?

An important lesson concerns where our sense of identity comes from. If we require validation from others, then we will be susceptible to pressure to acquire riches at any cost. That is, we will become like Ethan. You’ll have to read the book to understand why. (The book would carry a PG rating for adult themes and mild language.)

An alternative objective would be to find validation from within, or, better yet, to consider “what thinks God of me?” There is an extensive scholarly literature on the subject of religiosity and identity. Scholars have noted that religions have a strong effect on the way people see themselves and the world. But that can come at a cost, for example, if one’s religious identity is threatened by intergroup conflict. When one’s religion is attacked, then having an identity too strongly tied to the religion may create a risk that people will take extreme actions in order to protect their identity and worldview (see, for instance, a paper entitled “Religiosity as Identity: Toward an Understanding of Religion From a Social Identity Perspective”). But having one’s sense of identity tied to one’s religion is not the same as considering “what thinks God of me?” An excellent religious perspective on this theme is here. I’m also reminded of a wonderful book, You Are Special by Max Lucado, that makes the same point.

I know I’ve gone off track a bit, since I started this post with Steinbeck’s book. But since my identity is not based on what I think others think of my blogging, I guess it doesn’t really matter.

Doing bad when I think I’m good

A perplexing question in social science research is why people behave in ways inconsistent with their beliefs and their perceptions about themselves. For example, if we know it is wrong to lie, cheat or steal, then why do people lie, cheat or steal? Economists might say people conduct a rational analysis to assess the benefits of lying, cheating or stealing relative to the costs of getting caught or having a guilty conscience and will behave inappropriately when the benefits of doing so outweigh the costs. Psychologists might look to the internalized norms and values of people and say they will lie, cheat or steal when their internal value systems become corrupted. But what if people maintain a strong internal value system but still lie, cheat or steal? Is it possible for me to behave dishonestly and still consider myself an honest person? The question is not trivial. Consider these variations:

I see myself as a person dedicated to healthy eating and exercise but who routinely (over)indulges in sugary and unhealthy foods.

I see myself as a person who values education and a growing intellect but who routinely watches too much television or plays too many games on a smartphone or tablet.

I see myself as a person who is fair and impartial but who regularly denigrates the statements of persons whose political views differ from mine.

I see myself as a person who treats others with dignity and respect but who often hurls insults at political opponents because it’s just “politics.”

I see myself as a religious person but who rarely attends church, reads scriptures or prays.

I see myself as a competent and careful blogger but who infrequently adds new posts to his blog or reads and comments on the blog postings of others.

A study published in 2008, entitled The Dishonesty of Honest People: A Theory of Self-Concept Maintenance, provides a compelling insight here. According to the authors of the study, people have and want to maintain a particular image of themselves, such as being a person of honesty. A problem arises when people face a decision that can produce a short-term gain but requires them to act in a way that is contrary to their self-image or self-concept. When people are torn by competing motivations–“gaining from cheating versus maintaining a positive self-concept as honest”–they will solve this dilemma “by finding a balance or equilibrium between the two motivating forces, such that they derive some financial benefit from behaving dishonestly but still maintain their positive self-concept in terms of being honest.” But how? The trick is to define the behavior in a way that still allows them to maintain the desired self-concept. The authors describe this as malleability. The more malleable the situation, the more likely people will behave inappropriately while still maintaining a positive self-concept. Consider this variation of an example provided by the authors: I might be able to justify taking a $1 notebook from my friend, even if I cannot justify stealing $1 from his wallet to buy the notebook myself. The malleability here comes from my defining this action as “borrowing” rather than stealing, or thinking that because I let my friend use something of mine previously, then my taking the notebook is okay because “this is what friends do.” Of course, there is a limit to this rationalization. I might be able to rationalize taking the $1 notebook but probably not taking my friend’s $20,000 car. Thus, malleability and limits set the boundaries within which rationalization occurs.
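For readers who like to see an idea in miniature, the notebook-versus-car logic can be sketched as a toy decision rule. Everything here is my own illustrative assumption, not the authors’ actual model: the function name, the numbers and the idea that rationalizing (malleability) dulls the psychological cost of cheating, which must stay below a personal honesty threshold for the act to go ahead.

```python
# Toy sketch of the "self-concept maintenance" idea: a person cheats only
# while the act can be rationalized away. All numbers and the decision rule
# are illustrative assumptions, not the study's actual model.

def will_cheat(gain, malleability, honesty_threshold=5.0):
    """Cheat only if the situation can be rationalized: malleability
    (0 = impossible to rationalize, 1 = trivially easy) shrinks the
    psychological cost of the act, and the act goes ahead only when that
    perceived cost stays below the person's honesty threshold."""
    perceived_cost = gain * (1.0 - malleability)  # rationalizing dulls the cost
    return perceived_cost < honesty_threshold

# "Borrowing" a $1 notebook is easy to rationalize...
print(will_cheat(gain=1, malleability=0.9))       # → True
# ...but no story we tell ourselves covers taking a $20,000 car.
print(will_cheat(gain=20000, malleability=0.9))   # → False
```

The point of the sketch is simply that the same level of rationalization that excuses a small theft fails for a large one, which is the “band of acceptable dishonesty” in miniature.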

The scholars conducted experiments to see how people behave when given opportunities to cheat and to redefine how they see themselves. The experiments confirmed their expectations. As summarized by the authors, “people who think highly of themselves in terms of honesty make use of various mechanisms that allow them to engage in a limited amount of dishonesty while retaining positive views of themselves. In other words, there is a band of acceptable dishonesty that is limited by internal reward considerations.” Put differently, I can lie as long as I can convince myself it is really not lying. If I can do this easily, then good for me. I get my lie and my self-worth too. If I cannot do this easily, then I’ll resign myself to being honest.

So, if we want to reduce dishonesty in society, we need to limit the malleability of contexts in which people might lie, cheat or steal. In other words, we need to make it harder for people to rationalize their unethical behavior that allows them to maintain a positive self-concept even though they are doing wrong. In their study, the authors were able to do this by asking the subjects of their experiments to write down as many of the Ten Commandments as they could remember. Perhaps this means we should be promoting greater religious observance in society.

Lying is still lying, regardless of what we want to call it. Cheating is still cheating. And stealing is still stealing. All are wrong. We need to call them what they are.

Phew! That was a lot of work creating this post. Time for this healthy exerciser to take a chocolate break.


Morality and neurochemical impulses

Recently I was reminded of a book I read a while ago by philosopher Patricia Churchland entitled Braintrust: What Neuroscience Tells Us about Morality. (A brief video of her explaining the book is here.) The book draws on what scientists have learned about the brain to explain how it is that humans developed a sense of morality. One interesting idea she discusses is the hormone oxytocin, which is found in the brain and in the body. It has been shown to promote caring behavior in animals, and it is released during pregnancy, triggering “full maternal behavior” in humans and animals. Oxytocin also promotes trust in humans by raising “the threshold for tolerance of others” and through its “down-regulation of fear and avoidance responses,” as demonstrated in experiments in which some research subjects are given a dose of oxytocin and are asked to play games and interact with others in order to measure trusting behavior. Another interesting discussion is that, at the genetic level, behavior is complex. No single gene can be associated with any unique or specific behavior. In the “Parable of the Aggressive Fruit Fly,” Churchland explains how scientists are able to breed a fruit fly that is 30 times more aggressive than its natural cousins, yet the genetic differences between them are minor and do not seem to be related to any specific behavior. Rather, the differences are in mundane physiological functions.

After discussing these ideas Churchland enters into a discussion of why various philosophers have not really gotten it right about morality and ends with a criticism of religion, or what she calls a “supernatural basis” as the source of morality. She denies the need to rely on God or religion in order to explain morality and how people come to know that something is right or wrong, focusing instead on a neurobiological basis for these. To this end she is particularly critical of religious tenets that imply or state an absolute standard of behavior or morality, such as claims about what someone “ought” to do or be. She focuses especially on the Golden Rule, the Ten Commandments and a God-given conscience. One reason she gives is that religious “absolutes” are just that—prescriptions that are intolerant of specific contexts. Another reason she gives is that absolute standards are invalidated because of the allowance of exceptions, such as when the Lord tells Moses “Thou shalt not kill” (see Exodus 20) and then later commands him to slay Israelites who worshipped false Gods (see Numbers 25). I note the inconsistency in these two objections. She is critical of religious intolerance as well as its tolerance. She also complains that people “with conscience” often advocate conflicting ideals. For example, some people feel it is wrong to eat meat while others feel it is morally acceptable. According to her, this means religion cannot be used to justify claims about morality.

I find her argument highly unsatisfying. If she is correct, then where does this leave us? A world in which morality is relative and where morality is created and defined by neurochemical reactions in our brains? If we live in such a world, then how is it that humans are able to make decisions of right and wrong and come to a consensus about many moral issues? Neurochemicals might explain in part the feelings of affection we have for others, but that only accounts for the sociality of humans and animals. It is too far a leap to claim that it also accounts for the ability of humans to engage in complex moral analysis or to make and act on specific moral judgments. It also cannot explain how or why little children understand the basics of right and wrong. If you ask a five-year-old child whether it is a good thing or a bad thing to take a toy away from another child or to hit another person, they usually get the right answer (it is a wrong thing). Children have an innate sense of right and wrong that can only be described as a conscience. Neurobiological responses are too primitive to explain this ability of children. To accept Churchland’s view is to equate morality with sociality, and that is clearly insufficient for explaining actual moral judgment.

A stable society requires that humans accept a common morality and sense about what is right or wrong and that they are willing and able to police themselves by exercising moral restraint. This requires a belief or a willingness to believe that there is such a thing as an absolute standard of morality. History has shown repeatedly the horror that humans inflict on others when they disagree on fundamental moral issues and beliefs and adopt a mindset of relativism and situational ethics. The Nazi holocaust comes to mind. (Side note: I just finished Miklos Nyiszli’s book, Auschwitz: A Doctor’s Eyewitness Account, which provides a stunning account of a Jewish doctor who helped the infamous Josef Mengele conduct experiments on prisoners in the concentration camp.)

Personally, I would rather live in a world in which people accepted the reality of a Divine Being and followed His dictates than one in which people acted only according to neurobiological and chemical impulses. It is because people ignore their God-given conscience that immoral behavior and human-on-human atrocities occur.

Utilitarian pushers are a miserable lot

Each spring semester I teach an applied ethics class called “Ethical Issues in Agriculture.” Today we discussed one of the most famous thought experiments in applied ethics—the trolley dilemma (a YouTube presentation of the issue is here). In this dilemma, a trolley is running out of control on a track where five men are working. In one variation, you are told you can save the five by pulling a lever to divert the trolley onto another track, where one man works, thus killing him. In another variation, you are told that you can push a very fat man off a footbridge onto the track to derail the trolley, thus saving the five.

Would you pull the lever to save five while causing the death of one in the first case?  Why? Would you push the man off the footbridge to save the five in the second case? Why?

I have used the trolley problem for many years in class. Most students are willing to pull the lever in the first case, but most are not willing to push the man in the second case. According to students, it is better to save five at the expense of one by pulling the lever, since five versus one seems to be the only pertinent factor in the first trolley case. This is classical utilitarian thinking. Utilitarianism is the idea that a decision is right if a greater good is served, such as more people benefiting than being harmed. Inflicting extreme pain on a person for information that could save thousands would be justifiable under utilitarianism. However, non-utilitarian thinking applies in the second trolley case because there are other things to consider. For example, in the first case all the workers are already in harm’s way, since they are on the track, whereas in the second case the man on the footbridge is not in harm’s way; it is our pushing him that puts him there. Diverting the trolley is what saves the five in the first case, whereas the death of the man is necessary in the second case. We also need to consider the right of the man to decide for himself whether to leap or not–that is, we should not use him as a means to an end without his consent.
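The contrast between the two cases can be made concrete with a toy sketch. The encoding below is my own illustrative assumption (the scenario fields and function names are invented for the example): a purely utilitarian rule counts only lives, while a constrained rule adds the side condition that we may not act by putting an uninvolved bystander in harm’s way without his consent.

```python
# Toy contrast between a pure headcount rule and one with a side constraint,
# matching the two trolley cases. The scenario encoding is illustrative only.

def utilitarian(saved, killed, introduces_new_harm):
    # Pure headcount: act whenever more people are saved than killed.
    # (The introduces_new_harm flag is deliberately ignored here.)
    return saved > killed

def constrained(saved, killed, introduces_new_harm):
    # Same headcount, but never act by putting a bystander in harm's way
    # without consent -- using him as a means to an end.
    return saved > killed and not introduces_new_harm

lever  = dict(saved=5, killed=1, introduces_new_harm=False)  # divert the trolley
bridge = dict(saved=5, killed=1, introduces_new_harm=True)   # push the man

print(utilitarian(**lever), utilitarian(**bridge))   # → True True
print(constrained(**lever), constrained(**bridge))   # → True False
```

The sketch shows why the headcount alone cannot distinguish the two cases: both rules endorse pulling the lever, but only the pure utilitarian rule endorses pushing the man.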

What is interesting with the trolley problem is people who use utilitarian thinking in the second case, choosing to push the man in order to save the five.

I read a study a few years ago that shed some light on people who are predominantly utilitarian thinkers. The study is “The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas.” The researchers wanted to know how people who selected a utilitarian outcome to the trolley problem scored on personality assessments. Over two hundred college students were recruited for the study. The study showed that people who consistently adopt utilitarian solutions to moral dilemmas are more likely than others to have indications of psychopathic personalities or to feel that life is meaningless.

Most respondents in the study did not think it was right to push the fat man to save five workers. However, respondents who consistently chose the utilitarian solution to the different variations of the trolley problem also scored high on personality assessment indicators that suggested a high degree of psychopathy, emotional detachment from others, and a sense that life is meaningless. In other words, utilitarian pushers (people who believe it is acceptable to push the fat man off the footbridge) are not pleasant or happy people. In fact, we might even say their psychological profiles are troubling.

It is interesting that economics as a profession pushes the utilitarian framework (choose actions where the benefits exceed the costs). It’s our fundamental way of thinking as economists. Maybe this is why the 19th century historian Thomas Carlyle referred to economics as the “dismal science.”

In case any of you are worried, it’s okay to have an economist as a friend … as long as you don’t take walks along trolley tracks together.


Academic gobbledygook

Much has been written about the poor quality of academic writing. Examples include Steven Pinker, a Harvard University psychology professor, explaining why academics stink at writing in the Chronicle of Higher Education, and author Victoria Clayton, describing the needless complexity of academic writing in an article in The Atlantic. Pinker points to literary analysis (e.g., when scholars “lose sight of whom they are writing for”), cognitive science (e.g., when scholars know too much and have “difficulty in imagining what it is like for someone else not to know something that you know”) as well as economic incentives (e.g., because scholars have “few incentives for writing well”). According to Clayton, “Academics play an elitist game with their words: They want to exclude interlopers.”

When are scholars going to get the message?

The following is the first sentence in the introduction of a paper submitted to Agriculture and Human Values: “This paper will explore how environmental documentaries through their use of direct address and creative aesthetics and imaginaries foreground a range of cautionary tales around the ethical importance of modes of food production, waste, and (over)consumption.” The paper concludes with this: “The toxic materiality of the eco-documentary … is a matter of a complex network of social and material effects, involving not only the immediate material of the DVD or film strip, but also the design and mass manufacture of technology, travel and transportation, land use and accessibility.”

I rejected the paper for publication. This is what I wanted to say to the author: “I am rejecting your paper because it is utterly incomprehensible. Too much of it is scholarly mumbo jumbo and academic gobbledygook. I do not know what you are saying and don’t want to spend any more time trying to figure it out. Learn how to write clearly and simply before submitting a paper to my journal.”

Of course I was more diplomatic. My response began this way: “Critiques of the food system and assessments of ethical issues relating to food production fit within the aims and scope of this journal. However, I struggle to see the contribution of your paper to the kinds of debates we see published here and in similar outlets …”

Sigh.

Interestingly, Dictionary.com gives this definition for gobbledygook: “language characterized by circumlocution and jargon, usually hard to understand.” Circumlocution? Really? Merriam-Webster’s is better: “wordy and generally unintelligible jargon.” Maybe Dictionary.com has too many academics working for them.


The boiling frog metaphor

We’ve all heard the story. You place a frog in a pot of boiling water and it jumps out to safety. You put a frog in a pot of cold water and slowly turn up the heat and it cooks to death. It’s a great metaphor. If we are unaware of problems that develop slowly, we may never recognize there is something to be concerned about until it is too late.

Recently, a writer describing a contemporary musical number used the metaphor (here) to explain how “people wouldn’t realize they’ve been suckered into a musical until it was too late.”

The truth, however, is that the story is not true. Put a frog in a pot of cold water and slowly turn up the heat, and the frog gets agitated and jumps out. In fact, it’s really hard to keep a frog still enough in a pot of water to test the theory. Some scientists tried this in the 1800s, producing mixed evidence for the boiling frog story. Today experts generally agree that the boiling frog story is hogwash, or perhaps better said, “frogwash.”

Economic models, high-priced consultants and ethical analysis

A colleague sent me a ProPublica article that explains how some “professors make more than a thousand bucks an hour peddling mega-mergers.” That’s a lot of money, even by consulting standards. MBA business consultants can charge between $200 and $600 an hour. Top partners in consulting firms might charge between $800 and $1200. A Wall Street Journal article in 2011 reported that top lawyers charged as much as $1000 an hour. But some economists are pulling in $1300 an hour as consultants.

To be fair, in a free market buyers and sellers should be able to negotiate exchange prices. If someone is demanding $1300 an hour for their services and another is willing to pay it, then there is nothing objectionable about the arrangement.

In this case the economic consultants are hired by firms that want to merge with or acquire other companies. The consultants are tasked with building a strong case, based on solid and objective economic principles and evidence, that the merger is in the interest of the industry, business, consumers and everyone else. What makes the article interesting is not that there are high-priced economic consultants. It is that these consultants often get the antitrust analysis wrong. They build their arguments on speculation. They ignore or trivialize inconsistent or contradictory evidence. They use “junk science,” in the words of a Justice Department official quoted in the article.

A cynic might say that companies are paying the economists whatever price they will accept to argue whatever the company wants them to say, regardless of economics. Apologies to my lawyer friends, but isn’t this what lawyers do? So economists are on the same level as lawyers now?

Economics is a science. And economic models, when used appropriately, can provide a degree of objective assessment. The subjectivity comes in determining which economic models to use and what evidence to incorporate into the analysis. The ethical problem arises when the prospect of financial gain (in this case, a $1300 an hour contract) influences which models and what evidence to utilize. As noted by the authors, “The government’s reliance on economic models rests on the notion that they’re more scientific than human judgment. Yet merger economics has little objectivity. Like many areas of social science, it is dependent on assumptions, some explicit and some unseen and unexamined. That leaves room for economists to follow their preconceptions, and their wallets.”

The implication is that government regulators might be convinced a proposal is best for stakeholders (notice I didn’t use the word stockholders) when it is really only in the interest of the company seeking the merger–and comes at the expense of other stakeholders. In the case of a proposed merger between cell phone companies AT&T and T-Mobile, the economic consultant wanted to make this argument: “That even though prices would have risen for customers, the companies would have achieved large cost savings. The gain for AT&T shareholders … would have justified the merger, even if cell phone customers lost out.”

Let’s hear it for the economists.