Doing bad when I think I’m good

A perplexing question in social science research is why people behave in ways inconsistent with their beliefs and their perceptions about themselves. For example, if we know it is wrong to lie, cheat or steal, then why do people lie, cheat or steal? Economists might say people conduct a rational analysis to assess the benefits of lying, cheating or stealing relative to the costs of getting caught or having a guilty conscience and will behave inappropriately when the benefits of doing so outweigh the costs. Psychologists might look to the internalized norms and values of people and say they will lie, cheat or steal when their internal value systems become corrupted. But what if people maintain a strong internal value system but still lie, cheat or steal? Is it possible for me to behave dishonestly and still consider myself an honest person? The question is not trivial. Consider these variations:

I see myself as a person dedicated to healthy eating and exercise but who routinely (over)indulges in sugary and unhealthy foods.

I see myself as a person who values education and a growing intellect but who routinely watches too much television or plays too many games on a smartphone or tablet.

I see myself as a person who is fair and impartial but who regularly denigrates the statements of persons whose political views differ from mine.

I see myself as a person who treats others with dignity and respect but who often hurls insults at political opponents because it’s just “politics”.

I see myself as a religious person but who rarely attends church or reads scriptures and prays.

I see myself as a competent and careful blogger but who infrequently adds new posts to his blog or reads and comments on the blog postings of others.

A study published in 2008, entitled The Dishonesty of Honest People: A Theory of Self-Concept Maintenance, provides a compelling insight here. According to the authors of the study, people have and want to maintain a particular image of themselves, such as being a person of honesty. A problem arises when people face a decision that can produce a short-term gain but requires them to act in a way that is contrary to their self-image or self-concept. When people are torn by competing motivations–“gaining from cheating versus maintaining a positive self-concept as honest”–they will solve this dilemma “by finding a balance or equilibrium between the two motivating forces, such that they derive some financial benefit from behaving dishonestly but still maintain their positive self-concept in terms of being honest.” But how? The trick is to define the behavior in a way that still allows them to maintain the desired self-concept. The authors describe this as malleability. The more malleable the situation, the more likely people will behave inappropriately while still maintaining a positive self-concept. Consider this variation of an example provided by the authors: I might be able to justify taking a $1 notebook from my friend, even if I cannot justify stealing $1 from his wallet to buy the notebook myself. The malleability here comes from my defining this action as “borrowing” rather than stealing, or thinking that because I let my friend use something of mine previously, my taking the notebook is okay because “this is what friends do.” Of course, there is a limit to this rationalization. I might be able to rationalize taking the $1 notebook but probably not taking my friend’s $20,000 car. Thus, malleability and limits set the boundaries within which rationalization occurs.

The scholars conducted experiments to see how people behave when given opportunities to cheat and to redefine how they see themselves. The experiments confirmed their expectations. As summarized by the authors, “people who think highly of themselves in terms of honesty make use of various mechanisms that allow them to engage in a limited amount of dishonesty while retaining positive views of themselves. In other words, there is a band of acceptable dishonesty that is limited by internal reward considerations.” Put differently, I can lie as long as I can convince myself it is really not lying. If I can do this easily, then good for me. I get my lie and my self-worth too. If I cannot do this easily, then I’ll resign myself to being honest.

So, if we want to reduce dishonesty in society, we need to limit the malleability of contexts in which people might lie, cheat or steal. In other words, we need to make it harder for people to rationalize their unethical behavior that allows them to maintain a positive self-concept even though they are doing wrong. In their study, the authors were able to do this by asking the subjects of their experiments to write down as many of the Ten Commandments as they could remember. Perhaps this means we should be promoting greater religious observance in society.

Lying is still lying, regardless of what we want to call it. Cheating is still cheating. And stealing is still stealing. All are wrong. We need to call it what it is.

Phew! That was a lot of work creating this post. Time for this healthy exerciser to take a chocolate break.


Utilitarian pushers are a miserable lot

Each spring semester I teach an applied ethics class called “Ethical Issues in Agriculture.” Today we discussed one of the most famous thought experiments in applied ethics—the trolley dilemma (a Youtube.com presentation of the issue is here). In this dilemma, a trolley is running out of control on a track where five men are working. In one variation, you are told you can save the five by pulling a lever to divert the trolley onto another track, where one man works, thus killing him. In another variation, you are told that you can push a very fat man off a footbridge onto the track to derail the trolley, thus saving the five.

Would you pull the lever to save five while causing the death of one in the first case?  Why? Would you push the man off the footbridge to save the five in the second case? Why?

I have used the trolley problem for many years in class. Most students are willing to pull the lever in the first case, but most are not willing to push the man in the second case. According to students, it is better to save five at the expense of one by pulling the lever, since five versus one seems to be the only pertinent factor in the first trolley case. This is classical utilitarian thinking. Utilitarianism is the idea that a decision is right if a greater good is served, such as more people benefiting than being harmed. Inflicting extreme pain on a person for information that could save thousands would be justifiable under utilitarianism. However, non-utilitarian thinking applies in the second trolley case because there are other things to consider. For example, in the first case all of the workers are already in harm’s way since they are on the track, whereas in the second case the man on the footbridge is not; our pushing him is what introduces the harm. Diverting the trolley is what saves the five in the first case, whereas in the second case the death of the man is necessary to save them. We also need to consider the right of the man to decide for himself whether to leap or not–that is, we should not use him as a means to an end without his consent.

What is interesting about the trolley problem is the people who use utilitarian thinking in the second case, choosing to push the man in order to save the five.

I read a study a few years ago that shed some light on people who are predominantly utilitarian thinkers. The study is “The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas.” The researchers wanted to know how people who selected a utilitarian outcome to the trolley problem scored on personality assessments. Over two hundred college students were recruited for the study. The study showed that people who consistently adopt utilitarian solutions to moral dilemmas are more likely than others to have indications of psychopathic personalities or to feel that life is meaningless.

Most respondents in the study did not think it was right to push the fat man to save five workers. However, respondents who consistently chose the utilitarian solution to the different variations of the trolley problem also scored high on personality assessment indicators suggesting a high degree of psychopathy, emotional detachment from others, and a sense that life is meaningless. In other words, utilitarian pushers (people who believe it is acceptable to push the fat man off the footbridge) are not pleasant or happy people. In fact, we might even say their psychological profiles are troubling.

It is interesting that economics as a profession pushes the utilitarian framework (choose actions where the benefits exceed the costs). It’s our fundamental way of thinking as economists. Maybe this is why the 19th century historian Thomas Carlyle referred to economics as the “dismal science.”

In case any of you are worried, it’s okay to have an economist as a friend … as long as you don’t take walks along trolley tracks together.


Prisoner’s Dilemma in the classroom

The Prisoner’s Dilemma is a model that illustrates a conflict between the interests of individuals and the interests of those individuals as members of a collective or group. In most versions of the game, two or more persons can cooperate and receive a collective reward that is greater than the sum of the individual rewards they could earn by not cooperating. The incentives of the game, however, are such that each person has an individual incentive not to cooperate, leaving them collectively worse off than if they had overlooked their individual interests and instead thought as a group. The game is famous in economics and other social sciences. Wikipedia has a lengthy discussion of the game, its refinements and implications here.

Even though the Prisoner’s Dilemma has been around for decades it is still a fun game to play with students. In my microeconomics class today I offered the following opportunity for the class to earn extra credit:

“You can earn extra credit by selecting the amount of extra credit points you want. However, if more than 4 of you select option B, then the entire class will receive 0 extra credit points.”

Option A was to earn 1 point extra credit.
Option B was to earn 4 points extra credit.

I use a web-based student response system so that students could register their choice on their cell phones and I would see the results immediately. Not surprisingly, of the 180 in class today, 10 chose option B, leading to no extra credit for the class. When I gave the class a chance to do it over again and even talk to each other, the number who chose option B increased to 17.

The incentives to choose option B are pretty strong here — getting 3 more extra credit points than one could get by cooperating with everyone else in the class and getting just 1 point. Even when I changed the payout structure so that option A gave 3 points and option B 4 points, there were 6 students who still chose option B, thus negating the extra credit opportunity for everyone.
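For readers who like to tinker, here is a minimal simulation sketch of the game in Python. The class size, point values and more-than-four threshold come from the post; the probability that any one student picks option B is a made-up parameter for illustration, not an estimate from my class.

```python
import random

def play_round(n_students=180, p_choose_b=0.05, points_a=1, points_b=4, max_b=4):
    """One round of the extra-credit game: if more than max_b students pick B,
    nobody gets anything; otherwise A-choosers earn points_a and B-choosers earn points_b.
    p_choose_b is a hypothetical defection probability, not data from the class."""
    n_b = sum(random.random() < p_choose_b for _ in range(n_students))
    if n_b > max_b:
        return 0.0, n_b
    avg = ((n_students - n_b) * points_a + n_b * points_b) / n_students
    return avg, n_b

results = [play_round() for _ in range(10_000)]
avg_points = sum(avg for avg, _ in results) / len(results)
busted = sum(n_b > 4 for _, n_b in results) / len(results)
print(f"average extra credit per student: {avg_points:.2f}")
print(f"share of simulated classes with no extra credit: {busted:.0%}")
```

Even a small chance that any individual student defects makes it very likely that a class of 180 blows past the five-defector threshold, which is consistent with what happened in my classroom.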

What I find interesting here is not that there were some students who chose option B but that so many in the class chose option A. At least 90 percent of students were willing to forgo their individual interest of choosing option B in order to cooperate for the collective good.

In economics we teach that when people pursue their self-interest things will work out the best for everyone. But sometimes they don’t. Sometimes the pursuit of one’s interests can be damaging to others and the collective whole. Why does self-interest work in some cases but not in others? And when the incentives for collective action are not ideal, what can we do to encourage or promote more cooperative thinking and behavior?

Russell Crowe, in the movie A Beautiful Mind, played the mathematician John Nash, whose work in game theory underlies this idea. He explains the problem and solution nicely in this clip from the movie.

I asked my class these questions and got a lot of interesting responses. Because the student response system I use saves student responses, I can list some of them here:

“Anonymity is the problem”

“People only act in their self interest and don’t want to work as a whole for the better of everyone”

“so basically we need to be communists in order for this game to work”

“People are greedy”

“people think they deserve it more than others”

“Throw tomatos (sic) at the people who chose B”

“you do what you have to do”

“Take away the second option”

“build a wall make the people who picked B pay for it”

“this game don’t work cause we got more than 4 selfish people in class”

“Not as many laws and restrictions”

“Because people think that everyone else will pick A and that they will end up getting more when in reality they hurt everyone else”

“Need more communication and honesty”

“Sometimes selflessness is the answer”

“If people weren’t greedy then we would at least be able to get one point extra credit”

“All it takes is one bad egg to ruin it for everyone”

“punish those who answered B”

“Communicate with others to achieve extra credit”

“Put people who choose B in jail”

“freshmen think that 1 point if extra credit is actually going to influence their grade”

“do your work maybe you wouldn’t need to pick B”

Resolving the Prisoner’s Dilemma requires careful structuring of the way people interact and enforcement of the formal rules and informal norms we develop to promote cooperation. It also requires that people exercise self-restraint in the pursuit of their self-interest, since no rules or monitoring mechanisms are perfect. We wouldn’t (or shouldn’t) want to live in a society where such rules are perfectly enforceable. How to do this so as to protect one’s freedom to choose makes for a fun discussion in class.

In the end I gave everyone in the class who chose option A in the last round of the game (in which 3 points were possible) the 3 points extra credit. I don’t know if the class learned much, but I hope they left feeling better about their teacher.


Bayesian analysis, probabilities of accidents and the Monty Hall Problem

In my research methods class today we talked about the difference between Classical and Bayesian analysis. In Classical analysis you use available statistics to make inferences about something, while in Bayesian analysis you use other information to interpret available statistics.

Consider the following silly but clarifying example: Suppose I show you a clear bowl with 5 red balls and 5 blue balls, and suppose I ask you to close your eyes and pull out a ball. What is the probability that you will successfully pull out a red ball? A classical statistician will say, correctly, 50 percent, since 5 of the 10 balls are red. However, a Bayesian may say something different if he knows something about who put the game together. For instance, if the Bayesian knows that I am a jokester and have a history of gluing red balls to bowls, then the Bayesian will say the probability that you can “successfully” pull out a red ball will be much less than 50 percent.

Here is another, perhaps more relevant, example: Suppose 60 percent of all vehicle accidents involve drivers using cell phones. Can we conclude that there is a greater than 50 percent chance that someone using a cell phone while driving will be in an accident? A Classical thinker may conclude “yes,” since more than half of all accidents involve cell phones. Politicians seem to think this way, too, since they cite these kinds of numbers to promote laws that restrict our ability to use cell phones while driving. However, a Bayesian will want to consider other information, such as the percent of all drivers who are in accidents and the percent of cell phone use among drivers not in accidents.

For example, if 5 percent of drivers are in an accident on any given day and if the percent of non-accident drivers who use cell phones while driving is 30, then what is the probability that someone will be in an accident given that they are using a cell phone? It turns out to be a lot less than 60 percent–about 9.5 percent. (The formula is (0.05)(0.6)/[(0.05)(0.6)+(0.95)(0.3)] for anyone who wants to check my math.) Of course, to make the point that one should not use cell phones while driving, we should calculate the probability that someone will be in an accident given that they are not using a cell phone. This is less than 3 percent. (The formula is (0.05)(0.4)/[(0.05)(0.4)+(0.95)(0.7)].) So, using a cell phone while driving almost doubles the chance of being in an accident, while not using a cell phone decreases the likelihood of being in an accident by about 40 percent. Clearly one is better off not using a cell phone while driving. Wikipedia has a useful discussion of the math behind the analysis here.
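For anyone who wants to check the arithmetic with something other than a calculator, here is a small Bayes’ rule sketch in Python using the same hypothetical percentages; the function name is mine, not from any particular library.

```python
def posterior(prior, p_evidence_given_event, p_evidence_given_no_event):
    """Bayes' rule: P(event | evidence)."""
    numerator = prior * p_evidence_given_event
    denominator = numerator + (1 - prior) * p_evidence_given_no_event
    return numerator / denominator

# P(accident | using a phone): 5% base rate, 60% of accident drivers on phones,
# 30% of non-accident drivers on phones
print(posterior(0.05, 0.60, 0.30))  # ~0.095

# P(accident | not using a phone): same base rate, 40% of accident drivers off phones,
# 70% of non-accident drivers off phones
print(posterior(0.05, 0.40, 0.70))  # ~0.029
```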

These numbers are hypothetical. I do not have actual data on the percent of cars in accidents and the percent of drivers using cell phones, etc. The point is that we can obtain a better analysis by considering all relevant information carefully. That is, it is not always correct to draw conclusions from data we have presented to us. Moreover, biases can impair our ability to understand what is going on around us, unless we are careful in how we draw conclusions. We see the wisdom in this from observing how people behave during presidential elections. A person’s bias in favor of a particular candidate seems to make him or her impervious to evidence that the candidate is a lying and immoral buffoon.

This type of analysis is also helpful when considering medical tests. If 2 percent of the population has a disease and the doctor gives you a diagnosis that you have the disease, then what is the probability you really have it given that the doctor said you did? The answer depends on how accurate the medical test is. For example, if the medical test is accurate 95 percent of the time, then the chance you have the disease is only about 30 percent. In contrast, if the medical test is accurate only 80 percent of the time, then the chance you have the disease is really less than 8 percent. In either case, I would get a second opinion.
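The disease example is the same calculation. Here is a short sketch, assuming that a test “accurate X percent of the time” has both a true-positive rate and a true-negative rate of X percent (the post does not spell this out):

```python
def p_disease_given_positive(prevalence, accuracy):
    """P(disease | positive test), treating 'accuracy' as both the
    sensitivity and the specificity of the test -- a simplifying assumption."""
    true_positives = prevalence * accuracy
    false_positives = (1 - prevalence) * (1 - accuracy)
    return true_positives / (true_positives + false_positives)

print(p_disease_given_positive(0.02, 0.95))  # ~0.28
print(p_disease_given_positive(0.02, 0.80))  # ~0.08
```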

We had fun with this example in class: Suppose you are on the game show “Let’s Make a Deal.” Monty Hall, the show’s host, shows you three doors, A, B and C. Behind one is a new car, behind the other two are goats. You are asked to pick a door. You pick door A. Monty opens door B to reveal a goat and then offers to allow you to switch to door C or stay with your choice of door A. Should you switch or stay? Someone asked Marilyn vos Savant, a woman listed in the Guinness Book of World Records as having the highest IQ, this question. She gave her answer in 1990 in a Parade magazine column. It generated thousands of letters, many from PhDs saying she was wrong. Her column and responses are here. It’s funny to read the reactions of so-called academics. Answer the “stay or switch” question first before reading her response. To play the game to convince yourself that she was right, see this online app here. Play it many times by staying and see how often you win. Then play it many times by switching each time to see how often you win. You’ll find that the probability of winning the car doubles from one-third to two-thirds by switching. There’s also an official “Let’s Make a Deal” website.
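If you would rather simulate than play the online app by hand, here is a quick Monte Carlo sketch of the stay-versus-switch comparison (my own toy code, not the app linked above):

```python
import random

def win_rate(switch, trials=100_000):
    """Estimate the probability of winning the car under a stay or switch strategy."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)      # door hiding the car
        pick = random.randrange(3)     # contestant's first pick
        # Monty opens a door that hides a goat and is not the contestant's pick
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {win_rate(switch=False):.3f}")   # about 1/3
print(f"switch: {win_rate(switch=True):.3f}")    # about 2/3
```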

Given the choice between watching “Let’s Make a Deal” and presidential candidates debate, I’ll place my odds on the game show.

Trash bins, staples and more unintended consequences

Like most university departments (and business offices generally), my department has a workstation near the copy machine. It is a large wood table with staplers, tape, paperclips, pens, a cutting board and other items one might need to manage copies, reports, etc. While there has always been the occasional used, bent staple left on the table, I have noticed a substantial increase lately in the number of discarded staples. Why?

The University of Missouri has initiated a “low waste initiative” (a report about the program in the campus newspaper is here). The intent is to reduce general waste and to promote recycling. Trash cans from office spaces have been removed and replaced with a blue recycling bucket. Attached to the bucket is a small black bin for non-recyclable waste. These black bins are about the size of a large cupholder you might find in a car. Unfortunately, they are not attached to all recycling bins, and there are none near the copy machine workstation or office commons. So what do people do when they remove staples from paper? Ideally they should walk down the hall to a “black bin” to discard the staples. But that is not happening. Since regular trash bins at the workstation are now gone, the staples end up left on the workstation table.

An accumulation of discarded staples on workstation tables is an unintended consequence of the University’s low waste program.

Discarded staples are not a threat to world peace and they don’t contribute to climate change, but they are a nuisance and a minor hazard. They are sharp and are not sanitary, so getting inadvertently poked by one could become a painful problem.

There are always consequences for changes in policies, programs and incentives. While we can be careful and thoughtful in considering “all” the ramifications of changes we make to rules and policies, there will often be unintended consequences, both good and bad.

A working paper by National Bureau of Economic Research scholars identifies an interesting but negative unintended consequence of a program designed to promote greater school attendance. In their report, the authors describe an experiment in which students were rewarded for meeting an attendance threshold at school. The reward produced the expected result: it increased school attendance. However, when the reward program ended, there were unintended effects. As stated by the authors:

Among students with high baseline attendance, the incentive had no effect on attendance after it was discontinued, and test scores were unaffected. Among students with low baseline attendance, the incentive lowered post-incentive attendance, and test scores decreased. For these students, the incentive was also associated with lower interest in school material and lower optimism and confidence about their ability. This suggests incentives might have unintended long-term consequences for the very students they are designed to help the most.

So the introduction of an incentive had the unintended consequence of driving out or reducing intrinsic motivation, a topic I have studied (here).

While we may not be able to anticipate all unintended consequences — that’s why they are “unintended” — we can probably do better than we are. And when we identify them, and if the consequences are significant enough, then we should consider revisions to the programs and policies we have implemented. I don’t know if the University of Missouri will be changing its “low waste program” anytime soon, but it would sure be nice to have a more convenient way of discarding used staples.

The hope and optimism of Thomas Malthus

My graduate research seminar today focused on how to review what other scholars have written. I took the opportunity to discuss with my students the writings of Thomas Malthus.

Malthus was a British scholar of the late 18th and early 19th centuries. Although trained as a minister, he spent most of his career as an academic. His most famous treatise is An Essay on the Principle of Population, which he published in 1798. In the essay, Malthus explained that because food production increases arithmetically while population grows geometrically, population growth left unchecked would outstrip available food supplies, resulting in famines, riots and other forms of human misery. “Famine seems to be the last, the most dreadful resource of nature,” he wrote.
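To see how the two growth patterns pull apart, here is a toy illustration with arbitrary starting values and rates (not Malthus’s own figures): food grows by a fixed amount each period while population grows by a fixed percentage.

```python
# Arithmetic growth (fixed increment) versus geometric growth (fixed percentage).
# The numbers are arbitrary; only the diverging shapes illustrate Malthus's point.
food, population = 100.0, 100.0
for period in range(9):
    print(f"period {period}: food index {food:6.1f}, population index {population:6.1f}")
    food += 25          # arithmetic: +25 per period
    population *= 1.25  # geometric: +25% per period
```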

It is tempting to think of Malthus as a pessimist, since commentators (called “neo-Malthusians”) often invoke him when describing a situation of potential doom and disaster, especially when referring to the environment, population growth and our ability to feed ourselves.

Prominent economists ride this bandwagon. For example, in a speech delivered to agricultural economists a few years ago, Thomas Hertel, a Purdue University economist, said this: “[The] degradation of existing crop land, when combined with the seemingly inexorable growth in demand for food, fiber and fuel has led many observers to suggest that the world may run out of land. Malthus (1888) is perhaps the best known champion of this position.” Other notable examples of economists promoting the pessimistic view of Malthus include Amartya Sen, who frequently used the term “Malthusian pessimism” (e.g., here) and Paul Samuelson, who referred to “the pessimism of Malthus” (here). Both of these scholars won the Nobel Memorial Prize in Economic Sciences. Robert Heilbroner, in his influential classic, The Worldly Philosophers, described Malthus as “the first professional economist” and his ideas as “profoundly disturbing” and “gloomy”. Malthus’s name is also used as an adjective, as in “Malthusian stagnation” and “Malthusian trap.” Sen’s reference to “Malthusian pessimism” would thus seem redundant.

Not all scholars share the prevailing view of Malthus, however. Some have argued that it is wrong to conceive of Malthus as a pessimist (e.g., Laurence Moss). I agree with this assessment, because I’ve read his essay.

In the final two chapters of the essay, Malthus provides an explanation for why it is “natural” for population to grow faster than food supplies. Because Malthus was a trained theologian, he sought a religious explanation for his observations about population and food. According to Malthus, the world is this way because that is how God made it, and God had a good reason for doing so. As Malthus claimed:

The necessity of food for the support of life gives rise, probably, to a greater quantity of exertion than any other want, bodily or mental. The Supreme Being has ordained that the earth shall not produce food in great quantities till much preparatory labour and ingenuity has been exercised upon its surface. … The processes of ploughing and clearing the ground, of collecting and sowing seeds, are not surely for the assistance of God in his creation, but are made previously necessary to the enjoyment of the blessings of life, in order to rouse man into action, and form his mind to reason.

In other words, God created a world that would not “naturally” grow food in abundance, because if He had, humans would become lazy, and laziness would not impel them to better themselves. In order to acquire food, humans would have to work for it, and this struggle is one of the “blessings of life.” Malthus used the word “ordained” to emphasize the idea that it could have been otherwise, but God wanted it the way it is. In the paragraph following the statement quoted above, Malthus said that although there is “much partial evil” in a world where famines are possible and actually occur, a system in which humans must exert themselves in order to eat produces an “overbalance of good.” To drive this point home, Malthus said that we should

consider man as he really is, inert, sluggish, and averse from labour, unless compelled by necessity …; we may pronounce, with certainty, that the world would not have been peopled, but for the superiority of the power of population to the means of subsistence. Strong and constantly operative as this stimulus is on man to urge him to the cultivation of the earth, if we still see that cultivation proceeds very slowly, we may fairly conclude that a less stimulus would have been insufficient. Even under the operation of this constant excitement, savages will inhabit countries of the greatest natural fertility for a long period before they betake themselves to pasturage or agriculture. Had population and food increased in the same ratio, it is probable that man might never have emerged from the savage state.

So humans need to work the land to acquire their food and to feed their growing populations. In saying this, Malthus was not suggesting that we should expect life to be one of drudgery. Rather, he believed that it is entirely possible for humans to overcome want and necessity. Far from thinking that food shortages, famine, and human misery were inevitable, as a pessimist does, Malthus offered an alternative vision. The key is this phrase, taken from the first of the two major quotes given above: “till much preparatory labour and ingenuity has been exercised upon [the earth’s] surface.” When people labor and use their minds and hands intelligently, they can produce enough food to feed themselves and others. It seems to me that this also suggests we give some consideration to agricultural practices that are sustainable over the long term. We need to feed people in the future as well as today, and that requires that we think carefully and work diligently to do so.

I hope to see fewer references to the “pessimism” of Malthus. But our misunderstanding of Malthus reflects a more serious problem. The reason people misunderstand Malthus is that they don’t read what he said but rather accept without question what others have said about him. Doomsdayers, pessimists and others with an agenda have misrepresented Malthus for their own ends. Because people generally don’t read the classics anymore, the misperceptions of Malthus have evolved so far and lasted for so long that the name “Malthus” has become synonymous with pessimism. How unfortunate this is. If people read Malthus, then they might come to the conclusion, as I did, that Malthus was not a pessimist and that his observation about the relationship between population growth and the “natural” growth of food supplies was offered so that he could encourage people and governments to be thoughtful and innovative in their governing of resources and societies.

We have a lot of misperceptions and misunderstandings about things—the world, societies, science, technology, religion and the environment. This needs to change. Misperception and misunderstanding will persist until two things occur. The first is for people to think, work, study and labor with a genuine desire to seek out truth wherever it resides. The second is for people to be courageous enough to accept that truth, even when it means that they might have to change what they believe and do.

I’ll end this post with a bit of Malthusian hopefulness and optimism, using the final words of Malthus’s essay. He begins with a quote from the poet Alexander Pope and then gives words of encouragement.

“Hope springs eternal in the Human breast, Man never is, but always to be blest.”

Evil exists in the world not to create despair but activity. We are not patiently to submit to it, but to exert ourselves to avoid it. It is not only the interest but the duty of every individual to use his utmost efforts to remove evil from himself and from as large a circle as he can influence, and the more he exercises himself in this duty, the more wisely he directs his efforts, and the more successful these efforts are; the more he will probably improve and exalt his own mind, and the more completely does he appear to fulfil the will of his Creator.

So let’s get to work, folks. We have a lot of good to do in the world.

Is it better to be smart or hard-working?

In preparing to speak in church tomorrow on the assigned topic of “work”, I recalled a summary of research I read last year about “the secret to raising smart kids.” According to Stanford University psychologist Carol Dweck, we should praise effort rather than intelligence. In other words, when a child does well, it is better to say something like, “you must have worked really hard” rather than “you must be very smart.”

There are two views of intelligence. One is that intelligence is fixed. The other is that intelligence can improve with effort. When a child with a “fixed” mind-set solves a problem, they are likely to attribute their success to their intelligence. Conversely, when the child cannot solve a problem, they tend to get discouraged, give up and attribute the failure to a lack of intelligence and skill. In contrast, a child with a “growth” or “mastery-oriented” mind-set learns that with effort they can work through hard problems. Thus, difficult problems that might discourage a “fixed” mind-set child become challenges and opportunities for “mastery-oriented” children.

Interestingly, praising a child for their intelligence fosters the “fixed” mind-set, while praising a child for their hard work promotes a “mastery-oriented” mind-set. According to the article (linked above), “Parents and teachers can engender a growth mind-set in children by praising them for their persistence or strategies (rather than for their intelligence), by telling success stories that emphasize hard work and love of learning, and by teaching them about the brain as a learning machine.”

So, is it better to be smart or hard-working? I don’t know. But if you’re going to compliment me, maybe you should acknowledge my effort rather than my intelligence.