Doing bad when I think I’m good

A perplexing question in social science research is why people behave in ways inconsistent with their beliefs and their perceptions about themselves. For example, if we know it is wrong to lie, cheat or steal, then why do people lie, cheat or steal? Economists might say people conduct a rational analysis to assess the benefits of lying, cheating or stealing relative to the costs of getting caught or having a guilty conscience and will behave inappropriately when the benefits of doing so outweigh the costs. Psychologists might look to the internalized norms and values of people and say they will lie, cheat or steal when their internal value systems become corrupted. But what if people maintain a strong internal value system but still lie, cheat or steal? Is it possible for me to behave dishonestly and still consider myself an honest person? The question is not trivial. Consider these variations:

I see myself as a person dedicated to healthy eating and exercise but who routinely (over)indulges in sugary and unhealthy foods.

I see myself as a person who values education and a growing intellect but who routinely watches too much television or plays too many games on a smartphone or tablet.

I see myself as a person who is fair and impartial but who regularly denigrates the statements of persons whose political views differ from mine.

I see myself as a person who treats others with dignity and respect but who often hurls insults at political opponents because it’s just “politics”.

I see myself as a religious person but one who rarely attends church, reads scripture or prays.

I see myself as a competent and careful blogger but who infrequently adds new posts to his blog or reads and comments on the blog postings of others.

A study published in 2008, entitled The Dishonesty of Honest People: A Theory of Self-Concept Maintenance, provides a compelling insight here. According to the authors of the study, people have and want to maintain a particular image of themselves, such as being a person of honesty. A problem arises when people face a decision that can produce a short-term gain but requires them to act in a way that is contrary to their self-image or self-concept. When people are torn by competing motivations–“gaining from cheating versus maintaining a positive self-concept as honest”–they will solve this dilemma “by finding a balance or equilibrium between the two motivating forces, such that they derive some financial benefit from behaving dishonestly but still maintain their positive self-concept in terms of being honest.” But how? The trick is to define the behavior in a way that still allows them to maintain the desired self-concept. The authors describe this as malleability. The more malleable the situation, the more likely people will behave inappropriately while still maintaining a positive self-concept. Consider this variation of an example provided by the authors: I might be able to justify taking a $1 notebook from my friend, even if I cannot justify stealing $1 from his wallet to buy the notebook myself. The malleability here comes from my defining this action as “borrowing” rather than stealing, or thinking that because I let my friend use something of mine previously, then my taking the notebook is okay because “this is what friends do.” Of course, there is a limit to this rationalization. I might be able to rationalize taking the $1 notebook but probably not taking my friend’s $20,000 car. Thus, malleability and limits set the boundaries within which rationalization occurs.

The scholars conducted experiments to see how people behave when given opportunities to cheat and to redefine how they see themselves. The experiments confirmed their expectations. As summarized by the authors, “people who think highly of themselves in terms of honesty make use of various mechanisms that allow them to engage in a limited amount of dishonesty while retaining positive views of themselves. In other words, there is a band of acceptable dishonesty that is limited by internal reward considerations.” Put plainly, I can lie as long as I can convince myself it is really not lying. If I can do this easily, then good for me. I get my lie and self-worth too. If I cannot do this easily, then I’ll resign myself to being honest.

So, if we want to reduce dishonesty in society, we need to limit the malleability of contexts in which people might lie, cheat or steal. In other words, we need to make it harder for people to rationalize unethical behavior in ways that allow them to maintain a positive self-concept even though they are doing wrong. In their study, the authors were able to do this by asking the subjects of their experiments to write down as many of the Ten Commandments as they could remember. Perhaps this means we should be promoting greater religious observance in society.

Lying is still lying, regardless of what we want to call it. Cheating is still cheating. And stealing is still stealing. All are wrong. We need to call it what it is.

Phew! That was a lot of work creating this post. Time for this healthy exerciser to take a chocolate break.



Morality and neurochemical impulses

Recently I was reminded of a book I read a while ago by philosopher Patricia Churchland entitled Braintrust: What Neuroscience Tells Us about Morality. (A brief video of her explaining the book is here.) The book attempts to explain what scientists have learned about the brain in order to explain how it is that humans developed a sense of morality. One interesting idea she discusses concerns the hormone Oxytocin, which is found in both the brain and the body. It has been shown to promote caring behavior in animals, and it is released during pregnancy, triggering “full maternal behavior” in humans and animals. Oxytocin also promotes trust in humans by “raising the threshold for tolerance of others, and to its down-regulation of fear and avoidance responses,” as demonstrated in experiments in which some research subjects are given a dose of Oxytocin and are asked to play games and interact with others in order to measure trusting behavior. Another interesting discussion is that, at the genetic level, behavior is complex. No single gene can be associated with any unique or specific behavior. In the “Parable of the Aggressive Fruit Fly,” Churchland explains how scientists are able to breed a fruit fly that is 30 times more aggressive than its natural cousins, even though the genetic differences between them are minor and do not seem to be related to any specific behavior. Rather, the differences are in mundane physiological functions.

After discussing these ideas Churchland enters into a discussion of why various philosophers have not really gotten it right about morality and ends with a criticism of religion, or what she calls a “supernatural basis” as the source of morality. She denies the need to rely on God or religion in order to explain morality and how people come to know that something is right or wrong, focusing instead on a neurobiological basis for these. To this end she is particularly critical of religious tenets that imply or state an absolute standard of behavior or morality, such as claims about what someone “ought” to do or be. She focuses especially on the Golden Rule, the Ten Commandments and a God-given conscience. One reason she gives is that religious “absolutes” are just that—prescriptions that are intolerant of specific contexts. Another reason she gives is that absolute standards are invalidated because of the allowance of exceptions, such as when the Lord tells Moses “Thou shalt not kill” (see Exodus 20) and then later commands him to slay Israelites who worshipped false Gods (see Numbers 25). I note the inconsistency in these two objections. She is critical of religious intolerance as well as its tolerance. She also complains that people “with conscience” often advocate conflicting ideals. For example, some people feel it is wrong to eat meat while others feel it is morally acceptable. According to her, this means religion cannot be used to justify claims about morality.

I find her argument highly unsatisfying. If she is correct, then where does this leave us? A world in which morality is relative and where morality is created and defined by neurochemical reactions in our brains? If we live in such a world, then how is it that humans are able to make decisions of right and wrong and come to a consensus about many moral issues? Neurochemicals might explain in part the feelings of affection we have for others, but that only accounts for the sociality of humans and animals. It is too far a leap to claim that it also accounts for the ability of humans to engage in complex moral analysis or to make and act on specific moral judgments. It also cannot explain how or why little children understand the basics of right and wrong. If you ask a five-year-old child whether it is a good thing or a bad thing to take a toy away from another child or to hit another person, the child usually gets the right answer (it is a wrong thing). Children have an innate sense of right and wrong that can only be described as a conscience. Neurobiological responses are too primitive to explain this ability of children. To accept Churchland’s view is to equate morality with sociality, and that is clearly insufficient for explaining actual moral judgment.

A stable society requires that humans accept a common morality and sense about what is right or wrong and that they are willing and able to police themselves by exercising moral restraint. This requires a belief or a willingness to believe that there is such a thing as an absolute standard of morality. History has shown repeatedly the horror that humans inflict on others when they disagree on fundamental moral issues and beliefs and adopt a mindset of relativism and situational ethics. The Nazi holocaust comes to mind. (Side note: I just finished Miklos Nyiszli’s book, Auschwitz: A Doctor’s Eyewitness Account, which provides a stunning account of a Jewish doctor who helped the infamous Josef Mengele conduct experiments on prisoners in the concentration camp.)

Personally, I would rather live in a world in which people accepted the reality of a Divine Being and followed His dictates than one in which people acted only according to neurobiological and chemical impulses. It is because people ignore their God-given conscience that immoral behavior and human-on-human atrocities occur.

Prisoner’s Dilemma and presidential campaigns

I introduced my microeconomics class today to game theory. Doing so gave me an opportunity to explain why US presidential campaigns are filled with so much hateful and ugly rhetoric. Why can’t politicians be nicer, speak to the issues, and avoid the hurling of mud at their political opponents? Why do we see so many negative campaign ads? Game theory, particularly the Prisoner’s Dilemma, provides insight here. In a previous post I described briefly what the Prisoner’s Dilemma is.


Consider this figure, which depicts the campaign strategies of Donald Trump (“Trump”) and Hillary Clinton (“Hillary”). Trump and Hillary can be nice or mean. If both are nice and avoid negative campaigning, they each split the Electoral College (EC) votes, with one getting a few more than the other for a win. The same outcome occurs if both play mean and nasty and spew hateful rhetoric at the other, but now the tone of the campaign is harsh and leaves a bitter taste in everyone’s mouth. With 270 EC votes needed to win, as of this writing Trump was declared the winner with 279 EC votes. Thus they split the EC with Trump getting a few more but doing so with a very negative campaign–an inferior outcome for everyone.

If Trump is nice and takes the high road but Hillary is mean, she will get most of the EC votes. Similarly, if Hillary takes the high road while Trump is mean, he will win most of the EC. In other words, negative campaigning works, which is why both have an incentive to campaign negatively. That is, both Trump and Hillary have a dominant strategy to sling mud. Regardless of whether Hillary is nice or mean, Trump is better off being mean and campaigning negatively–when Hillary is nice, for Trump getting most of the EC by being mean is better than getting about half by being nice, and when Hillary is mean, getting about half of the EC by being mean is better than getting only a few EC votes by being nice. Similarly, regardless of whether Trump is nice or mean, Hillary is better off being mean and campaigning negatively. This produces a classic Prisoner’s Dilemma outcome.
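The dominant-strategy logic above can be sketched in code. The payoff numbers here are illustrative assumptions built around the figures in the post (538 total EC votes, 270 to win, 279 for the mean/mean outcome), not real campaign data:

```python
# Illustrative Prisoner's Dilemma payoff matrix for the campaign game.
# Payoffs are hypothetical Electoral College (EC) vote splits: mutual
# niceness or mutual meanness yields a near-even split, while a lone
# negative campaigner captures most of the EC.

PAYOFFS = {
    # (trump_strategy, hillary_strategy): (trump_EC, hillary_EC)
    ("nice", "nice"): (270, 268),
    ("nice", "mean"): (100, 438),
    ("mean", "nice"): (438, 100),
    ("mean", "mean"): (279, 259),
}

def best_response(player, opponent_strategy):
    """Return the strategy maximizing `player`'s payoff against a fixed opponent move."""
    idx = 0 if player == "trump" else 1
    def payoff(s):
        key = (s, opponent_strategy) if player == "trump" else (opponent_strategy, s)
        return PAYOFFS[key][idx]
    return max(["nice", "mean"], key=payoff)

# "Mean" is dominant: it is each player's best response to either opponent move.
for opp in ["nice", "mean"]:
    assert best_response("trump", opp) == "mean"
    assert best_response("hillary", opp) == "mean"
```

Because meanness is a best response no matter what the opponent does, mean/mean is the predicted equilibrium, even though both candidates (and voters) would prefer nice/nice.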

Most people will prefer that the candidates remain nice and civil during the campaign. For example, the Pew Research Center said this (here) about this year’s presidential campaign: “The presidential campaign is widely viewed as excessively negative and not focused on important issues. Just 27% of Americans say the campaign is “focused on important policy debates,” which is seven points lower than in December, before the primaries began.” Interestingly, a 2000 Gallup survey found that “negative campaigning [is] disliked by most Americans” and that most people felt that the presidential contest between Al Gore and George W. Bush “may be one of the most negative presidential elections in recent history.” Maybe every presidential contest is the worst one in history.

But since the game candidates play is a Prisoner’s Dilemma, the expected and actual outcome is one in which both are mean and nasty.

How do we resolve the Prisoner’s Dilemma in this case? Standard solutions that scholars have examined, such as repetition and institutional rules promoting cooperation and punishing defection, can’t apply or won’t work in political campaigns. The only seemingly viable option is for players of the Prisoner’s Dilemma to have high moral values so that they avoid the incentives to be mean to each other. If both players of this game are virtuous and possess high integrity, and each knows the other player is that way, then maybe we can see political campaigns and elections that are civil and informative.
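To see why repetition is the standard remedy scholars point to, here is a minimal sketch of a repeated Prisoner’s Dilemma. The payoff values (3/3 for mutual cooperation, 5/0 for exploiting a cooperator, 1/1 for mutual defection) are the conventional textbook numbers, not figures from this post:

```python
# Repeated Prisoner's Dilemma: tit-for-tat cooperates first, then
# mirrors the opponent's previous move. Standard illustrative payoffs.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    """Run a repeated game; each strategy maps the opponent's history to a move."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]
always_defect = lambda opp: "D"

# Two tit-for-tat players sustain cooperation for all 10 rounds.
assert play(tit_for_tat, tit_for_tat) == (30, 30)
# Against a constant defector, tit-for-tat is exploited only once.
assert play(tit_for_tat, always_defect) == (9, 14)
```

Repetition lets players punish defection in later rounds, which is exactly the mechanism that a one-shot, winner-take-all presidential campaign lacks.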

Prisoner’s Dilemma in the classroom

The Prisoner’s Dilemma is a model that illustrates a conflict between the interests of individuals and the interests of those individuals as members of a collective or group. In most versions of the game, two or more persons can cooperate and receive a collective reward that is greater than the sum of the individual rewards they could earn if they chose not to cooperate. The incentives of the game are such that the persons have an individual incentive not to cooperate, making them collectively worse off than if they had overlooked their individual interests and instead thought as a group. The game is famous in economics and other social sciences. Wikipedia has a lengthy discussion of the game, its refinements and implications here.

Even though the Prisoner’s Dilemma has been around for decades it is still a fun game to play with students. In my microeconomics class today I offered the following opportunity for the class to earn extra credit:

“You can earn extra credit by selecting the amount of extra credit points you want. However, if more than 4 of you select option B, then the entire class will receive 0 extra credit points.”

Option A was to earn 1 point extra credit.
Option B was to earn 4 points extra credit.

I use a web-based student response system so that students could register their choice on their cell phones and I would see the results immediately. Not surprisingly, of the 180 in class today, 10 chose option B, leading to no extra credit for the class. When I gave the class a chance to do it over again and even talk to each other, the number who chose option B increased to 17.

The incentives to choose option B are pretty strong here — getting 3 more extra credit points than one could get by cooperating with everyone else in the class and getting just 1 point. Even when I changed the payout structure so that option A gave 3 points and option B 4 points, there were 6 students who still chose option B, thus negating the extra credit opportunity for everyone.
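The classroom game is easy to model. This sketch encodes the rules exactly as stated above (A = 1 point, B = 4 points, more than 4 B-choosers zeroes out the class); the alternative scenario with only 4 B-choosers is my own hypothetical, not something that happened in class:

```python
# The classroom extra-credit game: option A = 1 point, option B = 4 points,
# but if more than `b_limit` students choose B, everyone receives 0.

def score_class(choices, a_points=1, b_points=4, b_limit=4):
    """Return each student's extra credit given the list of A/B choices."""
    if choices.count("B") > b_limit:
        return [0] * len(choices)
    return [b_points if c == "B" else a_points for c in choices]

# First round in the post: 10 of 180 chose B, so nobody earned credit.
round1 = ["B"] * 10 + ["A"] * 170
assert sum(score_class(round1)) == 0

# Hypothetically, had only 4 chosen B, the class total would have been
# 4*4 + 176*1 = 192 points.
round_alt = ["B"] * 4 + ["A"] * 176
assert sum(score_class(round_alt)) == 192
```

The payoff structure makes the temptation explicit: each extra B-chooser gains 3 points personally while pushing the whole class toward the zero outcome.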

What I find interesting here is not that there were some students who chose option B but that so many in the class chose option A. At least 90 percent of students were willing to forgo their individual interest of choosing option B in order to cooperate for the collective good.

In economics we teach that when people pursue their self-interest things will work out the best for everyone. But sometimes they don’t. Sometimes the pursuit of one’s interests can be damaging to others and the collective whole. Why does self-interest work in some cases but not in others? And when the incentives for collective action are not ideal, what can we do to encourage or promote more cooperative thinking and behavior?

Russell Crowe, in the movie A Beautiful Mind, played the mathematician John Nash, whose equilibrium concept underlies this idea. He explains the problem and solution nicely in this clip from the movie.

I asked my class these questions and got a lot of interesting responses. Because the student response system I use saves student responses, I can list some of them here:

“Anonymity is the problem”

“People only act in their self interest and don’t want to work as a whole for the better of everyone”

“so basically we need to be communists in order for this game to work”

“People are greedy”

“people think they deserve it more than others”

“Throw tomatos (sic) at the people who chose B”

“you do what you have to do”

“Take away the second option”

“build a wall make the people who picked B pay for it”

“this game don’t work cause we got more than 4 selfish people in class”

“Not as many laws and restrictions”

“Because people think that everyone else will pick A and that they will end up getting more when in reality they hurt everyone else”

“Need more communication and honesty”

“Sometimes selflessness is the answer”

“If people weren’t greedy then we would at least be able to get one point extra credit”

“All it takes is one bad egg to ruin it for everyone”

“punish those who answered B”

“Communicate with others to achieve extra credit”

“Put people who choose B in jail”

“freshmen think that 1 point if extra credit is actually going to influence their grade”

“do your work maybe you wouldn’t need to pick B”

Resolving the Prisoner’s Dilemma requires careful structuring of the way people interact and enforcement of the formal rules and informal norms we develop to promote cooperation. It also requires that people exercise self-restraint in the pursuit of their self-interest, since no rules or monitoring mechanisms are perfect. We wouldn’t (or shouldn’t) want to live in a society where such rules are perfectly enforceable. How to do this so as to protect one’s freedom to choose makes for a fun discussion in class.

In the end I gave everyone in the class who chose option A in the last round of the game (in which 3 points were possible) the 3 points extra credit. I don’t know if the class learned much, but I hope they left feeling better about their teacher.



Nobel Prize in Economic Sciences and ethical insights from contracting theory

The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2016, awarded by the Royal Swedish Academy, was given to Oliver Hart, a professor at Harvard University, and Bengt Holmström, who is at the Massachusetts Institute of Technology. They were awarded the prize “for their contributions to contract theory.” While both made significant and award-worthy contributions that changed the way we think about contracting and the organization of firms, their insights also have ethical implications.

Hart’s work is based on the idea that contracts are inherently incomplete. That is, it is not possible to design a contract that specifies outcomes for all possible contingencies and states of the world, since there will be some (or perhaps many) situations that are unforeseeable or so complicated that they cannot be reasonably accounted for in contracts. In other words, all contracts contain gaps or missing provisions. Because contracts are incomplete, someone has to be charged with making decisions and reaping the consequences of those decisions (whether good or bad) in those cases not dictated by contract. This is another way of saying that someone has to have authority. Hart’s theories help us understand what authority means, particularly in terms of ownership and decision-making rights.

Holmström’s work focuses on the idea that information is often hidden or not available to everyone. That is, it is not possible for everyone to have all relevant information when they are making important decisions or for bosses to know perfectly what each of their employees is doing in the firm. The implication is that people will have to rely on others for information or, more generally, to do things on their behalf. One person relying on another can create what is known as the Principal-Agent Problem. An example is when employers have to rely on employees for reports on how the business is operating or to take actions on behalf of the firm, such as negotiating with customers. Holmström’s theories provide insights into how contracts should be structured so as to provide the necessary incentives for people to provide accurate information or to act in the interest of their employer or the organization to which they belong.

Ethical problems arise when there is a conflict of interests, values or rights. Interests are things we care about; values are ends in themselves or ideals to which we aspire; and rights are entitlements to things we obtain or do. Hart and Holmström’s ideas provide clarity about where and why these conflicts can arise in the business world. For example, the rights of firm owners are in conflict with the interests of workers; firm owners want workers to work hard and generate increased profits for the owners, but doing so is costly for workers, who have an interest in relaxing and taking long lunch breaks. Employees have an interest in receiving higher wages and better benefits, but these conflict with the interests of employers who pay for them. Interestingly, Holmström also demonstrated that there is a conflict between the interests of firm owners to maximize profits and their interest in running the firm efficiently. In other words, it is not possible for a firm owner to provide an efficient incentive system for workers and at the same time generate the highest profits possible. To have one the owner must give up on the other.

Contracting theory that builds on Hart and Holmström’s work is based on the assumption that people will try to take advantage of their situation. In other words, people will try to lie, cheat and steal unless they have incentives not to, and those incentives are governed by contracts. This raises the question of whether contracts would be necessary if everyone were perfectly ethical, honest and forthright. It’s an interesting academic question only, because we don’t live in a world where everyone is perfectly ethical, honest and forthright. Yet. Until then, we have to use the best theories for organizing ourselves. Fortunately, Hart and Holmström have given us a solid foundation on which to do this.

Business leadership and the making and punishing of unethical employees

A study published in the current issue of Business Ethics Quarterly links ethical leadership with improved engagement of employees at work, greater employee voice and lower intentions for employees to exit. In other words, when employees perceive or know their leaders to be ethical, they are more likely to feel good about being at work, more willing to communicate their opinions, recommendations, concerns or ideas to their supervisors, and less likely to leave or intend to leave the business.

In this context, an ethical leader is someone who is a moral person and who models high moral standards at work. The specific indicators of ethical leadership used in the BEQ paper draw from research by scholars at Pennsylvania State University. If valid, the indicators are informative. There are 10 of them. Ethical leaders

  • conduct their personal lives in an ethical manner
  • make fair and balanced decisions
  • can be trusted
  • ask what the right thing to do is when making decisions
  • listen to their employees
  • discuss business ethics and values with their employees
  • have the best interest of their employees in mind
  • set an example of behaving ethically at work
  • discipline employees who violate ethical standards
  • define success by the way results are obtained in addition to results.

I would add one more item to the list. When designing and implementing performance measures and incentives, ethical leaders are careful to ensure that they are promoting incentives rather than pressures to perform. The line between incentive and pressure can be thin. Leaders who are not careful may find that their efforts to motivate workers create pressures for them to lie, cheat or steal.

The CEO of Wells Fargo is learning this lesson the hard way. According to the Wall Street Journal’s report of John Stumpf’s testimony during a Senate Banking Committee hearing yesterday (September 21), the Bank is accused “of fostering a culture where low-paid branch employees were pressured to meet impossible sales quotas to keep their jobs, and so signed up customers for products without their knowledge.” Pressure does not create an environment where employees behave ethically. Even well-meaning employees may find the temptation to fudge numbers or behave inappropriately too strong in such an environment. The Bank reported that it fired more than 5,000 employees for wrongdoing.

So, Wells Fargo created unethical employees and then punished them.

Reminds me of the statement by Thomas More in his book, Utopia, made famous by Drew Barrymore’s character Danielle (aka Cinderella) in the movie Ever After. Danielle is arguing with Henry, the Prince of France, for the release of her servant, who is bound with other poor and destitute prisoners for the Americas. Here is the exchange:

Danielle: A servant is not a thief, your Highness, and those who are cannot help themselves.

Henry: Really! Well then by all means, enlighten us.

Danielle (quoting More): If you suffer your people to be ill-educated, and their manners corrupted from infancy, and then punish them for those crimes to which their first education disposed them, what else is to be concluded, sire, but that you first make thieves and then punish them?

Henry: Well, there you have it. Release him.

That’s quite a commentary about one of the nation’s most prominent banks.