Should we scrutinize public research funded by private interests?

An article entitled “Public science for private interests: How MU agricultural research cultivates profits for industry” in a local newspaper examines the link between research conducted by plant scientists at the University of Missouri and the funding they receive from private businesses. Corporations provide more than 14% of external funding to plant scientists, which is substantially more than any other department on campus receives, with the exception of the school of medicine.

The article raises concerns about “a culture of industry influence” that can create a conflict of interest when private businesses give money to university scientists to do research. As stated in the article, “Agricultural companies build relationships with professors and extension personnel so they can learn what farmers need — and thus drive sales. Even some MU professors who take the money point out that companies give in order to benefit their own bottom line. Critics say such funding cultivates a culture of influence that tends to bend research toward company goals.” Because these public-private funding relationships have not received substantial scrutiny, the implication is that more oversight is needed.

The article also makes this point: “What’s not so clear is how the public benefits.”

I suppose if we think that business and private enterprise are not part of the “public” realm — that is, society — then the concern might be relevant. But if private business benefits, doesn’t that count as a benefit to the public as well? Stated differently, why do we assume that a private benefit automatically creates a negative externality for others?

There are two general reasons why we might be concerned about private industry funding research at public universities. The first is that offering money for research might bias or unduly influence the research and the conclusions researchers reach. The second is that offering money for research might crowd out other, perhaps more important, research. Together these illustrate the nature of the conflict of interest.

Does offering money to researchers to conduct research bias their perspectives? Of course it does. Anytime a gift is given, a reciprocal obligation of indebtedness is created, whether or not individuals are conscious of it. Bribery is an obvious example, but even small gifts “carry the risk of subtly biasing—or being perceived to bias—professional judgment,” which is why the American Medical Association has strict rules on the gifts physicians can receive.

But why is the concern about creating bias limited to money given by corporations to university scientists? Money certainly creates biases, but so do power, relationships, intellect, and social status. I am just as concerned about the bias created by bureaucratic power — the ability of a government bureaucrat to affect people’s lives — as I am about the bias created by private industry funding of university researchers.

University researchers face all kinds of biases. Some are related to their sources of funding, but others are related to their own perspectives and views of the world, and how they want the world to be. For example, are university professors politically neutral? If not, then shouldn’t we expect their political opinions to affect their research? The extent to which private industry funding creates biases should be balanced against the recognition that biases already exist. One bias may simply replace another.

(This reminds me of an episode of the TV show Law and Order from season 8 entitled “Under the influence.” The district attorney, Jack McCoy, is prosecuting an intoxicated driver who killed three people. Realizing the judge presiding over the case has strong feelings about drunk drivers, McCoy’s assistant, Jamie Ross, wants to inform the defense attorney of the judge’s bias. McCoy tells her, “I’m not gonna tell him. Come on, Jamie, a judge with an agenda, this is news to you?”)

I think all sources of bias and influence in academia need to be identified and acknowledged, not just those created by funding offered by private industry.

Does offering money to researchers crowd out other potential research? Of course it does. If a plant scientist is conducting research funded by and benefiting Monsanto, then he or she will not have time to do research that might lead to other breakthroughs. Simply stated, there are opportunity costs in doing university research. If I spend time on research project A then I don’t have time to work on project B.

Presumably, the concern about private industry funding is that it crowds out research that scholars might otherwise be doing that can benefit a greater good, such as the environment, the disadvantaged, or others in society, not just profit-seeking businesses. But is private industry funding any different than grants or contracts offered to university researchers from other sources? Any source of funding can crowd out alternative research opportunities. For example, suppose the USDA gives a $300,000 grant to agricultural economists to forecast crop yields for the next 5 or 10 years. Wouldn’t this crowd out research on other agricultural economic problems the scholars could be doing? What about funding provided by private foundations, such as the Bill & Melinda Gates Foundation, which gives millions of dollars to universities (as well as other organizations)? Scholars receiving Gates Foundation funding usually celebrate rather than downplay that fact.

Since any funding crowds out research, why are we so concerned about university research that benefits private, profit-seeking businesses? Universities already engage in activities that benefit private enterprise, such as training students to be competent employees. Many universities even tout how successful they are at placing graduates in jobs. In my department, faculty conduct research on entrepreneurship, business strategy, economic development, value chain analysis, contracting, and other issues that benefit businesses. What makes research funded by and benefiting business such a concern that it deserves special scrutiny?

The fact is, government funding for basic research at universities has stagnated in recent years. As reported in a Science magazine story, the U.S. government’s share of basic research funding has fallen below 50%. If we value university research (we should), then perhaps we should be more supportive of increased funding from other sources, including private industry, even if that funding is not ideal or does not promote research others find useful.

I think any research crowding out should be recognized, not just that created by private industry. If society wants or values research on a topic not of interest to private industry, then funding should be offered to faculty to do that research.

Should we scrutinize research at public universities funded by private interests? Yes, but not simply because the source of funding is from private industry.

No private industry funding was used in support of this post.

The price of honesty

I recently came across a 2013 paper entitled “The value of honesty: empirical estimates from the case of the missing children.” The paper seeks to answer the question of how much people are willing to “pay” to be honest. That is, if a person is given the chance to cheat at relatively low risk of being caught, but does not take that chance, then the amount that the person could have received by cheating is the price they pay for being honest.

The researchers looked at the effect of a change in U.S. tax law in 1987. Prior to 1987, tax filers could report a dependent, and thus take a tax deduction, simply by writing the child’s name on the tax form. Beginning in 1987, however, filers had to provide a Social Security number for each dependent. Not surprisingly, the number of children reported on tax returns in 1987 declined dramatically compared with previous years. The scholars show that 20% of filers “lost,” or did not claim, dependents in 1987, compared to an average of 14% in previous years. Since some of these losses would be legitimate (e.g., when a child leaves home), the scholars estimated that “around 2.5% of taxpayers were cheating by improperly claiming dependents in 1986.” This suggests, however, that most filers were not cheating. According to the authors, “the 97.5% of taxpayers who did not avail themselves of this opportunity to cheat implicitly demonstrated that they would rather give up several hundred dollars in income than cheat the government. Overall, we think this is striking evidence of a broad willingness to pay to be honest across the taxpayer base.”

A further review of the data suggested that “cheaters were much less likely to be married filing jointly, and much more likely to file as head of household” (i.e., be single). As a result, many had to change their filing status (that is, they probably didn’t have any children at all). Does this mean that marriage encourages honesty?

And the price of being honest? Based on taxes paid and the value of the forgone dependent, taxpayers who were honest gave up “an average of 7% of their tax bill.” Thus, while there appears to be a general willingness to be honest (at least in 1987), there was a monetary cost of doing so. Hopefully honest taxpayers also had a good feeling of doing the right thing. I wonder if that is a common sentiment today.
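The back-of-the-envelope arithmetic behind a figure like “7% of the tax bill” is easy to sketch: the forgone savings from not claiming an extra dependent are roughly the exemption amount times the filer’s marginal tax rate. The numbers below (exemption value, marginal rate, tax bill) are hypothetical illustrations of my own, not data from the paper:

```python
# Back-of-the-envelope sketch of the "price of honesty": the tax savings
# a filer forgoes by not falsely claiming one extra dependent.
# All dollar figures and rates below are hypothetical, not the paper's data.

def forgone_savings(exemption, marginal_rate):
    """Tax savings from one dependent exemption at a given marginal rate."""
    return exemption * marginal_rate

def price_as_share_of_bill(exemption, marginal_rate, tax_bill):
    """Forgone savings expressed as a fraction of the total tax bill."""
    return forgone_savings(exemption, marginal_rate) / tax_bill

# A filer with a $1,000 exemption, a 25% marginal rate, and a $3,500 bill:
savings = forgone_savings(1000, 0.25)                 # $250 left on the table
share = price_as_share_of_bill(1000, 0.25, 3500)
print(f"${savings:.0f} forgone, {share:.1%} of the tax bill")
```

With these made-up inputs, the honest filer gives up about 7% of the bill, which happens to be in the neighborhood of the paper’s average.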


Correcting a misunderstanding of the Friedman Doctrine

The Business Roundtable is a non-profit organization consisting of the chief executive officers of major U.S. companies. For years it advocated a shareholder theory of the corporation, which places the interests of stockholders over the interests of other business stakeholders, such as employees or communities. Recently, the organization issued a statement that “Redefines the Purpose of a Corporation to Promote ‘An Economy That Serves All Americans’.” According to the organization, “Each of our stakeholders is essential.”

Many commentators hailed this change in language, some even going so far as saying it repudiates a position advocated by Milton Friedman nearly 50 years ago. For example, University of Chicago law professor Eric Posner, writing in The Atlantic, simply declared, “Milton Friedman was wrong.”

In an essay published in 1970 in the New York Times Magazine, Friedman wrote that “the social responsibility of business is to increase its profits,” which has become known as the Friedman Doctrine. This is what Friedman said:

“In a free-enterprise, private property system, a corporate executive is an employee of the owners of the business. He has a direct responsibility to his employers. That responsibility is to conduct the business in accordance with their desires, which generally will be to make as much money as possible while conforming to the basic rules of the society, both those embodied in law and those embodied in ethical custom.”

Unfortunately, both business executives and critics of Friedman misrepresent his argument, suggesting that he advocates maximizing profits at any cost. For example, in his Atlantic essay, Posner writes that “Friedman argued that because the CEO is an ‘employee’ of the shareholders, he or she must act in their interest, which is to give them the highest return possible.” It’s the period after the word “possible” that is problematic. Placing a period before the qualification Friedman added implies that he supported no constraints on the profit-making activities of businesses, when in fact he did. Simply stated, it is not true that Friedman’s essay “seemed to absolve corporations of difficult moral choices and to protect them from public criticism as long as they made profits,” as Posner writes. On the contrary, business executives must consider the moral implications of their decisions.

First, Friedman said it’s “generally” appropriate to make as much money as possible, not absolutely required. Second, he adds the necessity of “conforming to the basic rules of the society, both those embodied in law and those embodied in ethical custom.” This second condition imposes considerable restraints on profit-making activities.

In an essay entitled “Smith, Friedman, and Self-interest in Ethical Society” published in the Business Ethics Quarterly, my colleague Farhad Rassekh and I complained about scholars and other writers who misrepresent Milton Friedman as well as Adam Smith. We wrote: “It is common in many business ethics textbooks to find Smith and Friedman interpreted as follows: People should pursue their self-interests; businesses should do whatever improves their financial position, even if others are harmed; and in some way the ‘invisible hand’ ultimately makes the effects of such actions right for society. Is this interpretation correct?” Our answer was a simple “no.”

Prior to publishing our paper, my co-author sent a draft of the paper to Friedman, who replied, “As you recognize, I have been very unhappy about some of the interpretations that have been placed on my position.”

After studying Friedman’s writings, Rassekh and I concluded the following:

“Although Friedman argues that business executives should focus on profit maximization, he does not condone all behaviors that increase financial returns. Quite explicitly, he places four restrictions on profit seeking: Business people must obey the law, follow ethical customs, commit no deception or fraud, and engage in open and free competition. The last restriction means political rent-seeking and anti-competitive behavior in any form must be avoided. For Friedman, social responsibility means pursuing one’s interests (such as making a profit) without adversely interfering with the freedom of others, so that everyone can freely enter into agreements ‘with their eyes open.'”

There are many things businesses have done in the interest of maximizing profits that Friedman would never have condoned because they violate ethical requirements or other conditions he places on businesses, such as mistreating workers, discharging harmful pollutants into the environment, withholding information about dangers created by their products, and abandoning communities in order to produce in lower-cost countries.

I wonder what would have happened if businesses in fact followed the Friedman Doctrine as Friedman actually declared it, and if commentators accurately represented Friedman’s position on the issue. I suspect the whole debate about stockholders versus stakeholders would have ended years ago and anything the Business Roundtable said today about the issue would have been a nonevent.


Unequal wages and claims of unfairness

A recent story on NPR, “2-Tiered Wages Under Fire: Workers Challenge Unequal Pay For Equal Work,” describes how workers at a US plumbing manufacturer responded after learning that some workers were paid differently than others for doing similar work. Simply stated, those paid less than their peers were not happy. According to the NPR story, “[One worker] says the unequal wages caused friction in the [company’s] distribution center, where newer, lower-paid workers grumbled at being asked to perform the same tasks for less money.”


After the “Great Recession” in the late 2000s, many companies changed their hiring practices, so that workers hired after the Recession were paid less than workers hired before the Recession. Even after the Recession ended, the two-tier wage structure persisted.

Is it fair that workers doing the same work receive different wages, even after controlling for time on the job?

Business executives might argue the situation or practice is fair. If workers agree to work at a wage of, say, $10 an hour, then how can they say that is not fair? Workers willingly accepted the terms of the contract. If they don’t like those terms, then they can work somewhere else. Besides, isn’t it better to be employed, even at a lower wage, than to be unemployed?

This is the challenge of dealing with issues of fairness: perceptions of fairness are subjective. What seems fair to you might seem unfair to me, and such subjective assessments are difficult to adjudicate. Which side is right? It depends on which side you ask. Assessing unfairness claims is also difficult when people adopt different conceptual or theoretical frameworks, such as principles of procedural or distributive justice or the theories of John Rawls or Robert Nozick, because different perspectives weight elements of the problem differently (e.g., something is fair if the rules are followed, or if the outcome is the same for everyone, or if the size of the reward is commensurate with the effort expended).

Asking which side is right misses the problem. Just because one person makes a good case that something is fair doesn’t mean that the other side’s perspective is without merit. Is there a way to assess objectively the merit of unfairness claims, such as those made by workers paid less than their peers?

Mary Hendrickson and I, along with our colleagues, have been working on this problem for a while. We have developed an innovative approach for assessing claims of unfairness that does not require the a priori selection of a specific theory or conceptual framework. Instead of picking our favorite theorist or theoretical perspective, we focus on the expectations that individuals have. Expectations are important because claims of unfairness usually arise when expectations are violated. For example, if my student expects an A in the class but receives a C, then it would not be surprising to me if she complains. Similarly, if I get pulled over for speeding and expect to get off with a warning but instead get a ticket, I might claim, “That’s not fair!”

In a series of papers (“Power, Fairness and Constrained Choice in Agricultural Markets: A Synthesizing Framework” and “The Assessment of Fairness in Agricultural Markets“), we show that evaluating the reasonableness of expectations is a way of assessing claims of unfairness, because reasonableness can often be evaluated objectively. If a person’s expectations are reasonable and those expectations are violated, then the resulting claim of unfairness has merit. Conversely, if a person’s expectations are not reasonable, then any claim of unfairness resulting from a belief that those expectations were violated has no merit. For example, suppose an outside observer asked my student who received the C why she expected an A, and she said, “because I worked hard and attended class every day.” If the observer then reviewed my course syllabus, which clearly states that a grade of A is awarded only to students earning 90 percent or higher on all tests (and makes no reference to “effort” or “attendance”), he or she would likely conclude that the student’s expectations were not reasonable and hence that her claim of unfairness was without merit.

Several conditions can provide a reasonable basis for expectations. The first is equal treatment of equals. In our work we refer to this as “structural equivalence”. People who are structurally equivalent to others, that is, who are in the same position and doing the same work, would reasonably expect to be treated the same. The second is based on the idea of time consistency. For example, someone who has received a year-end bonus for many years would reasonably expect a year-end bonus this year. The third is rights, which by definition determine expectations. If I have a right to vote, then it is reasonable for me to expect to be allowed to do so. We discuss other ideas in our papers as well.

Do the plumbing workers’ complaints about unequal wages have merit? Stated differently, is it reasonable for workers to expect to be paid the same as their peers for doing similar work, given how long they have worked at the company? I think it is. Why wouldn’t it be? If two workers are doing the same work for the same amount of time but one is paid more than the other, then the worker receiving the lower wage has a valid claim that the wage structure is not fair.

Apparently the plumbing manufacturer agreed, too. A worker strike and a tight labor market brought labor and management to the bargaining table, with the company agreeing to phase out the two-tier wage structure within the next five years. According to the article, it was leverage that convinced the company to pay fair wages. Workers didn’t have leverage during the recession, but they have it now.

As great as leverage and other economic incentives are in moving businesses and people to fairer outcomes, it would be nice if instead they did that simply because it was the right, or fair, thing to do.

Corruption, 2018

Transparency International, the non-governmental organization responsible for the Corruption Perceptions Index (CPI), has released its new findings for 2018 (here). The CPI “ranks 180 countries and territories by their perceived levels of public sector corruption according to experts and businesspeople, [using] a scale of 0 to 100, where 0 is highly corrupt and 100 is very clean.”


The top three spots are held by Denmark, New Zealand, Finland, Singapore, Sweden and Switzerland (there are ties at position #3). Compared to 2017, Denmark and New Zealand swapped places, while Singapore and Sweden bumped Norway out of the top three. Denmark’s score remained the same while New Zealand’s fell by 2 points.

The United States was ranked #16 in 2017 but dropped to #22 in 2018, its CPI score falling by 4 points. Its score of 71 is its lowest since 2012; its neighbors on the list are France, the United Arab Emirates and Uruguay.

We often focus on countries at the bottom of the CPI, such as Sudan, North Korea, Yemen, South Sudan, Syria and Somalia. These countries deserve attention. Corruption in those countries is compounded by violence and political unrest.

But countries at the top of the CPI are not perfectly clean, either. An analysis by Transparency International, entitled “Trouble at the Top: Why High-scoring Countries Aren’t Corruption-Free,” describes cases of money laundering, bribery and other instances of public and private malfeasance. Part of the problem is that these countries are home to large multinational corporations that export many goods and services, and that “most of these countries are failing to investigate and punish companies when they are implicated in paying bribes overseas.”

It’s important that we promote transparency, the rule of law, democratic processes and leaders of integrity. But it’s probably more important that we care. Vice thrives in an environment of indifference and distraction. Caring means paying attention to reports of corruption and to what goes on in the world, including in our own backyards. Less important is the latest show to binge-watch on Netflix or what’s happening in our Facebook feeds.

Who should control CAFOs?

In the U.S., a concentrated animal feeding operation (or CAFO) is a livestock farm where animals are “confined on site for more than 45 days during the year.” What makes such operations “large scale” depends on the type and number of animals — “1000 head of beef cattle, 700 dairy cows, 2500 swine weighing more than 55 lbs, 125 thousand broiler chickens, or 82 thousand laying hens.”
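Those size thresholds amount to a simple lookup table keyed by animal type. A minimal sketch, with category labels of my own choosing (the counts are the ones quoted above):

```python
# Large-scale CAFO thresholds as quoted in the text.
# The dictionary keys are my own labels, not official regulatory terms.

LARGE_CAFO_THRESHOLDS = {
    "beef_cattle": 1_000,
    "dairy_cows": 700,
    "swine_over_55lbs": 2_500,
    "broiler_chickens": 125_000,
    "laying_hens": 82_000,
}

def is_large_scale(animal_type: str, head_count: int) -> bool:
    """True if a confined operation meets the large-scale threshold."""
    return head_count >= LARGE_CAFO_THRESHOLDS[animal_type]

print(is_large_scale("swine_over_55lbs", 3_000))   # True
print(is_large_scale("dairy_cows", 500))           # False
```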

According to the U.S. Department of Agriculture’s (USDA’s) recent Census of Agriculture (here), there were 93.6 million cattle and calves residing on nearly 883,000 farms in 2017. Approximately 1.25% of those farms were “large scale,” and they controlled more than 38% of all cattle and calves (in 2012, 1.15% of cattle farms were large scale). In the case of hogs, more than 93% of the 72.4 million hogs raised in 2017 were on farms with at least 2,000 animals; 12.5% of the country’s 66,000 hog farms were that large (compared to 12.2% in 2012).


CAFOs, especially large ones, are controversial for many reasons, but mostly because of the impact they have on the environment. These operations produce millions of tons of manure every year. Even when properly managed, the risks to the environment and public health are significant. For example, the U.S. Environmental Protection Agency (EPA) says that “Manure and wastewater from AFOs have the potential to contribute pollutants such as nitrogen and phosphorus, organic matter, sediments, pathogens, hormones, and antibiotics to the environment.” Operators of CAFOs are required to follow a compendium of federal and state rules.

Since federal and state rules govern the operation of CAFOs, should local governments also have a regulatory say? Stated differently, should states prohibit local governments from regulating or controlling the placement and operation of CAFOs in or near their communities?

On May 2, 2019, the Missouri Senate “passed a bill to block local officials from regulating industrial farms more strictly than the state does.” The measure, Senate Bill 391, states that “county commissions and county health center boards shall not impose standards or requirements on an agricultural operation and its appurtenances that are inconsistent with or more stringent than any provisions of law, rules, or regulations relating to the Department of Health and Senior Services, environmental control, the Department of Natural Resources, air conservation, and water pollution.” The Missouri House will now weigh in on the measure, a vote that could come as early as May 17.

On the one hand, placing restrictions on local control of CAFOs provides a uniform policy to producers within the state and allows states to balance the benefits from the efficient production of beef and pork, as well as dairy, broilers and eggs, with the environmental, health and other costs. Estimating benefits and costs might also be easier at a macro than at a local level. On the other hand, the direct environmental impacts of CAFOs are almost always felt at the local level. It’s hard to ignore the complaints of neighbors who suffer because of the stench of a pig manure lagoon a mile down the road, unless you are a state legislator who doesn’t live near one.

Another way to think about this is as a battle between economic interests and quality of life. Are the economic benefits of having a large hog operation in the community worth the unpleasantness of living near one? Not surprisingly, states usually opt in favor of large-scale agriculture, especially if large agribusinesses have well-paid lobbyists.

The issue in Missouri is not new. A 2006 story in the Chicago Tribune entitled “Hog Wars: Missourians Raise Stink Over Giant Operations” tells how communities were being divided over proposed CAFOs. One state representative quoted in the story says, “It’s the new Civil War.” Local communities want to regulate, but “Agribusiness interests … are alarmed by this rural insurrection and have been pressuring the state legislature to outlaw such bans.” I guess it is finally coming to pass.

This tension between the interests of the many (via the State) and the concerns of the few (via local communities) is as old as time. It also hearkens back to the conflict between utilitarian and Kantian perspectives. A utilitarian position might favor the operation of CAFOs, especially those that improve efficiency and are effective in controlling pollution, such as by utilizing manure as an energy source. Videos explaining efficiency improvements in pork production and related issues are here, and a National Geographic article on turning manure into energy is nicely titled: “Harnessing the Power of Poo: Pig Waste Becomes Electricity.” A Kantian position will often support the perspective of local communities. Local governments argue that because the operations are in their communities, principles of autonomy and rights favor their having some say in how the operations run, or even whether they should operate at all. After all, if states benefit from CAFOs, should they do so at the cost of local communities?

Reconciling utilitarian and Kantian dilemmas is not easy. But the problem with CAFOs is largely one of our making. We like meat. We like it cheap. And we eat lots of it. If we ate less meat then the economic justification for CAFOs might be weakened. That’s a tough call for someone looking forward to his next burger.

The final exam game

You are taking a college course. Near the end of the semester, the professor reminds the class that the final exam is worth 100 points and is scheduled for the day and time specified by the University. The professor then says there are two options for the final exam.

Option 1 is a regular final exam–the kind you are accustomed to having. You come to class on the day and time of the exam and take the test. Whatever score you get on the exam will be your final exam grade.

Option 2 is the following “final exam game”: If no one shows up to class on the day and time of the exam, then everyone in the class will receive a 90 on the test (out of 100 points). However, if any students do show up, then they will take the test and the score they receive will be their final exam grade, while anyone who did not show up will receive a 0 on the final because they did not come to class to take the exam.

The professor says he will only allow option 2 (the “final exam game”) if 100% of the class agrees to the proposal. If even one student chooses option 1, then the class will have a regular final exam (option 1).
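For those who like to formalize such things, option 2 is a coordination game, and its equilibria can be enumerated with a short script. This is a sketch under assumptions of my own (each student has a known expected test score, and payoffs are simply exam points), not anything stated in the scenario:

```python
# A minimal sketch of the "final exam game" under option 2.
# Assumption (mine): student i would score scores[i] if they sat the exam.

from itertools import product

def payoffs(shows, scores):
    """shows: tuple of booleans (True = student shows up).
    Returns each student's final-exam points under the game's rules."""
    if not any(shows):                # nobody shows: everyone gets 90
        return [90] * len(shows)
    # someone showed: attendees get their score, absentees get 0
    return [s if went else 0 for went, s in zip(shows, scores)]

def nash_equilibria(scores):
    """Enumerate pure-strategy Nash equilibria for a small class."""
    n = len(scores)
    eqs = []
    for profile in product([False, True], repeat=n):
        pay = payoffs(profile, scores)
        stable = True
        for i in range(n):
            flipped = list(profile)
            flipped[i] = not flipped[i]       # student i deviates
            if payoffs(tuple(flipped), scores)[i] > pay[i]:
                stable = False
                break
        if stable:
            eqs.append(profile)
    return eqs

# Three students who would each score 80 on the real exam:
print(nash_equilibria([80, 80, 80]))
# -> [(False, False, False), (True, True, True)]
```

With everyone expecting less than 90, both “all stay home” and “all show up” are equilibria, which is exactly why coordinating on option 2 feels risky. And note that a single student who expects to score above 90 destroys the stay-home equilibrium entirely.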

So, as a student in the class, do you choose option 1 or option 2? Does it matter how many students are taking the class? If you like the idea of not having to take a final exam, and if you have an opportunity to talk with others in the class, then how do you convince everyone to select option 2?

Incidentally, there is at least one strategy for convincing the class that option 2 is not risky, and it doesn’t involve coercion or threats of violence.