Thinking Critically about Research

Sometimes what we want to be true gets in the way of knowing what is true. For example, last year an association of physicists invited its members to engage in a debate concerning the Intergovernmental Panel on Climate Change’s (IPCC) conclusion regarding man-made global warming. In inviting articles for its professional journal, the association said, “We will not publish articles that are political or polemical in nature. Stick to the science!”

It is no secret that what passes for policy debate today is too often little more than ad hominem attacks, obfuscation, and the like. But when even physicists need to be reminded to stick to the substance, and at a time when many are asserting that scientific research will play a bigger role in the policy process, perhaps we should remind ourselves of what we mean by research.

Research on public utilities can be characterized as positive or normative. Positive research describes what is. If positive research is empirical, then the study either applies statistical analyses to explain how something has happened, or applies simulations to predict what will happen. This empirical research will stand or fall on the validity of its inputs and the soundness of its quantitative techniques. If the research is theoretical, the researcher uses a set of presuppositions and logical steps to reach a conclusion, and the study will stand or fall based on the validity of those assumptions and the soundness of its reasoning.

In contrast, normative research argues for what should be, relying on a set of objectives, assumptions, and logical steps. Normative research stands or falls based on the applicability of the goals the author embraces, the validity of its assumptions, and the soundness of its logic.

There is a pattern here. All research can be validated or refuted based on the quality of its inputs, presuppositions, and analytical techniques. The only time the researcher’s personal preferences should matter is when he or she advocates a particular policy. For example, economists often support policies that maximize efficiency, but as one of my professors emphasized very strongly when I was an undergraduate, a college degree doesn’t improve one’s values: An average citizen’s preference for stability is just as important as an economist’s preference for reducing costs.

Generally, research contains both positive and normative elements. For example, a recent PURC project surveyed Floridians about their telecommunications use and their participation in the Lifeline program, which offers low-income households discounts on their telephone service. The research was largely positive, focusing on what people told us, but it also had a normative element in that we made suggestions about what policy makers might do as a result of our findings.

What does this mean for the ways in which research can inform policy? First, it clarifies what the appropriate practices are for challenging research: the positive conclusions of research are, by definition, true if the data/inputs/assumptions are valid and the logic/quantitative techniques are sound.

Second, it helps us put criticisms in context. Recently, the head of an institute stated publicly that policy makers should listen to him because he is more trustworthy than others. In another meeting, a scientist dismissed those who disagreed with him as “jokers” and “not serious” without once pointing to any flaws in their work. Such personal attacks raise a red flag: perhaps the work being defended cannot stand up to scrutiny, or the work being dismissed has merit and threatens the policies being advocated.

Where does this leave our physicist friends? Hopefully, they will clearly distinguish their “what-is” research from their “what-we-want” advocacy. Utility policy could benefit from that kind of clarity.


On Research

A commissioner recently asked how much research costs. That’s a good question. Over the past few years, less than a fourth of PURC’s research was sponsored by someone, which means PURC faculty spend a lot of uncompensated time pursuing ideas. Why would someone invest so much time writing papers that no one requested, and that require extensive effort to get published?

Frankly, research is simply in some people’s DNA. We academics are unlikely to use our time in airport terminals editing a legal brief or organizing a project.

We will be found, however, trying to flesh out an idea, testing the validity of received wisdom, or teasing out insights that might be as obscure-sounding as which lessons from radio spectrum auctions apply to carbon emission cap-and-trade schemes. When analyzing problems and writing, we feel a rush – we “get in the zone” – just as anyone who enjoys their work does.

Those of us for whom research is largely internally motivated engage in our work because we feel a compulsion to work through something complicated, to work on ideas we think are important, or to produce a paper that will get noticed or make a difference.

So if we’re selecting topics in this manner, does that mean that even positive research – research that is supposed to simply describe and not persuade – has an agenda? Absolutely. Time is limited and we have to set priorities. At any particular point in time, I have at least 10 papers in various stages of development, and I have to decide which ones get my attention first. For example, one paper-in-progress uses recent advances in neuroscience to explain dysfunctions in the policy making process. For the last three years the paper has sat unfinished because other topics, such as Lifeline participation, broadband development, and electric line undergrounding, have been more urgent.

If academic research, or any research, has underlying agendas, how can we trust it? First, we need to subject the work to the scrutiny that I describe in “Thinking Critically about Research,” namely that one must check to see if the inputs/assumptions/data are valid, if the logic/analyses are sound, and in the case of normative research, if the policy objectives are acceptable. If the research fails on any of these points, then we should reject at least some of its conclusions.

What if the research is so highly technical that it is understandable only to a small audience? Then two things come into play. The first is peer review, which is the process by which academic research is subjected to review by technical experts who scrutinize its merit. If the reviewers do their job, the final paper is correct for the situation to which it claims to apply. Peer review isn’t perfect, and sometimes the academic community gets caught in paradigms or its own received wisdom, but the process appears to work more often than it fails.

The second solution to the understandability problem is to challenge researchers to give intuitive explanations. Most policy research is about actual problems experienced by real people, and so should contain coherent stories that relate to life experiences. For example, although real option theory is highly technical and applying it correctly requires good mathematical skills, the basic intuition makes perfect sense: If a firm investing in generation is uncertain about which regulatory policies will apply once the plant is operational, would the firm be willing to pay money today to resolve some of the uncertainty? Of course, the answer is yes.
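To make that intuition concrete, here is a minimal numerical sketch in Python. The figures are invented purely for illustration, and the setup is deliberately simplified (one regulatory ruling, two outcomes, no discounting), so it captures only the value-of-information flavor of the real options argument, not the full theory.

```python
# A minimal numerical sketch of the real-options intuition above.
# All figures are invented for illustration; they do not come from any actual study.

# Suppose a generation plant costs 100 to build. Once it operates, a favorable
# regulatory ruling yields a payoff of 180, an unfavorable ruling yields 60,
# and each outcome is equally likely.
cost = 100
payoff_favorable = 180
payoff_unfavorable = 60
p_favorable = 0.5

# Invest now, before the uncertainty is resolved: accept the expected payoff.
invest_now = (p_favorable * payoff_favorable
              + (1 - p_favorable) * payoff_unfavorable
              - cost)                      # 0.5*180 + 0.5*60 - 100 = 20

# Learn the ruling first, and invest only if it turns out to be favorable.
invest_informed = p_favorable * (payoff_favorable - cost)   # 0.5*80 = 40

# The difference is the most the firm should be willing to pay today to
# resolve the uncertainty -- the value of the option to wait for information.
option_value = invest_informed - invest_now

print(f"Expected value, investing now:          {invest_now}")
print(f"Expected value, investing when informed: {invest_informed}")
print(f"Value of resolving the uncertainty:      {option_value}")
```

Even in this toy setting the firm would pay up to 20 today to resolve the regulatory uncertainty, which is the sense in which the answer to the question above is “of course, yes.”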

Still, people worry that research sponsors bias research results. Sometimes the concern is well founded. Yet in my experience, my colleagues at utility centers at other universities follow the same rule that we follow at PURC: if someone sponsors research, they have the right to identify the topic if they so choose, as well as to comment on the work, but we make public everything we find, regardless of whose ox gets gored, and we take sole responsibility for the outcomes. That can be risky for sponsors, and it is why much research is done instead by consulting firms or in-house think tanks. However, all sponsors with whom I have worked believe these rules add value.

Finally, centers like PURC find important synergies between our research and our educational outreach. The real world is a far more interesting place than any I could invent, so my colleagues and I value our engagements with working professionals, who present us with problems and ideas that we could not have come up with on our own. Also, research informs our teaching, so that our training program participants engage with people who are actively on the cutting edge of problem solving.

I think the relevant question is not how much research costs, but what exactly leads to good research. It certainly takes funding, but it also takes the right sort of people, a stimulating environment, a system that ensures integrity, and a devotion to learning.