A post on the Scientific American blog asks, "Is Economics More Like History Than Physics?" I do not think that this is a
useful question to ask, in that it implies that we must choose one approach or
the other. A much better question is how we might use each kind of approach
(historical and scientific), what insights there are to be gained from each,
and what pitfalls each presents. This is really a two-sided argument (or
perhaps a two-front war): there is nothing inherently wrong with the use of
mathematical tools to analyze economic questions, and these methods have
generated many valuable insights; but non-mathematical methods may also be
useful.

First, in support of the math. The best argument I can give against the categorical rejection of mathematical models for social phenomena is this: anyone making any kind of argument about a social phenomenon (as critics of mathematical modeling often do) is doing essentially the same things that would be done in a mathematical argument. That is, the arguer identifies an issue, focuses on some facts of interest while discarding some of the detail, and draws a logical conclusion from this set of facts. The real benefit of a mathematical analysis is that it makes a logical argument more precise, drawing attention to the mechanics of the argument and exposing flaws. The mathematical analysis often adds value to the logical argument itself, and it is generally worth exploring this possibility. The same goes for the use of numerical data.

Although social phenomena tend to be complex, sometimes
staggeringly so, this does not preclude the use of tools that are also employed
in the analysis of physical phenomena. Certainly we cannot understand the
macroeconomy in all its complexity, and any macroeconomic inferences we draw or
predictions we make will necessarily be imprecise; but you can say exactly the
same thing about meteorology. Analysis of either physical or social phenomena
involves gathering data, building models according to some set of principles,
and weighing models based on explanatory or predictive power within some margin
of error. Between the social and the physical, there may be differences in the
precision with which we can capture the causes and effects in which we are
interested, but there is not a fundamental difference in the process or in what
we might hope to achieve.

On the other hand, a danger of such modeling is that it
lends a pretense of objectivity to an argument. There is a temptation to think
that if the math is sound, the argument must be correct, when in fact there is
always a question of whether the model is applicable to the actual issue that
it purports to explain. This is not necessarily a problem, just a possibility;
authors and referees do tend to address this issue when writing and reviewing
pieces of research. At its worst, the math becomes a thing in itself and
distracts from consideration of the most useful way to address a question.
There also seems to be a bit too much focus on finding analytical results that
are counter-intuitive. Of course it is interesting if a model produces such a
result, but if not, this does not mean that the question was not
interesting or important or that it was not worthwhile to use a mathematical
model to sort it out.

Similarly, undue attention to a specific level of
statistical significance can draw focus away from what statistical analysis is
supposed to achieve. A study that finds no significant effect, i.e. one that
fails to reject a null hypothesis, tends not to be publishable, even though
such a result may be very important. With all the data available to us
nowadays, there are so many questions we can answer, but there are also all
kinds of spurious correlations waiting to be found. Thus, for any given null
hypothesis, it would be useful to know about studies testing that
hypothesis that fail to reject it. Furthermore, for any empirical demonstration of an
apparent relationship between observable variables, it is more important than
ever to have a narrative that makes sense of that relationship. Such a
narrative is often based on qualitative information.
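The spurious-correlations point can be made concrete with a small simulation (a hypothetical illustration, not taken from any study discussed here): if we test enough pairs of completely unrelated data series at the conventional 5% level, about 5% of them will look "significant" by chance alone.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def spurious_fraction(n_tests=1000, n_obs=30, threshold=0.361, seed=0):
    """Generate n_tests pairs of independent random series and return the
    fraction whose |r| exceeds the critical value (0.361 is roughly the
    two-sided 5% cutoff for n = 30 observations)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_tests):
        xs = [rng.gauss(0, 1) for _ in range(n_obs)]
        ys = [rng.gauss(0, 1) for _ in range(n_obs)]
        if abs(pearson_r(xs, ys)) > threshold:
            hits += 1
    return hits / n_tests

if __name__ == "__main__":
    # Close to 0.05 even though no pair has any real relationship.
    print(spurious_fraction())
```

None of these "discoveries" reflects a real relationship, which is why a published rejection of a null hypothesis is more informative when we also know how many similar tests failed to reject it.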

A combination of qualitative and quantitative analysis
can be especially compelling, and a stand-alone non-mathematical argument or
analysis can be interesting and illuminating in its own right. In some cases, a
logical argument accomplishes as much as a mathematical version of the argument
would. Some questions are not amenable to any kind of mathematical analysis
(examples below) and are better approached with some other method:
historical, ethnographic, etc. A pitfall of qualitative arguments is that it is easier to tell
stories that are superficially appealing but do not hold up under closer
scrutiny. One who wants to expound on a topic without having to think it
through very carefully would not be interested in the precision that
mathematical analysis demands. However, as with the perils of mathematical analysis,
this is not a necessary problem, just a potential one, and it is the kind of
thing that tends not to survive peer review.

Here is an example that has shown up on this blog a couple
of times: *Republic, Lost* by Larry Lessig. Although he describes it as a
non-academic book, he makes a thorough, detailed argument, albeit one that cannot really be made
mathematically. His thesis is that the dependency of U.S. elected officials on campaign donations has a corrupting influence on the democratic
process. He presents a lot of information and puts it together into a very
compelling narrative. There is empirical research about the influence of money
in politics, most of which fails to find an influence, but this is attributable to the limited availability of quantitative data. The parts that are
straightforward to quantify are campaign donations and floor votes in the House
and Senate. However, as Lessig argues, floor votes are a small part of the
lawmaking process. A bill must be proposed in the first place, must get through a
committee, and so on, and favoritism toward donors can influence every part of this
process. There is thus an argument to be made, but it cannot be made
quantitatively, at least not with currently available data.

I have two substantial papers with no math in them, and
while I would not claim that they are exemplars of qualitative research, I
think that each one is worthwhile.

- "Self-interest vs. Greed and the Limitations of the Invisible Hand," recently published in the *American Journal of Economics and Sociology*, makes the point that greed is only guaranteed to have positive consequences if markets operate within an ideal system of laws and enforcement, i.e. if the legal system is able to constrain any behavior that might be detrimental to value creation. I present three episodes from recent history where such constraints were not in place and greed had negative effects. To make this argument quantitatively, there would be two ways to go: a theoretical model or an empirical argument. I did not even think about creating a model to answer this question because I knew that there would not be anything interesting about the workings of the model, and that the results would come straight from the assumptions: it would depend entirely on how I parameterized greed. The interesting part is justifying the model as the right way to think about the situation, which is essentially what I did in the paper. Gathering data on greed and its effects might be possible in theory, but I would expect that any means of doing so would be open to major objections. Even if I could get some data to work with, I think any empirical analysis would raise more questions than it would answer and would only distract from the point I was trying to make.

- "Public versus Private Information Provision" (to which I also refer in this post) presents a framework for evaluating whether it is better for the government to regulate a given market for informational reasons, or whether this role is better left to the market. The argument I present could certainly be translated into a mathematical model, but as in the above example, the mechanics of the model would not be interesting, and the other issues are more important: why this is the right model for this question and what this model is telling us about the situation modeled. I did not posit a formal model because the triviality of the model would be a target for criticism. As in the greed paper, I would not want unnecessary mathematical detail to distract from the point of the paper. This one has been revised and resubmitted to *Economic Papers*, one of the few peer-reviewed economics journals that accept non-mathematical work.

This all raises the question of what research really is, or where we draw the line between research and other activities (like journalism or blogging), however useful those activities may be. "That which is peer reviewed" does not answer the question, because then we have to decide what the criteria for peer review should be. One might say that research must be careful and thorough, that it must be well thought out in itself as well as put into its context among existing literature. Even if we agree about that statement, there can be reasonable disagreement about how it applies to any given piece of work. But surely the use of math should not be a decisive criterion. While it is hard to deny that mathematical economics has dramatically increased our understanding of economic phenomena, the use of mathematics could not possibly be either a sufficient or a necessary condition for intellectual rigor.
