Establishing a Cause-Effect Relationship
How do we establish a cause-effect (causal) relationship? What criteria do we have to meet? Generally, there are three criteria that you must meet before you can say that you have evidence for a causal relationship:
Temporal Precedence
First, you have to be able to show that your cause happened before your effect. Sounds easy, huh? Of course my cause has to happen before the effect. Did you ever hear of an effect happening before its cause? Before we get lost in the logic here, consider a classic example from economics: does inflation cause unemployment? It certainly seems plausible that as inflation increases, more employers find that in order to meet costs they have to lay off employees. So it seems that inflation could, at least partially, be a cause for unemployment. But both inflation and employment rates are occurring together on an ongoing basis. Is it possible that fluctuations in employment can affect inflation? If we have an increase in employment (i.e., lower unemployment) we may have more demand for goods, which would tend to drive up the prices (i.e., inflate them) at least until supply can catch up. So which is the cause and which the effect, inflation or unemployment? It turns out that in this kind of cyclical situation, involving ongoing processes that interact, each may both cause and, in turn, be affected by the other. This makes it very hard to establish a causal relationship in this situation.
Covariation of the Cause and Effect
What does this mean? Before you can show that you have a causal relationship you have to show that you have some type of relationship. For instance, consider the syllogism:
if X then Y
if not X then not Y
If you observe that whenever X is present, Y is also present, and whenever X is absent, Y is too, then you have demonstrated that there is a relationship between X and Y. I don’t know about you, but sometimes I find it’s not easy to think about X’s and Y’s. Let’s put this same syllogism in program evaluation terms:
if program then outcome
if not program then not outcome
Or, in colloquial terms: if you give a program you observe the outcome but if you don’t give the program you don’t observe the outcome. This provides evidence that the program and outcome are related. Notice, however, that this syllogism doesn’t provide evidence that the program caused the outcome — perhaps there was some other factor present with the program that caused the outcome, rather than the program. The relationships described so far are rather simple binary relationships. Sometimes we want to know whether different amounts of the program lead to different amounts of the outcome — a continuous relationship:
if more of the program then more of the outcome
if less of the program then less of the outcome
No Plausible Alternative Explanations
Just because you show there’s a relationship doesn’t mean it’s a causal one. It’s possible that there is some other variable or factor that is causing the outcome. This is sometimes referred to as the “third variable” or “missing variable” problem and it’s at the heart of the issue of internal validity. What are some of the possible plausible alternative explanations? Just go look at the threats to internal validity (see single group threats, multiple group threats or social threats) — each one describes a type of alternative explanation.
In order for you to argue that you have demonstrated internal validity — that you have shown there’s a causal relationship — you have to “rule out” the plausible alternative explanations. How do you do that? One of the major ways is with your research design. Let’s consider a simple single group threat to internal validity, a history threat. Let’s assume you measure your program group before they start the program (to establish a baseline), you give them the program, and then you measure their performance afterwards in a posttest. You see a marked improvement in their performance, which you would like to infer is caused by your program. One of the plausible alternative explanations is that you have a history threat — it’s not your program that caused the gain but some other specific historical event. For instance, it’s not your anti-smoking campaign that caused the reduction in smoking but rather the Surgeon General’s latest report that happened to be issued between the time you gave your pretest and posttest. How do you rule this out with your research design? One of the simplest ways would be to incorporate the use of a control group — a group that is comparable to your program group, with the only difference being that they didn’t receive the program. But they did experience the Surgeon General’s latest report. If you find that they didn’t show a reduction in smoking even though they did experience the same Surgeon General report, you have effectively “ruled out” the Surgeon General’s report as a plausible alternative explanation for why you observed the smoking reduction.
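The control-group logic can be sketched numerically. The smoking rates below are hypothetical, invented purely for illustration; what matters is the comparison of the pre-post changes in the two groups, since both groups experienced the same historical event (the Surgeon General’s report).

```python
# Hypothetical pre/post smoking rates (percent of each group who smoke),
# invented purely for illustration.
program_pre, program_post = 30.0, 22.0   # got the campaign AND saw the report
control_pre, control_post = 31.0, 30.5   # saw only the report

program_change = program_post - program_pre   # change in the program group
control_change = control_post - control_pre   # change in the control group

# If the Surgeon General's report alone explained the drop, the control
# group should show a comparable decline. The difference between the two
# changes estimates the program's effect over and above shared history.
effect = program_change - control_change
print(f"estimated program effect: {effect:.1f} percentage points")
```

Here the control group barely changed, so the history threat is implausible as an explanation for the program group’s decline; the bulk of that decline can more credibly be attributed to the campaign itself.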
In most applied social research that involves evaluating programs, temporal precedence is not a difficult criterion to meet because you administer the program before you measure effects. And, establishing covariation is relatively simple because you have some control over the program and can set things up so that you have some people who get it and some who don’t (if X and if not X). Typically the most difficult criterion to meet is the third: ruling out alternative explanations for the observed effect. That is why research design is such an important issue and why it is intimately linked to the idea of internal validity.