Causal Hypotheses in Medicine

Professor of Philosophy of Science John Worrall on the difference between correlation and causation, controlled experiments and the placebo effect

videos | December 9, 2020

What I want to talk about is causal hypotheses in medicine (claims like ‘cigarette smoking causes cancer’ or ‘a certain drug causes relief from pain, amelioration of symptoms, or cure’) and to think about what those causal claims amount to and how they can be tested experimentally.

The first thing to notice is that they’re funny causal claims in that they’re not deterministic. Let me just explain what that means. Think about Newton’s second law of motion, which says that the total force acting on any body is equal to the mass of that body multiplied by its acceleration, best read (although Newton didn’t write it this way) as a = F/m: the acceleration equals the total force acting on the body divided by its mass. That’s completely deterministic. If any body of a given mass has a certain total force acting on it, then it’s bound to have the acceleration dictated by that principle. So it’s one input in and a definite output out.

Causal hypotheses like ‘smoking causes lung cancer’ are not like that, because we all know lots of people who smoke two or three packs of cigarettes a day and don’t die of lung cancer, although they’re very likely, if they live long enough at any rate, to have some smoking-related disease. So it’s not deterministic; it’s stochastic rather than fully deterministic. These sorts of hypotheses are all over the place: if you’re driving down the motorway in England, you see signs saying ‘Tiredness kills’, meaning ‘tiredness causes accidents that may be fatal’. Again, nobody says that every tired driver is going to be involved in an accident: it’s something to do, obviously, with increased probability. That’s a stochastic causal hypothesis: you have a bigger chance of getting lung cancer if you smoke than if you don’t smoke.

So they’re probabilistic or stochastic causal hypotheses, but, of course, the probabilistic element can’t exhaust what they say, for at least two reasons that I’ll try to make clear. One is that the relationship of raising and lowering probabilities is symmetric; that is, if the probability that you get lung cancer given that you smoke is higher than the probability of getting lung cancer if you don’t smoke, then it must be true, just by the probability calculus, that if you pick out someone who’s got lung cancer, the probability that they smoked is higher than it is for someone who hasn’t got lung cancer. So the relationship is symmetric; you can easily prove it from the probability calculus, but it’s also intuitive. When you say ‘smoking causes lung cancer’, you mean that a random person who smokes has a higher probability of lung cancer than a random person you pick who doesn’t smoke. But looking at it the other way, if you’ve got a person who has lung cancer, the probability is much higher that they will have smoked. So you’ve got this two-way relationship.
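
To make that symmetry claim concrete, here is a minimal sketch in the probability calculus; the letters S (for ‘smokes’) and C (for ‘gets lung cancer’) are labels introduced here purely for illustration, and it is assumed that 0 < P(S) < 1 and 0 < P(C) < 1.

```latex
% Symmetry of probability raising (a sketch under the assumptions above).
% Suppose smokers are more likely to get lung cancer:
\[
  P(C \mid S) > P(C \mid \neg S).
\]
% Because P(C) = P(C \mid S)P(S) + P(C \mid \neg S)P(\neg S) is a weighted
% average of the two sides, this gives P(C \mid S) > P(C).
% Bayes' theorem then yields
\[
  P(S \mid C) = \frac{P(C \mid S)\,P(S)}{P(C)} > P(S),
\]
% and, since P(S) is likewise a weighted average of P(S \mid C) and
% P(S \mid \neg C), it follows that
\[
  P(S \mid C) > P(S \mid \neg C),
\]
% i.e. someone with lung cancer is more likely to have smoked.
```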

But causal hypotheses are not two-way; they go only one way: it’s the smoking that causes the lung cancer, not the lung cancer that causes the smoking. Having lung cancer indicates that somebody is likely to have smoked, but that’s not a causal connection. So that’s one reason why causation can’t just be a matter of raised probability. The other is a famous problem that you meet in the first or second lecture of a statistics course, though it’s very often forgotten: the difference between correlation and causation.

A standard example is that the cock’s crowing is constantly correlated with the dawn: whenever the cock crows, the dawn comes soon afterwards, but of course the crowing doesn’t cause the dawn. The two events are correlated, but they aren’t causally connected.

Or take another example that I like to use: you are much more likely to die tomorrow if you were taken into hospital today than if you weren’t. Obviously, there are things like superbugs, so maybe there’s some genuine causal connection, but on the whole it isn’t the going into hospital that made it much more likely that you would die. The two events are what are called common effects of an underlying cause: you were taken into hospital because you were very ill, and it was because you were very ill that you died the next day.
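
That structure can be illustrated with a toy simulation. In the sketch below (the probabilities are invented purely for illustration), severity of illness is the underlying cause: it raises both the chance of being admitted to hospital and the chance of dying, and the two effects come out strongly correlated even though admission has no influence on death anywhere in the model.

```python
import random

random.seed(0)

admitted_deaths = admitted_total = 0
home_deaths = home_total = 0

for _ in range(100_000):
    severely_ill = random.random() < 0.05                           # the underlying cause
    admitted = random.random() < (0.90 if severely_ill else 0.02)   # effect 1
    dies = random.random() < (0.30 if severely_ill else 0.001)      # effect 2 (ignores 'admitted')
    if admitted:
        admitted_total += 1
        admitted_deaths += dies
    else:
        home_total += 1
        home_deaths += dies

print("P(dies | admitted)     ~", round(admitted_deaths / admitted_total, 3))
print("P(dies | not admitted) ~", round(home_deaths / home_total, 3))
# The first rate is far higher, yet admission causes nothing in this model:
# the correlation is produced entirely by the underlying illness.
```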

So there’s an important thing to recognize here: in order to establish causation, it’s not enough to establish an increased probability. To establish that smoking causes lung cancer, it’s not enough to establish that the probability of getting lung cancer if you smoke is higher than the probability if you don’t smoke. There’s more to causation than that, and this gives us the rationale for controlled experiments, which are very much the norm in medicine.

Let’s take another standard example: it wouldn’t be a very good test of the theory that regular doses of vitamin C cause relief from colds if you gave vitamin C for a week to a whole bunch of people suffering from colds, even if they all recovered. You know from natural history that colds generally clear up within a week in any event. So you need a contrast in order to establish causation: you would need to look at another group, who weren’t given the vitamin C, and see whether the rates of recovery were different in the two groups. And that’s standardly what you do in medicine: when you’re testing some new therapy, you don’t just give it to a bunch of people who are suffering from some condition and see what happens; you also involve another group who are not given the treatment.

Now, it’s not good enough just to have a natural history group or even a bare control group: you’ve got to think about the properties of the control group relative to the people in the so-called experimental group who have been given the new therapy. Let’s say, for example, that you’re testing the theory about vitamin C and colds, you’ve got a control group to which you’re not giving vitamin C, and more people recover in the vitamin C group than in the non-vitamin C group. That doesn’t establish causation, because there might be other differences between the two groups: the people in the experimental group might be younger than those in the control group, they may have less well-established or less severe colds, and they may be fitter in general.

So you want to control deliberately for these different factors that may also play a role, so that only when the experimental group given the new therapy does better than that carefully constructed control group can you say that you’ve got evidence that the therapy caused any improvement you observed.
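
One crude way to picture what ‘controlling for’ a known factor involves is to compare recovery rates only within matching strata of that factor. The sketch below uses made-up records and a single factor, an age band, purely for illustration.

```python
from collections import defaultdict

# Hypothetical records: (group, age_band, recovered) -- illustrative only.
records = [
    ("vitamin_c", "under_40", True),  ("vitamin_c", "under_40", True),
    ("vitamin_c", "over_40",  True),  ("vitamin_c", "over_40",  False),
    ("control",   "under_40", True),  ("control",   "under_40", False),
    ("control",   "over_40",  False), ("control",   "over_40",  False),
]

# Tally recoveries separately within each (age_band, group) cell,
# so that any comparison is made between like and like.
tallies = defaultdict(lambda: [0, 0])   # cell -> [recovered, total]
for group, age_band, recovered in records:
    cell = (age_band, group)
    tallies[cell][0] += recovered
    tallies[cell][1] += 1

for (age_band, group), (recovered, total) in sorted(tallies.items()):
    print(f"{age_band:9s} {group:9s} recovery rate: {recovered / total:.2f}")
```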

Really, what you’re doing there is you’re dealing with the problem that I mentioned before, about the cock-crowing or the people being admitted to hospital: you’re trying to make sure that there’s no underlying common cause.

But that’s not enough: even if you’ve got the same age profile, the same fitness profile, and the same degree of severity of the colds in the two groups, that still doesn’t establish that any improvement in the experimental group was caused by the vitamin C. It may well be (and this is the basis of randomized trials) that some other variable is at work: you can’t control for every conceivable variable that might have an effect. You’ve got the problem that Donald Rumsfeld used to refer to as ‘the unknown unknowns’: there might always be some other factor.

So the way that people try to deal with that is by randomizing, which is like tossing a coin: you’ve got a whole bunch of people, and you decide who goes into the experimental group and who goes into the control group in effect by tossing a coin or by using a random number generator. That is supposed to ensure that, at least probabilistically, the experimental and control groups are similar with respect to all factors: both those that you know might have an impact on whether you recover from your cold and those that you don’t know about. There’s a lot of discussion as to whether this is a genuine rationale; that’s an interesting area of current research.
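
As a concrete illustration of the allocation step, here is a minimal sketch (participant names and group sizes are invented) that uses a random shuffle as the electronic analogue of tossing a coin for each person.

```python
import random

# Hypothetical pool of recruited participants (names are placeholders).
participants = [f"participant_{i:02d}" for i in range(1, 21)]

random.seed(42)               # fixed seed so the split is reproducible
random.shuffle(participants)  # the electronic analogue of tossing a coin

half = len(participants) // 2
experimental_group = participants[:half]   # will receive vitamin C
control_group = participants[half:]        # will receive the placebo

print("Experimental:", experimental_group)
print("Control:     ", control_group)
# Over many such allocations, known and unknown characteristics should be
# distributed similarly between the two arms, at least probabilistically.
```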

The other thing I should mention about these trials is that they’re also invariably placebo-controlled. Why should they be placebo-controlled, and what does that mean? Well, ‘placebo’ comes from the Latin for ‘I shall please’: it’s an inert substance that nevertheless looks identical to the drug. Let’s take the straightforward case of a pharmaceutical trial. Even if the two groups you’re looking at, your experimental and control group, are the same in all pretrial factors, if you just used a natural history group, leaving the control group untreated and seeing how they managed with their colds, it might be that, although the vitamin C has no efficacy, simply being given a treatment by a doctor, an authority figure, makes people expect to get better. It’s very well established that expectations of relief do, in fact, produce relief, at least in respect of certain conditions.

So you control for this by making it impossible, as far as you can, for people to tell which group they’re in: you create a placebo, a treatment that you know has no specific effect. It might be a sugar pill or a bread pill, but it looks the same, it’s got the same red coating, and it’s given at the same time.

Because of the nature of causal hypotheses, you want to find out whether it was the vitamin C that did it: apart from the vitamin C itself, you want all other factors to be controlled for.

Basically, what you’re looking to test is this: you gave someone the vitamin C and they did recover, but did they recover because of the vitamin C? That is, would they not have recovered had they not been given it? The controlled trial is an attempt to answer that question as systematically as we can.

John Worrall is Professor of Philosophy of Science at the London School of Economics and Political Science.