by James Morris
Ads for medicines or cosmetics often make the claim that a result is “clinically proven.” Proof, however, isn’t possible in science and medicine. That is actually a strength, but it’s one we need to be reminded of often if we are to understand how science and medicine really work.
As a college professor, I read a lot of student writing. As you might imagine, I come across some common mistakes. Many students (and adults) confuse “it’s” and “its.” Others don’t understand the difference between “affect” and “effect.” And don’t get me started on the difference between “less” and “fewer” or “between” and “among.” Getting words and grammar right is not just academic; it can also have financial consequences, as a Maine company recently learned.
Some errors are specific to science writing. “Data,” for instance, is plural, not singular. So, it’s (not its) “The data are interesting” not “The data is interesting,” even though the second version sounds better. The singular is “datum.”
These kinds of errors are common but are understandable and easily corrected. However, there is one mistake that is more problematic because it reveals a basic misunderstanding of the nature of science. It’s when students use the word “prove” or “proof.”
What’s the problem here? You might think that these words are common in science writing, since science is all about proving things.
Actually, it’s not.
Outside of mathematics, science doesn’t prove anything. Results may support one idea or another; they can point in certain directions; they may favor one hypothesis or another. But they don’t prove.
The same is true in medicine, where doctors and patients have to learn to understand and navigate uncertainty. As the philosopher Karl Popper wrote, “Knowledge consists in the search for truth … It is not the search for certainty.”
This doesn’t mean that science is tentative or built on shaky ground. In fact, as pointed out elsewhere, it’s just the opposite: The willingness to adjust our models on the basis of new evidence makes science an extraordinarily powerful way to understand the world around us.
To see how science really works, let’s consider an example. Many people have heard that mutations are “random.” Darwin’s theory of evolution by natural selection is built on this idea.
Sometimes the random nature of mutations is used to criticize natural selection. Darwin once wrote to a friend, “I have heard … that Herschel says my book ‘is the law of higgledy-piggledy.’ What this exactly means I do not know, but it is evidently very contemptuous.”
The random nature of mutations doesn’t mean natural selection is random – it’s not – but it does mean that mutations occur without regard to whether or not they would benefit an organism.
How do we know this? One of the first pieces of evidence came from a famous experiment done by Salvador Luria (whom I once met!) and Max Delbrück in 1943. They were trying to figure out whether beneficial mutations occur randomly or whether the environment causes them.
This is not an idle question. It’s a fundamental distinction about the nature of mutations and therefore how evolution works.
For example, when antibiotics are applied to bacteria, do some bacteria just happen, randomly, to have a mutation that renders the bacteria resistant? Or does the antibiotic cause mutations that render the bacteria resistant? The same question could be asked about pesticides being applied to insects or herbicides to crops.
To answer this question, Luria and Delbrück performed a simple experiment. They grew several separate populations of bacteria. Then they poured the bacteria onto plates containing a virus, called a phage, that kills sensitive bacteria but not those made resistant by a mutation. Finally, they counted the number of phage-resistant colonies on each plate.
If mutations are random, they can arise at any point during growth, and a mutation that happens early produces a “jackpot” of resistant descendants. Some plates would therefore be expected to have lots of phage-resistant colonies, and others few or even none. If mutations are instead caused by the addition of phage, every cell has the same small chance of mutating at that moment, so all plates should have roughly the same number of phage-resistant colonies.
What did they find? They found that the number of phage-resistant colonies was highly variable from plate to plate: Some had many, some had few, and many had none. This is consistent with the first explanation – mutations are random – and not consistent with the second – they are directed by the environment.
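The logic of this fluctuation test can be sketched in a short simulation. All of the numbers below (mutation rate, generations, number of cultures) are invented for illustration, not Luria and Delbrück’s actual values; the point is only to compare the plate-to-plate variability the two hypotheses predict.

```python
import random

# A sketch of the fluctuation test's logic (invented parameters,
# not Luria and Delbrück's actual numbers).
MUTATION_RATE = 1e-4   # chance a dividing cell yields a resistant mutant
GENERATIONS = 14       # each culture grows from 1 cell to 2**14 cells
CULTURES = 50          # number of independent cultures ("plates")
random.seed(1)         # make the run repeatable

def random_mutation_culture():
    """Mutations arise at random during growth; a mutant that appears
    early founds a 'jackpot' clone of resistant descendants."""
    sensitive, resistant = 1, 0
    for _ in range(GENERATIONS):
        # every cell divides; each sensitive division may yield one mutant
        new_mutants = sum(random.random() < MUTATION_RATE
                          for _ in range(sensitive))
        resistant = resistant * 2 + new_mutants
        sensitive = sensitive * 2 - new_mutants
    return resistant

def induced_mutation_culture():
    """Mutations are caused by the phage at plating: every cell gets the
    same small, independent chance, so counts cluster around the mean."""
    final_cells = 2 ** GENERATIONS
    return sum(random.random() < MUTATION_RATE for _ in range(final_cells))

def variance_to_mean(counts):
    mean = sum(counts) / len(counts)
    variance = sum((c - mean) ** 2 for c in counts) / len(counts)
    return variance / mean

random_counts = [random_mutation_culture() for _ in range(CULTURES)]
induced_counts = [induced_mutation_culture() for _ in range(CULTURES)]

# Random mutations give wildly variable plate-to-plate counts
# (variance much larger than the mean); induced mutations give
# counts whose variance is close to the mean.
print("random  variance/mean:", variance_to_mean(random_counts))
print("induced variance/mean:", variance_to_mean(induced_counts))
```

Running the sketch shows the signature Luria and Delbrück looked for: under the random-mutation model, occasional early “jackpots” make the variance far exceed the mean, while the induced-mutation model yields roughly uniform counts.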
Did they prove that mutations are random? No. Perhaps their interpretation of the colony counts was incorrect. Or perhaps their results apply only to one type of bacterium and one type of mutation; we can’t yet say they hold for all mutations in all organisms.
But does the result support the idea that mutations are random? Yes. More experiments were done, in other organisms with different types of mutations. From the results of all of these experiments taken together, we conclude that mutations are indeed random. We still haven’t proved the point (we haven’t studied all mutations in all organisms), but all of the evidence we have is consistent with this explanation.
In science, we investigate the world around us in two ways. In one approach, we make lots of different observations, and from these, come to a general conclusion. This is called induction and is illustrated by the way that Darwin came up with the theory of evolution by natural selection. He pieced together lots of seemingly unrelated observations about fossils, anatomy, embryology, and biogeography, and from these developed his theory.
The opposite approach, called deduction, starts with a general explanation and is followed by tests and observations. Einstein predicted the existence of gravitational waves in 1916, but they weren’t directly detected until a century later, a discovery that earned Science’s 2016 Breakthrough of the Year.
Each approach has strengths and weaknesses. Induction is unbiased, since you don’t know what you are testing before you start, but is limited by the observations you make. For instance, you observe a lot of white swans and conclude that all swans are white. This is all well and good, until you happen to bump into a black one. Deduction avoids this issue, but, by starting with an explanation, runs the risk of being biased.
The scientific process is often a blend of the two – we make observations and from these come up with a hypothesis (that’s induction), but then test the hypothesis through experiments and additional observations (that’s deduction). Not all science takes this approach. For example, we are constantly mining genomic data for patterns; there is no explicit hypothesis that is being tested.
With these approaches, you might think we can finally prove things, but, alas, we can’t. A correct hypothesis will make accurate predictions. But an incorrect hypothesis can happen to make accurate predictions, too.
Consider how salmon return to the place they were born to spawn. Maybe they use sight (this is the hypothesis). I blindfold salmon, and reason that they won’t return (this is the prediction). If in fact they fail to return, I may have a correct hypothesis. But I could also have an incorrect one – perhaps, by blindfolding the salmon, I also interfered with some other sense, like smell, that they actually use.
Because accurate predictions can come from correct and incorrect hypotheses, we are never 100% sure our hypothesis is right. This is why Einstein said, “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”
So, there is no proof in science. Nevertheless, when multiple independent lines of evidence all point to the same conclusion and explain a diverse range of observations, we call that a “theory.” Theories aren’t proven, but the chromosome theory or germ theory or theory of gravity or theory of evolution are all well tested, understood, and accepted as the way the world fundamentally works.
Sometimes, people use the lack of proof in science to undermine it, as in recent discussions of climate change or evolution. These kinds of attacks, however, reveal a basic misunderstanding (or willful ignorance) of the way science works, rather than a valid critique.
I hope by now I’ve proved my point.
© James Morris and Science Whys, 2017