Lost In Translation: What Epidemiologists Get Wrong In Communicating With the Public

I’ve been an epidemiologist for 14 years, and I still make some of these mistakes when translating epidemiological findings to the public.

Humans are weird when it comes to assessing risk. We'll avoid air travel, choosing instead to drive clear across the country, not only because of the cost of flying but because of fears of the plane crashing. In fact, after two airplane crashes in 2018 and 2019, an entire fleet of airplanes was grounded, and many people became afraid of flying on any "737," even though it was the "737 Max" variant that crashed, not all 737s.

When we polled teenagers on their use of vaping products, they told us that smoking tobacco was bad for them, but they had no issue with vaping nicotine. (Nicotine is very addictive, and it can cause many problems in developing brains.) They saw vaping as less dangerous than smoking, but they couldn't see that vaping was more dangerous than not vaping. It seemed that no one had told them of the dangers of vaping, not until the 2019 outbreak of vaping-related lung injuries. Even then, the use of vapes continued to increase, "popcorn lung" be damned.

The misunderstandings of risk that I find most frustrating come from anti-vaccine groups and individuals. The same people who will get in a car and drive to and fro will not vaccinate because of a one-in-a-million chance of a serious adverse outcome from a vaccine, and an even smaller chance of death from one. They will also knowingly expose their children to the one-in-a-thousand chance of a serious complication from measles because of their unfounded fear of vaccines. They are really that bad at estimating risk.

Now, after more than a year of a pandemic, and with many vaccines coming through the pipeline, we epidemiologists are yet again tasked with taking an immense amount of information and translating it for the public, in an attempt to help people make the best decisions possible. Unfortunately, we are making some mistakes, and those mistakes are confusing the public. In the worst cases, we are handing anti-vaccine groups and individuals talking points to abuse and misinform with.

So let me tell you about some of those mistakes and what we can do to correct them, starting with one that is taught in Epidemiology 101.

One: Age Standardization

When comparing two populations, especially in terms of something that is strongly tied to age, you need to do age standardization. This ensures that the age structure — the age pyramid — of the two populations is taken into account and is not confounding the comparison you are making.

For example, we know that COVID-19 affects older populations more than younger ones. If you are comparing City A to City B, and you find that they both have the same number of cases per capita, you might be inclined to think that they have the same overall burden of the disease. But what if I told you that City B has more residents over the age of 65 than City A? How would that change your view of the burden of disease?

This is where age standardization comes into play. You want to adjust the numbers to a third population that we call a “Standard Population.” Usually, we use the national totals by age groups to create that standard population. We then take the rate of disease in each age group from the two populations and apply those rates to the number of people in the standard population.

Essentially, adjusting for population size alone is not enough; you also need to adjust for the age structure of that population, and sometimes for other confounding characteristics as well. Failing to do so leads to mistaken conclusions.
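To make this concrete, here is a minimal sketch of direct age standardization in Python. Every population count and rate below is made up purely for illustration:

```python
# Age-specific case rates per 1,000 residents (hypothetical)
city_a_rates = {"0-64": 5.0, "65+": 40.0}
city_b_rates = {"0-64": 5.0, "65+": 40.0}  # identical age-specific rates

# Age structures: City B is older (hypothetical counts)
city_a_pop = {"0-64": 90_000, "65+": 10_000}
city_b_pop = {"0-64": 70_000, "65+": 30_000}

# A shared "standard population" (e.g., built from national age totals)
standard_pop = {"0-64": 80_000, "65+": 20_000}

def crude_rate(rates, pop):
    """Cases per 1,000, using the population's own age structure."""
    cases = sum(rates[g] / 1_000 * pop[g] for g in pop)
    return cases / sum(pop.values()) * 1_000

def standardized_rate(rates, std_pop):
    """Cases per 1,000 expected if the standard population had these rates."""
    expected = sum(rates[g] / 1_000 * std_pop[g] for g in std_pop)
    return expected / sum(std_pop.values()) * 1_000
```

Here, City B's crude rate (15.5 per 1,000) looks nearly double City A's (8.5 per 1,000) purely because City B is older; the age-standardized rates come out identical, revealing the same underlying risk in both cities.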

Two: Secular Trends

There are times when you see numbers on a chart, identify a rise or fall in whatever you are analyzing, and interpret that change as significant. Then you zoom out and notice that the trend has actually been going on for a long time: what looked like a meaningful recent rise in cases turns out to be the continuation of a long-term, or secular, trend.

Of course, this doesn’t mean that you stop trying to do something about the disease or condition that you see climbing in incidence or prevalence. But it does mean that you should probably have an open mind about the causes for it. If something has been slowly rising over time, you need to look at what has made it rise throughout, not just what has driven it recently. Otherwise, you might find yourself going after something that isn’t driving the true increase.
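As a sketch of how you might check this, you can compare the slope of just the recent data with the slope of the whole series. The counts below are synthetic, constructed so the long-run trend explains everything:

```python
# Yearly case counts rising steadily for two decades (synthetic data)
years = list(range(2000, 2021))
cases = [100 + 12 * (y - 2000) for y in years]  # +12 cases every year

def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

recent = slope(years[-5:], cases[-5:])  # slope of the last 5 years only
longrun = slope(years, cases)           # slope of the whole series
# If recent ~= longrun, the "spike" is just the old trend continuing.
```

When the two slopes agree, the recent rise is the secular trend at work, and the causes you should investigate are the long-running ones.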

Three: Modifiable Areal Unit Problem in Mapping

Here is something that may come as a bit of a surprise to you: Boundaries are drawn by people. When we present maps — specifically maps with some sort of count or rate represented on them — we need to be mindful of the boundaries we set on those maps. If you had 15 cases of HIV occur in a big state like Texas versus an area the size of a city, versus an area the size of a few blocks, the way you map it could make all the difference in how the risk is perceived.

Perhaps one of the best-known examples of the MAUP is gerrymandering: the purposeful drawing of electoral district boundaries to dilute or concentrate political representation in a given area. Depending on how you draw the boundaries of a district, you define its demographic makeup and political leanings. So, if you leave the drawing of your units of analysis to people with an agenda, your entire analysis is subject to the influence of those agendas.

When presenting a map, be mindful of the units of aggregation you are using, and be sure they represent something meaningful. When you can map neighborhoods whose residents have more in common with each other than residents of the same ZIP code or state do, go with what makes sense. Also, be mindful of the colors you use on your map: are you showing magnitude (a sequential palette) or identity (a categorical palette)?
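A quick illustration of how the choice of areal unit changes the apparent rate. The 15 cases come from the example above; the population counts are hypothetical:

```python
# The same 15 cases, mapped at three different areal units
cases = 15
populations = {
    "a few city blocks": 3_000,
    "a mid-sized city": 300_000,
    "a large state": 29_000_000,  # roughly Texas-sized, for illustration
}

# Rate per 100,000 residents for each choice of boundary
rates_per_100k = {area: cases / pop * 100_000 for area, pop in populations.items()}
# Identical cases; the rate swings from 500 per 100,000 down to ~0.05.
```

Nothing about the disease changed between these three maps, only the boundaries, yet the perceived risk differs by four orders of magnitude.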

Four: Spatial Autocorrelation

Look around your neighborhood. Do the people living in it look like you? Do they earn money in generally the same range as you do? Do you share the same political leanings? Chances are that people close to you are a lot like you in many ways. Now, think about the following scenario…

You're an epidemiologist, and you notice a "cluster" of cases of lung cancer around a neighborhood. You go to the neighborhood and confirm that, yes, the rate of lung cancer is quite high there. Then you drive yourself crazy looking for contaminated soil or water, but you can't find a source of carcinogens. Then you lift your head from the microscope and look at your cases... They're all older, and they have been smoking since their teenage or young adult years. They're all similar to each other, with similar habits, because they live close to each other, are friends, and are in the same age group.

Geographer Waldo Tobler put it this way: "Everything is related to everything else, but near things are more related than distant things." So, yes, everyone in the cluster shared the same long-term exposure (smoking), but that was because they are close to each other in space, and they are close to each other in space because they are close to each other in social characteristics.

As a result, when we are talking to the public about clusters of this or clusters of that, or hot spots of this or hot spots of that, we need to make sure that we are not just seeing autocorrelation — spatial or otherwise — at work.
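One standard way to check is Moran's I, a statistic that comes out positive when similar values sit next to each other in space. A minimal sketch, with made-up area values and a made-up adjacency structure:

```python
def morans_i(values, neighbors):
    """Moran's I with binary adjacency weights.
    neighbors[i] lists the indices of areas adjacent to area i
    (the adjacency must be symmetric)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(dev[i] * dev[j] for i in range(n) for j in neighbors[i])
    w = sum(len(nb) for nb in neighbors)       # total weight
    denom = sum(d * d for d in dev)
    return (n / w) * (num / denom)

# Four areas in a row; high smoking rates cluster on one end (made-up data)
smoking = [10, 10, 1, 1]
adjacency = [[1], [0, 2], [1, 3], [2]]
# morans_i(smoking, adjacency) > 0: positive spatial autocorrelation
```

A clearly positive value is a hint that what looks like a hot spot may partly reflect similar people clustering together, which should temper how we describe "clusters" to the public.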

Five: Comparing Apples to Oranges Without Turning the Apples to Oranges First

If I asked you who is the better student, a student who scores a 1400 on the SAT or one who scores a 700 on the GRE, how would you answer that question? Many times, we epidemiologists forget to standardize the scores or values of two sets of numbers so that they can be compared fairly with one another. We already talked about age standardization, but how do we deal with scales that have different ranges? If you only know the median income of one group and the average spent on rent in another, how would you know who is the wealthier group?

This is where standardizing scores comes into play: you convert each score into a z-score, the number of standard deviations it sits above or below the mean of its own distribution. It's a rather simple thing to do mathematically, but the reasoning behind it can be a little tricky.
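A minimal sketch of the conversion. The means and standard deviations below are illustrative, not the official figures for either test:

```python
def z_score(score, mean, sd):
    """Standard deviations above (+) or below (-) the test's mean."""
    return (score - mean) / sd

# Hypothetical distributions for each test (for illustration only)
sat_z = z_score(1400, mean=1050, sd=200)  # 1.75 SDs above the SAT mean
gre_z = z_score(700, mean=600, sd=50)     # 2.0 SDs above the GRE mean
# The student with the higher z-score did better relative to their peers.
```

Once both scores live on the same scale, the comparison is fair: under these assumed distributions, the GRE taker outperformed their peers by more than the SAT taker did.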

Six: Odds Are Not Probabilities

This one is easy, and we fall victim to this error because of how we use language. When we talk about the probability of something, we are basically saying the "chance" of something happening. For example, when the weather forecast calls for a 30% chance of rain, it means (roughly) that of the last 100 times the weather conditions were similar to today's, it rained 30 times.

However, we also talk about the odds of something happening as the "chance" of it happening. We even have the expression "what are the odds?" which is interchangeable in everyday English with "what are the chances?" But mathematically speaking, odds are not the same as probability, and that matters. Odds are the number of times something happens divided by the number of times it does not happen. When you toss a fair coin, the odds of heads are 1 to 1, but the probability of heads is 1/2. The two measures diverge as an event becomes more common, and they converge as an event becomes rarer: for very rare events, the odds and the probability are nearly equal.

Furthermore, odds are typically used for binary outcomes: success or failure, disease or no disease, dead or alive. Probabilities apply more naturally across a whole realm of possibilities, like the probability of a person having a BMI above 25 or the probability of a child scoring at or above the 90th percentile on an exam. For one outcome out of four equally likely outcomes, the odds are 1 to 3 but the probability is 1/4, while the odds against that outcome are 3 to 1 but the probability of it not happening is 3/4. (If you know fractions, you know why those numbers are definitely not the same.)

So be careful when interpreting odds ratios from logistic regressions in research papers and the like. Do not interpret odds as probabilities. Or, if you do want to talk about probabilities, use the proper math to convert the odds first.
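The conversion itself is one line of arithmetic in each direction:

```python
def probability_to_odds(p):
    """Odds = p / (1 - p); e.g., probability 0.25 gives odds of 1 to 3."""
    return p / (1 - p)

def odds_to_probability(odds):
    """Probability = odds / (1 + odds); the inverse of the above."""
    return odds / (1 + odds)

# One outcome out of four equally likely outcomes:
#   probability 0.25  <->  odds 1/3 (i.e., 1 to 3)
# For rare events, odds and probability nearly coincide:
#   probability 0.001 -> odds ~0.001001
```

Running an odds ratio through `odds_to_probability` before talking to the public avoids the classic mistake of reading "odds of 3" as "75% likely" when what you actually have is an odds ratio from a regression, not a probability.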

Seven: Relative Versus Absolute Measures of Change

If I told you that a certain public health intervention cut the incidence of a disease “by half,” you would think that it was a heck of an intervention. If I then told you that we went from two cases to one, you would probably shake your head and walk away. And you would be right in doing so.

Too often, we epidemiologists say that there has been an X-percent increase or decline in some disease or condition when, hidden behind that percentage, is a trivially small number. Or, similarly, we shy away from reporting a small relative change (the percentage) when the absolute change (the number) is quite meaningful. Whenever possible, we should report both, and we should be ready to explain what each one means.

For example, when anti-vaccine and/or anti-mask people tell me that COVID-19 is not deadly because "99.9% of people survive it," I am quick to point out that 0.1% of the United States population is still about 330,000 people, and that many people dying needlessly in one year's time is significant. (We have had far more deaths than that since the pandemic began, so their math does not quite check out. Math is hard.)
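A tiny helper makes the distinction concrete; the case counts below are illustrative:

```python
def changes(before, after):
    """Return (absolute change, relative change as a fraction of 'before')."""
    absolute = after - before
    relative = absolute / before
    return absolute, relative

# Both interventions "cut cases in half" (relative change of -50%)...
tiny = changes(2, 1)              # ...but this one prevented 1 case
large = changes(200_000, 100_000)  # ...and this one prevented 100,000
```

Reporting only the -50% would make these two interventions sound identical; reporting only the absolute counts would bury the proportional effect. Reporting both tells the whole story.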

Eight: “At this time last year…”

Something interesting happened when I was working as an epidemiologist at the Maryland Department of Health 12 years ago. A reporter from a local newspaper claimed that the influenza vaccine had been very successful because there were fewer cases of influenza as of the middle of January than the number of cases “at this time last year.” I really, really, really wanted to correct him, but I didn’t. To be honest, I sort of regret it.

The reason he was making the wrong comparison is that influenza seasons peak at different times each year, usually between October and May. (Except for the 2020–2021 season, when the interventions put into place for COVID-19 kept influenza at an all-time low.) One season, the peak happened in November, so there were plenty of cases by that January. The next season, the peak happened in March, so there were very few cases by that January.

Like the reporter, many of my colleagues fail to take into consideration the seasonal variation of many diseases and conditions. Heck, even pregnancy has a seasonality to it. It is easy to forget this, and we end up comparing one period of time to another without taking that seasonality into account. I hope you see how that can lead to erroneous conclusions.
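One fix is to align seasons by week-into-season rather than by calendar date before comparing them. The weekly counts below are made up to mimic an early-peaking and a late-peaking season:

```python
# Weekly case counts indexed from week 1 of each season (hypothetical)
season_1 = [5, 20, 80, 150, 90, 30, 10]  # early-peaking season
season_2 = [2, 4, 8, 15, 40, 120, 200]   # late-peaking season

def peak_week(cases):
    """1-based week of the season with the most cases."""
    return max(range(len(cases)), key=lambda i: cases[i]) + 1

# Comparing cumulative "cases so far" at a fixed calendar date (say,
# week 4) would call season 2 mild, when its peak just hasn't arrived.
cases_by_week_4 = (sum(season_1[:4]), sum(season_2[:4]))
```

The "at this time last year" comparison is exactly the week-4 snapshot: it says season 2 is milder, when in fact its peak is simply later.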

Nine: Know Your Bugs

A few years ago, during the Zika epidemic, a very bright and ambitious young epidemiologist presented a map of Zika risk in the United States. He color-coded different zones of the country by the risk (none, low, elevated, high) of contracting Zika. The thing that stood out to me was that he labeled certain areas of the Rocky Mountains as having an elevated risk.

The thing is... the Aedes mosquitoes that carry Zika don't do well at high elevations. In fact, one of the first things we field epidemiologists do is ascertain the elevation of a place to estimate the risk of mosquito-borne infections and direct our resources accordingly. When we pointed this out, my colleague said he was not aware that mosquitoes disliked high elevations. He corrected his map just in time for a grant application, but I can only imagine the refusal he would have faced if a reviewer had caught the mistake.

A few years before that, a physician asked me about Borrelia burgdorferi, asking if I had heard of it. “It’s a parasite,” he said. “And I just attended a presentation by an epidemiologist on how it’s becoming more predominant.”
“Oh, Lyme Disease?” I asked.
“Lyme Disease?” he asked back.
“Yes, Lyme Disease,” I said.
"Borrelia is Lyme Disease?"
“You’re just getting that now?”

You see, the epidemiologist who gave the presentation never mentioned that the bug he was presenting on (which, by the way, is a spirochete bacterium, not a parasite) was the causative agent of Lyme Disease. He kept calling it a Borrelia infection. Yes, I know one would expect physicians to know their bugs, too. I am just trying to show you how those kinds of assumptions can lead to confusion. Go ahead and mention the disease, other names for the disease, and the causative agent. Be super clear about it, and make sure no one is missing something.

The Take Home Point

Science is already complicated enough as it is, and, sadly, many people around the world never get the opportunity to fully understand it or even become comfortably familiar with it. That is why we have people who believe the Earth is flat, or who do not believe that vaccines work, or who are easily convinced by a politician (A POLITICIAN!) that global climate change is not real and that we can keep burning fossil fuels without consequences.

Because of this lack of familiarity with science, we epidemiologists need to be clear in how we relay information based on science, evidence, and data to our audiences... even audiences that should be familiar with science but are not keen on niche subjects, like Lyme Disease's causative agent. We also need to know our audiences and be culturally competent. And we must not fall into the traps our own biases set for us. If we manage to get it right, public health benefits, and the public is better prepared for the next challenge we collectively face.

René F. Najera, MPH, DrPH, is a doctor of public health, an epidemiologist, amateur photographer, running/cycling/swimming enthusiast, father, and "all-around great guy." You can find him working as an epidemiologist at a local health department in Virginia, grabbing tacos at your local taquería, or on the campus of the best school of public health in the world, where he is an associate in the Epidemiology department. All the opinions in this blog post are those of Dr. Najera and do not necessarily represent those of his employers, friends, family, or acquaintances.