Leaders: Stop Confusing Correlation with Causation

We’ve all been told that correlation does not imply causation. Yet many business leaders, elected officials, and media outlets still make causal claims based on misleading correlations. These claims are too often unscrutinized, amplified, and mistakenly used to guide decisions.

Examples abound: Consider a recent health study that set out to understand whether taking baths can reduce the risk of cardiovascular disease. The analysis found that people who took baths regularly were less likely to have cardiovascular disease or suffer strokes, and the authors concluded that the data suggests “a beneficial effect” of baths. But without a controlled experiment, or a natural experiment (one in which circumstances effectively randomize who is exposed, without any manipulation by researchers), it’s hard to know whether this relationship is causal. For example, it’s possible that regular bath takers are generally less stressed and have more free time to relax, which could be the real reason they have lower rates of heart disease. Still, these findings were widely circulated, with headlines like, “Taking a bath isn’t just relaxing. It could also be good for your heart.”
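
To make the stress-and-free-time explanation concrete, here is a minimal Python simulation, a sketch with made-up numbers rather than the study’s data. In this toy world bathing has no effect on health at all, yet regular bathers still show lower disease rates, because a hidden factor drives both behaviors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder: some people are less stressed and have more free time.
free_time = rng.normal(size=n)

# More free time makes regular baths more likely...
takes_baths = (free_time + rng.normal(size=n)) > 0.5

# ...and independently lowers heart-disease risk. Baths do nothing here.
heart_disease = (-free_time + rng.normal(size=n)) > 1.0

# Yet a naive comparison makes bathing look protective.
print(f"disease rate, regular bathers: {heart_disease[takes_baths].mean():.3f}")
print(f"disease rate, everyone else:   {heart_disease[~takes_baths].mean():.3f}")
```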

A large body of research in behavioral economics and psychology has highlighted systematic mistakes we can make when looking at data. We tend to seek evidence that confirms our preconceived notions and ignore data that might go against our hypotheses. We neglect important aspects of the way that data was generated. More broadly, it’s easy to focus on the data in front of you, even when the most important data is missing. As Nobel Laureate Daniel Kahneman has said, it can be as if “what you see is all there is.”

This can lead to mistakes and avoidable disasters, whether it’s an individual, a company, or a government that’s making the decision. The world is increasingly filled with data, and we are regularly bombarded with facts and figures. We must learn to analyze data and assess causal claims — a skill that is increasingly important for business and government leaders. One way to accomplish this is by emphasizing the value of experiments in organizations.

How Unsupported Causal Claims Lead Organizations Astray

A 2020 Washington Post article examined the correlation between police spending and crime. It concluded: “A review of spending on state and local police over the past 60 years…shows no correlation nationally between spending and crime rates.” But this correlation is misleading. An important driver of police spending is the current level of crime, so high-crime places tend to spend more on police, a chicken-and-egg problem that can mask any effect of spending. Causal research that untangles this reverse causality has, in fact, shown that more police lead to a reduction in crime.

In 2013, eBay was spending roughly $50 million per year advertising on search engines. An analysis by consultants had shown that sales were higher in areas where more advertisements were shown. But economists Tom Blake, Chris Nosko, and Steve Tadelis pushed the company to think more critically about the causal claim. Analyzing natural experiments and running a new randomized controlled trial, they found that the ads were largely a waste, despite what the marketing team had believed: the advertisements were targeting people who were already likely to shop on eBay.

The targeted customers’ pre-existing purchase intentions were responsible for both the advertisements being shown and the purchase decisions. eBay’s marketing team overlooked this factor and instead assumed that the observed correlation meant the advertisements were causing purchases. Had the company explored what else might explain the correlation, it would likely have avoided the mistake.
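
A toy simulation makes the trap visible (again illustrative Python with invented numbers, not eBay’s actual data or analysis). When a targeting engine shows ads to high-intent shoppers, the naive comparison reports a large “lift” even though the ads do nothing; randomizing exposure, as the eBay economists did, recovers the truth:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Purchase intent drives BOTH ad exposure (via targeting) and purchases.
intent = rng.uniform(size=n)

# Observational world: ads go to high-intent users, and ads themselves
# add nothing to the purchase probability.
saw_ad = rng.uniform(size=n) < intent
purchased = rng.uniform(size=n) < 0.2 * intent

naive = purchased[saw_ad].mean() - purchased[~saw_ad].mean()
print(f"naive observational 'lift':  {naive:+.3f}")  # large and positive

# Experimental world: a coin flip decides exposure, breaking the intent link.
saw_ad_rct = rng.uniform(size=n) < 0.5
purchased_rct = rng.uniform(size=n) < 0.2 * intent

rct = purchased_rct[saw_ad_rct].mean() - purchased_rct[~saw_ad_rct].mean()
print(f"randomized estimate of lift: {rct:+.3f}")    # approximately zero
```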

Yelp overcame a similar challenge in 2015. A consulting report found that companies that advertised on the platform ended up earning more business through Yelp than those that didn’t. But here’s the problem: companies that get more business through Yelp may be more likely to advertise in the first place. The former COO and I discussed this challenge, and we decided to run a large-scale experiment that gave packages of advertisements to thousands of randomly selected businesses. Because assignment was random, the results could no longer be driven by which businesses chose to advertise. We found that Yelp ads did have a positive effect on sales, giving Yelp new insight into the impact of its ads.

The Nobel Committee Champions Causal Inference Research

The landscape of empirical economics has changed dramatically over the past forty years, as the field has developed a toolkit for assessing causal relationships. Two of the last three Nobel Prizes in economics have recognized this work. In 2019, Abhijit Banerjee, Esther Duflo, and Michael Kremer shared the prize “for their experimental approach to alleviating global poverty.” In 2021, economists Josh Angrist, Guido Imbens, and David Card won for spearheading what Angrist dubbed the “credibility revolution” within economics: the committee cited Angrist and Imbens for “their methodological contributions to the analysis of causal relationships” and Card for “his empirical contributions to labour economics.” All three are pioneers of natural-experiment research.

The development of the causal inference toolkit has been remarkable, and the work of the Nobel recipients is truly inspiring. But you don’t need to be a PhD economist to think more carefully about causal claims. A good starting place is to take the time to understand the process that is generating the data you are looking at. Rather than assuming a correlation reflects causation (or that a lack of correlation reflects a lack of causation), ask yourself what different factors might be driving the correlation — and whether and how these might be biasing the relationship you are seeing. In some cases, you’ll come out feeling reassured that the relationship is likely causal. In others, you might decide not to trust the finding.

If you are worried that a correlation might not be causal, an experiment is often the cleanest way to find out. Companies such as Amazon and Booking.com put experiments at the heart of their decision-making process. But experiments are not always feasible. In those cases, think about, and seek out, other evidence that might shed light on the question you are asking. In some cases, you might even find a good natural experiment of your own, as sketched below.
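
If you do find a natural experiment, the arithmetic can be surprisingly simple. Below is a minimal difference-in-differences sketch in Python, one common way to analyze natural experiments (David Card’s famous minimum-wage studies compare changes across neighboring states in essentially this way); the numbers are invented for illustration:

```python
# A minimal difference-in-differences sketch, a workhorse of
# natural-experiment analysis. All numbers are made up for illustration.
# One region adopts a policy ("treated"); a comparable region does not.

treated_before, treated_after = 10.0, 14.0   # mean outcome, treated region
control_before, control_after = 10.5, 12.0   # mean outcome, control region

# The control region's change estimates what would have happened anyway,
# so the difference of the two changes is the estimated causal effect
# (valid only if the regions would otherwise have trended in parallel).
effect = (treated_after - treated_before) - (control_after - control_before)
print(f"difference-in-differences estimate: {effect:.1f}")  # 2.5
```

If that parallel-trends assumption is plausible, a simple comparison like this can turn a change you didn’t control into credible causal evidence.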
