

Core Concept: Propensity Scores - Balancing Covariates

  • Writer: Maria Alice Maia
  • Oct 28, 2024
  • 2 min read

That voluntary leadership program you launched... did it actually create future leaders, or did it just attract the ones you already had?


This is a critical question for any HR department, and a place where millions are wasted on the basis of a simple, flawed analysis. It is a classic "Doing Data Wrong" scenario born from selection bias.


The Wrong Way (The Illusion of Impact): You run an optional leadership course. A year later, you see that employees who chose to attend have a 30% higher promotion rate than those who didn’t. You declare the program a massive success and expand it.


The problem? You haven't measured the program's effect at all. You've measured the effect of ambition. The most motivated, high-potential employees are the ones who sign up for extra training in the first place. They were already on a faster track. You're comparing apples to oranges and calling it ROI.


The Right Way (Finding the "Statistical Doppelgänger"): In an ideal world, you'd run a randomized controlled trial (RCT). But what if that's not feasible? The next best thing is to get as close as possible by correcting for this selection bias. One of the most powerful tools for this is the Propensity Score.


The concept is both elegant and pragmatic:

  1. Calculate the Propensity: For every single employee (both attendees and non-attendees), you build a model that calculates the probability they would have attended the training, based on all their observable characteristics: past performance reviews, tenure, department, age, and so on. This single number—the propensity score—summarizes their underlying ambition and profile.


  2. Match or Stratify: Now, you can take an ambitious employee who attended the training and find their "statistical doppelgänger"—an equally ambitious employee with a nearly identical propensity score who did not attend. By comparing the promotion rates only between these carefully matched pairs, you neutralize the selection bias. You’re finally comparing apples to apples.

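The two steps above can be sketched in a few lines of Python. This is a minimal illustration on synthetic data, not the analysis from the article: the covariates, coefficients, and the "true" 5% lift are all made up to show the mechanics of estimating scores with scikit-learn's LogisticRegression and doing nearest-neighbor matching on them.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic HR data: an observable "ambition" proxy drives both
# attendance and promotion, creating selection bias by construction.
n = 2000
ambition = rng.normal(size=n)
tenure = rng.normal(size=n)
X = np.column_stack([ambition, tenure])  # observable covariates

# Ambitious employees are more likely to opt in to the training...
attended = rng.binomial(1, 1 / (1 + np.exp(-(ambition + 0.5 * tenure))))
# ...and more likely to be promoted anyway; the program's true lift is 0.05.
promoted = rng.binomial(1, np.clip(0.2 + 0.15 * ambition + 0.05 * attended, 0, 1))

# The Wrong Way: a naive comparison, inflated by selection bias.
naive = promoted[attended == 1].mean() - promoted[attended == 0].mean()

# Step 1: model each employee's probability of attending, given observables.
ps = LogisticRegression().fit(X, attended).predict_proba(X)[:, 1]

# Step 2: match every attendee to the non-attendee with the closest score
# (their "statistical doppelgänger"), then compare promotion outcomes.
treated = np.where(attended == 1)[0]
control = np.where(attended == 0)[0]
matches = control[np.abs(ps[treated][:, None] - ps[control][None, :]).argmin(axis=1)]
matched_effect = (promoted[treated] - promoted[matches]).mean()

print(f"naive difference:  {naive:.3f}")
print(f"matched estimate:  {matched_effect:.3f}")
```

On data like this, the naive gap comes out far larger than the matched estimate, which lands much closer to the true 0.05 lift. A production analysis would add refinements this sketch omits: checking covariate balance after matching, enforcing common support, and matching with a caliper.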

This method, a cornerstone of "selection on observables" analysis, allows you to isolate the program's true, causal effect. You might find the real impact is a 5% lift in promotions, not 30%. That might be a less exciting number, but it's the truth—a number you can use to make a real business case.


When I was building the People Analytics function at Ambev, we couldn't just look at simple correlations to understand drivers of engagement or turnover. We had to use rigorous causal methods like this to separate the signal from the noise and guide real strategic decisions. My academic work at Berkeley and FGV is grounded in this same principle: use the right tool to answer the right question.


This knowledge isn't mine to keep. It's for all of us to move beyond "kindergarten" comparisons and embrace the rigor that creates real value.


If you’re ready to stop being misled by selection bias and want to join a movement dedicated to true causal insight, subscribe to my email list.


And if you’re trying to measure the impact of a program right now and suspect selection bias is corrupting your results, book a 20-minute, no-nonsense consultation with me. Let’s figure it out together.

