Managers: When to Ask for an RD (Regression Discontinuity) Analysis, and When It's Just 'Tech-for-Tech' Overkill
- Maria Alice Maia

- Jan 13
Your data science team just proposed a "Regression Discontinuity Design" to analyze your new sales program. It sounds impressive. It sounds rigorous.
But is it the right tool for the job, or just a fancy solution in search of a problem?

This is a critical question for leaders. One of the most subtle ways data projects fail is "Tech-for-Tech's Sake"—when a complex method is misapplied to a business problem that doesn't fit it. This is especially common in Consulting, where the desire to bring sophisticated methods to a client can sometimes outrun the practical realities of the data.
An RDD can be incredibly powerful, but only under specific conditions. As a manager, you don’t need to know the math, but you absolutely need to know the three questions to ask to see if it’s even a possibility.
The Manager's RDD Litmus Test:
Before you greenlight an RDD analysis, ask your team these three questions:
"Is there one, single, observable 'Running Variable' that assigns the treatment?" There must be a specific metric—like a customer's exact spending, a precise test score, or a credit rating—that determines who gets the treatment. If the rule is "at the manager's discretion" or based on a fuzzy concept like "high engagement," RDD is not the right tool.
"Is there a sharp, deterministic cutoff?" The rule must act like a light switch, not a dimmer. Does everyone with a score of >=700 get the loan, and everyone with <700 does not? If the probability of treatment just gradually increases around the cutoff, you have a "Fuzzy RDD," which requires different assumptions and a more complex analysis. If there's no clear cutoff at all, RDD is off the table.
"Can people precisely manipulate their score to get just over the line?" The magic of RDD comes from the assumption that it's "as-if random" right around the cutoff. If your salespeople can easily offer a small discount to get a client just over a spending threshold to qualify for a bonus, that "randomness" is gone. The design is likely invalid because of this sorting.
How to Use the Answers:
If your team answers "yes" to all three, you have a natural experiment hiding in your business. An RDD is likely the perfect tool to get a clean, credible causal estimate.
If the answer to any of these is "no," then pushing for an RDD is a waste of time and talent. Your team should be looking at other methods—like Difference-in-Differences, Propensity Score Matching, or a simple controlled experiment.
As a consultant at FALCONI and a leader at companies like Ambev and Stone, my job was always to find the simplest path to the most credible answer. Sometimes that path is a sophisticated method like RDD. But true leadership is having the discipline to recognize when it's not, and guiding your team to the right tool for the job.
My mission is to bridge this gap between business leaders and technical teams. This knowledge is not mine to keep.
If you’re ready to move beyond the buzzwords and learn how to ask the questions that lead to real business value, join my movement. Subscribe to my email list.
And if you’re trying to decide on the right analytical approach for a major project, book a 20-minute, no-nonsense consultation with me. Let's find the right tool together.


