From Research to ROI: New Method: Regression Discontinuity in Time (RDiT)
- Maria Alice Maia

- Jan 20
- 3 min read
A new regulation forced you to change your product's packaging on January 1st. Sales dropped 5%. Do you blame the new design, or do you blame... January?
This is a classic problem for Consumer Goods companies, and exactly the place where a naive "before-and-after" analysis can lead to disastrous conclusions. Attributing a post-holiday slump to your new packaging is a textbook "Doing Data Wrong" error.
When a policy or an event impacts everyone at a single, known point in time, and you have no clean control group, you can't use a simple comparison. The right tool for this job is often the Regression Discontinuity in Time (RDiT) design.
The logic is intuitive: instead of just comparing December to January, you use the trend in the data leading up to the January 1st cutoff to predict what would have happened without the change. The causal effect is the sharp "jump" or "break" from that trend at the exact moment the new regulation takes effect.
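To make that concrete, here is a minimal sketch in Python (pandas + statsmodels) of what the "jump at the cutoff" looks like as a regression. Everything in it, the simulated data, the column names, and the true effect size, is an assumption for illustration; the point is the shape of the model, not the numbers.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated daily sales around a January 1st cutoff. The data, column
# names, and true effect size (-4 units) are invented for illustration.
rng = np.random.default_rng(42)
dates = pd.date_range("2023-10-01", "2024-03-31", freq="D")
t = np.arange(len(dates)) - dates.get_loc("2024-01-01")  # days relative to cutoff
post = (t >= 0).astype(int)                              # 1 once the regulation is live
sales = 100 + 0.05 * t - 4.0 * post + rng.normal(0, 2.0, len(dates))
df = pd.DataFrame({"t": t, "post": post, "sales": sales})

# RDiT as a regression: a linear trend allowed to differ on each side of
# the cutoff, plus a level shift. The coefficient on `post` is the
# estimated jump at the moment the regulation hits.
model = smf.ols("sales ~ t + post + t:post", data=df).fit()
print(model.params["post"])
```

In words: the trend terms do the "predicting what would have happened," and the post indicator captures the break from that prediction.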
But here’s where expertise matters. RDiT is a special case of RDD, and as research by Hausman & Rapson makes clear, it comes with its own unique set of pitfalls that many practitioners ignore.

RDiT is NOT a standard RDD. Here’s what you need to watch out for:
You Can't Test for "Sorting": In a standard RDD, we can check whether people are manipulating their score to land just above the cutoff (the McCrary density test). But with RDiT, the "running variable" is time, and no one can manipulate what day it is. That sounds like good news, but it means a key validity check for standard RDD is off the table. You lose a layer of certainty.
Time Series Dynamics Are a Minefield: Your sales data has momentum (autoregression). Yesterday's sales influence today's. A simple RDD model that ignores this time-series structure can produce biased estimates of the long-run effect, and serially correlated errors will make your estimate look more precise than it really is. The effect you measure might be just the immediate shock, not the new steady state. (One standard guardrail appears in the sketch after this list.)
Global Polynomials Can Overfit: RDiT analyses often use high-frequency data over long periods (e.g., years of daily sales) to get enough statistical power, and analysts often fit high-order polynomials to the time trend to control for seasonality. The danger, as Gelman & Imbens show, is that these flexible polynomials can "overfit" the data, attributing jumps to your policy change that were actually caused by other, unrelated shocks in that long window. (The sketch after this list shows the standard alternative: a local fit within a narrow bandwidth.)
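Here is a sketch of two common guardrails for the second and third pitfalls, reusing the illustrative `df` from the earlier sketch. The 30-day bandwidth and 7-day lag window are assumptions to be varied in a sensitivity analysis, not recommendations.

```python
import statsmodels.formula.api as smf

# Guardrail 1: fit locally in a narrow window around the cutoff instead
# of a high-order global polynomial over the full sample.
bandwidth = 30  # days on each side of the cutoff; vary this and re-estimate
local = df[df["t"].abs() <= bandwidth]

# Guardrail 2: Newey-West (HAC) standard errors, so serial correlation
# in daily sales doesn't make the jump look more precise than it is.
model = smf.ols("sales ~ t + post + t:post", data=local).fit(
    cov_type="HAC", cov_kwds={"maxlags": 7}
)
print(model.summary())  # check the `post` estimate and its HAC std. error
```

Note what this sketch does and doesn't do: the bandwidth limits overfitting and the HAC errors fix the inference, but neither separates the immediate shock from the new steady state. That requires modeling the dynamics explicitly, and it is exactly the kind of sensitivity work an RDiT demands.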
As a leader and consultant, my job is often to push back on an analysis that looks right on the surface but is built on a shaky foundation. An RDiT can be a powerful tool, but it requires a much higher burden of proof than a standard RDD. It's closer to a sophisticated event study, and it demands rigorous sensitivity analysis.
My mission is to translate these critical nuances from the academic frontier to business practice. Knowing the difference between an RDD and an RDiT isn't just a technical detail; it's what separates a credible analysis from a misleading one. This knowledge is not mine to keep.
If you’re ready to move beyond simplistic analyses and embrace the rigor required to understand true causal impact, join my movement. Subscribe to my email list.
And if you’re trying to measure the impact of a time-based event and are worried about these pitfalls, book a 20-minute, no-nonsense consultation with me. Let’s stress-test your approach.


