Synthetic Controls & Beyond: Your Questions on Single-Case Causal Inference
- Maria Alice Maia

- Mar 31
- 3 min read
The response to last week's post on the Synthetic Control Method was incredible. The questions flooding my inbox and filling this past week's consultation calls tell me one thing: we're tired of guessing the ROI of our biggest bets, and we're ready for a more rigorous path forward.
I don't have time for jargon or overly academic language. My entire career, from scaling Alura to driving growth at Stone and Itaú, has been about translating complex ideas into tangible value.
Let's do that now. Here are my no-nonsense answers to your top questions.

Q: "This sounds great, but complex. What is the single most important thing I need to have before I can even thinkabout using Synthetic Controls?"
A: Data. But not just any data. You need a long pre-intervention time series. The entire credibility of a synthetic control rests on showing that it closely mimicked your company, state, or store for many years before your big intervention. A few quarters of data isn't enough. Without a long, stable pre-period to establish a convincing match, your "synthetic twin" is just a statistical ghost.
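If you want a quick gut check before going any further, here is a minimal Python sketch. The file name, intervention date, and three-year threshold are placeholders for your own data and judgment, not hard rules:

```python
import pandas as pd

# Hypothetical wide panel: one row per month, one column per unit
# (your unit plus every candidate donor). Values are the outcome you care about.
panel = pd.read_csv("monthly_revenue.csv", index_col="month", parse_dates=True)

intervention = pd.Timestamp("2024-01-01")        # placeholder launch date
pre_period = panel.loc[panel.index < intervention]

# Rule of thumb only: a few quarters is thin, several years is far more convincing.
print(f"Pre-intervention periods available: {len(pre_period)}")
if len(pre_period) < 36:                         # fewer than ~3 years of monthly data
    print("Warning: short pre-period; the synthetic twin may not be credible.")
```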
Q: "What if my company is a total outlier? Can we still create a 'synthetic twin' if we are already number one in the market?"
A: Excellent and critical question. No. The method is based on interpolation—it creates a weighted average of units in your "donor pool". If your treated unit's characteristics (e.g., pre-intervention sales) are outside the range of the control units, the model can't build a match. This is known as the "convex hull" requirement. If you are an extreme case, you cannot be synthesized from a combination of others. This is a crucial feasibility check you must do upfront.
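To make the "weighted average" idea concrete, here is a stripped-down sketch in Python with made-up numbers. The full method also matches on predictors, not just the outcome series, so treat this as the intuition rather than a production implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical pre-intervention outcomes (e.g., 8 quarters of sales, in R$ millions).
treated = np.array([10.0, 11.0, 12.5, 13.0, 14.2, 15.1, 16.0, 17.3])
donors = np.array([
    [ 9.0, 10.0, 11.5, 12.0, 13.0, 14.0, 15.0, 16.0],   # donor A
    [12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0],   # donor B
    [ 8.0,  8.5,  9.0,  9.5, 10.0, 10.5, 11.0, 11.5],   # donor C
])

# Quick interpolation check: in every period, the treated unit should sit inside
# the range spanned by the donors. If it is above (or below) every donor,
# no weighted average can reproduce it.
outside = (treated > donors.max(axis=0)) | (treated < donors.min(axis=0))
print("Periods where the treated unit falls outside the donor range:", outside.sum())

# Synthetic control weights: non-negative, summing to one, chosen to minimize
# the pre-intervention gap between the treated unit and its weighted donor average.
def pre_period_gap(w):
    return np.sum((treated - w @ donors) ** 2)

n = donors.shape[0]
result = minimize(
    pre_period_gap,
    x0=np.full(n, 1.0 / n),
    bounds=[(0.0, 1.0)] * n,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
weights = result.x
print("Donor weights:", np.round(weights, 3))
print("Pre-period RMSE:", np.sqrt(pre_period_gap(weights) / len(treated)))
```

If the "outside the donor range" count is large, that is your convex hull problem showing up before you have spent a single real on the analysis.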
Q: "What if the pre-intervention fit isn't perfect? Is the method useless?"
A: Not necessarily. A large pre-period discrepancy is a major red flag for bias, but recent extensions of the method can help: a bias correction uses a regression model to adjust for the small differences that remain between your unit and its synthetic twin. It’s a way to improve the estimate when the match is close but not quite perfect.
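For the curious, here is roughly what that correction looks like, in the spirit of regression-adjusted (bias-corrected) synthetic controls. Every number below is invented, and I use a single predictor with plain least squares to keep it readable:

```python
import numpy as np

# Hypothetical inputs: one predictor per unit (say, average pre-period sales),
# post-period outcomes, and donor weights from the fitting step.
X_treated = np.array([14.0])
X_donors = np.array([[12.5], [15.5], [9.8]])
y_treated_post = 21.0
y_donors_post = np.array([17.0, 20.5, 12.0])
weights = np.array([0.6, 0.3, 0.1])              # assumed output of the weight fit

# Fit a simple linear model of post-period outcomes on the predictor, donors only.
design = np.column_stack([np.ones(len(X_donors)), X_donors])
beta, *_ = np.linalg.lstsq(design, y_donors_post, rcond=None)

def predict(x):
    return np.concatenate([[1.0], x]) @ beta

# Plain synthetic-control gap.
naive_effect = y_treated_post - weights @ y_donors_post

# Subtract the part of the gap the regression attributes to leftover
# differences in predictors between the treated unit and its synthetic twin.
bias = predict(X_treated) - weights @ np.array([predict(x) for x in X_donors])
corrected_effect = naive_effect - bias

print(f"Naive effect: {naive_effect:.2f}, bias-corrected effect: {corrected_effect:.2f}")
```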
Q: "How do we know the results are real and not just a fluke of the model?"
A: You test it until it breaks. The two most powerful robustness checks are below; a rough code sketch of both follows the list:
In-Time Placebo Tests (Backdating): Move the intervention date back in time in your analysis (e.g., tell the model the change happened in 2022 when it really happened in 2024). If the model finds a fake "effect" in 2022, you know it's unreliable. If the gap only appears after the true 2024 date, you can be much more confident in your result.
Leave-One-Out Analysis: If your synthetic Germany is made of Austria, Japan, and the US, re-run the analysis three times, leaving each of those countries out of the donor pool one at a time. If the results remain consistent, they are robust. If one country's removal dramatically changes the outcome, your finding is fragile.
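Both checks boil down to re-running the same fit under different conditions. Here is a compact Python sketch on simulated data; the country names, dates, and the size of the "true" effect are all invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def fit_synthetic(treated_pre, donors_pre, donors_post):
    """Fit weights on the pre-period, return the synthetic post-period path.
    (Same constrained least-squares idea as the earlier sketch.)"""
    n = donors_pre.shape[0]
    res = minimize(
        lambda w: np.sum((treated_pre - w @ donors_pre) ** 2),
        x0=np.full(n, 1.0 / n),
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x @ donors_post

# Hypothetical monthly outcomes for "Germany" and three donors over four years.
periods = 48
true_start = 36          # real intervention at month 36 (e.g., Jan 2024)
fake_start = 24          # backdated "intervention" at month 24 (e.g., Jan 2022)
rng = np.random.default_rng(0)
donors = np.cumsum(rng.normal(1.0, 0.3, size=(3, periods)), axis=1)   # Austria, Japan, US
germany = donors.mean(axis=0) + np.where(np.arange(periods) >= true_start, 2.0, 0.0)

# 1) In-time placebo: pretend the change happened at the fake date. A real gap
#    should NOT show up between the fake and true dates.
placebo_gap = germany[fake_start:true_start] - fit_synthetic(
    germany[:fake_start], donors[:, :fake_start], donors[:, fake_start:true_start])
print("Placebo-period gap (should hover near zero):", np.round(placebo_gap.mean(), 2))

# 2) Leave-one-out: drop each donor in turn and see whether the estimated
#    post-intervention gap stays roughly the same.
for drop, name in enumerate(["Austria", "Japan", "US"]):
    keep = [i for i in range(3) if i != drop]
    gap = germany[true_start:] - fit_synthetic(
        germany[:true_start], donors[keep, :true_start], donors[keep, true_start:])
    print(f"Without {name}: average post-period gap = {gap.mean():.2f}")
```

If the placebo gap hovers near zero and the leave-one-out gaps all tell the same story, you have a result worth taking to the board.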
My goal is to demystify these powerful tools because this knowledge should not be locked away in academic journals. It belongs in the hands of leaders and builders. It's not mine to keep; it's a toolkit for us to share.
If you are ready to move from flimsy excuses to rigorous proof, join my private email list for more no-nonsense insights.
And if you have a complex, real-world case you’re wrestling with, let's connect. I've opened up a few more 20-minute consultation slots. Let’s find the ground truth, together.


