Causal ML Unpacked: Your Questions on Bringing Rigor to AI
- Maria Alice Maia

- Jul 7
- 2 min read

Your ML model is telling you a beautiful story. The problem? It's probably fiction.
You have a model that predicts customer churn with 90% accuracy. Amazing. You have another that forecasts sales based on marketing spend. Fantastic.
Now for the hard question: What do you DO with that information?
This is where I see brilliant teams fall flat. They build a powerful predictive engine and then treat it like a causal oracle. They see that "discounts" are a top feature in their churn model, so they launch a massive discount campaign. The result? Margins get crushed, and churn barely budges.
Why? Because they confused prediction with causation. The model didn't tell you discounts cause customers to stay; it told you they are associated with staying. The "why" is a completely different, and infinitely more valuable, question.
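To make that concrete, here's a minimal, purely illustrative sketch: synthetic churn data where the retention team targets discounts at customers who already look at-risk. Every variable name and number below is made up for illustration; the point is the pattern, not the specifics.

```python
# Hypothetical sketch: why "discount is a top churn feature" does not mean
# "discounts cause retention". Synthetic data, made-up coefficients.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 20_000

# Hidden confounder: how at-risk a customer already is.
risk = rng.uniform(0, 1, n)

# The retention team targets discounts at the customers who look most at-risk.
discount = rng.binomial(1, 0.2 + 0.6 * risk)

# True data-generating process: the discount barely moves churn (-0.02),
# while underlying risk drives it almost entirely.
churn_prob = np.clip(0.1 + 0.6 * risk - 0.02 * discount, 0, 1)
churn = rng.binomial(1, churn_prob)

# The predictive model only sees the discount flag and a noisy proxy for risk.
proxy = risk + rng.normal(0, 0.5, n)
X = np.column_stack([discount, proxy])

model = GradientBoostingClassifier().fit(X, churn)
print("feature importances [discount, proxy]:", model.feature_importances_)

# Naive read: churn-rate gap between discounted and non-discounted customers.
naive_gap = churn[discount == 1].mean() - churn[discount == 0].mean()
print("naive association:", round(naive_gap, 3))  # sizable and positive
print("true causal effect of the discount:", -0.02)  # tiny and negative
```

The model happily treats the discount flag as informative, and the raw churn gap even points the wrong way, because in this toy world the discount is a marker of risk, not a lever that removes it. A predictive model has no way to tell you the difference; that's the causal question.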
This is the most dangerous form of "Doing Data Wrong" in the AI era. It's using high-tech tools to make low-rigor decisions. It’s like having a Formula 1 car and only driving it to the grocery store.
The intersection of Machine Learning and Causal Inference is the most critical frontier in data science today, and the confusion is palpable. So, let's clear the air.
I'm dedicating my next session to this. No slides, no long lecture. Just a raw, no-nonsense Q&A to tackle your real-world questions. Let's meet on June 10th at 10 a.m. PST / 1 p.m. EST. If you're already subscribed to our newsletter, you'll receive the link tomorrow night. If not, you can subscribe now to receive it. This time, it'll be a 90-minute call so we can try to tackle every question raised.
Causal ML Unpacked: Your Questions on Bringing Rigor to AI
I'm here to answer the questions that should be keeping you up at night:
- When is a standard ML model (like XGBoost) good enough, and when do I absolutely need a causal method (like Causal Forests)?
- My team uses SHAP values to explain our models. Isn't that enough to understand feature impact? (Spoiler: No, it's not.)
- What's the first, most practical step to introduce causal thinking into our ML workflow without boiling the ocean?
- How do I, as a manager, challenge my data science team to ensure their models can be used for strategic decision-making, not just prediction?
This is your chance to bring your toughest challenges. My knowledge is not mine to keep—it’s a tool for all of us to wield. Let's cut through the hype and get to the heart of what actually works.
Passionate about this? Join my email list for more research-backed insights to fix broken data practices. Let's build a community that refuses to settle for fiction. If you have a specific, real-world case you're wrestling with, schedule a 20-minute, no-nonsense consultation call with me.


