You asked to be forgotten. We deleted your data. But did our AI really forget you?
- Maria Alice Maia

- Aug 1
- 3 min read

This is not a philosophical question. It’s a ticking time bomb of liability and broken trust that most companies are sitting on. There’s a massive gap between what we promise customers about their “right to be forgotten” and what our technology can actually deliver.
Let’s call this what it is: "Doing Data Wrong" at the highest level.
We fall into two traps:
Kindergarten Data Thinking: We delete a user's row from a database and naively assume the AI trained on that data has developed amnesia. It hasn't. That user's patterns and influence are still baked into the model's DNA.
Tech-for-Tech’s Sake: We chase the holy grail of "Machine Unlearning," trying to force a model to surgically remove every trace of a user's influence, as if it had never seen their data at all. This is a technical and theoretical nightmare—incredibly expensive, nearly impossible to execute perfectly, and even harder to audit and prove.
Both paths lead to the same destination: a breach of trust and legal exposure.
There’s a better way. It’s about shifting our goal from perfect amnesia to pragmatic silence. The concept we need to embrace is Output Suppression.
Instead of trying to give our AI a lobotomy, we simply prevent it from ever speaking about or acting on the data of the user who wants to be forgotten. It's a targeted, auditable, and technically achievable solution.
Here's what this looks like in the real world:
The Scenario: A major travel company uses a sophisticated AI to personalize holiday package recommendations. A long-time customer who has booked dozens of trips invokes their right to erasure under the LGPD or GDPR.
The WRONG Way: The tech team deletes the user's profile and hopes for the best. But the AI, having learned from the user's extensive travel history (e.g., "people who book ski trips to Aspen also like summer trips to Patagonia"), continues to use those learned patterns to target other customers who look just like the one who left. The forgotten user’s data is still creating value, and their behavioral ghost haunts your campaigns. This is a clear violation of the spirit, if not the letter, of the law.
The RIGHT Way (with Output Suppression): The user's ID is added to an exclusion list, and the system enforces a hard rule: any recommendation generated from that user's data or learned patterns is suppressed before it is ever served. The AI is forbidden from acting on the memory of that user. It's not that the AI has forgotten; it's that it has been gagged. This we can prove. This we can audit. This is how we build trust.
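To make that concrete, here is a minimal sketch in Python. Everything in it is illustrative, not a reference implementation: the recommend() stub, the field names, and the in-memory exclusion set stand in for whatever your serving layer, feature store, and governance tooling actually look like. The one real assumption is that the serving layer can tag each recommendation with the user records it drew on, which is a design requirement in its own right.

```python
import json
import time

# Stand-in for the trained recommender. In reality this is your existing
# model-serving call; the model itself is never touched. The assumption here
# is that each recommendation carries the user records whose patterns it drew on.
def recommend(customer_id):
    return [
        {"package": "Patagonia Summer", "derived_from_users": ["user_123", "user_456"]},
        {"package": "Aspen Ski Week", "derived_from_users": ["user_789"]},
    ]

# Exclusion list of users who invoked their right to be forgotten.
# In production this lives in a governed, access-controlled store, not a set literal.
FORGOTTEN_USERS = {"user_123"}

def suppressed_recommendations(customer_id, audit_log_path="suppression_audit.jsonl"):
    """Serve only recommendations that do not draw on forgotten users' data,
    and append an audit record for every decision."""
    served, suppressed = [], []
    for rec in recommend(customer_id):
        if FORGOTTEN_USERS & set(rec["derived_from_users"]):
            suppressed.append(rec)  # the model still "remembers", but is not allowed to speak
        else:
            served.append(rec)

    # The audit trail is the point: what was gagged, for whom, and when.
    with open(audit_log_path, "a") as log:
        log.write(json.dumps({
            "timestamp": time.time(),
            "customer_id": customer_id,
            "served": served,
            "suppressed": suppressed,
        }) + "\n")

    return served
```

Notice what never happens: the model's weights are not retrained or edited. The gag sits in front of the model, which is exactly why it is cheap to run, simple to reason about, and easy to put in front of an auditor.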
So, what do we do now?
Managers & Leaders: Stop promising "perfect deletion." Start asking your tech teams a much smarter question: "How are we implementing Output Suppression, and can you show me the audit log that proves our AI is no longer using a forgotten user's data in any of its outputs?" This moves the goal from impossible purity to robust, defensible compliance.
Tech & Data Professionals: Let's stop chasing the ghost of perfect "unlearning." Our brilliance is better spent architecting elegant, reliable, and scalable Output Suppression systems. This isn’t a lesser technical challenge; it’s the right one. It solves the actual business problem: honoring user rights while maintaining operational integrity.
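And that audit question from the managers' section has an equally small answer, at least in sketch form. Assuming the same hypothetical JSONL audit log as the sketch above, a compliance check is just a scan for violations:

```python
import json

def verify_no_leakage(audit_log_path, forgotten_user_id):
    """Scan the suppression audit log and flag any served recommendation
    that drew on the forgotten user's data."""
    violations = []
    with open(audit_log_path) as log:
        for line in log:
            record = json.loads(line)
            for rec in record.get("served", []):
                if forgotten_user_id in rec.get("derived_from_users", []):
                    violations.append({
                        "timestamp": record["timestamp"],
                        "customer_id": record["customer_id"],
                        "package": rec["package"],
                    })
    return violations

# Example (once suppression_audit.jsonl exists):
# verify_no_leakage("suppression_audit.jsonl", "user_123")  -> an empty list is your evidence
```

Run on a schedule, a check like this turns "trust us, the AI forgot" into a report you can hand to legal.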
My entire career, from scaling major corporations to building a company from scratch, has taught me that the most elegant solutions are often the most pragmatic. This knowledge isn’t mine to keep. It’s for us to use, to fix what's broken, and to build data practices that are both powerful and responsible.
If you’re a leader or a builder who refuses to settle for "good enough" and wants to get this right, join my private email list. I'm sharing no-nonsense, research-backed insights to help us all unlock real data value. Let's build a community that doesn't just talk about data, but gets it right.


