Is the EU AI Act Working? A Look at the Early Data on Innovation and Trust
- Maria Alice Maia

- Apr 14
- 4 min read
One year in, the central debate around the EU’s landmark AI Act rages on, framed by a simple but dangerously misleading question: is it a ‘brake’ on innovation or a ‘booster’ for trust?
This framing is a false dichotomy. It pits speed against safety, progress against prudence. From my experience building tech companies and leading data strategy at firms like Ambev and Itaú, I’ve learned that the most resilient innovations are not born from unchecked velocity. They are forged in environments of purposeful growth.
The question is not whether we should regulate, but how regulation shapes the kind of innovation we get. The early signals from Europe suggest the AI Act is not a brake. It is a filter, and it is working. It is actively redirecting the firehose of capital and talent away from innovation based on sheer volume and towards a far more valuable prize: innovation based on trust.

Europe's "Third Way": Regulation as a Market-Making Force
The global AI landscape has been defined by a bipolar race. On one side sits the US model: massive, venture-capital-fueled innovation in a largely permissive regulatory environment. On the other, China's state-driven, strategically directed approach. Europe, navigating this "Innovation Trilemma," has deliberately chosen a "third way."
This strategy rests on a single insight: in a world anxious about AI's disruptive power, trust is a competitive advantage. By establishing the world's first comprehensive legal framework for AI, Europe is not just managing risk; it is forging a market, an ecosystem where AI systems are verifiably safe, transparent, and ethical. If it succeeds, EU regulation can become a global benchmark for both governing and building the world's most transformative technology.
This isn't about slowing down. It's about building a different kind of engine. A strong framework prevents a race to the bottom in which citizens' data becomes raw material for opaque global algorithms we don't control, a model that turns nations into digital colonies. And it gives compliant companies a passport to the "gold standard" market for privacy and security, a passport that commands a premium: markets pay more for credibility and security, the core assets of the digital economy.
Early Indicators: A Shift in Corporate Behavior
While it is still early, we are seeing the first signs of this market taking shape. This is less about the quantity of new AI startups and more about the quality and nature of the activity.
First, the compliance and assurance sector is booming. The Big Four accounting firms are already developing AI audit services to verify the safety and effectiveness of AI systems. This is not a cost center; it's the birth of a new, high-value industry dedicated to certifying trust.
Second, we're seeing a strategic shift in corporate investment. The AI Act's risk-based approach, which runs from "unacceptable risk" systems that are banned outright to "high-risk" systems that demand rigorous oversight, is forcing companies to make conscious choices. They are no longer just asking "Can we build it?" but "Should we build it?" and "How do we build it to be compliant and trustworthy?" This redirects R&D spending towards robustness, fairness, and transparency.
Finally, the Act is influencing global flows of talent and investment. While the "talent gold rush" continues, firms like Nvidia are planning dozens of AI data centers across Europe and investing in local champions like France's Mistral AI. This is not happening in a vacuum; it's happening in the context of the world's clearest regulatory framework, which creates a stable and predictable environment for long-term investment.
The Real Challenge: Measuring What Matters
The ultimate test of the AI Act, however, cannot be measured in headlines or anecdotes. Its true impact lies in quantifiable changes in corporate and societal outcomes. The biggest challenge now is to move beyond punditry and into rigorous, causal analysis.
Here, Europe's regulatory strategy has created an unprecedented opportunity for researchers. The staggered, multi-year implementation of the AI Act is a research design feature. It has turned the entire continent into a massive policy sandbox, a series of natural experiments waiting to be analyzed.
We must use this opportunity to ask the hard questions and measure the answers with scientific discipline. For instance, what is the causal impact of a "high-risk" designation on a firm's innovation, measured not by press releases but by patent filings and R&D spending? We can answer this with quasi-experimental methods like difference-in-differences, comparing designated firms to a control group of similar firms in lower-risk sectors before and after the obligations take effect, as the sketch below illustrates.
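To make that concrete, here is a minimal difference-in-differences sketch in Python. Everything in it is a hypothetical placeholder: the 200-firm panel, the 2025 treatment date, and the -1.5 effect on patent filings are simulated for illustration, not drawn from any real EU dataset.

```python
# A minimal difference-in-differences sketch. All data below is simulated;
# firm counts, the treatment date, and the effect size are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Hypothetical firm-year panel: 200 firms observed 2022-2026, half of
# them carrying a "high-risk" designation under the AI Act.
firms = pd.DataFrame({"firm_id": np.arange(200),
                      "high_risk": rng.integers(0, 2, 200)})
years = pd.DataFrame({"year": np.arange(2022, 2027)})
panel = firms.merge(years, how="cross")

# Assume (hypothetically) the high-risk obligations bind from 2025 onward.
panel["post"] = (panel["year"] >= 2025).astype(int)

# Simulated outcome: annual patent filings with a firm-level baseline, a
# common time trend, and a -1.5 hit to designated firms once rules bind.
baseline = rng.normal(10.0, 2.0, 200)[panel["firm_id"]]
panel["patents"] = (baseline
                    + 0.5 * (panel["year"] - 2022)
                    - 1.5 * panel["high_risk"] * panel["post"]
                    + rng.normal(0.0, 1.0, len(panel)))

# Classic 2x2 DiD via OLS: the interaction coefficient estimates the
# causal effect of the designation, valid under parallel trends.
# Standard errors are clustered by firm for within-firm correlation.
fit = smf.ols("patents ~ high_risk * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["firm_id"]})
print(fit.params["high_risk:post"])  # should recover roughly -1.5
```

In a real study, one would first verify parallel pre-trends between the designated and control firms, and the staggered rollout Europe has chosen also invites event-study variants of the same design.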
This is the level of rigor we need. My entire professional and academic life has been dedicated to using data to find the ground truth. The knowledge I've gathered from places like FGV, UC Berkeley, and HEC Paris isn't mine to keep. It's a toolkit for solving real-world problems.
The EU AI Act is one of the most important socio-technical interventions of our lifetime. Let's stop debating what it might do and start rigorously measuring what it is doing.
If you're a leader who believes in moving from guesswork to evidence, I invite you to join my private email list for more no-nonsense, research-backed insights.
And if you have a complex, real-world case you’re wrestling with at the intersection of data, technology, and strategy, let's connect. I've opened up a few 20-minute consultation slots.
Let’s find the ground truth, together.


