Yvette Schmitter · Technology · 10 min read
What Just Happened?
2025 Week 7, AI Regulation, and the Paris Summit

The AI Regulation Showdown: EU’s Grand Design vs. USA’s State-by-State Scramble
The Cliff Notes
Two weeks ago, the EU dropped what might be the most comprehensive AI regulation blueprint we’ve ever seen - a 137-page masterpiece that categorizes AI systems into four risk levels: unacceptable risk (banned), high risk (heavily regulated), limited risk (transparency required), and minimal risk (carry on). Meanwhile, across the Atlantic, more than a dozen U.S. states are crafting their own AI regulations, focusing primarily on algorithmic discrimination.
Think of the EU’s approach as a meticulously planned family dinner - everyone has their assigned seat, food preferences accounted for, and clear expectations about appropriate dinner table behavior. The U.S. approach? It’s more like a potluck where every state brings their own dish - some arrive with elaborate casseroles, others with store-bought cookies, and a few just show up with paper plates. States like Colorado, Connecticut, and Texas are bringing their signature dishes, but no one’s quite sure if there’s gonna be trouble over the gravy.
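If you prefer code to casseroles, here’s a minimal sketch of the EU’s four-tier triage. The tier names track the Act’s categories, but the `AISystem` class, the keyword lists, and the matching logic are purely illustrative assumptions - a real classification follows the regulation’s annexes and legal analysis, not string matching.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "heavily regulated"
    LIMITED = "transparency required"
    MINIMAL = "carry on"

@dataclass
class AISystem:
    name: str
    use_case: str  # free-text description of what the system does

# Hypothetical keyword triage; a real assessment would follow the Act's
# annexes and legal review, not substring matching.
BANNED_USES = ("social scoring", "real-time biometric", "emotion recognition")
HIGH_RISK_USES = ("hiring", "credit", "medical", "law enforcement")

def classify(system: AISystem) -> RiskTier:
    use = system.use_case.lower()
    if any(term in use for term in BANNED_USES):
        return RiskTier.UNACCEPTABLE
    if any(term in use for term in HIGH_RISK_USES):
        return RiskTier.HIGH
    if "chatbot" in use or "generated content" in use:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify(AISystem("HireBot", "ranking hiring candidates")))  # RiskTier.HIGH
```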
The Plot Thickens
But guess what? While the EU is building a gated community with strict homeowners’ association rules, U.S. states are developing their own neighborhoods, each with their own unique character and guidelines. Some have security cameras on every corner, others have weekly neighborhood watches, and a few seem content with just locking their doors at night.
As shared in last week’s newsletter, the EU’s approach includes specific prohibitions on:
- Social scoring systems (because we’ve all seen enough dystopian movies)
- Real-time biometric surveillance (your face is yours to keep)
- Manipulative AI systems targeting vulnerable populations
- Emotion recognition in workplaces and educational institutions (your Resting Work Face is safe)
Meanwhile, U.S. state regulations primarily focus on:
- Algorithmic discrimination in automated decision systems
- Transparency requirements
- Risk management plans
- Impact assessments
And honestly, both approaches have their share of wishful thinking. The EU’s belief that every AI system can be neatly categorized is like expecting teenagers to keep their rooms organized year-round. And the U.S. states’ assumption that they can effectively regulate AI independently? That’s like trying to go tornado hunting on a bicycle, and no, it’s not electric.
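To see why the potluck gets expensive, here’s a back-of-the-napkin sketch of the bookkeeping a multi-state deployment implies. The state names are the real examples from above, but the obligation sets are illustrative placeholders, not summaries of any actual statute - the point is simply that operating everywhere means complying with the union of everything.

```python
# Hypothetical mapping of states to AI-law obligations; the labels echo
# the focus areas listed above, not actual statutory requirements.
STATE_OBLIGATIONS: dict[str, set[str]] = {
    "Colorado":    {"impact assessment", "risk management plan", "consumer notice"},
    "Connecticut": {"impact assessment", "transparency report"},
    "Texas":       {"transparency report", "discrimination audit"},
}

def obligations_for(footprint: list[str]) -> set[str]:
    """Union of obligations across every state where you deploy:
    in practice you end up satisfying the strictest combined set."""
    combined: set[str] = set()
    for state in footprint:
        combined |= STATE_OBLIGATIONS.get(state, set())
    return combined

# Operating in all three states means satisfying all five obligations.
print(sorted(obligations_for(["Colorado", "Connecticut", "Texas"])))
```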
The Ripple Effect
Let me tell you - this regulatory divergence isn’t just another news story to scroll past. It’s creating waves that will touch every single one of us, from the tech titans in Silicon Valley to your local business owner trying to automate their customer service.
Think of it this way: if the internet is our global neighborhood, we’re suddenly dealing with two very different sets of rules for keeping the peace. And folks, that matters because:
Global Business Impact: Companies are now like those children living between divorced parents with different household rules. They’ll need to juggle both sets of expectations, and we all know that’s neither simple nor inexpensive.
Innovation Implications: We’re essentially creating two different playgrounds - one with carefully padded surfaces and safety inspectors (EU), and another with varying equipment depending on which state you’re in (US). And when you give people a choice between one spot with rules and another where you can basically do what you want, yeah, we know where everybody is going to go.
Consumer Protection: Your digital rights will literally depend on your zip code - like having different return policies at different stores, except these policies govern your fundamental rights in the AI age. And that’s disastrous given the digital divide, because we know for a fact that there are zip codes in the U.S. that are food deserts and food swamps with subpar broadband infrastructure. This exacerbates an already bad situation and, in essence, deepens the divide.
Market Access: The companies that master this regulatory dance - and let’s be clear, it’s more complex than a TikTok dance challenge - will emerge as the leaders in the global AI market.
Your Next Move
Now, let’s get real about what you need to do, because sitting on the sidelines isn’t an option anymore. This is your moment to step into your power and take action.
For Business Leaders:
- Start mapping your AI systems - every detail matters
- Invest in compliance infrastructure now, because the only thing that ages well is wine
- Consider geographically segmented AI deployment strategies (yes, like Netflix, but for your AI systems - see the sketch just after this list)
- Build relationships with regulators
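Here’s a rough sketch of what that Netflix-style geographic segmentation could look like in code: a region-keyed feature gate consulted on every request. The region keys, feature names, and default-deny behavior are all hypothetical choices for illustration, not a reference implementation.

```python
# Hypothetical region-keyed feature gates; feature names loosely echo
# the prohibitions discussed above and are illustrative only.
REGION_FEATURES: dict[str, set[str]] = {
    "EU":    {"chatbot", "summarization"},                        # strictest tier
    "US-CO": {"chatbot", "summarization", "lead_scoring"},
    "US-TX": {"chatbot", "summarization", "lead_scoring", "sentiment"},
}

def is_enabled(region: str, feature: str) -> bool:
    """Gate each AI feature on the caller's region, defaulting closed
    for unmapped regions (safer than defaulting open)."""
    return feature in REGION_FEATURES.get(region, set())

assert is_enabled("US-TX", "sentiment")
assert not is_enabled("EU", "sentiment")   # emotion-adjacent: off in the EU
assert not is_enabled("BR", "chatbot")     # unmapped region: default deny
```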
For Technology Teams:
- Design AI systems with compliance in mind from the start. You’ve heard the woodworking saying: “measure twice, cut once.”
- Document everything
- Create flexible architectures that can adapt to different regulatory requirements (one possible pattern is sketched right after this list)
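For the technology teams, here’s a minimal sketch of one such flexible architecture: jurisdiction-specific compliance policies behind a common interface, so the core system never hard-codes any single regulator’s rules. The interface, class names, and checks are our own hypothetical design - one pattern among many, not a prescribed standard.

```python
from typing import Protocol

class CompliancePolicy(Protocol):
    """Jurisdiction-specific rules, swappable without touching core code."""
    def pre_deployment_checks(self, model_card: dict) -> list[str]: ...

class EUPolicy:
    def pre_deployment_checks(self, model_card: dict) -> list[str]:
        issues = []
        if not model_card.get("risk_tier"):
            issues.append("missing EU risk-tier classification")
        if not model_card.get("transparency_notice"):
            issues.append("missing user-facing transparency notice")
        return issues

class ColoradoPolicy:
    def pre_deployment_checks(self, model_card: dict) -> list[str]:
        return [] if model_card.get("impact_assessment") else ["missing impact assessment"]

def release_gate(policies: list[CompliancePolicy], model_card: dict) -> bool:
    """Deploy only if every active jurisdiction's checks come back clean."""
    return not any(p.pre_deployment_checks(model_card) for p in policies)

card = {"risk_tier": "limited", "transparency_notice": True, "impact_assessment": True}
print(release_gate([EUPolicy(), ColoradoPolicy()], card))  # True
```

Adding a new jurisdiction then means writing one new policy class, not rewiring the deployment pipeline - which is the whole point of designing for compliance from the start.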
For Policy Makers/Professionals:
- Engage in public consultations like you’re planning the community’s future, because you are
- Monitor regulatory developments
- Advocate for harmonization where possible, because nobody needs 50 different recipes for the same dish
For Everyone Else:
- Stay informed about AI regulations in your jurisdiction just like you follow your local weather forecast before heading out for the day
- Understand your rights under different regulatory frameworks - they’re as important as knowing your warranty rights or return policies at your favorite stores, but with much bigger implications
- Engage in public discussions about AI governance - your voice matters more than you think
The AI regulation landscape is shifting faster than fashion trends, and the time to prepare isn’t tomorrow - it’s right now. Whether you’re team EU “Everything in Its Place” or team US “Choose Your Own Adventure,” standing still isn’t an option. In the short term, companies will likely default to the most stringent regulatory framework just to avoid extra work. We just hope this doesn’t end up like another GDPR, where we all click through cookie-acceptance banners no matter where we are.
So, what’s your perspective on these regulatory approaches? Are you betting on the EU’s gated community approach or the U.S.’s neighborhood-by-neighborhood experiment? Share your thoughts below - because this conversation is too important to leave to the algorithms alone.
The Paris Paradox: Global AI Summit’s Unity and Division
The Cliff Notes
The Paris AI Action Summit marked a pivotal shift in the global AI governance conversation, moving from theoretical safety concerns to practical implementation strategies. France showcased its commitment by launching INESIA (French Institute for AI Evaluation and Security) - a national institute dedicated to evaluating AI systems, conducting safety research, and developing technical tools to mitigate AI-related risks. Led by the General Secretariat for Defense and National Security, INESIA joins a growing network of AI Safety Institutes now active in ten countries, forming a collaborative global network headquartered in San Francisco.
The Plot Thickens
While 60 countries signed the summit’s declaration pledging that AI development should be inclusive, open, ethical, and safe, notable abstentions from two major AI-developing nations - the United States and the United Kingdom - cast a shadow over the proceedings and underscored a deep disagreement about regulation vs. oversight. The UK cited national security concerns, while the U.S. warned that excessive regulation would stifle innovation. This divide exposed a fundamental rift in approaches to AI governance, with experts like David Leslie from The Alan Turing Institute noting that the declaration failed to adequately address “real-world risks and harms.”
The summit’s timing, which coincided with the launch of the Hiroshima Process Code of Conduct, exposed the complex dynamics of international AI collaboration. European nations, led by France, pushed for structured oversight through the AI Act, while the U.S. advocated for a more market-driven approach - creating tension in the global race for AI dominance.
The Ripple Effect
The summit’s outcomes reveal a deeply concerning and fragmented global approach to AI governance. Anthropic CEO Dario Amodei’s disappointment with the summit’s failure to address artificial general intelligence (AGI) risks, coupled with MIT physicist Max Tegmark’s criticism of the declaration’s weakness, suggests a growing, Grand Canyon-sized divide between industry experts’ concerns and governmental approaches. The establishment of INESIA and similar institutes worldwide creates a new network for AI safety evaluation, potentially setting regional rather than global standards. The French Language Model Leaderboard initiative further emphasizes this trend toward regional AI specialization.
The divide between signatories and non-signatories of the summit’s declaration may lead to two-speed AI governance: one track adhering to strict regulatory frameworks, another following a more deregulated, whoever-does-it-first-wins approach. This split could affect everything from AI development practices to international technology transfers and market access.
Your Next Move
For organizations, technologists, and individuals navigating this increasingly complex AI landscape:
- Monitor the evolving regulatory landscape, particularly the implementation of the EU’s AI Act and its potential conflicts with other regional approaches
- Assess how regional AI development and differing regulatory frameworks might impact your global operations
- Prepare for possible regulatory divergence between major AI markets
- Consider the implications of regional AI safety standards for your technology development and deployment strategies
The summit revealed that while the world agrees on AI’s transformative potential, there’s significant disagreement on how to harness it safely and effectively. For instance, without a shared understanding of terms like “sustainable” and “inclusive,” even well-intentioned declarations risk becoming performative, one-note platitudes. As the torch passes to India for the next summit, the global community faces the challenge of moving beyond words to meaningful action in AI governance. This discord may shape the future of AI development, creating both challenges and opportunities for those ready to navigate this complex landscape.
Leaders must:
- Take ownership of AI governance within their organizations, developing robust internal frameworks that prioritize safety, ethics, and inclusivity
- Actively monitor and adapt to the evolving regulatory landscape across different regions
- Invest in AI safety research and development, even in the absence of mandatory requirements
- Build cross-border partnerships that promote responsible AI development, transcending the current regulatory divide
- Cultivate an organizational culture that prioritizes ethical AI development and deployment
The message from Paris is unequivocal: waiting for perfect global consensus on AI regulation is not an option. Organizations must lead by example, implementing responsible AI practices that protect both innovation and human interests. In this critical moment of technological transformation, there is no middle ground – you’re either shaping the future of AI or risking being shaped by it.
The next chapter in AI governance will be written by those who act, not those who wait. As the focus shifts to India for the next summit, the global community must move beyond declarations to decisive action. The future of AI is being determined now, and everyone has a role to play in ensuring it serves humanity’s best interests.
The Bottom Line: The stark reality emerging from the Paris AI Action Summit is clear: in the absence of unified global regulation, the responsibility for ethical AI development falls squarely on organizations and leaders. The split between the 60 nations that signed the declaration and the U.S. and UK, which notably abstained, creates a critical inflection point in AI governance. Unlike cryptography, AI can’t be controlled by “simple” export-control legislation, and it is going to take the governments of the world time to get there. That’s time that companies simply do not have.
This is not merely a policy discussion – it’s a call to action. Organizations cannot afford to wait for regulatory frameworks to catch up with technological advancement. The pace of AI development, exemplified by breakthroughs like DeepSeek and the evolving landscape of AI safety concerns, demands immediate and proactive engagement.