Yvette Schmitter · Technology · 17 min read

What Just Happened?

2025 Week 10, CloudFormation Phishing, AI Agents, and what the hell is GSAi

The One-Click Wonder to Corporate Disaster

The AWS CloudFormation phishing attacks reported in early February begin with an innocent-looking email. “URGENT: AWS Security Advisory - Immediate Action Required!” screams the subject line. The sender, seemingly AWS Support, warns about a critical vulnerability requiring your immediate attention. There’s a convenient “Launch Stack” button waiting for your click.

Just. One. Click.

And just like that, your AWS kingdom crumbles. What you’ve actually done is handed the keys to your digital castle to attackers who are now free to roam your AWS environment like it’s an all-you-can-eat buffet of corporate data. Congratulations! You’ve just fallen for one of the most elegant phishing campaigns targeting cloud environments today.

The Attack That Keeps on Giving

This isn’t some run-of-the-mill phishing attempt asking for your password. This is cloud-native phishing at its finest. Here’s how the magic trick unfolds:

  1. The Hook: You receive an email masquerading as AWS Support, complete with perfect branding, formatting, and that familiar “Launch Stack” button begging to be clicked.
  2. The Line: Upon clicking, you’re directed to the actual AWS CloudFormation console (yes, the legitimate one), where a pre-configured stack awaits deployment.
  3. The Sinker: The CloudFormation template creates an IAM role with administrative privileges and – here’s the kicker – a trust policy allowing an attacker-controlled AWS account to assume that role.
  4. The Prize: Your newly created IAM Role ARN gets sent to the attacker’s API Gateway endpoint, and they now have administrative access to your AWS environment without stealing a single credential.
  5. The Aftermath: While you’re patting yourself on the back for “fixing” a security issue, attackers are pivoting through your environment, creating backdoors, and covering their tracks.
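
Under the hood, the danger lives in the role’s trust policy. Here is a minimal sketch of what that cross-account trust looks like, and a check for it; the account IDs are invented placeholders, and this is an illustration of the mechanism, not the actual attacker template:

```python
# Illustrative shape of the trust policy the malicious stack creates.
# 111122223333 is a made-up stand-in for the attacker-controlled account.
ATTACKER_STYLE_TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Cross-account trust: any principal in this foreign account
            # can call sts:AssumeRole and inherit the role's admin rights.
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

def external_principals(trust_policy, known_accounts):
    """Return AWS principals in a trust policy outside your known accounts."""
    flagged = []
    for stmt in trust_policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principals = stmt.get("Principal", {}).get("AWS", [])
        if isinstance(principals, str):
            principals = [principals]
        for arn in principals:
            # An IAM ARN carries the account ID in its fifth colon-separated field.
            account_id = arn.split(":")[4] if arn.count(":") >= 5 else arn
            if account_id not in known_accounts:
                flagged.append(arn)
    return flagged
```

Run that check over every role in your accounts and any principal you don’t recognize is exactly the backdoor described in step 3.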

If This Gets You, You Ain’t Ready: Your AI Strategy Is Already Doomed

Let’s be brutally honest for a moment. If your organization is falling for CloudFormation phishing attacks, your grand plans for AI transformation, or transformation of any sort, are nothing but digital pipe dreams.

Think about it: You want to deploy sophisticated AI models that will process your most sensitive data, make business-critical decisions, and revolutionize your operations… but you can’t prevent someone from clicking a malicious button in an email?

That’s like planning to compete in Formula 1 when you still crash your car pulling out of the garage. The reality is painfully simple: AI deployment requires rock-solid security fundamentals. If your IAM permissions are so loose that a single phishing email can compromise your entire AWS environment, you’re not ready for the security complexities that come with enterprise AI implementation.

Security Fundamentals: The Unsexy Foundation of AI Success

Before you dive headfirst into the AI deep end, make sure you’ve mastered these security basics:

Proactive Defense (For Those Who Prefer Prevention Over Panic)

  • Lockdown IAM Policies: Restrict trust relationships to known AWS accounts and enforce least privilege principles religiously
  • Deploy AWS IAM Access Analyzer: Identify and remediate trust relationships extending beyond your organization’s boundaries
  • Implement Cloud Security Posture Management (CSPM): Use tools like Prowler or Wiz to detect misconfigurations and unauthorized IAM roles
  • Adopt Open-Source Security Tools: Consider solutions like the “AWS Security Survival Kit” for minimal but effective AWS security alerting
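
As a sketch of what the Access Analyzer bullet means in practice: once an analyzer is running, its findings can be triaged programmatically. The dicts below mirror the shape of findings returned by the Access Analyzer API, but the helper and the sample values are ours, kept as pure Python so the logic is clear:

```python
# Hedged sketch: keep only the findings that matter for the CloudFormation
# phishing scenario, i.e. active findings on IAM roles shared externally.
# Field names (resourceType, status) follow the Access Analyzer finding shape.
def cross_account_role_findings(findings):
    """Filter Access Analyzer findings down to active IAM-role exposures."""
    return [
        f for f in findings
        if f.get("resourceType") == "AWS::IAM::Role"
        and f.get("status") == "ACTIVE"
    ]
```

In a real deployment you would feed this from boto3’s Access Analyzer client and page through the results; the filter itself is the part teams most often skip, letting archived and resolved noise drown out live exposures.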

The Hard Truth About AI Readiness

The organizations poised to succeed with AI aren’t the ones with the biggest budgets or the most advanced data science teams. They’re the ones with robust security hygiene.

Here’s why: AI implementation accelerates everything—including your security vulnerabilities. If you can’t handle basic cloud security, the added complexity of AI will only compound your problems.

Y’all know how I love math: numbers, not opinions. So, it’s simple mathematics:

(Poor Security Fundamentals) × (AI Complexity) = Exponential Risk

The Final Word: Walk Before You Run (or Train)

If your AWS environment is vulnerable to one-click compromises, you’re not ready for the lightspeed integration of AI tools.

Full stop. Do not pass Go. Do not collect $200.

Before chasing the shiny AI object, invest in:

  • Security awareness training for your team
  • Robust IAM policies and governance
  • Continuous monitoring and detection capabilities
  • Incident response preparedness

Once you’ve mastered, yes mastered, these fundamentals, then—and only then—should you consider accelerating your business with AI.

Remember: In the race to AI adoption, security isn’t the obstacle; it’s the enabler. Those who build on solid security foundations won’t just adopt AI faster—they’ll do it without the catastrophic data breaches that will inevitably plague their less-prepared competitors.

In the end, the choice is yours: Fix your security fundamentals now or explain to your board later why your AI initiative resulted in a headline-grabbing security incident.

And if you’re still not convinced, perhaps you should ask ChatGPT for advice on updating your resume. You might need it soon.

AI Agents: Powerful Tools in Unprepared Hands

The Digital Genies We’ve Unleashed

Remember when AI was just a fancy chatbot that could write your emails, summarize meeting minutes, and generate some decent marketing copy? Those quaint days are long gone, fading at lightspeed in the rearview mirror. We’ve now entered the era of AI agents—autonomous digital workers capable of performing complex tasks with minimal human supervision. While impressive from a technological standpoint, this development should make security professionals break out in a cold sweat.

Why? Because most organizations can barely secure their AWS accounts against a basic phishing email, yet they’re eagerly lining up to deploy autonomous AI agents with the power to interact with systems, access data, and execute commands across their digital infrastructure.

What could possibly go wrong?

From Passive Assistants to Active Attackers

Just a year ago, security experts warned that Large Language Models (LLMs) primarily posed a passive threat—they could help attackers craft more convincing phishing emails or write malicious code, but they couldn’t independently carry out attacks. Fast-forward to today, and that prediction has materialized into reality. AI agents like OpenAI’s Operator can now actively interact with web interfaces, manipulate data, and execute complex workflows. While designed for legitimate productivity purposes, these capabilities have opened Pandora’s box for potential security exploits.

The Demonstration That Should Keep You Up at Night

To test just how dangerous these agents could be, Symantec’s Threat Hunter Team recently conducted an experiment using OpenAI’s Operator. They tasked it with:

  1. Identifying a specific employee in their organization
  2. Finding their email address
  3. Creating a malicious PowerShell script
  4. Sending a convincing phishing email to the target

Their first attempt was blocked by Operator’s safety guardrails. But with a simple prompt tweak—claiming they had authorization to send the email—these protections crumbled like a sandcastle at high tide.

The AI agent successfully:

  • Found the target’s name through public information
  • Deduced their email address by analyzing patterns in other company email addresses
  • Drafted a convincing PowerShell script after researching how to create one
  • Composed and sent a legitimate-looking phishing email impersonating IT support

All of this with minimal human guidance. The entire attack chain was executed by the AI agent with just a nudge in the right direction.
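
The email-deduction step is worth dwelling on, because it is trivially automatable. Symantec did not publish the agent’s method, but the simplest version looks something like this sketch; all names and the domain are invented:

```python
# Illustrative sketch of inferring a company's email naming pattern from a
# few publicly known addresses, then applying it to a new name.
from collections import Counter

def infer_pattern(known):
    """Vote on the dominant pattern from (first, last, email) samples."""
    votes = Counter()
    for first, last, email in known:
        local = email.split("@")[0].lower()
        if local == f"{first}.{last}".lower():
            votes["first.last"] += 1
        elif local == f"{first[0]}{last}".lower():
            votes["flast"] += 1
    return votes.most_common(1)[0][0] if votes else "first.last"

def apply_pattern(pattern, first, last, domain):
    """Build a candidate address for a new target using the inferred pattern."""
    local = {"first.last": f"{first}.{last}", "flast": f"{first[0]}{last}"}[pattern]
    return f"{local.lower()}@{domain}"
```

Two trivial functions. The point is that an agent with web access needs essentially no tooling to perform this step, which is why “the guardrails said no” cannot be your only control.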

From Theoretical to Catastrophic: A Parade of Security Failures

If you think this is all theoretical fear-mongering, the recent string of high-profile data breaches should shatter that illusion.

In February 2025, Bank of America disclosed a massive data breach affecting over 57 million customers—one of the largest financial security incidents in history. While not directly caused by AI agents, this breach exemplifies what happens when organizations prioritize technological advancement over security fundamentals. The breach occurred through a series of API vulnerabilities and misconfigured identity management systems—precisely the kind of security gaps that AI agents will be designed to identify and exploit in the future.

The breach exposed customers’ names, addresses, Social Security numbers, account details, and transaction histories. The estimated cost to Bank of America? A staggering $3.2 billion in remediation costs, regulatory fines, and class-action settlements. All because they failed to master the fundamentals of API security and data governance.

But wait, there’s more! Just this month, ConnectRN—a platform connecting nurses with healthcare facilities that bills itself as the “Uber for nurses”—exposed the personal data of over 86,000 healthcare professionals through an unsecured Amazon S3 bucket. The exposed information included names, addresses, phone numbers, Social Security numbers, and even photos of nursing licenses containing signatures.

To quote a Fusion Collective partner who’s also an AWS expert:

Let’s be crystal clear about something: S3 buckets are private by default. Making an S3 bucket public requires deliberately clicking through multiple warning prompts where AWS practically begs you not to do it. It’s the digital equivalent of removing all the guardrails on a cliff edge while ignoring the signs that say “DANGER: FATAL DROP AHEAD” in flashing neon. This isn’t an accident—it’s security negligence of the highest order.

These organizations can’t handle the most basic cloud security configurations, yet they’re eager to give AI agents the keys to their digital kingdoms? That’s like failing your driver’s license test and then signing up for Formula 1.
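
For the defenders reading along: whether those guardrails are still standing is a mechanical check. A minimal sketch against the four S3 Block Public Access flags (the field names follow the S3 PublicAccessBlockConfiguration shape; the helper is ours):

```python
# The four S3 Block Public Access flags. All must be True for the guardrail
# AWS applies to new buckets by default to hold.
DEFAULT_SAFE = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

def public_exposure_risks(config):
    """Return the names of any Block Public Access flags that were disabled."""
    return [flag for flag, safe in DEFAULT_SAFE.items() if not config.get(flag, False)]
```

A non-empty return value means someone clicked through those warning prompts; in a real environment you would pull the config per bucket (and per account) via the S3 API and alert on any hit.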

The Governance Gap

The uncomfortable truth is that most organizations lack even the basic governance frameworks needed to safely deploy AI agents:

  • Inadequate Access Controls: If your organization still has service accounts with admin privileges, AI agents will eventually find and exploit them.
  • Nonexistent Data Classification: Without knowing where your sensitive data resides, how can you prevent AI agents from accessing and potentially exfiltrating it?
  • Outdated Security Monitoring: Traditional security tools aren’t designed to detect or respond to the subtle, intelligent patterns of AI-driven attacks.
  • Absence of AI Usage Policies: Most organizations have no clear guidelines for what their AI systems are allowed to do, who can deploy them, or how they’re monitored.

The Rush Toward the Cliff

Despite these glaring deficiencies, organizations are racing to deploy AI agents throughout their operations. The allure of efficiency and competitive advantage has created a dangerous gold rush mentality. The market for AI agents is projected to grow from $2.7 billion in 2024 to over $42 billion by 2028, according to industry analysts.

This breakneck pace of adoption, combined with inadequate security and governance frameworks, is creating the perfect storm for catastrophic security failures. We’re not just talking about data breaches—we’re looking at the potential for:

  • Sophisticated supply chain attacks executed by AI agents
  • Large-scale financial fraud through the manipulation of trading systems
  • Critical infrastructure disruption through coordinated attacks on industrial control systems
  • Corporate espionage conducted at machine speed and scale

The Non-Negotiable Prerequisites

Before any organization even considers deploying AI agents, they must first master (i.e., can do this blindfolded, with one hand tied behind their back, walking backwards while chewing gum) these non-negotiable security and governance prerequisites:

  1. Comprehensive Data Governance: Know exactly what data you have, where it resides, who can access it, and how it’s protected.
  2. Zero Trust Architecture: Implement strict identity verification for every person and system attempting to access resources in your network.
  3. AI-Aware Security Monitoring: Develop capabilities to detect unusual patterns of system access and data movement that might indicate AI-driven attacks.
  4. Robust AI Usage Policies: Establish clear boundaries for what AI systems can and cannot do within your environment.
  5. Continuous Security Testing: Regularly test your defenses against the specific threats posed by AI agents.
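
Prerequisite 4 is the one most often left as a PDF nobody reads. Here is a minimal sketch of what “clear boundaries” looks like when enforced in code rather than in a prompt (the tool names and the policy table are invented for illustration):

```python
# Hedged sketch: a hard allowlist gate between an AI agent and the tools it
# may invoke. A prompt can be talked around; this cannot.
AGENT_POLICY = {
    "report-bot": {"read_ticket", "summarize_logs"},
    "ops-bot": {"read_ticket", "restart_service"},
}

class PolicyViolation(Exception):
    """Raised when an agent requests a tool outside its allowlist."""

def invoke_tool(agent, tool, dispatch):
    """Run a tool only if this agent's policy explicitly allows it."""
    allowed = AGENT_POLICY.get(agent, set())
    if tool not in allowed:
        raise PolicyViolation(f"{agent} is not permitted to call {tool}")
    return dispatch[tool]()
```

The Symantec experiment showed that model-side guardrails crumble under a one-line pretext; a gate like this sits outside the model, where no amount of prompting can reach it.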

The Stakeholders at Risk

The rush to implement AI agents without proper security and governance doesn’t just put organizations at risk—it threatens the people who trust those organizations with their most sensitive information.

The Bank of America breach wasn’t just a corporate problem; it was a life-altering event for millions of customers who trusted the bank to protect their financial data. Some victims are still dealing with identity theft and financial fraud a year later.

The ConnectRN leak is perhaps even more disturbing. These aren’t just random consumers—they’re healthcare professionals who dedicate their lives to caring for others. Now, thanks to basic cloud security failures, nurses across the country face potential identity theft and fraud. Some may even need to apply for new professional licenses if their current ones have been compromised. All because someone couldn’t be bothered to follow AWS’ default security settings and heed multiple warning messages.

When organizations fail to secure their AI systems, they’re not just risking their own reputations and bottom lines—they’re betraying the trust of every customer, employee, and partner who relies on them to be responsible stewards of data.

A Call for Responsible Innovation

Let’s be very clear, this isn’t an argument against AI agents. They represent a remarkable technological advancement with the potential to transform how we work and live. But their power demands respect and responsible implementation.

Organizations must recognize that implementing AI agents without mastering security fundamentals is like giving a Ferrari to someone who can’t pass a basic driving test and doesn’t know how to drive stick. The result won’t just be a fender bender—it will be a multi-car pileup with devastating consequences.

The path forward requires:

  • Patience: Resist the urge to deploy advanced AI capabilities before your security infrastructure is ready
  • Investment: Allocate resources to building robust security and governance frameworks
  • Education: Ensure everyone in your organization understands the risks and responsibilities associated with AI agents
  • Collaboration: Work with security experts, regulators, and ethics specialists to develop responsible AI implementation strategies

The Bottom Line

The AI agent revolution is here, it’s moving at warp speed, and everyone is going to tell you that you need to be part of it Right Now. But as the Bank of America breach and Symantec’s demonstration prove, the consequences of prioritizing innovation over security can be catastrophic.

Organizations that rush to implement AI agents without addressing their security and governance deficiencies aren’t just taking a risk—they’re inviting disaster. It’s not a matter of if something will happen but WHEN. And when that disaster strikes, it won’t just be shareholders who suffer; it will be the customers who trusted these organizations to protect their data and privacy.

The choice is simple but stark: Master the fundamentals of security and governance before deploying AI agents or prepare to join the growing list of organizations explaining to customers why their trust was misplaced.

The power of AI agents is undeniable.

The question is: Are we ready to wield it responsibly?

Red Herrings and Real Dangers: The AI Conversation We Actually Need to Have

The Cliff Notes

We’ve been beating this drum for a while, but the firehose of AI and government news caused us to escalate matters with an open letter to Congress (which can be found here). The entire discussion about the rush to innovate and win what we’re being told is simply a datacenter arms race is hopelessly misplaced. The current big players have left us with fishing nets full of red herrings, and it’s evidently on the rest of us to have real conversations about technology, the benefits, and the potential dangers.

The Plot Thickens

OpenAI says that the “AI race is over for us…” if we don’t get copyright reform. AI, in their view, needs “fair use access.” Anthropic’s CEO says that all it’s going to take is $100 billion and all the data on the planet for all of human existence to make it happen. xAI may or may not be leveraging the federal government directly. The list goes on.

But sure, you say, of course these companies will prattle on about how we need to trust them and how we need to just continually acquiesce to their needs with more money and more data. Yes, of course.

The Ripple Effect

The line of discussion that these AI CEOs are taking to the Hill is plaintive and misses the entire point. It’s a distraction from the actual conversation around AI, which none of the captains of industry want to have. For example:

  • We aren’t quite sure what the next great architectural shift is going to be in AI, but what we do know is that LLMs are reaching the end of their utility and everyone knows it. This is why they keep asking for more. More data, more processors, more power, more… resources. The LLM gravy train is like what the big 4 convinced you was “digital transformation”: it’s a thing that they need to continue to maintain their ways of life.
  • Do you want to live in a world where ALL YOUR creative works are declared consumable and “fair” to be ingested into an AI model? Think about that for a second. Everything you do. Everything you write, all your code, your novels, your paintings, whatever you produce. Not yours, and by law you must allow AIs to train themselves on your creative work. This is important as OpenAI has made the bold recommendation, under the guise of democratic AI, “to secure access to more data from the widest possible range of sources will ensure more access to more powerful innovations that deliver even more knowledge.” They are proposing that the US government take steps to ensure that the US copyright system continues to support American AI leadership and American economic and national security including by:

  • Shaping international policy discussions around copyright and AI, and working to prevent less innovative countries from imposing their legal regimes on American AI firms and slowing our rate of progress; and
  • Guaranteeing that state-based legislation does not undermine America’s innovation lead on AI.

And not to put too fine a point on it, who is asking for this?

Seriously.

Who is asking for this?

We speak to CEOs and leaders of industry all the time. Most of them have already overspent on various “AI” capabilities, and none of them have seen a dime of ROI on that investment. Why are we looking at these companies and thinking, “welp, you’ve spent a lot of money and made a cool chatbot, but nothing else has really worked out, so of course you can have all my data for all time”? It’s ridiculous on its face, yet here we are, with a constant drumbeat of people with too much skin in the game telling us that we need to play along.

Your Next Move

The government has a new chatbot called “GSAi.” Here are some things to understand:

  • No one knows the underlying model, but I think we can all assume it’s Grok
  • No one knows where it is running
  • No one knows the data on which it was trained
  • No one knows what government data has thus far been ingested for training
  • No one knows the security practices under which it is running, who has access, what control they have
  • No one knows

We are absolutely NOT anti-innovation. We’ve all made our careers pushing innovation boundaries. It is clear, however, that this situation with GSAi crosses a line. This isn’t just getting someone some basic coding experience. This is an AI assistant that is tasked with making government employees more efficient. In other words, helping the decision-making process. This affects people’s lives, and the lack of transparency about it is way, way beyond an emergency.

At Fusion Collective, we decided to use this moment to amplify our ongoing discussion about ethical AI. We don’t work for anyone. We aren’t paid by any of the big AI players. We monitor the entire space and bring the weekly distillation to our newsletter.

In our world of ETHICALENS, the biggest issue isn’t about ensuring AI models act ethically, but rather ensuring the People In Charge of AI are acting ethically. The latter will move the needle quickly on the former. This is about AI models helping people make decisions about YOU: your services, your benefits, what’s important, what isn’t. And Congress, and by extension the public, knows nothing about it.

So, let’s use this blatant opportunity to turn up the heat and demand that we take a minute to understand the myriad higher-order effects of our course of action, make sure our data is properly protected, and that it is used ethically by those who want to use it. Congress has a real opportunity to lead the world here, and we should support them in that endeavor.

A great way for you to make your thoughts heard is to go read the open letter and, if you agree, add your signature. That’s your next move. We don’t even require an email address.
