Dear Distinguished Members of Congress,
We hope this letter finds you well--or at least finds you at all, unlike any concrete information surrounding the Department of Government Efficiency's new AI chatbot, GSAi.
We are researchers and practitioners in the field of AI ethics, bias, and security. We do not work for OpenAI, Anthropic, xAI, or any of the others vying for your attention. We monitor the state of the entire AI space, and we have no favorites. As such, we have significant concerns that must be addressed immediately. AI can be a tool for good--but making it so requires deliberate attention to detail. AI tools, left unchecked and without oversight, amplify training bias and reinforce poor-quality decision-making. In short, they can often harm as much as they help.
Government efficiency is a noble goal. However, the recent Wired article about DOGE's custom chatbot deployment to 1,500 federal workers raises more red flags than a Soviet parade. We are not "crying wolf" from a position of ideology.
AI is not a bell that is easily un-rung, and you've got one shot to do this right. If you don't, trust in emerging technology will wane and the US will fall farther behind our global competitors.
Specifically, we encourage you to ask these questions in an open hearing:
- Where exactly is GSAi running? Is it frolicking in some government-secured data center, or enjoying the hospitality of a third-party cloud service with questionable security practices, where a shared-responsibility model makes security a "you" problem if the right protocols are not in place? The digital residence of a system with potential access to sensitive government information seems like a detail worth clarifying. Notably, there have been four data breaches just this week impacting over half a million people, including Bank of America's notification of its data breach on 3/12/2025. Over 82% of security incidents are the result of misconfigurations and human error, a category that includes the denial-of-service vulnerabilities X displayed this same week. Read that again: data breaches and security incidents are overwhelmingly the product of misconfigurations and human error, which means the citizens of this country need assurances, and fast, that their data is being handled with the kid gloves it deserves.
- What information has been ingested by this AI system? Has it consumed classified documents, personal employee data, or sensitive policy discussions? Is it dining on a carefully curated dataset, or has it been allowed to gorge itself at the all-you-can-eat buffet of federal information? AI is not magic. Current AI systems produce likely answers based on their training. Unlike humans, who can imagine new wonders from whole cloth, an AI system generates output that is purely a function of its education. More worrisome, even state-of-the-art systems are known to confabulate and mislead under the guise of authoritative assistance. In many contexts they are also unable to avoid disclosing information on which they have been trained.
- How is that information biased? All data collections come with inherent biases, which makes understanding those biases critical. Based on the training data, we need to know: Which perspectives are overrepresented? Which are missing entirely? Who decided what this AI should "know" and what it should conveniently forget? The context of the information is critical to understanding an AI system's output and to monitoring its errors.
- Where are the human review checks and balances? Are employees trained to evaluate the GSAi system's output? Or are they told that the tool knows better? Is there a thoughtful oversight mechanism, or is this AI making recommendations that go straight to implementation without human judgment? The conversational manner of AI chatbots often makes them sound more believable than they are. Are we replacing bureaucratic inefficiency with algorithmic overconfidence?
A Path Forward: Ethical AI Implementation
The development of government AI systems demands a robust ethical framework built upon safety, privacy, and inclusivity. We urge Congress to mandate the following requirements for DOGE, GSAi, and all government AI initiatives:
- Governance Must Come First:
Has DOGE established clear policies addressing the adverse impacts of data collection and use? Are ethical concerns being addressed at every stage of the AI lifecycle? Without structured governance, we risk operationalizing existing biases into permanent features of our government.
- Transparency Cannot Be Optional:
Citizens of the USA deserve consistent transparency regarding what data DOGE's chatbot collects and how it is used. Can its algorithmic decisions be explained in non-technical terms to both government employees and the public? Opacity breeds mistrust, and AI decisions must be explainable, especially in a government context.
- Security & Safety Are Non-Negotiable:
Has DOGE prioritized data de-identification and anonymization? What third-party monitoring evaluates algorithmic decisions? Government AI must adhere to the highest standards of data protection while preventing discriminatory impacts on citizens. Does the data ingested by GSAi ignore cross-department firewalls that otherwise prevent unwarranted information sharing?
- Data Quality Is EVERYTHING:
Data quality determines AI quality. What measures ensure the data feeding DOGE's AI is accurate, complete, and timely? Has systematic testing identified potential errors and biases? Poor-quality data inevitably produces poor-quality tools, which in turn produce poor-quality governance.
- Training Is Critical:
Have all employees using this system received comprehensive ethics, bias, and fairness training? Do they understand when to trust--and when to question--AI recommendations? Do they know how to maximize use of the chatbot not only to be more efficient but to do their jobs better? Technology without proper training is a recipe for misuse that we cannot afford, and we can't have government staff simply saying, "well, that's what the computer said."
- Continuous Monitoring Is Essential:
What structured feedback loops exist to identify unintended consequences of this AI deployment? How frequently is the system evaluated against ethical standards? AI systems must be continuously monitored, with clear thresholds that trigger review when problems emerge.
When a system is designed to "help government employees be more efficient," we must ask: efficient at what, exactly? Efficiency without ethics is not progress. It is simply automating our mistakes at greater speed and scale. We cannot afford to amplify narratives that penalize people based on socioeconomic status, gender, sexual orientation, religion, zip code, or political affiliation.
The creation of an AI assistant for government work isn't inherently concerning. It's the lack of transparency around its development, deployment, and oversight that should give everyone pause. Technology moves quickly, but democratic oversight moves slowly by design. This gap cannot become a loophole through which accountability escapes and specific individuals use the data troves of the US government to train their algorithms for competitive advantage and capitalistic gain.
Take a moment to explore the power of humanity. We often think of ourselves as individuals separate from each other or in competition with each other. This competitive worldview clouds our perceptions and stifles some of our most wonderful human qualities.
Let this critical moment be the reminder we need to see the world through a new paradigm--one that allows our humanity to flourish, to be generous, and to be kind. Remember that each of us is human because of the humanity of others.
This is the concept of Ubuntu. Wholeness, compassion for life, and the very essence of what it means to be human: to know that you are bound to others in the bundle of life. In this case, we are ALL bound to emerging technology decisions that are being made with no transparency and no assurance that what is deployed considers the humanity of others.
We don't mean to dump this all on you. There are plenty of AI experts and researchers in the world, ourselves included, who can help. What we can say with absolute confidence is this: those who have billions at stake in the AI race cannot be relied upon to self-police. They'll be happy to tell you that regulation will just slow everything down, and we'll be happy to tell you that's exactly what needs to happen. Not unreasonably slow, mind you, but just slow enough that everyone understands the gravity of what we're working with before irreversible mistakes are made. Oversight is required at the highest levels. However it gets done, we urge each and every one of you to demand answers to these questions and establish comprehensive ethical AI standards before this digital DOGE fetches any more sticks of power that serve only DOGE. The efficiency of government matters far less than its effectiveness, fairness, and accountability to the people it serves.
With deep concern and a desperate hope that someone is actually reading this,
Yvette Schmitter & Blake Crawford
Ethical AI Researchers
Fusion Collective LLC