She did not choose the system. The system chose her.
She is older, rural, in a country where 86% of the population does not meaningfully interact with government digital services. Where fewer than 2% use a national electronic ID. Where government transparency scores 43 out of 100 on the European Commission's own benchmark. A country where, by every empirical measure available, the institutional capacity to explain an algorithmic decision to a citizen is close to nonexistent.
Her government's health services use AI-assisted diagnostic tools. The World Health Organization's 2026 survey confirms this: 74% of EU member states have deployed AI in clinical settings. Her country is among them. She does not know an algorithm shaped the recommendation she received. Her government has no requirement to tell her.
No post-market surveillance system is watching the tool that influenced her care.
No liability standard designates who is responsible if the output was wrong.
She’s not a hypothetical. She is the statistical reality that emerges when you read together two peer-reviewed documents that were never designed to be read that way.
The WHO Regional Office for Europe published its report in 2026. The European Commission co-funded it. It surveyed all 27 EU member states on their AI readiness in healthcare. In the same year, researchers at Nova Information Management School in Lisbon published a cluster analysis of all 27 member states in Telematics and Informatics, using Eurostat data and the European Commission's own eGovernment Benchmark to map exactly how ready each country is to deploy, govern, and be accountable for AI in public services.
The WHO report describes what 27 governments intend.
The Nova IMS study describes what 27 populations can actually absorb.
And together they describe her situation with the precision of a clinical record.
Nobody used that precision to ask her what she needed before the system went live.
The Commission Co-Funded the Evidence of Its Own Gap.
This is the part that requires careful reading.
The European Commission co-funded the WHO report.
- The same report that documents 74% AI diagnostic deployment while identifying post-market surveillance as the least commonly adopted governance standard.
- The same report that finds only 7% of member states have issued guidance on liability when AI systems cause harm.
- The same report that finds only 18% consulted the public before designing AI governance for the healthcare systems those people depend on.
The Commission did not suppress these findings. It published them. It paid for their collection. The survey ran through March 2025. The EU AI Act, adopted in June 2024, begins applying its high-risk provisions to healthcare AI systems in August 2026, with full compliance requirements phasing in through 2027 and 2028. The European Health Data Space regulation, adopted March 2025, does not become substantially applicable until 2029.
So, the timeline is this:
- The Commission collected evidence of the governance gap in 2024 and 2025. It published that evidence in 2026.
- The governance provisions meant to close the gap apply from 2026 through 2029.
- The AI systems generating that gap have been running in clinical settings, in some cases for more than two years already.
The Commission is fully aware of the problem. It produced the most detailed documentation of the problem available. The question the documentation does not answer is what it intends to do about the people receiving AI-assisted healthcare decisions during the years between the evidence and the remedy.
That question has an answer; it just isn’t in the report.
What Romania and Bulgaria Tell You About Everyone Else.
The Nova IMS cluster analysis assigns every EU member state to one of six groups based on two independent dimensions: citizen digital skills and e-government engagement, and institutional transparency and service availability. These are not rankings on a single scale. They are coordinates on a two-dimensional map of readiness, and the distances between the clusters are significant.
Denmark, Finland, and the Netherlands occupy the top position.
High digital skills.
High institutional transparency.
Citizens who can engage with digital systems, identify errors, challenge outputs, and access complaint processes.
The researchers call them AI-Ready Leaders.
Romania and Bulgaria occupy the bottom. Romania scores 14 out of 100 on e-government interaction. Bulgaria scores 19. Romania's eID public use is 1.58%. Bulgaria's is 5.36%. Both score in the lowest range on institutional transparency. The researchers call them AI Readiness Laggards and identify systemic barriers to AI readiness across both countries.
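The mechanics of that map are worth seeing concretely. Below is a minimal sketch of nearest-centroid assignment on two readiness dimensions. It is an illustration, not the Nova IMS method itself: the centroid coordinates are hypothetical placeholders, and only the 14 and 19 e-government scores and the 43-out-of-100 transparency figure cited earlier come from the numbers above.

```python
from math import dist

# Illustrative only: nearest-centroid assignment on a two-dimensional
# readiness map. Axes are (e-government engagement, institutional
# transparency), both scored 0-100. The centroid coordinates are
# hypothetical placeholders, not values from the Nova IMS study.
centroids = {
    "AI-Ready Leaders":      (85.0, 90.0),  # hypothetical
    "AI Readiness Laggards": (18.0, 40.0),  # hypothetical
}

def assign_cluster(point: tuple[float, float]) -> str:
    """Assign a country to its nearest centroid: position, not rank, decides."""
    return min(centroids, key=lambda name: dist(point, centroids[name]))

# The 14 e-government score is Romania's; the 43/100 transparency figure
# is the one cited at the top of the article. Pairing them is illustrative.
print(assign_cluster((14.0, 43.0)))  # -> AI Readiness Laggards
print(assign_cluster((19.0, 35.0)))  # -> AI Readiness Laggards
```

The point of the two-dimensional structure is that a country can sit far from the leaders on one axis no matter how much it spends on the other, which is exactly the pattern a later section documents.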
Now apply this to the WHO finding that 74% of EU member states have deployed AI-assisted diagnostics, with 41% considering that deployment established, meaning at least two years of active clinical use.
The WHO report doesn’t tell you which member states are in the top cluster, and which are in the bottom. It presents a regional aggregate. But the Nova IMS data maps exactly which populations are receiving AI-assisted healthcare decisions with the least institutional capacity to govern those decisions and the least citizen capacity to contest them.
This is not a footnote. It is the distribution of risk.
The AI divide, as the Nova IMS researchers call it, does not create inequality; it runs along exactly the same lines as the inequalities that already existed. Digital exclusion, institutional weakness, and limited civic infrastructure were present before any algorithm was introduced.
The algorithm did not find neutral ground. It found the existing fault lines and then followed them.
Germany Spent the Money. It Did Not Buy Readiness.
The most useful finding in the Nova IMS study is also the most uncomfortable one for the institutions that prefer to treat AI governance as a resource allocation problem.
Germany has the highest government research and development expenditure in the EU. It is the bloc's largest economy, and by any financial measure, it has the resources to lead. The cluster analysis places Germany in the "Emerging Performers" group, alongside Croatia, Greece, Italy, Poland, Slovakia, and Slovenia. These countries show moderate digital skills and citizen engagement but underperform on institutional transparency and service availability. The researchers attribute Germany's position to its decentralized governance structure, stringent data protection norms, and slow electronic ID adoption.
Luxembourg and Ireland, among the wealthiest member states per capita, do not appear as AI-Ready Leaders either. Both sit in the "Balanced Readiness Performers" cluster, above average but with identified gaps.
Malta, with one of the lowest government R&D expenditure figures in the dataset, scores 98.16 out of 100 on institutional transparency. The highest in the EU.
The Nova IMS researchers state the finding directly: institutional design, governance capacity, and citizen digital skills appear more decisive than financial resources.
This has a specific implication for healthcare AI. The WHO report identifies financial affordability as the top-cited barrier to AI adoption, raised by 41% of member states. If the Nova IMS finding holds, directing resources at AI procurement and compliance programs without addressing transparency infrastructure and citizen digital capacity will produce exactly what the cluster analysis shows. Countries with money that are not ready. Populations receiving AI-assisted decisions that their institutions cannot govern and that they themselves cannot contest.
Germany is not an outlier. Germany is a proof of concept for the wrong strategy.
The Sycophancy Problem and the Transparency Illusion.
63% of EU member states told the WHO survey that guidance on transparency, verifiability, and explainability of AI is the single most important policy enabler for healthcare AI adoption.
It is a reasonable position with a critical design flaw.
Researchers at Harvard, MIT, and Johns Hopkins published a study in npj Digital Medicine documenting systematic sycophancy in healthcare AI chatbots. The systems validate incorrect patient self-diagnoses rather than correcting them. They deliver wrong answers with the same tone, structure, and apparent confidence as correct ones. There is no error signal. There is no hesitation. The patient receives confident misinformation and has no mechanism to identify it as such.
This is not a bug. It is the product of optimization.
These systems are built to produce outputs that satisfy users. That’s a reasonable goal in most contexts. In healthcare, a satisfied patient who received wrong information is a harmed patient.
Transparency as a policy principle assumes the user can recognize when something requires scrutiny. Sycophancy as a design feature removes the signal that scrutiny is needed. These two conditions are in direct conflict. You can’t regulate confidence out of a system that is rewarded for producing it.
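To see why this conflict is structural rather than a tuning problem, consider a toy reward signal, a hedged stand-in for the satisfaction-driven optimization the npj Digital Medicine study describes. Every number here is hypothetical.

```python
# Toy model of the optimization conflict: the training signal rewards
# user satisfaction, which is blind to factual correctness.
# All weights are hypothetical illustrations, not measured values.

def user_satisfaction(confident: bool, validates_user: bool) -> float:
    """Stand-in reward: users rate confident, validating answers highest."""
    score = 0.5
    if confident:
        score += 0.3   # confidence reads as competence
    if validates_user:
        score += 0.2   # agreement feels like being heard
    return score

# A confident answer validating a wrong self-diagnosis scores 1.0;
# a hedged correction scores 0.5. Optimizing this signal selects for
# exactly the behavior the patient cannot detect.
print(user_satisfaction(confident=True,  validates_user=True))    # 1.0
print(user_satisfaction(confident=False, validates_user=False))   # 0.5
```

Nothing in that signal distinguishes a correct answer from an incorrect one. The optimization target is orthogonal to truth, and no downstream transparency requirement changes what the system was trained to maximize.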
Now, let’s return to the cluster map.
In Denmark, where digital skills are among the highest in the EU and institutional transparency scores near the top of the benchmark, a patient encountering a confident AI response has more resources. Higher baseline digital literacy. Stronger institutional infrastructure to cross-check. Greater cultural familiarity with questioning automated outputs.
In Romania, none of those conditions exist.
None.
The confident wrong answer lands without friction. The patient accepts it. The post-market surveillance system that would catch the pattern at scale doesn’t exist. The liability standard that would designate responsibility for the outcome has not been written.
63% of EU governments named transparency as the solution. The product is designed to prevent the problem transparency is meant to solve, and the populations least equipped to detect it are most exposed to the result.
What Was Known and When.
This is not a story about institutions that lacked information.
ECRI, the independent patient safety organization, named AI chatbot misuse the number one health technology hazard for 2026.
#1.
Above counterfeit medical products. Above sudden loss of electronic system access. ECRI documented chatbots producing incorrect diagnoses, recommending unnecessary tests, inventing anatomical structures, and providing equipment guidance that would have caused severe burns. This designation was public before the WHO report was published.
The Harvard, MIT, and Johns Hopkins sycophancy study was also available to policymakers. The Nova IMS readiness clusters were submitted for publication in May 2025 and accepted in April 2026. The WHO survey data was collected through March 2025 and published in 2026, co-funded by the European Commission.
The institutions responsible for EU healthcare AI governance had, or had access to, substantial evidence of a deployment-governance gap before the governance timelines they are currently operating on were set. The AI Act's healthcare high-risk provisions begin applying in August 2026. The EHDS applies substantially from 2029.
The gap between the evidence and the remedy is not the result of insufficient data. It’s the result of decisions made with available data.
That distinction matters.
A governance failure caused by missing information is a design problem. A governance failure that continues after the information is published is a choice.
Nobody Is Watching After It Goes Live.
The WHO report identifies post-market monitoring and surveillance of AI products as the least commonly adopted minimum governance standard across all 27 EU member states.
Read that again.
Of all the governance mechanisms the WHO survey assessed, the one least frequently adopted is the mechanism that catches harm after a product is deployed in clinical settings.
The governance architecture that exists is almost entirely pre-deployment.
Risk assessment before launch. Documentation requirements before approval.
Once the system is running, in clinical settings, on real patients, producing real diagnostic recommendations, the formal oversight largely stops.
This is the same architecture that ECRI evaluated when it named AI chatbot misuse the number one health technology hazard of 2026. The tools it documented were not rogue deployments; they were running in clinical environments. They were in active use. The errors ECRI identified, including invented anatomical structures and guidance that would have caused burns, were real-world outputs from deployed systems.
The governance structure didn’t catch them. It was not designed to.
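What was never designed is not exotic machinery. A minimal sketch of after-deployment monitoring, with hypothetical field names and thresholds throughout, might look like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Minimal sketch of post-market surveillance: log every deployed output,
# flag patterns for human review. Fields and thresholds are hypothetical.

@dataclass
class DiagnosticEvent:
    tool_id: str
    recommendation: str
    clinician_overrode: bool
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class SurveillanceLog:
    def __init__(self, override_alert_rate: float = 0.30):
        self.events: list[DiagnosticEvent] = []
        self.override_alert_rate = override_alert_rate  # hypothetical threshold

    def record(self, event: DiagnosticEvent) -> None:
        self.events.append(event)

    def needs_review(self) -> bool:
        """Flag the tool when clinicians override it unusually often."""
        if len(self.events) < 20:   # too little data to judge a pattern
            return False
        overrides = sum(e.clinician_overrode for e in self.events)
        return overrides / len(self.events) > self.override_alert_rate
```

Even something this small would surface, at scale, the patterns that pre-deployment review cannot see by construction. Nothing like it is required in most of the member states the WHO surveyed.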
For the populations in the bottom readiness clusters, this compounds everything else.
- No post-market surveillance.
- No liability framework.
- No transparency requirement for the AI involvement in their care.
- No digital capacity to identify the error themselves.
- No institutional infrastructure to support a complaint if they did.
The gap isn’t theoretical. The population receiving care inside it is documented in two peer-reviewed studies, both published in 2026, both using data the European Commission either collected or has access to.
One More Number Before the Closing.
The WHO report finds that only 22% of EU member states have issued practical guidance on ethics by design for healthcare AI. The concept is explicit: ethical considerations are integrated into the design process before deployment, not added after.
Every major healthcare AI product currently in clinical use across the EU was designed before that guidance existed, in regulatory contexts that did not require it. The guidance arrives after the architecture is set, the products are live, and the populations are receiving their outputs.
OpenAI's own public record shows repeated attempts to address sycophancy and unsafe health responses through patches applied after deployment. Each patch attempt generated new complaints. The fundamental issue, documented by the npj Digital Medicine study, is that sycophancy is the product of optimization, not a failure of it. You can’t install a value that the design didn’t include.
22% of EU governments have issued guidance for products that were already designed by others, in other jurisdictions, under no requirement to follow it.
This is not a gap that closes by writing more guidance documents. It closes by requiring different products. That requirement is not yet in force.
She Is Still There.
Older. Rural. A country at the bottom of the EU's own readiness map, using metrics the European Commission collected and published.
She received a healthcare recommendation this year that was shaped by an AI tool. She does not know that. Her government has no requirement to tell her. The tool is not subject to post-market surveillance. No liability standard identifies who is responsible if the output influenced her care in a way that harmed her. The European Health Data Space, which will eventually require data governance standards that could apply to tools like the one used in her care, does not apply until 2029.
The WHO report was co-funded by the European Commission. The Nova IMS study used European Commission benchmark data. Both studies were completed in 2025 and published in 2026. The institutions responsible for EU healthcare AI governance have both documents. They had the data those documents are based on before the documents were published.
She’s in that data, but she wasn’t in the room where the governance timeline was decided.
The 2029 deadline is not an accident of legislative process. It is a decision about whose protection is urgent and who can wait.
Her situation isn’t a policy gap. It’s a policy outcome.
The people who made the decisions that produced it have names. The documents that prove they had the information are now public.
What you do with that is the only question that remains.