Investigative

Exporting Ethics: European Ethics Dumping Research Meets AI

By Carmen Gray

When European state-funded researchers conduct clinical trials or develop new technologies, they are bound by some of the world's strictest ethical and legal standards. But those protections often stop at the EU's borders, even if EU-funded research does not.

Personal healthcare data enjoys strong protection under EU law, through instruments such as the General Data Protection Regulation (GDPR), the Artificial Intelligence Act (AI Act), and the Medical Devices Regulation (MDR). These laws impose strict safeguards on data collection, but they apply only within the EU's borders, allowing studies conducted elsewhere, regardless of where the funding comes from, to escape them entirely.

The gap between Europe's internal rules and its external practices has enabled research that would face significant ethical barriers at home to proceed elsewhere. These practices are often justified as efficiency measures or framed as capacity-building, but in practice they are frequently driven by lower costs, faster approvals, access to large data pools, or, viewed more critically, the avoidance of regulatory constraints that would apply within Europe.

The phenomenon has a name: 'ethics dumping'. The term originated with an EU proposal in 2013, which defined it as the export of research practices that would not be accepted in Europe on ethical grounds. Whilst it can apply to any form of research conducted abroad, the term emerged from healthcare research, and many of its most prominent modern examples still occur in that field.

One example is research into avian flu outsourced to Indonesia (Sedyaningsih et al., 2008), which collected blood samples from Indonesian participants with no agreement to ensure Indonesian citizens would receive any resulting vaccines. Another is cervical cancer screening in India, where extensive clinical trials were conducted to test how cheaply screening could be done, despite India already having an established method of diagnosing the disease. The study used a control group of 141,000 women, none of whom received any screening for the cancer despite its prevalence in India. As a result, 254 women in this group went on to die of undetected cervical cancer (Srinivasan, Johari and Jesani, 2017). This particular case was funded by the WHO, an organisation in which all EU member states actively participate.

Though the term only came to light in 2013, the underlying phenomenon of wealthy regions exploiting the populations of disadvantaged ones for their own benefit carries clear neo-colonial connotations, and it is unsurprising that it has evolved alongside the globalisation of research.

These are just two of many examples, illustrating how EU member states and other international organisations have been willing to outsource the ethically fraught aspects of research, subjecting other populations to harms they would never impose on their own citizens.

As the practice became more widespread, initiatives were developed and funded to counter the harms associated with ethics dumping. The most prominent of these is the TRUST Project (https://trust-project.eu/), which aimed to promote more ethical research practices. Whilst undoubtedly a useful tool, the project offers recommendations rather than legally binding regulations, leaving room for researchers or institutions to ignore them or cherry-pick whichever are convenient.

However, as the modern world constantly evolves, so too does the nature of its misdoings. Ethics dumping, and our general understanding of it, has now encountered a new driving force: artificial intelligence.

A Swiss article from January, titled 'Accounting for EU external effects: from clinical trials to data colonialism to AI ethics dumping' (Kolfschooten, Parwani and Perehudoff, 2026), highlights the new and evolving phenomenon of European AI ethics dumping.

As healthcare research increasingly relies on artificial intelligence, new diagnostic models require vast amounts of personal and often private data in order to provide accurate, useful advice to users.

The use of AI in healthcare is not the target of this critique; in the right hands, these systems could help revolutionise healthcare, particularly in states or regions facing shortages of doctors and nurses.

Rather, the problem lies in how these new AI systems are trained: the volume of data they need grows by the day. The study highlights that, unlike traditional research, which analyses a fixed set of collected samples, AI models demand far larger sample sizes to learn to interpret data correctly. In healthcare, this often means sensitive patient records, genetic information, or diagnostic imaging collected from diverse populations. The combination of high demand and limited, tightly regulated availability creates pressure to source data wherever it can be found, and AI research is now surging in low- and middle-income countries (LMICs).

One example of this trend highlighted by the study is the recent surge of Big Tech companies establishing operations in LMICs under the banner of digital inclusion. In practice, these companies often serve as gateways to the ‘datafication’ of local populations, creating long-term dependencies on their platforms and services.

To get further insight, I spoke to Iain Styles, a professor of Computer Science at Queen's University Belfast whose research applies AI to biomedical science.

He noted the ethical dangers of relying on participants who have 'less control over their data', where restrictions are 'not as tight' as in the countries funding the studies. He raised numerous ethical concerns: such participants are unlikely to be paid a fair wage, may not know the full extent to which their data will be used, and are unlikely to benefit from its use. For this reason, he argues that these arrangements constitute 'unethical employment practices' and should give any study cause for concern.

Furthermore, he highlighted a subtler problem, one which is harder to research. Even with all ethical considerations accounted for, he argued, data collected to train an AI model may lead to unpredictable outcomes when that model is deployed in a different demographic or community. Because AI models can be 'sensitive to small changes', a model applied to a population different from the one whose data it was trained on may make less accurate predictions. AI systems trained on populations that differ from their users, Iain emphasised, are 'notoriously unreliable'.

The true danger of this in the context of AI and healthcare is 'the confidence of AI'. Noting that AI models are not consistently able to identify when they are being used out of context, Iain warned of the potentially dangerous consequences of medical diagnostic models being deployed on populations whose data they were never trained on, and of how this could harm the communities involved.
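This pairing of falling accuracy with unshaken confidence is straightforward to demonstrate. The sketch below is purely illustrative and not taken from the study or from Iain's work: it invents two synthetic patient 'populations' whose biomarker baselines differ, trains a simple classifier on one, and evaluates it on the other. On the shifted population, accuracy collapses toward chance, yet the model's self-reported confidence barely moves.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_population(n, mean_healthy, mean_ill):
    # Two synthetic biomarkers per patient; label 1 means disease present.
    healthy = rng.normal(mean_healthy, 1.0, size=(n, 2))
    ill = rng.normal(mean_ill, 1.0, size=(n, 2))
    X = np.vstack([healthy, ill])
    y = np.array([0] * n + [1] * n)
    return X, y

# Population A: the data the model is trained on.
X_a, y_a = make_population(5000, mean_healthy=0.0, mean_ill=2.0)

# Population B: the same disease, but biomarker baselines sit higher,
# as might happen across demographics, diets, or measuring equipment.
X_b, y_b = make_population(5000, mean_healthy=1.5, mean_ill=3.5)

model = LogisticRegression().fit(X_a, y_a)

for name, X, y in [("Population A", X_a, y_a),
                   ("Population B (shifted)", X_b, y_b)]:
    proba = model.predict_proba(X)
    accuracy = (proba.argmax(axis=1) == y).mean()
    confidence = proba.max(axis=1).mean()  # how sure the model claims to be
    print(f"{name}: accuracy {accuracy:.2f}, mean confidence {confidence:.2f}")
```

Run on the shifted population, the model keeps issuing confident predictions while misclassifying most of the healthy patients as ill; nothing in its output signals that it has left the territory it was trained on. In a deployed diagnostic tool, that silence is exactly the danger Iain describes.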

Taken together, these concerns illustrate how AI ethics dumping extends beyond questions of consent or fairness into deeper, systemic risks. When data is extracted from populations with fewer protections, the resulting AI systems may entrench inequality rather than reduce it, producing tools poorly suited to the communities they claim to serve while exposing those same communities to privacy breaches, exploitation, and medical harm.

As Iain himself noted, the true scale and nature of AI research is difficult to track. Through studies we can observe only some of the potential dangers to which vulnerable populations are subjected; much of the truth will inevitably be missed. Many of these injustices will go unnoticed and unreported, stifled by developed nations' pursuit of gaining more, and gaining quickly.

As a global leader in AI and research, an industry with the potential to benefit so many, Europe must not compromise its own ethical standards. If 'European innovation' is to remain credible as well as competitive, it must account for those on whose shoulders that innovation is built.


