Most decisions that shape our daily lives no longer happen in plain sight. They are made quietly, in the background, by systems we rarely see and barely understand. From policing to social media, algorithms are increasingly deciding who is visible, who is trusted – and who is treated as a risk.
For Roma communities, this shift is not neutral. It is a continuation of existing discrimination, now reinforced by technology.
What is algorithmic bias?
Algorithmic bias refers to the way automated systems sort, score and make predictions about people based on data that already reflects social inequalities. These systems are often presented as objective or neutral. In reality, they learn from historical data – and when that data contains patterns of racism, exclusion or discrimination, the technology reproduces those patterns.
This means that entire groups can be systematically categorised as more suspicious, less deserving or more likely to offend, not because of individual behaviour, but because of how they are represented in the data.
Algorithms do not make neutral decisions; they replicate the inequalities they are trained on.
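To make the mechanism concrete, here is a deliberately simplified sketch in Python, using invented numbers: a naive "risk score" built from historical stop records. The data is hypothetical, but it shows how a model can score one group as riskier simply because it was stopped more often, not because it offended more.

```python
# Toy illustration (hypothetical data): a "risk score" learned from
# historical stop records. The records reflect over-policing of one
# group, not actual offending, so the score inherits that bias.

from collections import Counter

# Historical stops: (group, whether anything was found). Group B was
# stopped four times as often, so it dominates the records even though
# the "hit rate" per stop is identical for both groups (10%).
historical_stops = (
    [("A", True)] * 5 + [("A", False)] * 45      # 50 stops of group A
    + [("B", True)] * 20 + [("B", False)] * 180  # 200 stops of group B
)

# A naive model: risk = each group's share of all past "hits".
hits = Counter(group for group, found in historical_stops if found)
total_hits = sum(hits.values())
risk_score = {group: hits[group] / total_hits for group in ("A", "B")}

print(risk_score)  # {'A': 0.2, 'B': 0.8}
# Both groups have the same 10% hit rate per stop, yet group B is
# scored as four times "riskier" - simply because it was stopped more.
# Acting on this score produces more stops of B, and the loop repeats.
```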
For Roma, this dynamic is particularly dangerous. Across Europe, Roma communities are already over-policed, over-controlled and under-protected. AI systems do not correct these imbalances – they risk entrenching them.
When “neutral” technology reproduces discrimination
A clear example comes from policing practices in the Netherlands. A predictive policing system was used to identify so-called “high-risk” pickpockets in shopping areas. It combined police records with live data sources, including WiFi tracking and number plate recognition, and flagged Eastern Europeans and Roma as likely offenders.
While such systems appear neutral on paper, in practice, they automate ethnic profiling.
Police officers are guided by algorithmic outputs that claim to identify “risk”. The result is that Roma drivers, workers and shoppers are stopped more frequently, fined more often and subjected to increased surveillance. Rather than preventing crime, these systems concentrate suspicion on communities that have historically been stigmatised, giving longstanding prejudices a new technological legitimacy.
What looks like innovation often functions as automation of discrimination.
Everyday life under algorithmic suspicion
The expansion of AI systems that claim to detect “suspicious behaviour” through facial analysis or movement tracking raises fundamental concerns.
For Roma, this is not an abstract technical debate. It directly affects the ability to move freely in public space without being monitored or flagged, to participate in demonstrations without fear, or simply to send children to a football match or a shopping centre without anxiety.
At stake are basic rights: freedom of movement, safety and equal participation in public life.
The hidden bias of social media algorithms
Algorithmic bias is also embedded in the digital public sphere. Research conducted by ERGO Network and the Roma Civil Monitor project highlights how social media platforms shape visibility and discourse.
Algorithms determine what content is promoted, what spreads and what remains unseen. Monitoring carried out under the TAAO project reveals a consistent pattern: content related to Roma is overwhelmingly negative or hostile. Even when Roma youth report violent hate speech, platforms rarely take action.
Out of 40 reported cases, only two – just 5% – resulted in a positive response. The vast majority of harmful content remained online.
This reflects another form of algorithmic bias. Not only does discriminatory content persist, it is also repeatedly amplified to wide audiences, while Roma voices, media and human rights work are marginalised.
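A deliberately simplified sketch, with invented posts and engagement figures, illustrates how this can happen without any explicit rule against Roma content: when a feed optimises only for reactions, hostile material rises on its own.

```python
# A minimal sketch (hypothetical posts and scores) of engagement-driven
# ranking: the feed simply sorts by predicted reactions. Hostile content
# tends to provoke more reactions, so it rises to the top without any
# rule that explicitly favours it.

posts = [
    # (text, predicted engagement: clicks + comments + shares)
    ("Hostile rumour about a Roma family", 950),   # outrage draws reactions
    ("Roma youth group wins education award", 120),
    ("Report on antigypsyism published", 80),
]

# Rank purely by engagement - the only objective the system optimises.
feed = sorted(posts, key=lambda p: p[1], reverse=True)

for text, score in feed:
    print(score, text)

# The hostile post is shown first and reaches the widest audience,
# while positive Roma stories sink. The bias emerges from the objective
# being optimised, not from any stated intent to discriminate.
```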
Young Roma involved in the monitoring described feeling desensitised, discouraged and increasingly reluctant to speak out, as reporting mechanisms appear ineffective and the online environment remains hostile.
When harmful content is amplified and ignored at the same time, bias becomes systemic.
A gendered dimension: Roma women and girls
Roma women and girls face compounded risks.
They are targeted through both racist and sexist narratives. Algorithmically amplified content about “welfare fraud”, “too many Roma children”, or hyper-sexualised depictions of Roma women contributes to a climate of harassment, abuse and threats – both online and offline.
At the same time, AI-based moderation systems often fail to detect antigypsyism, particularly when it is expressed through coded language, irony or imagery. As a result, harmful content frequently remains unchallenged.
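The sketch below, using placeholder tokens rather than real slurs, shows why a purely lexical filter catches only explicit terms: coded references and irony carry the same hostile meaning to readers but contain no blocked word.

```python
# A minimal sketch (hypothetical blocklist and example posts) of why
# keyword-based moderation misses coded language: only posts containing
# an exact blocked token are flagged, so euphemism and irony pass through.

BLOCKLIST = {"slur_a", "slur_b"}  # placeholder tokens for explicit terms

def naive_filter(text: str) -> bool:
    """Flag a post only if it contains an explicit blocked token."""
    tokens = text.lower().split()
    return any(token in BLOCKLIST for token in tokens)

posts = [
    "slur_a go home",                       # flagged: explicit token
    "you know what those people are like",  # missed: coded reference
    "such lovely neighbours, as always",    # missed: irony
]

for post in posts:
    print(naive_filter(post), "|", post)

# Output: True / False / False. The coded and ironic posts carry the
# same hostile meaning, but contain no blocked token, so a purely
# lexical system leaves them online.
```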
What needs to change
A human rights-based approach to AI requires several urgent steps.
Certain uses of AI should be prohibited altogether. Predictive policing systems that profile individuals or neighbourhoods, as well as biometric mass surveillance in public spaces, are fundamentally incompatible with the principle of non-discrimination. Civil society organisations, including Roma groups, have called for these practices to be banned in EU law and national frameworks, rather than merely classified as “high risk”.
Where AI systems are deployed, Roma and other racialised communities must be involved in their design, assessment and oversight. This includes the systems platforms use to rank content and moderate hate speech. While EU legislation such as the Digital Services Act and the AI Act includes references to fundamental rights safeguards, there remains a significant gap between these legal provisions and the lived experience of Roma users.
Improved evidence is also essential, but it must be gathered with strong safeguards. Many AI systems operate with limited transparency, and there is little disaggregated data on who is affected by automated decisions. At the same time, Roma communities have well-founded concerns about data collection. Monitoring efforts must therefore be developed in partnership with Roma communities, with clear protections to ensure that data is used to challenge discrimination rather than reinforce it.
A broader fight against structural racism
Algorithms do not create bias; they reproduce it.
When policing practices, welfare systems and media narratives are already shaped by antigypsyism, AI does not introduce neutrality. Instead, it risks deepening existing inequalities in decision-making processes.
Addressing algorithmic bias is therefore not a narrow technical issue, but part of a wider struggle against structural racism.
Nothing about Roma without Roma
The principle remains clear: nothing about Roma without Roma.
As digital technologies continue to shape public life, it is essential to ensure that they do not hard-code antigypsyism into policing, welfare systems, migration management or the online public sphere. Any human rights framework for AI must include Roma and other highly stigmatised communities as active partners, rather than treating them as an afterthought.