Our compliance policies
SAFEGUARDING AT ERGO NETWORK
At ERGO Network, we are committed to creating a safe, respectful and inclusive environment for everyone we work with, especially our members, partners, and the communities we serve.
Safeguarding is not only about protecting individuals from harm; it is also about ensuring their well-being. It’s about actively building trust, preventing abuse, and holding ourselves accountable to the people we aim to empower.
Safeguarding at ERGO Network means taking all reasonable steps to prevent any form of harm, abuse, exploitation or harassment in the context of our work. This includes harm to:
- Children and young people
- Roma and pro-Roma grassroots partners
- Staff, volunteers, and collaborators
- People in vulnerable situations or underrepresented groups
We promote safeguarding in every aspect of our work, from our secretariat and partner engagement to online communication and public events.
ERGO Network has adopted internal safeguarding policies in line with international standards. We expect all staff, board members, partners and project collaborators to uphold these principles and contribute to a culture of safety and mutual respect.
If you experience or witness inappropriate behaviour, abuse, or safeguarding concerns in connection with our work, please contact us confidentially at: c.sudbrock@ergonetwork.org
Artificial Intelligence (AI)
At ERGO Network, we recognise the growing role of artificial intelligence (AI) in shaping our societies, and we are committed to using it responsibly.
Our AI Policy sets clear rules for the safe, ethical and transparent use of AI tools, aligned with our mission and values and fully compliant with the EU AI Act.
This policy applies to all ERGO Network staff, consultants, volunteers and members who use AI in any part of our work, whether in operations, communications, research or events. It complements our other governance documents, including our Diversity & Inclusion Policy, Safeguarding Policy and Staff Working Regulations.
We use AI in ways that:
- Protect rights and empower Roma communities
- Promote transparency and accountability
- Challenge discrimination and bias
- Strengthen digital inclusion
- Follow the law and ethical standards
Our Guiding Principles
- Human-centred and rights-based: All AI use must uphold dignity, equality and inclusion, especially for Roma and other marginalised groups.
- Transparent: We will always be clear about when and how AI tools are used, both internally and externally.
- Accountable: AI use must be traceable. Everyone is responsible for their own use of AI, and oversight mechanisms are in place.
- Non-discriminatory: We reject any AI that reinforces antigypsyism, racism, sexism, ableism or other harmful biases.
- Safe and secure: We only use tools that meet EU safety standards, and we assess risks before adopting any new tool.
- Digitally inclusive: We commit to ensuring that AI does not harm or exclude Roma or other groups facing digital disadvantage.