AI Safety Research

Pioneering
Multi-Agent Safety

Advancing AI safety through rigorous research, threat assessment, and mitigation strategies that prioritize safeguarding humanity.

Our Mission

AI systems are rapidly becoming central to economic infrastructure and high-stakes decision-making.

As multi-agent AI systems interact in markets, supply chains, and critical services, they create complex emergent dynamics that are difficult to predict or control. Our research focuses on identifying and mitigating these systemic risks to ensure AI deployment remains safe and aligned with human values as these systems scale.

About Us

EuroSafeAI focuses on research in the areas of AI safety, security, and multi-agent systems.

We advance AI safety and security by developing risk assessments and mitigation strategies, targeting scenarios where AI systems may act contrary to developer intent. We value curiosity, ethics, and a proactive, responsible mindset.

Research Focus

Our Research Pillars

We focus on critical areas of AI safety to ensure advanced systems remain beneficial and aligned with human values.

Preventing AI Misuse

Keeping models from helping people do harm, even when prompted. Developing robust safeguards against malicious use.

Multi-Agent Safety

Ensuring groups of agents can safely interact with the real world and each other without unintended consequences.

Collaborations

Our Collaborators

We collaborate with leading research labs, governments, and foundations to advance AI safety on a global scale.

University of Toronto
ETH Zurich
Max Planck Institute for Intelligent Systems
UK AISI
University of Michigan
Schwartz Reisman Institute - University of Toronto

Join Our Mission

We're looking for talented researchers and professionals who are passionate about ensuring AI benefits humanity.