Job Description
- Developing post-training methods to ensure our models remain safe and capable in unfamiliar or adversarial scenarios.
- Developing classifiers and system-level safeguards to detect, monitor, or prevent misuse across priority surface areas in enterprise use cases.
- Evaluating and improving the safety of agentic models and products: developing threat models and environments to test for agentic risks, and developing and deploying mitigations for prompt injection attacks.
- Building evaluation tooling to monitor models' susceptibility to generating harmful content, and conducting research on automated red-teaming, adversarial robustness, and other methods that help test for or detect misuse.
- Investigating and responding to safety incident reports.
- Managing day-to-day execution of the team's work.
- Maintaining a sufficiently deep understanding of the technical stack to make targeted contributions as an individual contributor.
- Prioritizing the team's work and managing projects in a dynamic, fast-paced environment.
- Coaching and supporting your reports in understanding and pursuing their professional growth.
- Maintaining a deep understanding of the team's technical work and contributing technically yourself.
- 8+ years of experience in AI/ML research or engineering.
- Proven leadership in building and scaling AI teams or initiatives.
- Deep technical mastery of machine learning, deep learning, and AI systems, with hands-on experience in frontier model development.
- Proven experience leading high-performing teams of researchers or engineers, ideally in a safety-relevant domain.
- High proficiency in software engineering with Python.
- Hands-on experience with AI frameworks (e.g. PyTorch, JAX) or distributed systems (e.g. Ray, Kubernetes).
- A self-starter mindset: autonomous and a team player.
- Hands-on experience with distributed training of large transformer models.
- Experience in AI safety.
- Interdisciplinary expertise (ethics, policy, governance, or philosophy of technology) and the ability to bridge technical and non-technical stakeholders.
- A strong publication record in a relevant scientific domain (e.g. AI safety, alignment, interpretability).
Note that this is not an exhaustive or necessary list of requirements. Please consider applying if you believe you have the skills to contribute to Mistral's mission. We value profile and experience diversity.
Job Details
- Category: Research
- Employment Type: Full Time
- Location: Paris
- Posted: Mar 17, 2026, 07:57 AM
- Listed: Mar 17, 2026, 08:10 AM