About Faculty
At Faculty, we transform organisational performance through safe, impactful and human-centric AI.
With more than a decade of experience, we provide over 350 global customers with software, bespoke AI consultancy, and Fellows from our award-winning Fellowship programme.
Our expert team brings together leaders from across government, academia and global tech giants to solve the biggest challenges in applied AI.
Should you join us, you’ll have the chance to work with, and learn from, some of the brilliant minds bringing frontier AI to the front lines of the real world.
About the Team
Faculty’s Research team conducts critical red teaming and builds evaluations for misuse capabilities in sensitive areas, such as CBRN, cybersecurity and international security, for several leading frontier model developers and national safety institutes. Notably, our work has been featured in OpenAI's system card for o1.
We also conduct fundamental technical research on mitigation strategies, publishing our findings at peer-reviewed conferences and delivering them to national security institutes. Complementing this, we design evaluations for model developers across broader safety-relevant fields, including the societal impacts of increasingly capable frontier models, demonstrating our expertise across the safety landscape.
About the Role
We are seeking a Senior Research Scientist to join our high-impact R&D team. You will lead novel research that advances scientific understanding and fuels our ambition to build safe AI systems. This is a crucial opportunity to join a small, high-agency team conducting vital red teaming and evaluations for frontier models in sensitive areas such as cybersecurity and national security. You'll shape the future of safe AI deployment in the real world.
What you'll be doing:
Owning and driving forward high-impact research themes in AI safety.
Contributing to the wider vision and development of Faculty’s AI safety research agenda.
Supporting Faculty’s positioning as a leader in AI safety through thought leadership and stakeholder engagement.
Shaping our research agenda by identifying impactful opportunities and balancing scientific and practical priorities.
Leading technical research within the AI safety space, from concept to publication.
Supporting the delivery of evaluations and red-teaming projects in high-risk domains, such as CBRN and cybersecurity, with government and commercial partners.
Who we're looking for:
You have a track record of conducting high-impact AI research, evidenced by top-tier academic publications or equivalent experience.
You bring proven experience in, or a clear passion for, applied AI safety, whether from labs, academia, or evaluation and red-teaming roles.
You possess deep domain knowledge in language models and generative AI model architectures, including fine-tuning techniques beyond API-level implementation.
You have practical machine learning experience, with a focus on areas such as robustness, explainability, or uncertainty estimation.
You are proficient with deep learning frameworks (PyTorch, TensorFlow, or similar) and familiar with the HuggingFace ecosystem or equivalent ML tooling.
You have demonstrable Python engineering experience, enabling you to build and support robust research projects.
You can conduct and oversee complex technical research projects, and you have excellent verbal and written communication skills.
What we can offer you:
The Faculty team is diverse and distinctive, and we all come from different personal, professional and organisational backgrounds. We all have one thing in common: we are driven by a deep intellectual curiosity that powers us forward each day.
Faculty is the professional challenge of a lifetime. You’ll be surrounded by an impressive group of brilliant minds working to achieve our collective goals.
Our consultants, product developers, business development specialists, operations professionals and more all bring something unique to Faculty, and you’ll learn something new from everyone you meet.