Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we’ve worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.
We don’t chase hype cycles. We innovate, build and deploy responsible AI that moves the needle - and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence.
Our business and reputation are growing fast, and we’re always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology.
AI is an epoch-defining technology. Join a company where you’ll be empowered to envision its most powerful applications, and to make them happen.
About the Team
Faculty’s Research team conducts critical red teaming and builds evaluations for misuse capabilities in sensitive areas, such as CBRN, cybersecurity and international security, for several leading frontier model developers and national safety institutes; notably, our work has been featured in OpenAI's system card for o1.
Our commitment also extends to conducting fundamental technical research on mitigation strategies, with our findings published in peer-reviewed conferences and delivered to national security institutes. Complementing this, we design evaluations for model developers across broader safety-relevant fields, including the societal impacts of increasingly capable frontier models, showcasing our expertise across the safety landscape.
About the Role
This is a brand-new senior leadership role providing technical leadership of Faculty's AI safety work for the Frontier Labs - and it presents a unique opportunity to shape how AI safety is done globally.
Faculty is one of the world's leading applied AI companies, helping many of the organisations that shape our world to adopt AI successfully and safely. We play an important role in the emerging AI safety ecosystem. We already have many of the key Frontier Labs as clients, including OpenAI and Anthropic, for whom we provide third-party red teaming, technical testing and other AI safety services. We also work with the UK government and other international governments on AI safety, including helping to set up the AI Security Institute and delivering technical work that catalysed the first global AI Safety Summit at Bletchley Park in 2023.
With the recent announcement of Faculty's acquisition by Accenture, we are investing to take our work on AI safety to global scale, and this role will be key to shaping that. This will include:
The opportunity to hire and build a world-class AI safety technical team - of calibre unmatched outside of the Labs themselves
The opportunity to design and lead an AI safety R&D programme - creating the advances which will enable AI safety at scale to keep pace with model advances
The opportunity to scale our work with the Frontier Labs - helping to test and assure new frontier models ahead of public release
The opportunity to contribute to and shape the international debate on AI safety, including with governments and other key bodies, working closely with Marc Warner, Faculty's founder & CEO.
This role will suit someone with a deep passion for, and commitment to, AI safety, and represents a unique opportunity to contribute to this agenda globally.
What you'll be doing:
Owning the technical strategy for AI Safety by determining research directions and building technologies that mitigate risks ranging from misalignment to societal harms.
Leading a high-performing R&D team through intentional hiring, mentorship, and the cultivation of a culture defined by technical excellence and high output.
Driving academic impact by guiding complex machine learning projects and securing top-tier publications that cement Faculty’s reputation in the safety domain.
Shaping market-leading offerings for frontier labs and security institutes, translating cutting-edge R&D into practical, groundbreaking safety solutions.
Overseeing technical delivery of AI safety and security projects, ensuring scientific rigour and high-quality outputs across evaluations and red-teaming.
Representing Faculty externally as a primary technical voice, delivering influential thought leadership and speaking at major global industry events.
Collaborating cross-functionally with business unit directors and commercial teams to align research investment with strategic growth and client needs.
Who we're looking for:
You have a proven track record of designing and leading high-performing technical teams, with the ability to manage R&D budgets and mentor senior technical staff.
You bring deep expertise in AI safety research, specifically regarding alignment, interpretability, and robustness in large language models (LLMs) or safety-critical systems.
You possess a strong scientific background evidenced by high-impact machine learning publications and a comprehensive understanding of transformer architectures.
You are a strategic visionary capable of setting research priorities that align with long-term organisational goals while remaining at the cutting edge of field developments.
You are a compelling communicator who can synthesise complex technical concepts into narratives that influence both C-suite executives and the broader research community.
You exhibit strong commercial acumen and stakeholder management skills, allowing you to navigate complex organisations and accelerate the delivery of high-value projects.
Interview Process
Talent Team Screen (45 mins)
Principles and Experience interview (60 mins)
Research Proposal (90 mins)
Leadership Interview (60 mins)
Meet with CEO (30 mins)
Our Recruitment Ethos
We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We’re united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.
Some of our standout benefits:
Unlimited Annual Leave Policy
Private healthcare and dental
Enhanced parental leave
Family-Friendly Flexibility & Flexible working
Sanctus Coaching
Hybrid Working
If you don’t feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please don't hesitate to apply, as you might be right for this role or other roles. We are open to conversations about part-time hours.