Director, Forward Deployed Engineering
The Director of Forward Deployed Engineering will own the Forward Deployed Engineering program end-to-end: building the team, defining the operating model, and ensuring top strategic accounts feel prioritized. Responsibilities include hiring and managing a team of software engineers and managers deployed into strategic accounts; defining staffing models, engagement structures, and capacity allocation across accounts; developing specialist pods of engineers for new verticals; and setting and upholding quality standards for client deliverables, documentation, and knowledge transfer. The role also requires maintaining deep technical fluency to scope custom builds, unblock engineering decisions, and evaluate solution quality; overseeing the design and implementation of tailored workflows, retrieval systems, agent tools, and knowledge sources on Harvey's platform; and ensuring solutions are operationalized with evaluations, documentation, and user training. Additionally, the Director will identify patterns across client engagements to give product and engineering leadership specific insight into client needs and product opportunities.
Senior Program Manager, Infrastructure Strategy and Business Operations
Advance inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference. Implement and maintain changes in high-performance inference engines, including kernel backends, speculative decoding, and quantization. Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Design and operate RL and post-training pipelines, optimizing algorithms and systems for workloads where most of the cost is inference. Make RL and post-training workloads more efficient with inference-aware training loops and techniques for large-scale rollout collection and evaluation. Use these pipelines to train, evaluate, and iterate on frontier models built on the inference stack. Co-design algorithms and infrastructure that tightly couple objectives, rollout collection, and evaluation to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user-facing layers. Run ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, and feed insights back into model, RL, and system design. Profile, debug, and optimize inference and post-training services under real production workloads. Drive roadmap items requiring engine modifications, including kernels, memory layouts, scheduling logic, and APIs. Establish metrics, benchmarks, and experimentation frameworks to rigorously validate improvements. Provide technical leadership, set technical direction for cross-team efforts at the intersection of inference, RL, and post-training, and mentor engineers and researchers on full-stack ML systems work and performance engineering.
Manager, Forward Deployed Engineer (FDE), Life Sciences
As a Life Sciences FDE Manager, you will lead and grow a team of Forward Deployed Engineers delivering production AI systems in regulated life sciences environments, taking accountability for end-to-end delivery outcomes while balancing scope, speed, robustness, and risk. You will coach and develop engineers through direct feedback, maintain high technical standards, and set clear expectations for execution and ownership. The role requires operating as a player-coach: contributing directly to production systems while leading, coaching, and setting technical direction. You will guide teams through ambiguous, multi-workstream engagements involving data, workflows, infrastructure, security, and scientific stakeholders; run evaluation loops to measure model and system quality against workflow-specific scientific benchmarks; and convert results into clear roadmap input.
Senior Engineering Manager, Reinforcement Learning Environments (RLE)
The Senior Engineering Manager for the Reinforcement Learning Environments (RLE) team leads and grows a high-performing team of 8-9 engineers building reinforcement learning environments. This role involves managing, mentoring, and developing senior engineers and future engineering leaders. The manager partners closely with research, product, and operations teams to define the roadmap and execution priorities, drives the technical architecture for scalable, reliable, and extensible environment systems, and builds plug-and-play environments that integrate seamlessly with model training pipelines. The role balances platform rigor with operational complexity and data quality requirements, establishes engineering best practices around reliability, observability, and performance, and fosters a culture of ownership, velocity, and high technical standards.
Senior Manager
Lead transformational AI system implementations by scoping high-value solutions and navigating complex technical challenges alongside technical colleagues. Manage enterprise life sciences accounts, including oversight of pricing, contract negotiations, resourcing, and identification of strategic growth opportunities. Build deep trust with senior stakeholders in global enterprises by understanding how Frontier addresses their operational problems. Advocate for customer needs internally by providing product development teams with direct insights to refine and enhance the platform. Create scalable delivery assets such as playbooks and process improvements to empower external partners and internal teams. Collaborate across functions, including engineering, data science, and business development, to explore novel use cases and ensure seamless project coordination.
AI Implementations Manager
The AI Implementations Manager is responsible for the end-to-end delivery and stabilization of Ema's agentic AI solutions, spanning from design alignment through production rollout and steady state. This role involves ensuring solutions align with Ema's agentic architecture and platform capabilities. The manager must develop a deep understanding of customer business processes and constraints in order to translate business workflows into feasible agentic AI workflows. They provide delivery-focused technical oversight, anticipating potential implementation issues such as integration, data quality, scale, and edge cases. The manager serves as the primary delivery contact for customer business and IT stakeholders and coordinates across multiple internal teams, including Engineering, Product, Data, Infrastructure, and Value Engineering. They manage delivery under pressure, coaching stakeholders and teams through high-stress phases to reduce chaos. They communicate delivery progress, risks, and decisions clearly to all audiences, tracking success through adoption signals and outcome-adjacent metrics. Additionally, the role includes providing day-to-day delivery leadership and mentorship, promoting shared standards, clear ownership, and delivery discipline.
Technical Program Manager, Quality
Manage the end-to-end lifecycle of LLM projects, navigating the transition from research milestones to production-level deployments. Transform subjective user feedback into objective metrics and datasets. Design and implement technical evaluations to address issues found in the field and help integrate these evaluations into existing pipelines. Track internal and external feedback to ensure identified issues are followed through to resolution in subsequent iterations. Maintain the technical roadmap for voice-based capabilities, proactively identifying dependencies and resolving technical blockers across teams. Ensure the roadmap incorporates the work and constraints of all teams to deliver a cohesive user experience.
AI Deployment Manager
As an AI Deployment Manager, you will lead end-to-end AI deployments from kickoff to successful launch, owning project planning, timelines, execution, and delivery across customer implementations. You will act as a trusted partner to customers, helping translate business goals into successful AI deployments. You will deploy and operationalize AI models across Cresta's platform in partnership with internal teams, including rules-based models, summarization, generative knowledge assistance, and more. You will drive value realization, ensuring deployments deliver measurable results rather than just go-live dates. You will guide customers confidently through every phase of deployment, keeping momentum high and stakeholders aligned. You will collaborate closely with Solutions Engineering, Product, Customer Success, and Engineering teams. Additionally, you will anticipate risks, solve problems, and keep complex initiatives moving forward.
Manager, Forward Deployed Engineering
Lead and grow a team of Forward Deployed Engineers (FDE) delivering production systems with frontier models. Own end-to-end delivery outcomes through clarity, speed, tight coordination, and technical quality. Codify successful practices into tools, playbooks, and roadmap inputs to create leverage for OpenAI and the wider developer community. Identify early indicators in product behavior, customer environments, or delivery practices and raise them with urgency. Use judgment to distinguish which issues require action. Set a high performance bar for FDEs and support each person's growth through direct, actionable feedback. Define staffing and support models for field teams that can scale without added complexity.
Infrastructure Engineer
Help users discover and master the Dataiku platform through user training, office hours, demos, and ongoing consultative support. Analyze and investigate various kinds of data and machine learning applications across industries and use cases. Provide strategic input to customer and account teams to help our customers achieve success. Scope and co-develop production-level data science projects with our customers. Mentor and help educate data scientists and other customer team members to aid in their career development and growth.