Research Engineer, Monetization
As a Research Engineer in OpenAI's Monetization Group, you will design and deploy advanced machine learning models to solve real-world problems, bringing research from concept to implementation and creating AI-driven applications with direct impact. You will collaborate closely with researchers, software engineers, and product managers to understand complex business challenges and deliver AI-powered solutions. Your work includes implementing scalable data pipelines, optimizing models for performance and accuracy to ensure they are production-ready, and monitoring and maintaining deployed models to ensure they continue delivering value. You will stay ahead of developments in machine learning and AI by engaging with the latest research, participate in code reviews, share knowledge, and lead by example to maintain high-quality engineering practices.
Technical Product Manager, AI
Lead a cross-functional team including engineers, designers, data scientists, and researchers to develop generative AI-enabled solutions for external riders and internal operations. Drive discovery into unmet needs, shape product vision, define priorities to achieve customer and business objectives, establish success metrics, and explore technical feasibility. Work closely with leadership across Product & Experience, Software, and Vehicle Engineering to implement AI solutions for the ride-hail service. Design generative AI capabilities to enhance the consumer experience, utilize data and market insights to guide product strategies, integrate user research into product requirements, oversee planning and management of tools and product scalability, collaborate with engineers and designers, coordinate cross-functional teams to meet milestones, lead the creation and launch of generative AI products, and develop and analyze performance metrics to gauge product success.
Staff Research Engineer, Voice
As a Staff Voice Research Engineer, you will lead the development of models and algorithms powering Decagon's real-time voice agents and manage multi-quarter initiatives to improve speech understanding, naturalness, turn-taking, and resilience in real-world conditions. Responsibilities include leading research and engineering efforts to enhance core conversational capabilities such as instruction following, retrieval, memory, and long-horizon task completion; building and iterating on end-to-end models and pipelines to optimize quality, efficiency, and user experience; partnering with platform and product engineers to integrate new models into production systems; breaking down ambiguous research ideas into clear, iterative milestones and roadmaps; mentoring other researchers and engineers; setting technical direction; and establishing best practices for applied research and engineering.
Staff Research Engineer
As a Staff Research Engineer at Decagon, you will be responsible for building industry-leading conversational AI models that power Decagon’s agent, taking them all the way from idea to production. You will own multi-quarter initiatives that enhance the agent’s reliability, capability, and efficiency. Your responsibilities include leading research and engineering efforts to improve core conversational capabilities in production such as instruction following, retrieval, memory, and long-horizon task completion. You will build and iterate on end-to-end models and pipelines optimizing for quality, efficiency, and user experience. Additionally, you will partner with platform and product engineers to integrate new models into production systems. You will break down ambiguous research ideas into clear, iterative milestones and roadmaps. You will also mentor other researchers and engineers, set technical direction, and establish best practices for applied research and engineering.
Production Engineer - Maritime
The role involves developing machine learning and artificial intelligence systems by leveraging and extending state-of-the-art methods and architectures, designing experiments, and conducting benchmarks to evaluate and improve AI performance in real-world scenarios. The candidate will participate in impactful projects and collaborate with colleagues across multiple teams and backgrounds to integrate cutting-edge ML/AI into production systems. Responsibilities also include ensuring AI software is deployed to production with proper testing, quality assurance, and monitoring.
Forward Deployed AI Engineering Manager, Enterprise
Translate research into product by working with client-side researchers on post-training, evaluations, and safety/alignment, and by building the primitives, data, and tooling needed. Partner deeply with core customers and frontier labs, working hands-on with leading AI teams and frontier research labs to tackle hard, open-ended technical problems related to frontier model improvement, performance, and deployment. Shape and propose model improvement work by translating customer and research objectives into clear, technically rigorous proposals, including scoping post-training, evaluation, and safety work into well-defined statements of work and execution plans. Collaborate with customer-side researchers on post-training, evaluations, and alignment to design data, primitives, and tooling required to improve frontier models in practice. Own the end-to-end lifecycle of projects including leading discovery, writing PRDs and technical specs, prioritizing trade-offs, running experiments, shipping initial solutions, and scaling successful pilots into durable, repeatable offerings. Lead complex, high-stakes engagements by independently running technical working sessions with senior customer stakeholders, defining success metrics, surfacing risks early, and driving programs to measurable outcomes. Partner across Scale with research, platform, operations, security, and finance teams to deliver reliable, production-grade results for demanding customers. Build evaluation rigor at the frontier by designing and standing up robust evaluation frameworks, closing the loop with data quality and feedback, and sharing learnings that elevate technical execution across accounts.
Senior Software Engineer, ML Core
Design, develop, and deploy custom and off-the-shelf ML libraries and tooling to improve ML development, training, and deployment and to reduce on-vehicle model inference latency. Build tooling and establish development best practices to manage and upgrade foundational libraries such as the Nvidia driver, PyTorch, and TensorRT, improving the ML developer experience and expediting debugging efforts. Collaborate closely with cross-functional teams including applied ML research, high-performance compute, advanced hardware engineering, and data science to define requirements and align on architectural decisions. Work across multiple ML teams within Zoox, supporting on- and off-vehicle ML use cases and coordinating to meet the needs of vehicle and ML teams to reduce the time from ideation to productionization of AI innovations.
Software Engineer - Embedded NixOS
You will develop ML/AI systems that leverage and extend the latest state-of-the-art methods and architectures, design experiments and conduct benchmarks to evaluate and improve their performance in real-world scenarios, work on impactful projects, and collaborate with people across several teams and backgrounds to integrate cutting-edge ML/AI into production systems.
Senior Software Engineer, Agentic Data Products
The role involves translating AI research into product solutions by working with client-side researchers on post-training, evaluations, safety, and alignment, building necessary primitives, data, and tooling. The candidate will partner closely with core customers and frontier labs to address complex technical problems related to model improvement, performance, and deployment. They are responsible for shaping and proposing model improvement work by translating customer and research objectives into technically rigorous proposals, scoping work into defined statements of work and execution plans. The role requires owning the end-to-end lifecycle of projects including leading discovery, writing PRDs and technical specifications, prioritizing trade-offs, running experiments, shipping initial solutions, and scaling pilots into repeatable offerings. The candidate will lead high-stakes engagements with senior stakeholders, define success metrics, identify risks early, and drive programs to measurable outcomes. They will collaborate across teams including research, platform, operations, security, and finance to deliver reliable, production-grade solutions. Additionally, the position involves designing and implementing robust evaluation frameworks, closing feedback loops on data quality, sharing learnings, and elevating technical execution across accounts.
Senior Software Engineer, Pilots
As a Senior Software Engineer on the Pilots team, you will deliver robust, thoroughly tested, and maintainable C++ code for edge and robotics platforms; design, implement, and own prototype perception systems that may transition into production-grade solutions; construct and refine real-time perception pipelines including detection, tracking, and sensor fusion; adapt and integrate ML and CV models for Hayden-specific applications; drive technical decision-making that balances prototyping speed with production readiness; collaborate with the Product team and cross-functional Engineering departments; and contribute to shared infrastructure, tooling, and architectural patterns as pilots mature into foundational products.