Staff Strategic Sourcing Manager (Hardware)
Advance inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference. Implement and maintain changes in high-performance inference engines including kernel backends, speculative decoding, and quantization. Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Design and operate reinforcement learning (RL) and post-training pipelines to jointly optimize algorithms and systems where most of the cost is inference. Make RL and post-training workloads more efficient with inference-aware training loops such as asynchronous RL rollouts and speculative decoding. Use these pipelines to train, evaluate, and iterate on frontier models on top of the inference stack. Co-design algorithms and infrastructure to tightly couple objectives, rollout collection, and evaluation with efficient inference, identifying bottlenecks across the training engine, inference engine, data pipeline, and user-facing layers. Run ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, and feed insights back into model, RL, and system design. Own critical systems at production scale by profiling, debugging, and optimizing inference and post-training services under real production workloads. Drive roadmap items requiring engine modification including changing kernels, memory layouts, scheduling logic, and APIs. Establish metrics, benchmarks, and experimentation frameworks for rigorous validation of improvements. Provide technical leadership by setting technical direction for cross-team efforts, and mentor engineers and researchers on full-stack ML systems work and performance engineering.
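Speculative decoding, named among the engine responsibilities above, can be sketched in miniature: a cheap "draft" model proposes several tokens, and the expensive "target" model verifies them, keeping the longest agreeing prefix. Both models below are toy stand-ins; real engines accept or reject by comparing probability distributions across a single batched verification pass, not greedy picks.

```python
def target_model(prefix):
    # Hypothetical expensive model: deterministic next token for the demo.
    return (prefix + 1) % 50

def draft_model(prefix, k):
    # Hypothetical cheap proposer: happens to agree with the target here.
    return [(prefix + i) % 50 for i in range(1, k + 1)]

def speculative_step(prefix, draft, k=4):
    """Accept draft tokens while they match the target model; on the first
    mismatch, substitute the target's token and stop. Returns the tokens
    produced by this step (between 1 and k of them)."""
    accepted, cur = [], prefix
    for tok in draft(cur, k)[:k]:
        expected = target_model(cur)
        if tok != expected:
            accepted.append(expected)  # target's correction ends the step
            break
        accepted.append(tok)
        cur = tok
    return accepted
```

When the draft agrees, one verification pass yields several tokens; when it diverges, the step still emits exactly one correct token, so quality is unchanged and only latency improves.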
Machine Learning Engineer, TTS Systems
As an ML Engineer focused on Text To Speech (TTS), you will own the deployment, optimization, and maintenance of production TTS systems. Responsibilities include deploying and optimizing large-scale TTS models into production environments for reliable, low-latency inference; implementing and refining post-training and modern inference techniques to maximize throughput and audio quality; collaborating with cross-functional teams to ensure seamless rollout, A/B testing, and iterative improvement of production models; maintaining high availability and scalable infrastructure for multi-speaker, expressive, and controllable TTS use cases; and designing and documenting best practices for efficient TTS inference and system reliability.
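One common pattern behind the low-latency requirement above is chunked streaming: rather than synthesizing a whole utterance before responding, the text is split at sentence boundaries and audio is emitted as each segment finishes. This is a hedged sketch; `synthesize_segment` is a hypothetical stand-in for a real TTS model call.

```python
import re

def split_text(text, max_chars=40):
    """Split on sentence boundaries, then pack sentences into segments of
    at most max_chars so the first audio chunk is produced quickly."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    segments, cur = [], ""
    for s in sentences:
        if cur and len(cur) + len(s) + 1 > max_chars:
            segments.append(cur)
            cur = s
        else:
            cur = f"{cur} {s}".strip()
    if cur:
        segments.append(cur)
    return segments

def synthesize_segment(segment):
    # Hypothetical model call: returns fake PCM samples, one per character.
    return [0.0] * len(segment)

def stream_tts(text):
    """Yield audio chunks as soon as each text segment is synthesized."""
    for segment in split_text(text):
        yield synthesize_segment(segment)
```

The trade-off is segment size: shorter segments cut time-to-first-audio but give the model less context, which can hurt prosody at segment boundaries.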
Research Engineer, Core ML
Advance inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference. Implement and maintain changes in high-performance inference engines, including kernel backends, speculative decoding, and quantization, and profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Unify inference with RL and post-training by designing and operating RL and post-training pipelines, jointly optimizing algorithms and systems for inference-heavy workloads, and making RL workloads more efficient with inference-aware training loops. Use RL pipelines to train, evaluate, and iterate on models; co-design algorithms and infrastructure to tightly couple objectives, rollout collection, and evaluation with efficient inference; and quickly identify bottlenecks across all layers of the stack. Run experiments to understand trade-offs between model quality, latency, throughput, and cost, and feed insights back into design. Own critical systems at production scale by profiling, debugging, and optimizing inference and post-training services under real production workloads, driving roadmap items that involve engine modifications, and establishing metrics and experimentation frameworks to validate improvements. Provide technical leadership by setting technical direction for cross-team efforts intersecting inference, RL, and post-training, and by mentoring engineers and researchers on full-stack ML systems and performance engineering.
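The inference-aware training loops mentioned in this role and the one above can be illustrated with asynchronous RL rollouts: actor workers generate trajectories against a (possibly slightly stale) policy while the learner consumes them from a queue, so inference and training overlap instead of alternating. Everything below is a toy stand-in for the real environment, policy, and update step.

```python
import queue

def rollout(policy_version, steps=4):
    # Hypothetical environment interaction: tag the data with the policy
    # version that produced it, as off-policy corrections need this.
    return {"policy_version": policy_version, "rewards": [1.0] * steps}

def actor(policy_version_fn, out_q, n_rollouts):
    """Produce rollouts without waiting for the learner; in a real system
    this runs in its own thread or process against an inference engine."""
    for _ in range(n_rollouts):
        out_q.put(rollout(policy_version_fn()))

def learner(in_q, n_updates):
    """Consume rollouts (possibly from older policy versions) and bump the
    policy version once per simulated update."""
    version = 0
    for _ in range(n_updates):
        _batch = in_q.get()  # may lag the current policy by a few versions
        version += 1         # "train" on the batch
    return version
```

The efficiency win is that expensive rollout inference never idles waiting on gradient steps; the cost is a staleness gap between the acting and learning policies that the algorithm must tolerate.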
Research Engineer / Machine Learning Engineer - B2B Applications
As a Research Engineer on OpenAI's Applied Voice Team, you will design and build advanced machine learning models, including state-of-the-art speech models such as speech-to-speech, transcription, and text-to-speech, transforming research breakthroughs into tangible B2B applications like the API and ChatGPT AVM. You will collaborate closely with software engineers, product managers, and forward-deployed engineers to understand business challenges, address customer concerns, and deliver AI-powered solutions. You will implement scalable data pipelines, optimize models for performance and accuracy, ensure production readiness, and contribute to projects requiring cutting-edge technology and innovative approaches. Additionally, you will engage with the latest developments in machine learning and AI, participate in code reviews, share knowledge, and lead by example to maintain high-quality engineering practices. You will also monitor and maintain deployed models to ensure they continue delivering value, thereby influencing how AI benefits individuals, businesses, and society.
AI/ML Engineer
Develop, train, and optimize machine learning models for various mobile app features. Research and implement state-of-the-art AI techniques to improve user engagement and app performance. Collaborate with cross-functional teams to integrate AI-driven solutions into applications. Design and maintain scalable ML pipelines, ensuring efficient model deployment and monitoring. Analyze large datasets to derive insights and drive data-driven decision-making. Stay updated with the latest AI trends and best practices, incorporating them into development processes. Optimize AI models for mobile environments to ensure high performance and low latency.
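A standard lever for the on-device optimization goal above is post-training quantization. The sketch below shows symmetric int8 quantization in pure Python for illustration only; actual mobile deployments would use framework converter tooling rather than hand-rolled code like this.

```python
def quantize_int8(weights):
    """Map float weights into [-127, 127] with one symmetric scale factor,
    shrinking storage 4x versus float32 at a small accuracy cost."""
    scale = max(abs(w) for w in weights) / 127.0
    if scale == 0:
        return [0] * len(weights), 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for accuracy checks."""
    return [x * scale for x in q]
```

Per-tensor symmetric scaling, as here, is the simplest scheme; per-channel scales usually recover more accuracy when weight ranges vary across channels.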
Tech Lead, Android Core Product - Casablanca, Morocco
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for diverse use cases. Deploy and operate the core ML inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture to improve the performance, latency, throughput, and efficiency of deployed models. Build tools to provide visibility into bottlenecks and sources of instability and design and implement solutions to address the highest priority issues.
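The bottleneck-visibility tooling described above often starts as simple per-stage latency tracking with percentile reporting. This is a minimal sketch; the stage names are illustrative, and a production version would use histograms with bounded memory rather than raw sample lists.

```python
from collections import defaultdict

class LatencyTracker:
    """Record per-request latencies by pipeline stage and report
    percentiles to surface the slowest stage."""

    def __init__(self):
        self.samples = defaultdict(list)

    def record(self, stage, millis):
        self.samples[stage].append(millis)

    def percentile(self, stage, p):
        xs = sorted(self.samples[stage])
        if not xs:
            return None
        # Nearest-rank index; fine for a sketch, coarse for small samples.
        idx = min(len(xs) - 1, int(round(p / 100 * (len(xs) - 1))))
        return xs[idx]

    def slowest_stage(self, p=99):
        return max(self.samples, key=lambda s: self.percentile(s, p))
```

Tracking tail percentiles (p95/p99) rather than means matters here because serving-pipeline instability usually shows up in the tail long before it moves the average.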
Tech Lead, Android Core Product - Guadalajara, Mexico
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for a diverse range of use cases; deploy and operate the core ML inference workloads for the AI Voices serving pipeline; introduce new techniques, tools, and architecture to improve the performance, latency, throughput, and efficiency of deployed models; build tools to gain visibility into bottlenecks and sources of instability, and design and implement solutions to address the highest priority issues.
Tech Lead, Android Core Product - Cebu, Philippines
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for a diverse range of use cases. Deploy and operate the core machine learning inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture to improve performance, latency, throughput, and efficiency of deployed models. Build tools to identify bottlenecks and sources of instability and design and implement solutions to address the highest priority issues.
Tech Lead, Android Core Product - Alexandria, Egypt
Work alongside machine learning researchers, engineers, and product managers to bring our AI Voices to their customers for a diverse range of use cases. Deploy and operate the core ML inference workloads for our AI Voices serving pipeline. Introduce new techniques, tools, and architecture that improve the performance, latency, throughput, and efficiency of our deployed models. Build tools to give us visibility into our bottlenecks and sources of instability and then design and implement solutions to address the highest priority issues.
Tech Lead, Android Core Product - Nairobi, Kenya
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for various use cases; deploy and operate core ML inference workloads for the AI Voices serving pipeline; introduce new techniques, tools, and architecture to improve performance, latency, throughput, and efficiency of deployed models; build tools to identify bottlenecks and sources of instability and design and implement solutions to address the highest priority issues.