Senior ML Operations (MLOps) Engineer
As a Senior ML Operations Engineer at Eight Sleep, you will pioneer cutting-edge ML technologies and integrate them into products and processes for health monitoring. You will own the design and operation of robust ML infrastructure by building scalable data, model, and deployment pipelines to ensure reliable model delivery to production. Your role involves partnering cross-functionally with R&D, firmware, data, and backend teams to ensure ML inference operates reliably and scales across Pods globally. You will optimize ML systems for cost-effectiveness, scalability, and high performance by managing compute, storage, and deployment resources during training and inference. Additionally, you will develop tooling, microservices, and frameworks to streamline data processing, experimentation, and deployment, and maintain clear and direct communication within a remote work environment.
Manual Quality Assurance Engineer, Web Core Product
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for diverse use cases. Deploy and operate the core ML inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture that improve performance, latency, throughput, and efficiency of deployed models. Build tools to identify bottlenecks and sources of instability and design and implement solutions to address the highest priority issues.
Freelance Electrical Engineering & Python Expert - AI Trainer
Contributors may design rigorous electrical engineering problems reflecting professional practice, evaluate AI solutions for correctness, assumptions, and constraints, validate calculations or simulations using Python (NumPy, Pandas, SciPy), improve AI reasoning to align with industry-standard logic, and apply structured scoring criteria to multi-step problems.
Safety Engineer
The AI Safety Engineer is responsible for designing and building scalable backend infrastructure for content moderation, abuse detection, and agent guardrails by deploying AI/ML models into production systems. They will architect robust APIs, data pipelines, and service architectures to support real-time and batch moderation workflows. The role includes implementing comprehensive monitoring, alerting, and observability systems, and establishing SLIs, SLOs, and performance benchmarks. The engineer will collaborate with ML engineers to translate research models into production-ready systems and integrate them across the product suite. Additionally, they will drive technical decisions and contribute to the vision for the safety roadmap to build next-generation platform guardrails for scale and precision.
Applied AI Engineer – Agentic Workflows (Korea)
Work closely with enterprise customers to translate high-value, ambiguous business problems into well-framed agentic problems with clear success criteria and evaluation methodologies. Provide technical leadership across the full development and evaluation lifecycle, including post-deployment iteration, for agentic workflows. Lead the design, build, and delivery of LLM-powered agents that reason, plan, and act across tools and data sources with enterprise-grade reliability and performance. Balance rapid iteration with enterprise requirements, evolving prototypes into stable, reusable solutions. Define and apply evaluation and quality standards to measure success, failures, and regressions. Debug real-world agent behavior and systematically improve prompts, workflows, tools, and guardrails. Mentor engineers across distributed teams. Drive clarity in ambiguous situations, build alignment, and raise engineering quality across the organization. Contribute to shared frameworks and patterns that enable consistent delivery across customers.
Marketing Intern - Seoul
Help users discover and master the Dataiku platform through user training, office hours, demos, and ongoing consultative support. Analyse and investigate various kinds of data and machine learning applications across industries and use cases. Provide strategic input to the customer and account teams to help our customers achieve success. Scope and co-develop production-level data science projects with our customers. Mentor and help educate data scientists and other customer team members to aid in career development and growth.
AI / ML Solutions Engineer
The AI / ML Solutions Engineer at Anyscale is responsible for designing, implementing, and scaling machine learning and AI workloads using Ray and Anyscale directly with customers. This includes implementing production AI / ML workloads such as distributed model training, scalable inference and serving, and data preprocessing and feature pipelines. The role involves working hands-on with customer codebases to refactor or adapt existing workloads to Ray. The engineer advises customers on ML system architecture including application design for distributed execution, resource management and scaling strategies, and reliability, fault tolerance, and performance tuning. They guide customers through architectural and operational changes needed to adopt Ray and Anyscale effectively. Additionally, the engineer partners with customer MLE and MLOps teams to integrate Ray into existing platforms and workflows, supports CI/CD, monitoring, retraining, and operational best practices, and helps customers transition from experimentation to production-grade ML systems. They also enable customer teams through working sessions, design reviews, training delivery, and hands-on guidance, contribute feedback to product, engineering, and education teams, and help develop reference architectures, examples, and best practices based on real customer use cases.
Software Engineer, macOS Core Product - Virginia Beach, USA
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for diverse use cases. Deploy and operate the core machine learning inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture to improve performance, latency, throughput, and efficiency of deployed models. Build tools to identify bottlenecks and sources of instability, then design and implement solutions addressing the highest priority issues.
Software Engineer, macOS Core Product - Rialto, USA
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for diverse use cases. Deploy and operate the core ML inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture to improve the performance, latency, throughput, and efficiency of deployed models. Build tools to gain visibility into bottlenecks and sources of instability, and design and implement solutions to address the highest priority issues.
Software Engineer, macOS Core Product - Waco, USA
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for a diverse range of use cases. Deploy and operate the core ML inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture to improve the performance, latency, throughput, and efficiency of deployed models. Build tools to provide visibility into bottlenecks and sources of instability, and design and implement solutions to address the highest priority issues.