Physics Researcher (Python) - Freelance AI Trainer
Contributors may design rigorous physics problems reflecting professional practice, evaluate AI solutions for correctness, assumptions, and constraints, validate calculations or simulations using Python (NumPy, Pandas, SciPy), improve AI reasoning to align with industry-standard logic, and apply structured scoring criteria to multi-step problems.
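Validating an AI-generated physics solution typically means re-deriving the answer numerically rather than trusting the symbolic steps. A minimal sketch of that workflow, using a hypothetical review task (the ODE, initial conditions, and claimed closed form below are illustrative, not from the listing):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical check: an AI solution claims x(t) = exp(-t/2) * cos(sqrt(3)*t/2)
# solves x'' + x' + x = 0 with x(0) = 1, x'(0) = -1/2.
# Verify by integrating the ODE numerically and comparing trajectories.
def rhs(t, y):
    x, v = y
    return [v, -v - x]  # x'' = -x' - x

t_eval = np.linspace(0.0, 10.0, 200)
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, -0.5],
                t_eval=t_eval, rtol=1e-9, atol=1e-12)

claimed = np.exp(-t_eval / 2) * np.cos(np.sqrt(3) * t_eval / 2)
max_err = np.max(np.abs(sol.y[0] - claimed))
print(f"max deviation from claimed solution: {max_err:.2e}")
```

A tight numerical tolerance (here rtol=1e-9) keeps integrator error well below the threshold at which the reviewer would flag the claimed solution as wrong.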
RTL Engineer, Automotive Robotics
As an Automotive and Robotics SoC Architect, you will define scalable, top-down system architectures that unify CPU and AI technologies for next-generation automotive applications. This role involves shaping the architectural direction of the automotive and robotics portfolio to ensure products meet the highest expectations for performance, safety, reliability, and security. It requires strong technical leadership, systems thinking, and cross-functional collaboration to deliver world-class automotive solutions. The position is remote and based out of North America.
Senior Performance Engineer - Pretraining
Engineer the systems required to train foundation models at scale, with the objective of maximizing hardware utilization and training throughput on large-scale GPU clusters. Profile training loops using tools such as PyTorch Profiler, Nsight Systems, and Nsight Compute to identify system- and kernel-level bottlenecks and maximize model throughput. Configure and tune composite parallelism strategies, optimizing load balance, minimizing critical-path bottlenecks, and managing communication-to-computation trade-offs for large-scale language model training. Partner with AI Researchers to define model architectures for hardware efficiency without compromising convergence.
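Managing the communication-to-computation trade-off usually starts with a back-of-the-envelope estimate before any profiling. A sketch of that estimate for data-parallel gradient all-reduce (every figure below — model size, bandwidth, batch size, sustained FLOP/s — is an assumed placeholder, not a measurement):

```python
# Can gradient all-reduce hide behind backward compute? Illustrative numbers only.
params = 7e9                 # model parameters (e.g. a 7B model)
bytes_per_grad = 2           # bf16 gradients
ranks = 64                   # data-parallel GPUs
bus_bw = 300e9               # effective all-reduce bandwidth per GPU, bytes/s
flops_per_token = 6 * params # rough fwd+bwd FLOPs per token, ~6N rule of thumb
tokens_per_step = 4e6        # global batch size in tokens
gpu_flops = 400e12           # sustained bf16 throughput per GPU, FLOP/s

# Ring all-reduce moves ~2 * (ranks - 1) / ranks * payload per GPU.
comm_s = 2 * (ranks - 1) / ranks * params * bytes_per_grad / bus_bw
compute_s = flops_per_token * tokens_per_step / (ranks * gpu_flops)

print(f"all-reduce: {comm_s:.3f}s  |  fwd+bwd compute: {compute_s:.2f}s")
print("overlap feasible" if comm_s < compute_s
      else "communication-bound: revisit bucketing or topology")
```

When the estimate says communication should hide under compute but the profiler shows exposed all-reduce time, the gap points at bucketing, stream scheduling, or topology issues rather than raw bandwidth.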
AI Deployment Engineer | Codex
Serve as the primary technical subject matter expert on OpenAI Codex for a portfolio of customers, embedding deeply with them to enable their engineering teams and build coding workflows. Partner directly with customers to design and implement AI-enhanced development workflows, from rapid prototyping through scalable production rollout. Build high-quality demos, reference implementations, and workflow automations, using Codex itself as part of the development process. Lead large-format workshops, technical deep dives, and hands-on enablement sessions that help engineering organizations adopt AI coding tools effectively and safely. Contribute technical content including examples, guides, patterns, and best practices to the OpenAI Cookbook to help the broader developer community accelerate their work with Codex. Gather high-fidelity product insights from real customer deployments and translate them into clear product proposals and model feedback for internal teams. Influence customer strategy and decision-making by framing how AI coding tools fit into their software development lifecycle, technical roadmap, and organizational workflows. Serve as a trusted advisor on solution architecture, operational readiness, model configuration, security considerations, and best-practice adoption.
Senior Python Engineer - AI Testing Project (Freelance, Mindrift)
Create functional black box tests for large codebases in various source languages. Create and manage Docker environments to ensure 100% reproducible builds and test execution across different platforms. Monitor code coverage and configure automated scoring criteria to meet industry benchmark-level standards. Leverage LLMs such as Roo Code and Claude to accelerate development cycles, automate repetitive tasks, and improve overall code quality.
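A black-box test in this setting treats the program under test as an opaque executable and asserts only on observable behavior, never on internals. The sketch below uses a stand-in Python one-liner as the system under test; in practice the target would be the artifact produced by the project's pinned Docker build (the one-liner and its summing behavior are hypothetical):

```python
import subprocess
import sys

# Stand-in system under test: an opaque executable reading stdin, writing stdout.
# In a real project this would be e.g. the binary built inside the Docker image.
def run_sut(stdin_text: str) -> str:
    result = subprocess.run(
        [sys.executable, "-c",
         "import sys; print(sum(int(x) for x in sys.stdin.read().split()))"],
        input=stdin_text, capture_output=True, text=True, timeout=10,
    )
    assert result.returncode == 0, result.stderr  # fail loudly on crashes
    return result.stdout.strip()

# Functional assertions on behavior only -- no knowledge of the implementation.
assert run_sut("1 2 3") == "6"
assert run_sut("10 -4") == "6"
print("black-box tests passed")
```

Running the same harness inside and outside the container is a quick check that the Docker environment really is reproducible: identical inputs must yield identical outputs on every platform.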
Senior ML Operations (MLOps) Engineer
As a Senior ML Operations Engineer at Eight Sleep, you will pioneer cutting-edge ML technologies and integrate them into products and processes for health monitoring. You will own the design and operation of robust ML infrastructure by building scalable data, model, and deployment pipelines to ensure reliable model delivery to production. Your role involves partnering cross-functionally with R&D, firmware, data, and backend teams to ensure ML inference operates reliably and scales across Pods globally. You will optimize ML systems for cost-effectiveness, scalability, and high performance by managing compute, storage, and deployment resources during training and inference. Additionally, you will develop tooling, microservices, and frameworks to streamline data processing, experimentation, and deployment, and maintain clear and direct communication within a remote work environment.
Partner AI Deployment Engineer
The Partner AI Deployment Engineer leads technical delivery with OpenAI partners across EMEA, supporting the design, deployment, and scaling of production-grade AI solutions across multiple industries and use cases. They act as the primary technical delivery partner for OpenAI partners, working with partner delivery teams and customer stakeholders to translate solution designs into deployable, production-ready architectures on the OpenAI platform. The role includes supporting customer time to value through hands-on prototyping, integration support, architectural guidance, and troubleshooting during critical phases of delivery. The engineer collaborates closely with Solutions Engineers, Forward Deployed Engineers, and other ADEs to engage the right technical expertise from design through production rollout. They help partners operationalize solutions by addressing scalability, reliability, security, and safety considerations for enterprise production environments and contribute to reusable deployment patterns, reference architectures, and delivery guidance for repeatable execution. The engineer acts as a technical quality and governance point during deployments to ensure solutions meet OpenAI’s standards before and after go-live and captures feedback from deployments to share insights for improving delivery playbooks and platform capabilities.
Freelance Automotive Engineering & Python Expert - AI Trainer
Design graduate- and industry-level automotive engineering problems grounded in real practice; evaluate AI-generated solutions for correctness, assumptions, and engineering logic; validate analytical or numerical results using Python (NumPy, SciPy, Pandas); improve AI reasoning to align with first principles and accepted engineering standards; apply structured scoring criteria to assess multi-step problem solving.
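Validating an AI answer against first principles is often a few lines of Python. A sketch using a hypothetical braking-distance review task (the speed, deceleration, and claimed figure are invented for illustration):

```python
import numpy as np

# Hypothetical review: an AI solution claims a vehicle braking from 100 km/h
# at a constant 0.8 g stops in about 49.2 m. Re-derive from v^2 = 2*a*d
# before scoring the multi-step answer.
v0 = 100 / 3.6       # initial speed, m/s
a = 0.8 * 9.81       # deceleration, m/s^2
d = v0**2 / (2 * a)  # stopping distance, m
t = v0 / a           # stopping time, s

print(f"stopping distance: {d:.1f} m, stopping time: {t:.2f} s")
assert np.isclose(d, 49.2, atol=0.1)  # the claimed value checks out
```

The scoring criteria would then separately assess the AI's stated assumptions (constant deceleration, no reaction time, level road) rather than just the final number.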
Tech Lead, Android Core Product - Dresden, Germany
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for diverse use cases. Deploy and operate the core ML inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture to improve performance, latency, throughput, and efficiency of deployed models. Build tools to identify bottlenecks and sources of instability and design and implement solutions to address the highest priority issues.
Tech Lead, Android Core Product - Leipzig, Germany
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for various use cases. Deploy and operate the core machine learning inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture to improve the performance, latency, throughput, and efficiency of deployed models. Build tools to monitor bottlenecks and instability sources, then design and implement solutions to address the highest priority issues.