Software Engineer, macOS Core Product - Stamford, USA
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for diverse use cases. Deploy and operate the core ML inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture to improve the performance, latency, throughput, and efficiency of deployed models. Build tools to identify bottlenecks and sources of instability, then design and implement solutions to address the highest-priority issues.
Software Engineer, macOS Core Product - Glendale, USA
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for a diverse range of use cases. Deploy and operate the core ML inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture that improve the performance, latency, throughput, and efficiency of deployed models. Build tools to provide visibility into bottlenecks and sources of instability, then design and implement solutions to address the highest priority issues.
Software Engineer, macOS Core Product - Jackson, USA
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for diverse use cases. Deploy and operate the core ML inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture to improve the performance, latency, throughput, and efficiency of deployed models. Build tools to identify bottlenecks and sources of instability, then design and implement solutions to address the highest-priority issues.
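To illustrate the bottleneck-identification work these roles describe, here is a minimal, hypothetical sketch of latency profiling for a single-request inference callable. The profile_latency helper, the fake sleep-based model, and the percentile choices are assumptions for the example, not details from the listings.

```python
import time
import statistics


def profile_latency(infer_fn, requests, warmup=5):
    """Time a single-request inference callable and summarize latency."""
    # Warm-up calls so lazy initialization does not skew the numbers.
    for req in requests[:warmup]:
        infer_fn(req)

    latencies_ms = []
    for req in requests:
        start = time.perf_counter()
        infer_fn(req)
        latencies_ms.append((time.perf_counter() - start) * 1000.0)

    latencies_ms.sort()
    return {
        "p50_ms": statistics.median(latencies_ms),
        "p95_ms": latencies_ms[int(0.95 * (len(latencies_ms) - 1))],
        "max_ms": latencies_ms[-1],
        "throughput_rps": len(latencies_ms) / (sum(latencies_ms) / 1000.0),
    }


if __name__ == "__main__":
    # Stand-in "model": sleeps ~20 ms per request.
    fake_model = lambda req: time.sleep(0.02)
    print(profile_latency(fake_model, requests=list(range(100))))
```

Tail percentiles (p95, max) rather than averages are usually what expose the instability such tooling is meant to surface.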
Machine Learning Engineer: ML Infra and Model Optimization
Develop and deploy LLM agent systems within the AI-powered avatar framework. Design and implement scalable and efficient backend systems to support AI applications. Collaborate with AI and NLP experts to integrate LLMs and LLM-based systems and algorithms into the avatar ecosystem. Work with Docker, Kubernetes, and AWS for AI model deployment and scalability. Contribute to code reviews, debugging, and testing to ensure high-quality deliverables. Document work for future reference and improvement.
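For the deployment side of a role like this, a model is typically wrapped in a small HTTP service that can be built into a Docker image and scaled on Kubernetes. The sketch below is a hypothetical minimal FastAPI service; the /generate and /healthz routes, the request schema, and the echo stub are illustrative assumptions, not the actual avatar system.

```python
# Minimal FastAPI inference service of the kind that might sit behind a
# Docker/Kubernetes deployment. The /generate route and the echo "model"
# are placeholders for a real LLM backend.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 128


class GenerateResponse(BaseModel):
    text: str


@app.post("/generate", response_model=GenerateResponse)
def generate(req: GenerateRequest) -> GenerateResponse:
    # Replace this echo stub with a real model call in practice.
    return GenerateResponse(text=req.prompt[: req.max_tokens])


@app.get("/healthz")
def healthz() -> dict:
    # Liveness/readiness probe endpoint for Kubernetes.
    return {"status": "ok"}
```

Run locally with `uvicorn app:app`; the same container image can then be deployed behind a Kubernetes Service with the /healthz route wired to its probes.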
Machine Learning Engineer (AI detection, Toronto)
Design, train, and fine-tune state-of-the-art language models; develop AI agents combined with retrieval-augmented language models; build efficient and scalable machine learning training and inference systems; stay up-to-date with the latest literature and emerging technologies to solve novel problems; work closely with product and design teams to develop intuitive applications that create societal impact.
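A minimal sketch of the fine-tuning portion of such a role, assuming the Hugging Face transformers library, a tiny public checkpoint (sshleifer/tiny-gpt2), and a toy in-memory dataset; none of these specifics come from the listing.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sshleifer/tiny-gpt2"  # tiny public checkpoint so the sketch runs quickly
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

texts = ["hello world", "fine-tuning is just more training"] * 8
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(3):  # a few optimizer steps, not a real training run
    outputs = model(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],
        labels=batch["input_ids"],  # causal LM: next-token prediction on the inputs
    )
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss={outputs.loss.item():.4f}")
```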
NPI Engineer
Design, deploy, and maintain Figure's training clusters. Architect and maintain scalable deep learning frameworks for training on massive robot datasets. Work together with AI researchers to implement training of new model architectures at a large scale. Implement distributed training and parallelization strategies to reduce model development cycles. Implement tooling for data processing, model experimentation, and continuous integration.
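For the distributed-training and parallelization responsibilities above, a minimal PyTorch DistributedDataParallel sketch might look like the following; the two-process gloo setup and toy linear model are illustrative assumptions so the example runs on a CPU-only machine, whereas real jobs would use NCCL on GPUs and a launcher such as torchrun.

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP


def worker(rank: int, world_size: int):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = DDP(torch.nn.Linear(10, 1))  # toy model stands in for a real network
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for step in range(3):
        x = torch.randn(32, 10)          # each rank draws its own shard of data
        loss = model(x).pow(2).mean()
        loss.backward()                  # gradients are all-reduced across ranks here
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 2
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```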
Helix Data Creator
Systems Integration Engineer - Actuation Systems
Tech Lead, LLM & Generative AI (Full Remote - Andorra)
Lead the LLM team of three engineers, owning the architecture, training, and deployment of the models powering the core product. Architect the system and mentor the team while spending significant time hands-on writing production code in Python/PyTorch. Own the core chat loop to optimize context windows, memory/RAG retrieval, and inference latency for a real-time experience. Drive the strategy for Supervised Fine-Tuning (SFT) and RLHF/DPO (Preference Optimization), deciding when to prompt, fine-tune, or build new RAG pipelines. Manage the sourcing, labeling, and cleaning of diverse datasets to improve model steerability and multicultural performance. Design and train custom classifiers to detect and filter non-consensual or illegal content within an explicit environment, creating nuanced, context-aware moderation systems beyond simple safe/unsafe classifications.
Tech Lead, LLM & Generative AI (Full Remote - Serbia)
As a Tech Lead for the LLM team, you will architect the system and mentor the team while spending significant time hands-on in the codebase using Python and PyTorch. You will own the core chat loop by optimizing context windows, memory/RAG retrieval, and inference latency to ensure a seamless, real-time experience. You will drive the strategy for Supervised Fine-Tuning (SFT) and Reinforcement Learning with Human Feedback / Direct Preference Optimization (RLHF/DPO), deciding when to prompt, fine-tune, or build new RAG pipelines. Additionally, you will manage the data engine by overseeing the sourcing, labeling, and cleaning of diverse datasets to improve model steerability and multicultural performance. You will design and train custom classifiers to detect and filter non-consensual or illegal content within an explicit environment, moving beyond simple binary safe/unsafe flags to create nuanced, context-aware moderation systems.
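As an illustration of the context-window and memory/RAG-retrieval work both Tech Lead listings describe, here is a minimal sketch that ranks memory chunks against a query and packs as many as fit a token budget. The hash-based embedding and whitespace token count are self-contained stand-ins, not the product's actual retrieval stack; a real system would use a learned embedding model and the LLM's tokenizer.

```python
import hashlib
import math


def embed(text: str, dim: int = 64) -> list[float]:
    # Deterministic toy embedding: hash character trigrams into a unit vector.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))


def build_context(query: str, chunks: list[str], token_budget: int) -> list[str]:
    # Rank chunks by similarity to the query, then greedily pack the budget.
    ranked = sorted(chunks, key=lambda c: cosine(embed(query), embed(c)), reverse=True)
    picked, used = [], 0
    for chunk in ranked:
        cost = len(chunk.split())  # crude stand-in for a real token count
        if used + cost <= token_budget:
            picked.append(chunk)
            used += cost
    return picked


if __name__ == "__main__":
    memories = ["user prefers short answers",
                "previous chat was about travel to Japan",
                "user dislikes spoilers"]
    print(build_context("plan a trip to Japan", memories, token_budget=12))
```

The latency-sensitive part of the chat loop is usually exactly this step: deciding which memories and retrieved passages earn a place in a fixed context window before the model is called.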