NPI Engineer
Design, deploy, and maintain Figure's training clusters. Architect and maintain scalable deep learning frameworks for training on massive robot datasets. Work together with AI researchers to implement training of new model architectures at a large scale. Implement distributed training and parallelization strategies to reduce model development cycles. Implement tooling for data processing, model experimentation, and continuous integration.
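For illustration only (not part of the listing), the core idea behind the data-parallel training strategies mentioned above is averaging gradients across workers before each optimizer step. A toy, framework-free sketch of that all-reduce-then-update cycle, assuming plain lists of per-parameter gradients:

```python
# Toy sketch of data-parallel gradient averaging (the all-reduce step
# behind frameworks like PyTorch DDP). Pure Python, no GPUs involved.

def allreduce_mean(worker_grads):
    """Average per-parameter gradients across workers (simulated all-reduce)."""
    n_workers = len(worker_grads)
    n_params = len(worker_grads[0])
    return [
        sum(g[i] for g in worker_grads) / n_workers
        for i in range(n_params)
    ]

def sgd_step(params, grads, lr=0.1):
    """One SGD update using the averaged gradients."""
    return [p - lr * g for p, g in zip(params, grads)]

# Each worker computed gradients on its own shard of the batch.
worker_grads = [[1.0, 2.0], [3.0, 4.0]]
avg = allreduce_mean(worker_grads)   # [2.0, 3.0]
params = sgd_step([0.5, 0.5], avg)   # ≈ [0.3, 0.2]
```

In a real cluster the averaging happens over the network (e.g. NCCL ring all-reduce) while each worker keeps a full model replica; this sketch only shows the arithmetic.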
Helix Data Creator
Systems Integration Engineer - Actuation Systems
Tech Lead, LLM & Generative AI (Full Remote - Andorra)
Lead the LLM team of 3 engineers, owning the architecture, training, and deployment of the models powering the core product. Architect the system and mentor the team while spending significant time hands-on writing production code in Python/PyTorch. Own the core chat loop to optimize context windows, memory/RAG retrieval, and inference latency for a real-time experience. Drive the strategy for Supervised Fine-Tuning (SFT) and RLHF/DPO (Preference Optimization), deciding when to prompt, fine-tune, or build new RAG pipelines. Manage the sourcing, labeling, and cleaning of diverse datasets to improve model steerability and multicultural performance. Design and train custom classifiers to detect and filter non-consensual or illegal content within an explicit environment, creating nuanced, context-aware moderation systems that go beyond simple safe/unsafe classifications.
Tech Lead, LLM & Generative AI (Full Remote - Serbia)
As a Tech Lead for the LLM team, you will architect the system and mentor the team while spending significant time hands-on in the codebase using Python and PyTorch. You will own the core chat loop by optimizing context windows, memory/RAG retrieval, and inference latency to ensure a seamless, real-time experience. You will drive the strategy for Supervised Fine-Tuning (SFT) and Reinforcement Learning with Human Feedback / Direct Preference Optimization (RLHF/DPO), deciding when to prompt, fine-tune, or build new RAG pipelines. Additionally, you will manage the data engine by overseeing the sourcing, labeling, and cleaning of diverse datasets to improve model steerability and multicultural performance. You will design and train custom classifiers to detect and filter non-consensual or illegal content within an explicit environment, moving beyond simple binary safe/unsafe flags to create nuanced, context-aware moderation systems.
Tech Lead, LLM & Generative AI (Full Remote - Croatia)
The Tech Lead will act as a player/coach by architecting the system and mentoring the team while spending significant time hands-on in the codebase using Python and PyTorch. They will own the core chat loop, optimizing context windows, memory/RAG retrieval, and inference latency to ensure a seamless, real-time experience. The Tech Lead will drive the strategy for supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF/DPO), deciding when to prompt, fine-tune, or architect a new RAG pipeline. They will manage the sourcing, labeling, and cleaning of diverse datasets to improve model steerability and multicultural performance. Additionally, they will design and train custom classifiers to detect and filter non-consensual or illegal content within an explicit environment and create nuanced, context-aware moderation systems that go beyond binary safe/unsafe flags.
Tech Lead, LLM & Generative AI (Full Remote - Gibraltar)
The Tech Lead will lead and code within the LLM team, focusing on the architecture, training, and deployment of the models behind the core product. Responsibilities include acting as a player/coach by mentoring the team and writing production code (Python/PyTorch); owning the core chat loop to optimize context windows, memory/RAG retrieval, and inference latency; driving strategies for Supervised Fine-Tuning (SFT) and RLHF/DPO; and managing data sourcing, labeling, and cleaning to improve model steerability and multicultural performance. The role also involves designing and training custom classifiers for nuanced moderation that detect and filter non-consensual or illegal content in an explicit environment, going beyond simple safe/unsafe classifications to build context-aware moderation systems.
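To make the "optimize context windows" responsibility from these Tech Lead roles concrete, here is a minimal, purely illustrative sketch (the function names are hypothetical and token counting is a naive whitespace split, not a real tokenizer) of keeping a chat history within a fixed token budget:

```python
# Toy context-window manager: keep the system prompt plus as many of the
# most recent messages as fit in the token budget. A production chat loop
# would use the model's tokenizer and smarter strategies (summarization,
# RAG retrieval of older turns) instead of plain truncation.

def count_tokens(text):
    return len(text.split())

def fit_context(system_prompt, messages, budget):
    """Return the system prompt plus the newest messages fitting in `budget` tokens."""
    used = count_tokens(system_prompt)
    kept = []
    for msg in reversed(messages):      # walk newest-first
        cost = count_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return [system_prompt] + list(reversed(kept))

history = ["hello there", "how are you today", "tell me a story please"]
ctx = fit_context("you are helpful", history, budget=10)
# keeps the system prompt and only the newest message that fits
```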
Member of Technical Staff, GPU Optimization
Optimize model training and inference pipelines, including data loading, preprocessing, checkpointing, and deployment, to improve throughput, latency, and memory efficiency on NVIDIA GPUs. Design, implement, and benchmark custom CUDA and Triton kernels for performance-critical operations. Integrate low-level optimizations into PyTorch-based codebases, including custom operators, low-precision formats, and TorchInductor passes. Profile and debug the entire stack, from kernel launches to multi-GPU I/O paths, using tools such as Nsight, nvprof, the PyTorch Profiler, and custom tooling. Collaborate with colleagues to co-design model architectures and data pipelines that are hardware-friendly while maintaining state-of-the-art quality. Stay current on the latest GPU and compiler technologies and assess their impact. Work closely with infrastructure and backend teams to improve cluster orchestration, scaling strategies, and observability for large experiments. Provide clear, data-driven insights on performance, quality, and cost trade-offs. Contribute to a culture of fast iteration, thoughtful profiling, and performance-centric design.
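The kernel-benchmarking discipline this role describes can be illustrated with a minimal, framework-free timing harness (warmup runs before timing, best-of-N measurement). This is a generic sketch, not Nsight or the PyTorch Profiler; note the caveat in the docstring about asynchronous GPU launches:

```python
import time

def benchmark(fn, *args, warmup=3, iters=10):
    """Return the best wall-clock time over `iters` runs after `warmup` runs.

    GPU kernels launch asynchronously, so a real harness would call e.g.
    torch.cuda.synchronize() before each perf_counter() read; here we only
    time synchronous CPU work.
    """
    for _ in range(warmup):              # warm caches / JIT before timing
        fn(*args)
    best = float("inf")
    for _ in range(iters):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

elapsed = benchmark(sum, range(100_000))
```

Taking the minimum rather than the mean is a common choice for microbenchmarks, since noise from the OS scheduler only ever adds time.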
Python / PyTorch Developer — Frontend Inference Compiler – Dubai
You will develop and maintain the frontend compiler infrastructure that ingests PyTorch models and produces intermediate representations to optimize performance on Cerebras' AI hardware platforms. This includes collaborating with ML and compiler teams, extending PyTorch-based tooling, and working with the latest open and closed generative AI models for optimal inference.
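For a flavor of what "ingesting models and producing intermediate representations" means in general, here is an illustrative toy (emphatically not Cerebras' actual compiler stack): a minimal expression IR with a constant-folding pass, one of the simplest optimizations a frontend compiler performs before handing the IR to a backend:

```python
# Toy IR: nested tuples ("add" | "mul", lhs, rhs) with int constants and
# string-named inputs (e.g. "x") at the leaves. The folding pass evaluates
# any subtree whose operands are both constants.

def fold(node):
    if not isinstance(node, tuple):
        return node                       # constant or named input, e.g. "x"
    op, lhs, rhs = node
    lhs, rhs = fold(lhs), fold(rhs)
    if isinstance(lhs, int) and isinstance(rhs, int):
        return lhs + rhs if op == "add" else lhs * rhs
    return (op, lhs, rhs)

# (x + (2 * 3)) folds to (x + 6)
ir = ("add", "x", ("mul", 2, 3))
assert fold(ir) == ("add", "x", 6)
```

Real ML compiler frontends work the same way at a larger scale: trace or export the PyTorch graph, lower it to an IR, then run passes like folding, fusion, and layout selection before code generation for the target hardware.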
Backend ML Engineer at Robyn AI
The Backend ML Engineer at Robyn AI builds the backend infrastructure that powers the application: conversations, memory, real-time personalization, voice and chat interfaces, scalable infrastructure for emotional intelligence, secure and fast APIs for the iOS app, and a robust machine learning inference and fine-tuning pipeline. The role involves working on and extending the C#/.NET/ASP.NET backend API layer while progressively adding Python microservices with an AI-native architecture. The engineer will own the full backend surface area, including authentication, APIs, infrastructure, and orchestration, designing features for scale and velocity. Responsibilities include building and maintaining REST and GraphQL APIs; architecting a microservice-style ML model serving backend deployed via Docker containers or AWS Lambda with async eventing and pub/sub; and managing CI/CD, rollback strategies, logging, and error handling. The engineer will also integrate AI and ML systems: managing vector databases for retrieval-augmented generation and personalization, building custom memory pipelines, integrating and scaling inference with various models, and maintaining API orchestration with third-party model providers. Infrastructure duties include managing AWS and related technologies, implementing search databases and infrastructure-as-code with Terraform, ensuring observability with metrics and logging tools, optimizing latency and caching, setting up secure infrastructure for SOC 2 readiness, and designing and shipping emotion-aware backend systems that update in real time. The role works closely with product and AI teams to tune the system's behavior based on user feedback, emotion logs, and interaction history, and owns all personalization logic.
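The vector-database retrieval mentioned in this listing boils down to nearest-neighbor search over embeddings. A dependency-free sketch of the core scoring step (illustrative only; real RAG systems embed text with a model and use an approximate-nearest-neighbor index rather than brute-force cosine over every document):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, docs, k=2):
    """Return the ids of the k documents whose embeddings best match the query."""
    scored = sorted(docs.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

docs = {
    "doc_a": [1.0, 0.0],
    "doc_b": [0.9, 0.1],
    "doc_c": [0.0, 1.0],
}
assert top_k([1.0, 0.0], docs, k=2) == ["doc_a", "doc_b"]
```

The retrieved documents are then stuffed into the model's prompt, which is the "retrieval-augmented" part of retrieval-augmented generation.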