Data Quality Specialist
Generate and validate high-quality data annotations based on guidelines and continuous feedback for the development and evaluation of AI models. Collaborate with the technical team to review and audit annotations, clarify requirements, share insights, and improve annotation processes, tools, and guidelines.
Senior Software Engineering Director, Developer Experience
As the Senior Director of Engineering for Developer Experience at Crusoe, you will own and drive the strategy, execution, and culture of the team responsible for how Crusoe's engineers and non-engineers build, ship, and operate software. Responsibilities include defining and executing the long-term vision for Crusoe's internal developer platform, which encompasses shared services, internal APIs, repositories, and self-service infrastructure to enable engineering teams to move quickly and confidently. You will also rapidly develop and productionize AI-powered tools for the entire company, creating and evangelizing best practices for productionizing AI-developed tools and evaluating SaaS purchases. Additionally, you will oversee the design, reliability, and continuous improvement of CI/CD pipelines, build systems, and deployment infrastructure to ensure safe and rapid scaling of engineering teams' shipping processes. Your role will also involve defining and driving organization-wide engineering productivity initiatives by establishing metrics, identifying bottlenecks, and implementing tooling and process improvements that enhance developer experience across Crusoe. People leadership is a key responsibility, including managing and growing a team of engineers and fostering a high-performance culture based on accountability, innovation, and continuous learning. Furthermore, you will collaborate with senior leaders across Engineering, Infrastructure, Security, and Product to align Developer Experience investments with company-wide engineering goals and priorities.
Deployed Engineer (Toronto)
The Deployed Engineer will co-architect and co-build production AI agents with customer engineering teams. They will own the technical win in pre-sales by designing POCs, answering deep technical questions, and guiding evaluations; help customers deploy and operate agent-based applications, including conversational agents, research agents, and multi-step workflows; and advise customers post-sale on architecture, best practices, and roadmap-level decisions. They will also run technical demos, trainings, and workshops for developer audiences, surface field feedback, contribute reusable patterns, cookbooks, and example code that scale across customers, and occasionally contribute code upstream when it meaningfully improves customer outcomes.
Senior Product Manager – Data & Quality
Partner with frontier AI research labs to design datasets and environments that improve model performance. Lead technical conversations with customer researchers to understand model capabilities, failure modes, data requirements, and success criteria. Probe model behavior through systematic evaluation to uncover weaknesses and identify high-impact data interventions. Design evaluation frameworks, calibration processes, and quality rubrics that establish measurable project success metrics. Develop technical specifications for data projects that balance research rigor with operational feasibility. Serve as thought partner to customer research teams throughout the sales cycle, building trust and credibility. Stay current on frontier AI research, RL environment design, post-training techniques, and evaluation methodologies.
Software Development in Test Intern
Advance inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference. Implement and maintain changes in high-performance inference engines such as SGLang- or vLLM-style systems and Together's inference stack, including kernel backends, speculative decoding (e.g., ATLAS), and quantization. Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Design and operate reinforcement learning (RL) and post-training pipelines (including RLHF, RLAIF, GRPO, DPO-style methods, and reward modeling) where most of the cost is inference, jointly optimizing algorithms and systems. Make RL and post-training workloads more efficient with inference-aware training loops such as asynchronous RL rollouts and speculative decoding techniques. Use these pipelines to train, evaluate, and iterate on frontier models based on the inference stack. Co-design algorithms and infrastructure to tightly couple objectives, rollout collection, and evaluation with efficient inference, identifying bottlenecks in training engines, inference engines, data pipelines, and user-facing layers. Run ablations and scale-up experiments to study trade-offs between model quality, latency, throughput, and cost and integrate findings into model, RL, and system design. Profile, debug, and optimize inference and post-training services under production workloads. Drive roadmap items requiring real engine modifications, including changing kernels, memory layouts, scheduling logic, and APIs. Establish metrics, benchmarks, and experimentation frameworks to rigorously validate improvements. Provide technical leadership by setting technical direction for cross-team efforts at the intersection of inference, RL, and post-training. Mentor engineers and researchers on full-stack ML systems work and performance engineering.
Software Engineer, Internal Tools
Use proprietary software applications to provide input/labels on defined projects. Support and ensure the delivery of high-quality curated data. Play a pivotal role in supporting and contributing to the training of new tasks, working closely with the technical staff to ensure the successful development and implementation of cutting-edge initiatives and technologies. Interact with the technical staff to help improve the design of efficient annotation tools. Choose problems from corporate accounting fields that align with your expertise, providing rigorous, detailed solutions and evaluating model responses with well-reasoned critiques. Regularly interpret, analyze, and execute tasks based on given instructions.
Machine Learning Operations Engineer
Optimize orchestration processes to ensure efficient deployment and management of AI models. Implement cost-saving strategies to minimize infrastructure expenses while maximizing performance. Improve throughput to enhance the scalability and responsiveness of AI systems. Collaborate with cross-functional teams to identify bottlenecks and implement solutions that improve workflow efficiency. Ship new features and updates rapidly while maintaining high levels of quality and reliability. Deploy and monitor machine learning models produced by deep learning engineers. Design, deploy, and maintain performant and scalable processes for data acquisition and manipulation to enhance dataset accessibility. Participate actively in the team's software development process, including design reviews, code reviews, and brainstorming sessions. Maintain accurate and up-to-date software development documentation.
Software Engineering Manager, Autonomous
As the Engineering Manager on the Autonomous team, you will lead and scale a high-calibre team of engineers dedicated to defining the future of AI agent development and advancing AI and backend systems. You will oversee the technical roadmap for the Autonomous team, translating architectural complexity into clear product strategies. You will mentor a diverse group of engineers, supporting their professional growth. You will partner closely with Product and Design to ensure the agent-building tools remain intuitive while supporting technical capabilities. You will champion a 'show > tell' culture by ensuring rapid shipping with a high standard for technical stability and user experience. You will clear technical and operational roadblocks to ensure the team operates with high agency and clarity.
AI Tooling Frontend Engineer - Helix Team
Design and build intuitive web interfaces for robot data annotation, dataset visualization, and experiment tracking. Use data-driven techniques to optimize interfaces for efficiency and fast iteration cycles. Integrate AI models to automate manual tasks. Collaborate with AI researchers, robot operators, and annotators to support new user experiences.