Engineering Manager - Engine and Platform
The Engineering Manager for Engine and Platform leads the team responsible for building, maintaining, and deploying the runtime that customers use to run, manage, secure, and understand AI tools, enabling advanced agentic use cases. The role involves scaling the team that owns the platform and its services, a group that includes distributed-systems engineers and authorization/identity experts building features such as MCP gateways, roles and permissions, and platform-as-a-service capabilities for tool execution. The manager keeps the team unblocked, aligns its work with the product organization, and stays technically engaged through code reviews, critical-path contributions, and occasional hands-on coding. Responsibilities include owning deliverables, stability, and uptime; shaping product vision and architecture; owning technical direction and prioritization; hiring and mentoring engineers; defining and delivering platform features; and ensuring reliability, security, and enterprise readiness. The manager also builds leverage into systems through automation and agents to improve efficiency, and is expected to navigate ambiguity and the evolving standards landscape for AI tools.
Engineering Manager - Tool Development and Developer Experience
As the Engineering Manager for Tool Development & Developer Experience, you will lead the team responsible for the MCP framework, the tool catalog, and the systems that enable customers to build tools. You will be ultimately responsible for the team's deliverables, stability, and uptime, while aligning the team's work with the product organization and shaping the team's and company's roadmap. You will hire and mentor engineers, define and deliver new MCP servers, ship high-impact features that ensure reliability, security, and enterprise readiness, and build leverage into the system by automating tasks. While primarily leading people, product, and operations, you are expected to stay technically engaged through reviews, critical-path contributions, and occasional coding to unblock the team. The role involves navigating ambiguity, evolving AI tool standards, and managing scaling challenges.
2026 New Grad | Software Engineer, Full-Stack
Ship critical infrastructure managing real-world logistics and financial data for large enterprises. Own the why by building deep context through customer calls and understanding Loop's value to customers, pushing back on requirements if better solutions exist. Work full-stack across system boundaries including frontend UX, LLM agents, database schema, and event infrastructures. Leverage AI tools to handle routine tasks enabling focus on quality, architecture, and product taste. Constantly optimize development loops, refactor legacy patterns, automate workflows, and fix broken processes to raise velocity.
New Grad | Software Engineer, AI
Ship critical infrastructure that manages real-world logistics and financial data for the largest enterprises in the world. Own the why by building deep context through customer calls and understanding Loop's value to customers, pushing back on requirements when there is a better, faster way to solve problems. Work with full-stack proficiency across system boundaries, from frontend UX to LLM agents, database schemas, and event infrastructure. Leverage AI tools to handle the boilerplate work so focus can stay on quality, architecture, and product taste. Constantly optimize development loops, refactor legacy patterns, automate workflows, and fix broken processes to raise the velocity bar.
Software Engineer, Full Stack
As a Full Stack Software Engineer at Replicant, you will design and deliver technology that powers natural, human-like conversations at scale, helping companies reduce wait times, improve customer satisfaction, and free representatives to focus on complex problems. You will build rich user experiences and backend services that enable customers to design, launch, and monitor AI-powered conversations. Responsibilities include building new features for Replicant's core AI voice and chat products, which handle millions of conversations daily; shipping full-stack features end to end, quickly; integrating improvements in automatic speech recognition, text-to-speech, and conversational AI models into the products; refactoring, optimizing, and debugging production systems while balancing latency, cost, and user experience; participating in regular on-call rotations to monitor live systems; continuously improving systems based on performance metrics and customer feedback; shaping a culture of knowledge sharing and mentorship across distributed systems and enterprise-scale AI design; and participating in team and company-wide office events, with travel required.
Applied AI Engineer
You will be responsible for integrating large language models (LLMs) into software products at Zapier, including setting up the infrastructure necessary to ensure performance, scalability, and reliability. You will understand and develop AI-based applications that rely on data-driven feedback loops: capturing and instrumenting user data, synthesizing core use cases, and implementing and testing strategies with LLMs to enhance performance. Your role involves building tooling and infrastructure that enables teams to iterate on AI products faster without sacrificing safety or reliability. You will monitor the performance and health of AI systems to proactively detect and address issues such as system failures and performance degradation. Additionally, you will collaborate with the Data team and other cross-functional teams to refine and deploy LLM-based features. You will work mostly in Python or TypeScript, building and shipping production-ready code while balancing speed and quality to meet customer needs, and improving cost observability and optimization across teams.
Peak Health - Software Engineer (Backend-leaning)
Ship production-grade backend and frontend features for core member and provider flows using React, TypeScript, APIs, and data layers, with high polish and reliability. Own features end to end, including specification, building, testing, deployment, and monitoring, and handle complex state, permissions, and edge cases. Build and maintain robust system hygiene, including instrumentation, dashboards and alerts, CI/CD pipelines, code reviews, and production debugging. Design, implement, and maintain AI-powered workflows, including tool/function calling, structured outputs, Retrieval-Augmented Generation (RAG), evals, tracing, observability, prompt versioning, and guardrails. Build and operate workflow and agent flows using orchestration patterns similar to Temporal, Dagster, or Airflow, managing retries, idempotency, asynchronous job queues, and failure handling. Collaborate closely with cross-functional partners to deliver reliable, scalable, and user-centric healthcare products.
Applied AI Engineer, Fullstack Software Engineer - Singapore
Collaborate closely with researchers, AI engineers, and product engineers on complex customer projects, integrating cutting-edge AI models into clients' software products. Design, develop, and maintain scalable and robust full-stack applications, ensuring seamless integration between front-end and back-end systems. Develop complex use cases with customers, providing guidance and ensuring the best production integration with back-end and front-end interfaces. Collaborate with product and science teams to continuously improve product and model capabilities based on customer feedback.
Evaluations - Platform Engineer
Own the evaluation stack by building online and offline evaluation pipelines that measure agent quality across ephemeral, high-volume MELT (metrics, events, logs, traces) data, code, and unstructured documents, and set the metrics that define the experience. Define quality at scale by designing evaluations that capture trajectory quality in production incidents spanning hundreds of services, where ground truth is ephemeral, high-volume, and approximate, ensuring metrics predict real outcomes. Build platform abstractions for agents by designing core agent architectures and extending internal frameworks such as sub-agents, MCPs, and middleware, enabling confident iteration and faster shipping with product, platform, and research teams. Productionize these systems by owning latency, observability, and uptime.
Evaluation Engineer
The Evaluation Engineer will own the technical foundation of the auto-evaluation systems, building a comprehensive system that runs fast, is easy to use, and supports quickly building new evaluations. Responsibilities include improving the speed of the core evals infrastructure to minimize latency, designing interfaces suitable for ML engineers, product managers, and customers, and ensuring the system architecture lets team members easily add examples and run evaluations. The role also involves ensuring evaluations are accurate and reliable by encoding knowledge about how pharma customers make decisions and providing appropriate statistical tests and confidence intervals for trustworthy results. Additionally, the engineer is expected to spend most of their time on the core eval platform, collaborate with the evals team on specific evals, mentor an evals engineering intern, and learn how users interact with the evaluation system in order to improve it.