AI Solutions Manager (San Francisco)
The AI Solutions Manager will build new instrumentation libraries for emerging LLM providers and agent frameworks. They will maintain and enhance existing instrumentation across the Python and TypeScript ecosystems, covering tools such as OpenAI, Anthropic, LlamaIndex, CrewAI, and others. The role involves driving improvements to the semantic conventions and OpenTelemetry standards that define AI observability. The manager will collaborate with the global developer community through GitHub, Slack, and conferences, as well as with Arize product managers and solution architects, and will take complex problems from ideation to completion with full ownership and accountability.
Applied AI Engineer – Agentic Workflows (Korea)
Work closely with enterprise customers to translate high-value, ambiguous business problems into well-framed agentic problems with clear success criteria and evaluation methodologies. Provide technical leadership across the full development and evaluation lifecycle, including post-deployment iteration, for agentic workflows. Lead the design, build, and delivery of LLM-powered agents that reason, plan, and act across tools and data sources with enterprise-grade reliability and performance. Balance rapid iteration with enterprise requirements, evolving prototypes into stable, reusable solutions. Define and apply evaluation and quality standards to measure success, failures, and regressions. Debug real-world agent behavior and systematically improve prompts, workflows, tools, and guardrails. Mentor engineers across distributed teams. Drive clarity in ambiguous situations, build alignment, and raise engineering quality across the organization. Contribute to shared frameworks and patterns that enable consistent delivery across customers.
Engineering Manager - Engine and Platform
The Engineering Manager for the Engine and Platform leads the team responsible for building, maintaining, and deploying the runtime that lets customers run, manage, secure, and understand AI tools, enabling advanced agentic use cases. The role involves scaling the team that owns the development of the platform and its services, which includes distributed systems engineers and authorization/identity experts building features such as MCP gateways, roles and permissions, and platform-as-a-service capabilities for tool execution. The manager ensures the team is unblocked, aligns the team's work with the product organization, and stays technically engaged through code reviews, critical contributions, and occasional hands-on coding. Responsibilities include owning deliverables, stability, and uptime; shaping product vision and architecture; owning technical direction and prioritization; hiring and mentoring engineers; defining and delivering platform features; and ensuring reliability, security, and enterprise readiness. The manager also focuses on building leverage into systems through automation and agents to improve efficiency, and is expected to navigate ambiguity and evolving standards in AI tools.
Engineering Manager - Tool Development and Developer Experience
As the Engineering Manager for Tool Development & Developer Experience, you will lead the team responsible for the MCP framework, the tool catalog, and the systems that enable customers to build tools. You will be ultimately responsible for the team's deliverables, stability, and uptime while aligning the team's work with the product organization and shaping the team's and company's roadmap. You will hire and mentor engineers, define and deliver new MCP servers, ship high-impact features that ensure reliability, security, and enterprise readiness, and build leverage into the system by automating tasks. While primarily leading people, product, and operations, you are expected to stay technically engaged through reviews, critical-path contributions, and occasional coding to unblock the team. The role involves navigating ambiguity, evolving AI tool standards, and managing scaling challenges.
2026 New Grad | Software Engineer, Full-Stack
Ship critical infrastructure managing real-world logistics and financial data for large enterprises. Own the why by building deep context through customer calls and understanding Loop's value to customers, pushing back on requirements when better solutions exist. Work full-stack across system boundaries, including frontend UX, LLM agents, database schemas, and event infrastructure. Leverage AI tools to handle routine tasks, enabling focus on quality, architecture, and product taste. Constantly optimize development loops, refactor legacy patterns, automate workflows, and fix broken processes to raise velocity.
New Grad | Software Engineer, AI
Ship critical infrastructure by managing real-world logistics and financial data for the largest enterprises in the world. Own the why by building deep context through customer calls and understanding Loop's value to customers, pushing back on requirements when there is a better, faster way to solve the problem. Work with full-stack proficiency across system boundaries, from frontend UX to LLM agents, database schemas, and event infrastructure. Leverage AI tools to handle the boilerplate work so that focus can stay on quality, architecture, and product taste. Constantly optimize development loops, refactor legacy patterns, automate workflows, and fix broken processes to raise the velocity bar.
Software Engineer, Full Stack
As a Full Stack Software Engineer at Replicant, you will design and deliver technology that powers natural, human-like conversations at scale, helping companies reduce wait times, improve customer satisfaction, and free representatives to focus on complex problems. You will build rich user experiences and backend services that enable customers to design, launch, and monitor AI-powered conversations. Responsibilities include building new features for Replicant's core AI voice and chat products, which handle millions of conversations daily; shipping full-stack, end-to-end features quickly; integrating automatic speech recognition, text-to-speech, and conversational AI model improvements into the products; refactoring, optimizing, and debugging production systems while balancing latency, cost, and user experience; participating in regular on-call rotations to monitor live systems; continuously improving systems based on performance metrics and customer feedback; shaping a culture of knowledge sharing and mentorship across distributed systems and enterprise-scale AI design; and participating in team and company-wide office events, with travel required.
Applied AI Engineer
You will be responsible for integrating large language models (LLMs) into software products at Zapier, including setting up the infrastructure needed to ensure performance, scalability, and reliability. You will develop AI-based applications that rely on data-driven feedback loops: capturing and instrumenting user data, synthesizing core use cases, and implementing and testing strategies with LLMs to improve performance. Your role involves building tooling and infrastructure that let teams iterate on AI products faster without sacrificing safety or reliability. You will monitor the performance and health of AI systems to proactively detect and address issues such as system failures and performance degradation. Additionally, you will collaborate with the Data team and other cross-functional teams to refine and deploy LLM-based features. You will work mostly in Python or TypeScript, building and shipping production-ready code while balancing speed and quality to meet customer needs, and improving cost observability and optimization across teams.
Peak Health - Software Engineer (Backend-leaning)
Ship production-grade backend and frontend features for core member and provider flows using React, TypeScript, APIs, and data layers, ensuring high polish and reliability. Own features end-to-end, including specification, building, testing, deployment, monitoring, and handling complex state, permissions, and edge cases. Build and maintain robust system hygiene, including instrumentation, dashboards and alerts, CI/CD pipelines, code reviews, and production debugging. Design, implement, and maintain AI-powered workflows covering tool/function calling, structured outputs, Retrieval-Augmented Generation (RAG), evals, tracing, observability, prompt versioning, and guardrails. Build and operate workflow and agent flows using orchestration patterns similar to those in Temporal, Dagster, or Airflow, managing retries, idempotency, asynchronous job queues, and failure handling. Collaborate closely with cross-functional partners to deliver reliable, scalable, and user-centric healthcare products.
Applied AI Engineer, Fullstack Software Engineer - Singapore
Collaborate closely with researchers, AI engineers, and product engineers on complex customer projects, integrating cutting-edge AI models into clients' software products. Design, develop, and maintain scalable and robust full-stack applications, ensuring seamless integration between front-end and back-end systems. Develop complex use cases with customers, providing guidance and ensuring the best production integration with back-end and front-end interfaces. Collaborate with product and science teams to continuously improve product and model capabilities based on customer feedback.