Docker AI Jobs

Discover the latest remote and onsite Docker AI roles across top active AI companies. Updated hourly.

Check out 252 new Docker AI roles posted on The Homebase

Data Engineer | Power

New
Top rated
Gecko Robotics
Full-time
Posted

As a Data Engineer, you will build and evolve the data backbone of an AI-first product including document intelligence, time-series IoT data, and agentic AI systems. You will design, implement, and operate data systems across the full lifecycle from raw ingestion to AI-driven outputs used by customers. You will work directly with customers and internal stakeholders to understand problems and translate them into technical solutions, iterating quickly. Responsibilities include building pipelines that support document processing, sensor data, and ML workflows, contributing to feature engineering and model experimentation when needed, and owning systems in production. You will make architectural decisions, improve system reliability over time, and help define best practices as the team and product scale.

$154,000 – $204,000 per year (USD)

New York, United States
Maybe global
Onsite
Python
MLflow
Docker
Kubernetes
GCP

Senior Software Engineer, Connectivity

New
Top rated
Scale AI
Full-time
Posted

The role involves partnering closely with ML and AI research teams to translate research needs related to post-training, evaluations, and safety/alignment into clear product roadmaps and measurable outcomes. Responsibilities include working hands-on with leading AI teams and frontier research labs to tackle technical problems in model improvement and deployment, shaping and proposing model improvement work by translating objectives into well-defined statements of work and execution plans, and collaborating on the design of the data, primitives, and tooling required to improve frontier models in practice. The position also requires owning the end-to-end lifecycle of projects, including discovery, writing PRDs and technical specs, prioritizing trade-offs, running experiments, shipping initial solutions, and scaling successful pilots into repeatable offerings. It also covers leading complex, high-stakes engagements: running technical working sessions with senior stakeholders, defining success metrics, surfacing risks early, and driving programs to measurable outcomes. Finally, the role requires partnering closely across research, platform, operations, security, and finance to deliver production-grade results for demanding customers, and building rigorous evaluation frameworks such as benchmarks and RLVR to improve technical execution across accounts.

$201,600 – $241,920 per year (USD)

San Francisco or New York, United States
Maybe global
Onsite
Python
Prompt Engineering
Model Evaluation
MLOps
MLflow

Software Engineer - Sensing, Consumer Products

New
Top rated
OpenAI
Full-time
Posted

As a Software Engineer on Consumer Products Research, responsibilities include building and shipping production software for sensing algorithms by translating algorithm prototypes into reliable end-to-end systems, and implementing and owning key parts of the Python shipping pipeline, including integration surfaces, evaluation hooks, and quality/performance guardrails. The role also involves developing embedded/on-device software in an RTOS environment (such as Zephyr) and deploying models to device runtimes and hardware accelerators. Additional responsibilities include optimizing real-time on-device perception loops for stability, latency, power, and memory constraints, creating data collection and instrumentation tooling to bring up new sensing modalities and accelerate iteration from prototype to dataset to model to device, and partnering cross-functionally with algorithms, human data, and firmware/hardware teams to debug, profile, and harden systems against real-world variability.

$325,000 per year (USD)

San Francisco, United States
Maybe global
Hybrid
Python
C++
Docker
Kubernetes
CI/CD

Production Engineer - Maritime

New
Top rated
Helsing
Full-time
Posted

The role involves developing machine learning and artificial intelligence systems by leveraging and extending state-of-the-art methods and architectures, designing experiments, and conducting benchmarks to evaluate and improve AI performance in real-world scenarios. The candidate will participate in impactful projects and collaborate with multiple teams and backgrounds to integrate cutting-edge ML/AI into production systems. Responsibilities also include ensuring AI software is deployed to production with proper testing, quality assurance, and monitoring.

Undisclosed

Plymouth
Maybe global
Onsite
Python
PyTorch
TensorFlow
Reinforcement Learning
MLOps

Senior Software Engineer, ML Core

New
Top rated
Zoox
Full-time
Posted

Design, develop, and deploy custom and off-the-shelf ML libraries and tooling to improve ML development, training, deployment, and on-vehicle model inference latency. Build tooling and establish development best practices to manage and upgrade foundational libraries such as the Nvidia driver, PyTorch, and TensorRT, improving the ML developer experience and expediting debugging efforts. Collaborate closely with cross-functional teams including applied ML research, high-performance compute, advanced hardware engineering, and data science to define requirements and align on architectural decisions. Work across multiple ML teams within Zoox, supporting in- and off-vehicle ML use cases and coordinating to meet the needs of vehicle and ML teams to reduce the time from ideation to productionization of AI innovations.

$214,000 – $290,000 per year (USD)

Foster City, United States
Maybe global
Onsite
Python
C++
PyTorch
TensorFlow
JAX

Engineering Manager - Engine and Platform

New
Top rated
Arcade.dev
Full-time
Posted

The Engineering Manager for the Engine and Platform leads the team responsible for building, maintaining, and deploying the runtime that lets customers run, manage, secure, and understand AI tools, enabling advanced agentic use-cases. This role involves scaling the team that owns the development of the platform and services, which includes distributed systems engineers and authorization/identity experts developing features like MCP gateways, roles and permissions, and platform-as-a-service capabilities for tool executions. The manager ensures the team is unblocked, aligns the team's work with the product organization, and stays technically engaged through code reviews, critical contributions, and occasional hands-on coding. Responsibilities include owning deliverables, stability, and uptime, shaping product vision and architecture, owning technical direction and prioritization, hiring and mentoring engineers, defining and delivering platform features, and ensuring reliability, security, and enterprise readiness. The manager also focuses on building leverage into systems through automation and agents to improve efficiency and is expected to navigate ambiguity and evolving standards in AI tools.

$200,000 – $275,000 per year (USD)

San Francisco, United States
Maybe global
Onsite
Go
TypeScript
Python
CI/CD
Docker

Engineering Manager - Tool Development and Developer Experience

New
Top rated
Arcade.dev
Full-time
Posted

As the Engineering Manager for Tool Development & Developer Experience, you will lead the team responsible for the MCP framework, tool catalog, and systems enabling customers to build tools. You will be ultimately responsible for the team's deliverables, stability, and uptime while aligning the team’s work with the product organization and shaping the team's and company’s roadmap. You will hire and mentor engineers, define and deliver new MCP servers, ship high-impact features ensuring reliability, security, and enterprise readiness, and build leverage into the system by automating tasks. While primarily leading people, product, and operations, you are expected to stay technically engaged through reviews, critical-path contributions, and occasional coding to unblock the team. The role involves navigating ambiguity, evolving AI tool standards, and managing scaling challenges.

$200,000 – $275,000 per year (USD)

San Francisco, United States
Maybe global
Onsite
Python
TypeScript
Go
MLOps
Docker

Principal Product Manager – Agentic AI Systems

New
Top rated
Level AI
Full-time
Posted

Define and execute product initiatives for agentic AI systems focusing on measurable customer and business outcomes. Own significant parts of the agentic system lifecycle including orchestration, decisioning, evaluation, and iteration. Contribute to building a repeatable framework for launching, evaluating, and improving agentic capabilities across customers. Help define how agentic systems are measured and improved in production balancing autonomy with safety and reliability. Partner closely with Engineering, Applied AI/ML, Design, and Solutions teams to ship production-ready systems. Work directly with customers to understand workflows, requirements, and success criteria. Drive customer-informed prioritization by staying close to live deployments and real usage patterns. Support best practices for agent evaluation, iteration, and safe rollout. Represent the product in customer conversations, demos, and feedback sessions.

Undisclosed

Bay Area, United States
Maybe global
Hybrid
Python
AI
LLM
Model Evaluation
MLOps

Software Engineer II (India - Bangalore)

New
Top rated
Giga
Full-time
Posted

Engineers at Giga work on problems like building AI agents with near-zero hallucination rates, creating a voice experience that is better than talking to a human, and building self-learning agents that optimize metrics.

₹10,000,000 – ₹11,000,000 per year (INR)

Bengaluru (Bangalore), India
Maybe global
Onsite
Python
AWS
Google Cloud
Kubernetes
Docker

Software Engineering Manager

New
Top rated
Mirage
Full-time
Posted

Oversee the design and operation of the core platform including third-party providers, storage, billing, observability, security, and API. Provide technical leadership for various product and platform features. Improve developer experience to enable the whole team to ship faster. Guide efforts that bridge AI research to production across all modalities such as video, audio, image, and text. Understand the capabilities and limitations of state-of-the-art AI models and leverage them in products. Partner with product, design, and research teams to ensure development aligns with user needs and business objectives.

$250,000 – $350,000 per year (USD)

New York, United States
Maybe global
Onsite
Python
JavaScript
Java
Docker
Kubernetes

Want to see more AI Engineer jobs?

View all jobs

Access all 4,256 remote & onsite AI jobs.

Join our private AI community to unlock full job access, and connect with founders, hiring managers, and top AI professionals.
(Yes, it’s still free—your best contributions are the price of admission.)

Frequently Asked Questions

Need help with something? Here are our most frequently asked questions.


[{"question":"What are Docker AI jobs?","answer":"Docker AI jobs involve developing, deploying, and maintaining AI applications using containerization technology. These positions focus on creating reproducible AI workflows, packaging machine learning models with dependencies, and ensuring consistent execution across environments. Professionals in these roles typically work on MLOps pipelines, containerized AI applications, and implement solutions that seamlessly transition from development to production."},{"question":"What roles commonly require Docker skills?","answer":"Machine Learning Engineers, Data Scientists, AI Developers, and DevOps Engineers working on AI systems commonly require containerization skills. These professionals use containers to package models, ensure reproducibility, and streamline deployment pipelines. Full-stack developers building AI-powered applications and MLOps specialists implementing continuous integration workflows also frequently need proficiency with containerized environments and deployment strategies."},{"question":"What skills are typically required alongside Docker?","answer":"Alongside containerization expertise, employers typically seek proficiency in AI frameworks like TensorFlow, PyTorch, and Hugging Face. Familiarity with Docker Compose for multi-container applications, version control systems, and CI/CD pipelines is essential. Additional valuable skills include YAML configuration, cloud deployment knowledge, GPU acceleration techniques, and experience with MLOps practices that facilitate model development, testing, and production deployment."},{"question":"What experience level do Docker AI jobs usually require?","answer":"AI positions requiring containerization skills typically seek mid-level professionals with 2-4 years of practical experience. Entry-level roles may accept candidates with demonstrated proficiency in basic container commands, Dockerfile creation, and image management. Senior positions often demand extensive experience integrating containers into production ML pipelines, optimizing container resources, and implementing advanced deployment strategies across cloud and edge environments."},{"question":"What is the salary range for Docker AI jobs?","answer":"Compensation for AI professionals with containerization expertise varies based on location, experience level, industry, and additional technical skills. Junior roles typically start at competitive market rates, while senior positions command premium salaries. The most lucrative opportunities combine deep learning expertise, container orchestration experience, and cloud platform knowledge. Specialized industries like finance or healthcare often offer higher compensation for these in-demand skill combinations."},{"question":"Are Docker AI jobs in demand?","answer":"Containerization skills remain highly sought after in AI development, with strong demand driven by organizations implementing MLOps practices and scalable AI deployment strategies. Recent partnerships like Anaconda-Docker and trends in serverless AI containers have intensified hiring needs. 
The emergence of specialized tools like Docker Model Runner, Docker Offload, and Docker AI Catalog reflects the growing importance of containerized workflows in modern AI development and deployment practices."},{"question":"What is the difference between Docker and Kubernetes in AI roles?","answer":"In AI roles, containerization focuses on packaging individual applications with dependencies for consistent execution, while Kubernetes orchestrates multiple containers at scale. ML engineers might use Docker to create reproducible model environments but implement Kubernetes to manage production deployments across clusters. While containerization handles the model packaging, Kubernetes addresses the scalability, load balancing, and automated recovery needed for production AI systems serving multiple users simultaneously."}]
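For readers new to containerized AI work, below is a minimal sketch of the kind of model-serving entry point these roles typically package into a container image. It is illustrative only: it assumes FastAPI, pydantic, joblib, and scikit-learn are installed, and the model file name, endpoint, and input format are hypothetical rather than taken from any listing above.

# Minimal illustrative sketch of a model-serving entry point that would be
# packaged into a container image. Model path and endpoint are hypothetical.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load the model artifact that would be baked into the image at build time.
model = joblib.load("model.joblib")

class PredictRequest(BaseModel):
    # A single feature vector; a real service would validate length and ranges.
    features: list[float]

@app.post("/predict")
def predict(request: PredictRequest) -> dict:
    # scikit-learn estimators expect a 2D array: one row per sample.
    prediction = model.predict([request.features])
    return {"prediction": prediction.tolist()}

In practice, the container image for a service like this pins the Python dependencies, copies in the model artifact, and exposes the port the app listens on, which is what keeps development and production environments consistent across the roles listed here.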