Docker AI Jobs

Discover the latest remote and onsite Docker AI roles across top active AI companies. Updated hourly.

Check out 252 new Docker AI roles posted on The Homebase

Delivery Engineer

New
Top rated
PhysicsX
Full-time

Take part in building a platform used by Data Scientists and Simulation Engineers to build, train, and deploy Deep Physics Models. Work on a focused, stream-aligned, cross-functional team (back-end, front-end, design) empowered to make its own implementation decisions in pursuit of its objectives. Gather and apply domain knowledge and experience from the Data Scientists and Simulation Engineers using your product.

Undisclosed


Singapore
Maybe global
Hybrid
Python
Go
Docker
Kubernetes
CI/CD

Software Engineer, AI Video Agent

New
Top rated
Opusclip
Full-time

You will build a new team in the US to develop the next-generation smart AI video maker, which ingests users' content and composes quality videos for social media. You will work closely with product and marketing teams to quickly prototype, beta test, and ship the final version of this product using agent technology. The technology stack includes GCP, TypeScript, Python, Redis, MongoDB, Cloud Storage, and various AI models. You will help rapidly deliver both prototype and production versions of this product, contributing to an innovative and ambitious project.

$142,000 – $213,000 / year (USD)

Palo Alto, United States
Maybe global
Onsite
TypeScript
Python
GCP
Prompt Engineering
AI

AI Engineer (New Graduate)

New
Top rated
Distyl
Full-time

As an AI Engineer (New Graduate) at Distyl, you will design, implement, and deploy GenAI applications under the guidance of senior engineers, contributing to prompt design, agent logic, retrieval-augmented generation (RAG), and model evaluation to build full-stack AI applications that deliver measurable business value. You will gain exposure to customer-facing work by shadowing technical conversations and learning how business needs are translated into system design, with opportunities to take on more responsibility in technical decisions and implementation.

You will partner with senior engineers to understand customer problems and translate requirements into technical solutions, and participate in customer discussions, solution design sessions, and iterative delivery. Additionally, you will help improve Distillery, Distyl’s internal LLM application platform, by building reusable components, tools, and workflows, and learn best practices for scalable, maintainable AI infrastructure.

You will write clean, well-tested, observable production-quality code that meets reliability, performance, and security standards, and learn how production AI systems are monitored, debugged, and improved over time. You will assist with evaluating AI systems across accuracy, latency, cost, and robustness, applying user feedback and metrics to improve system performance. Finally, you will continuously develop your skills in LLMs, software engineering, and AI through mentorship, code reviews, and hands-on project work, learning modern development workflows and deployment practices used in enterprise AI.

Undisclosed


New York, United States
Maybe global
Hybrid
Python
TypeScript
Prompt Engineering
RAG
LangChain

Software Engineer, ML Data Infrastructure

New
Top rated
Ideogram
Full-time

The Software Engineer, ML Data Infrastructure will collaborate with engineers to build AI design experiences and tackle complex technical challenges, including scaling distributed systems. The role involves building robust data infrastructure for foundation models at petabyte scale, ensuring reliability and performance across multi-modal training pipelines, and optimizing data processing workflows for massive throughput. You will work with distributed systems, TPU infrastructure, and large-scale storage solutions, and partner with research scientists to translate data requirements into production-grade systems that accelerate model development cycles.

Undisclosed


Toronto, Canada
Maybe global
Onsite
Python
Kubernetes
GCP
Docker
Data Pipelines

Software Engineer, Codex for Teams

New
Top rated
OpenAI
Full-time

As a Software Engineer on the Codex for Teams team, you will be responsible for shaping the evolution of Codex by identifying how teams actually use and sometimes break AI-powered software engineering tools, driving changes across product, infrastructure, and model behavior to make Codex a reliable teammate for organizations. You will build core team and enterprise primitives that enable Codex to scale, including role-based access control (RBAC), admin and audit surfaces, usage and rate limits, pricing controls, managed configuration and constraints, and analytics for deep visibility into Codex usage. You will design and own secure, observable, full-stack systems that power Codex across web, IDEs, CLI, and CI/CD environments, integrating with enterprise identity and governance systems (SSO/SAML/OIDC, SCIM, policy enforcement) and developing data-access patterns that are performant, compliant, and trustworthy. The role involves leading real-world deployments and launches by working directly with customers and the Go To Market team to roll out Codex, using live usage and operational feedback to rapidly iterate and improve the product and platform capabilities. This position owns systems end-to-end, from architecture and implementation to production operations, emphasizing quality and velocity.

$255,000 – $325,000 / year (USD)

San Francisco, United States
Maybe global
Onsite
Python
Go
Docker
Kubernetes
CI/CD

Solutions Engineer (AI/ML, Pre-Sales)

New
Top rated
DatologyAI
Full-time

The Solutions Engineer (AI/ML, Pre-Sales) will work closely with strategic customers to understand their data curation needs, business challenges, and technical requirements. The role involves leading end-to-end customer proofs of concept (PoCs) that connect data curation to training behavior and evaluation outcomes, including dataset analysis, training plan design, and interpreting results. They will partner with customer machine learning teams to map data and curation strategies, design and execute evaluation plans for base and post-trained models, select appropriate benchmarks and metrics, and run model evaluations. Additionally, the engineer will produce customer-ready evaluation reports detailing methodology, metrics, baselines, ablations (e.g., curated vs raw data), conclusions, and recommendations for productionization. They must communicate technical results effectively to both ML experts and executive stakeholders, explaining tradeoffs in compute, latency, and deployment cost. Collaboration with go-to-market, engineering, and research teams is essential to deliver compelling demos, align on requirements, and incorporate customer insights into model training and product strategies. The role also includes providing technical guidance, training, and documentation to enable prospects to confidently assess the solution.

$230,000 – $300,000 / year (USD)

Redwood City, United States
Maybe global
Onsite
Python
PyTorch
Hugging Face
Distributed Training
Cloud Platforms

Senior Software Engineer, Applied AI

New
Top rated
Lumi AI
Full-time

As a Software Engineer working on AI systems, you will play a foundational role in research, experimentation, and rapid improvement of AI systems to build a capable, reliable AI automation platform used worldwide in mission-critical production environments. Tasks involve designing experiments and testing ideas to optimize key internal AI benchmarks; designing and improving evaluation frameworks to accelerate experimentation speed and direction; training, fine-tuning, and optimizing machine learning models; performing rigorous evaluation and testing for model accuracy, generalization, and performance; collaborating on core product development to enhance platform capabilities; and setting up observability and monitoring systems to safety-check model behavior in critical settings.

$170,000 – $250,000 / year (USD)

United States
Maybe global
Onsite
Python
C++
Model Evaluation
MLOps
Docker

Product Security Applied AI Intern, Summer 2026

New
Top rated
Crusoe
Intern

Assist in designing and implementing custom large language models (LLMs) and fine-tuning models for specific tasks. Build and experiment with agent libraries and workflow orchestration frameworks. Explore neo-cloud technologies, containerized environments, and virtualized infrastructure. Learn and apply security and privacy best practices in AI pipelines and deployments. Collaborate with the team to document, test, and optimize agent behaviors and models. Participate in knowledge sharing and mentorship sessions to gain exposure to AI, cloud, and security tradecraft.

$1,905 / week (USD)

San Francisco, United States
Maybe global
Onsite
Python
PyTorch
TensorFlow
OpenAI API
Hugging Face

Lead Machine Learning Engineer

New
Top rated
Faculty
Full-time

Set the technical direction for complex machine learning projects, balancing trade-offs and guiding team priorities. Design, implement, and maintain reliable, scalable ML and software systems while justifying key architectural decisions. Define project problems, develop roadmaps, and oversee delivery across multiple workstreams in often ill-defined, high-risk environments. Drive the development of shared resources and libraries across the organisation and guide other engineers in contributing to them. Lead hiring processes, make informed selection decisions, and mentor multiple individuals to foster team growth. Proactively develop and execute recommendations for adopting new technologies and changing ways of working to stay competitive. Act as a technical expert and coach for customers, accurately estimate large workstreams, and defend rationale to stakeholders.

Undisclosed


London, United Kingdom
Maybe global
Hybrid
Python
Scikit-learn
TensorFlow
PyTorch
Docker

Software Engineer, macOS Core Product - Palm Coast, USA

New
Top rated
Speechify
Full-time

Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for diverse use cases. Deploy and operate the core ML inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture that improve performance, latency, throughput, and efficiency of deployed models. Build tools to provide visibility into bottlenecks and sources of instability, and design and implement solutions to address the highest priority issues.

$140,000 – $200,000 / year (USD)

Palm Coast, United States
Maybe global
Remote
Python
GCP
Docker
Kubernetes
MLflow

Want to see more AI Engineer jobs?

View all jobs

Access all 4,256 remote & onsite AI jobs.

Join our private AI community to unlock full job access, and connect with founders, hiring managers, and top AI professionals.
(Yes, it’s still free—your best contributions are the price of admission.)

Frequently Asked Questions

Need help with something? Here are our most frequently asked questions.


What are Docker AI jobs?

Docker AI jobs involve developing, deploying, and maintaining AI applications using containerization technology. These positions focus on creating reproducible AI workflows, packaging machine learning models with dependencies, and ensuring consistent execution across environments. Professionals in these roles typically work on MLOps pipelines and containerized AI applications, and implement solutions that transition seamlessly from development to production.

What roles commonly require Docker skills?

Machine Learning Engineers, Data Scientists, AI Developers, and DevOps Engineers working on AI systems commonly require containerization skills. These professionals use containers to package models, ensure reproducibility, and streamline deployment pipelines. Full-stack developers building AI-powered applications and MLOps specialists implementing continuous integration workflows also frequently need proficiency with containerized environments and deployment strategies.

What skills are typically required alongside Docker?

Alongside containerization expertise, employers typically seek proficiency in AI frameworks like TensorFlow, PyTorch, and Hugging Face. Familiarity with Docker Compose for multi-container applications, version control systems, and CI/CD pipelines is essential. Additional valuable skills include YAML configuration, cloud deployment knowledge, GPU acceleration techniques, and experience with MLOps practices that facilitate model development, testing, and production deployment.

What experience level do Docker AI jobs usually require?

AI positions requiring containerization skills typically seek mid-level professionals with 2–4 years of practical experience. Entry-level roles may accept candidates with demonstrated proficiency in basic container commands, Dockerfile creation, and image management. Senior positions often demand extensive experience integrating containers into production ML pipelines, optimizing container resources, and implementing advanced deployment strategies across cloud and edge environments.

What is the salary range for Docker AI jobs?

Compensation for AI professionals with containerization expertise varies based on location, experience level, industry, and additional technical skills. Junior roles typically start at competitive market rates, while senior positions command premium salaries. The most lucrative opportunities combine deep learning expertise, container orchestration experience, and cloud platform knowledge. Specialized industries like finance or healthcare often offer higher compensation for these in-demand skill combinations.

Are Docker AI jobs in demand?

Containerization skills remain highly sought after in AI development, with strong demand driven by organizations implementing MLOps practices and scalable AI deployment strategies. Recent partnerships like Anaconda-Docker and trends in serverless AI containers have intensified hiring needs. The emergence of specialized tools like Docker Model Runner, Docker Offload, and the Docker AI Catalog reflects the growing importance of containerized workflows in modern AI development and deployment practices.

What is the difference between Docker and Kubernetes in AI roles?

In AI roles, Docker focuses on packaging individual applications with their dependencies for consistent execution, while Kubernetes orchestrates multiple containers at scale. ML engineers might use Docker to create reproducible model environments but implement Kubernetes to manage production deployments across clusters. Docker handles the model packaging; Kubernetes addresses the scalability, load balancing, and automated recovery needed for production AI systems serving multiple users simultaneously.
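To make "packaging machine learning models with dependencies" concrete, here is a minimal, hypothetical Dockerfile for serving a pickled model behind a small Python API. The file names (`app.py`, `model.pkl`, `requirements.txt`) and versions are illustrative assumptions, not taken from any listing above:

```dockerfile
# Illustrative only: a minimal image for serving a serialized ML model.
FROM python:3.11-slim

WORKDIR /app

# Install pinned dependencies first so this layer is cached
# across code-only changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and the serialized model into the image.
COPY app.py model.pkl ./

# Run as a non-root user, a common container security baseline.
RUN useradd --create-home appuser
USER appuser

EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```

Built with `docker build -t model-api .` and run with `docker run -p 8000:8000 model-api`, a sketch like this gives the reproducible, environment-independent deployment that the Dockerfile-creation and MLOps skills above refer to.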