Software Engineer, Codex Runtime
The responsibilities include shaping the evolution of Codex by identifying how teams use and break AI-powered software engineering and driving changes across product, infrastructure, and model behavior to improve reliability; building core team and enterprise primitives that enable Codex usability at scale, such as container orchestration, virtual machine provisioning and configuration, execution sandboxes, shared block storage, RBAC, admin and audit surfaces, usage and pricing controls, managed configuration and constraints, and analytics for visibility into Codex usage; designing and owning secure, observable, full-stack systems that power Codex across web, IDEs, CLI, and CI/CD, integrating with enterprise identity and governance systems (SSO/SAML/OIDC, SCIM, policy enforcement) and developing data-access patterns that are performant, compliant, and trustworthy; and leading real-world deployments and launches by working with customers and go-to-market teams to roll out Codex across teams, using live usage and operational signals to iterate on and improve the product and platform.
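To make the execution-sandbox primitive mentioned above concrete, here is a minimal, hypothetical Python sketch, not OpenAI's implementation: it runs a command in a child process with CPU-time and memory limits via the Unix-only resource module. A production sandbox would layer namespaces, seccomp filters, and filesystem isolation on top of this.

```python
import resource
import subprocess

def run_sandboxed(cmd, cpu_seconds=5, memory_bytes=256 * 1024 * 1024, timeout=10):
    """Run a command in a child process with CPU-time and memory limits.

    Hypothetical sketch of an execution sandbox (Unix-only).
    """
    def set_limits():
        # Applied inside the child process just before exec().
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (memory_bytes, memory_bytes))

    return subprocess.run(
        cmd,
        preexec_fn=set_limits,
        capture_output=True,
        text=True,
        timeout=timeout,
    )

if __name__ == "__main__":
    result = run_sandboxed(["python3", "-c", "print('hello from the sandbox')"])
    print(result.stdout)
```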
Freelance Software Developer (Kotlin) - AI Trainer
As an AI Tutor in Coding specializing in Kotlin development, the responsibilities include designing high-quality technical content, examples, and explanations that demonstrate best practices in Kotlin development; collaborating with engineers to ensure accuracy and consistency across code samples, tutorials, and developer guides; exploring modern Kotlin frameworks and tools to create practical, real-world examples for learning and testing; and continuously refining content based on feedback, emerging patterns, and advances in the Kotlin ecosystem. The role also involves contributing to projects aligned with your skills by creating training prompts and refining model responses, helping to shape the future of AI while ensuring the technology benefits everyone.
MEP Manager, Data Centers
Develop novel architectures, system optimizations, optimization algorithms, and data-centric optimizations that significantly improve on the state of the art. Take advantage of Together's computational infrastructure to create the best open models in their class. Understand and improve the full lifecycle of building open models, and release and publish insights through blogs, academic papers, and other venues. Collaborate with cross-functional teams to deploy models and make them available to the wider community and customer base. Stay up to date with the latest advancements in machine learning.
AI / ML Solutions Engineer
The AI / ML Solutions Engineer at Anyscale is responsible for designing, implementing, and scaling machine learning and AI workloads using Ray and Anyscale directly with customers. This includes implementing production AI / ML workloads such as distributed model training, scalable inference and serving, and data preprocessing and feature pipelines. The role involves working hands-on with customer codebases to refactor or adapt existing workloads to Ray. The engineer advises customers on ML system architecture including application design for distributed execution, resource management and scaling strategies, and reliability, fault tolerance, and performance tuning. They guide customers through architectural and operational changes needed to adopt Ray and Anyscale effectively. Additionally, the engineer partners with customer MLE and MLOps teams to integrate Ray into existing platforms and workflows, supports CI/CD, monitoring, retraining, and operational best practices, and helps customers transition from experimentation to production-grade ML systems. They also enable customer teams through working sessions, design reviews, training delivery, and hands-on guidance, contribute feedback to product, engineering, and education teams, and help develop reference architectures, examples, and best practices based on real customer use cases.
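To illustrate the kind of refactor described above, the following is a hedged Python sketch, assuming a simple serial scoring loop as the starting point; it uses Ray's public task API (ray.remote, ray.get) to fan the work out across a cluster, with score_batch standing in for real feature computation or inference.

```python
import ray

ray.init()  # Connects to a configured cluster if available, otherwise starts a local one.

@ray.remote
def score_batch(batch):
    # Placeholder for real per-batch work (feature computation or model inference).
    return sum(batch) / len(batch)

batches = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# Original serial version: results = [score_batch(b) for b in batches]
# Adapted to Ray: each batch becomes a task scheduled across the cluster.
futures = [score_batch.remote(b) for b in batches]
results = ray.get(futures)
print(results)
```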
Software Engineer, Codex for Teams
As a Software Engineer on the Codex for Teams team, you will be responsible for shaping the evolution of Codex by identifying how teams actually use and sometimes break AI-powered software engineering tools, driving changes across product, infrastructure, and model behavior to make Codex a reliable teammate for organizations. You will build core team and enterprise primitives that enable Codex to scale, including role-based access control (RBAC), admin and audit surfaces, usage and rate limits, pricing controls, managed configuration and constraints, and analytics for deep visibility into Codex usage. You will design and own secure, observable, full-stack systems that power Codex across web, IDEs, CLI, and CI/CD environments, integrating with enterprise identity and governance systems (SSO/SAML/OIDC, SCIM, policy enforcement) and developing data-access patterns that are performant, compliant, and trustworthy. The role involves leading real-world deployments and launches by working directly with customers and the go-to-market team to roll out Codex, using live usage and operational feedback to rapidly iterate and improve the product and platform capabilities. This position owns systems end-to-end, from architecture and implementation to production operations, emphasizing quality and velocity.
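As a toy illustration of the RBAC primitive (the roles and permissions below are invented for illustration, not Codex's actual model), a permission check might look like this in Python:

```python
from enum import Enum

class Permission(Enum):
    RUN_TASKS = "run_tasks"
    VIEW_AUDIT_LOG = "view_audit_log"
    MANAGE_BILLING = "manage_billing"

# Hypothetical role-to-permission mapping; an enterprise system would load this
# from managed configuration rather than hard-coding it.
ROLE_PERMISSIONS = {
    "member": {Permission.RUN_TASKS},
    "admin": {Permission.RUN_TASKS, Permission.VIEW_AUDIT_LOG, Permission.MANAGE_BILLING},
}

def is_allowed(role: str, permission: Permission) -> bool:
    """Return True if the given role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("admin", Permission.VIEW_AUDIT_LOG)
assert not is_allowed("member", Permission.MANAGE_BILLING)
```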
Researcher, Synthetic RL
As a Research Scientist on the Synthetic RL team, you will develop novel reinforcement learning techniques that use synthetic environments and feedback to improve large-scale models. You will research and develop reinforcement learning algorithms, design and run experiments to study training dynamics and model behavior at scale, and collaborate with engineers and researchers to integrate successful approaches into model training pipelines.
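As a minimal sketch of the general idea, and not the team's actual methods, the following Python snippet runs REINFORCE against a synthetic three-armed bandit whose reward plays the role of the synthetic feedback signal:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rewards = np.array([0.2, 0.5, 0.8])  # Synthetic environment: 3-armed bandit.
logits = np.zeros(3)                      # Policy parameters for a softmax policy.
lr = 0.1

for step in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()   # Softmax over logits.
    action = rng.choice(3, p=probs)
    reward = rng.binomial(1, true_rewards[action])  # Synthetic feedback.
    # REINFORCE gradient of log pi(action) for a softmax policy: one_hot(action) - probs.
    grad = -probs
    grad[action] += 1.0
    logits += lr * reward * grad

print("Learned action probabilities:", np.round(np.exp(logits) / np.exp(logits).sum(), 3))
```

The policy drifts toward the highest-reward arm, which is the same feedback-driven update loop that would run, at much larger scale, against richer synthetic environments.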
Senior Software Engineer, Applied AI
As a Software Engineer working on AI systems, you will play a foundational role in the research, experimentation, and rapid improvement of AI systems to build a capable, reliable AI automation platform used worldwide in mission-critical production environments. Tasks involve designing experiments and testing ideas to optimize key internal AI benchmarks; designing and improving evaluation frameworks to accelerate experimentation speed and direction; training, fine-tuning, and optimizing machine learning models; performing rigorous evaluation and testing of model accuracy, generalization, and performance; collaborating on and contributing to core product development to enhance platform capabilities; and setting up observability and monitoring systems to safety-check model behavior in critical settings.
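One way to picture the evaluation-framework work is a tiny exact-match harness; this is an assumed Python sketch, with model_answer standing in for whatever model or agent is under test:

```python
def model_answer(question: str) -> str:
    # Stand-in for the real system under evaluation (a model or agent call).
    return "4" if "2 + 2" in question else "unknown"

BENCHMARK = [
    {"question": "What is 2 + 2?", "expected": "4"},
    {"question": "What is the capital of France?", "expected": "Paris"},
]

def evaluate(cases):
    """Score exact-match accuracy and keep per-case results for error analysis."""
    results = []
    for case in cases:
        predicted = model_answer(case["question"]).strip()
        results.append({**case, "predicted": predicted,
                        "correct": predicted == case["expected"]})
    accuracy = sum(r["correct"] for r in results) / len(results)
    return accuracy, results

accuracy, results = evaluate(BENCHMARK)
print(f"accuracy = {accuracy:.2f}")
```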
Software Engineer - Frontend, Security Products
As a Full-Stack Software Engineer on the Security Products team, you will build, deploy, and maintain applications and systems that bring advanced AI-driven security capabilities to real users. You will work directly with internal and external customers to understand their workflows and translate them into intuitive, powerful product experiences. Your responsibilities include designing and building efficient and reusable frontend systems that support complex web applications, planning and deploying frontend infrastructure necessary for building, testing, and deploying products, collaborating across OpenAI’s product, research, engineering, and security organizations to maximize impact, and helping to shape the engineering culture, architecture, and processes of this new business unit.
Product Security Applied AI Intern, Summer 2026
Assist in designing and implementing custom large language models (LLMs) and fine-tuning models for specific tasks. Build and experiment with agent libraries and workflow orchestration frameworks. Explore neo-cloud technologies, containerized environments, and virtualized infrastructure. Learn and apply security and privacy best practices in AI pipelines and deployments. Collaborate with the team to document, test, and optimize agent behaviors and models. Participate in knowledge sharing and mentorship sessions to gain exposure to AI, cloud, and security tradecraft.
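A hypothetical Python sketch of the agent and workflow-orchestration idea (call_llm and the lookup tool are invented stand-ins, not a real framework): a loop that lets the model choose a tool each step until it declares it is finished.

```python
def call_llm(messages):
    # Stand-in for a real model call; it requests the `lookup` tool once, then finishes.
    # A real agent would send `messages` to an LLM API and parse its response.
    if not any(m["role"] == "tool" for m in messages):
        return {"action": "lookup", "input": "internal security policy"}
    return {"action": "finish", "input": "Summarized the policy."}

TOOLS = {"lookup": lambda query: f"Found 3 documents matching '{query}'."}

def run_agent(task: str, max_steps: int = 5) -> str:
    """Drive the model/tool loop until the model finishes or the step limit is hit."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_llm(messages)
        if decision["action"] == "finish":
            return decision["input"]
        observation = TOOLS[decision["action"]](decision["input"])
        messages.append({"role": "tool", "content": observation})
    return "Stopped: step limit reached."

print(run_agent("Summarize our internal security policy."))
```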
Software Engineer, macOS Core Product - San Francisco, USA
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for various use cases. Deploy and operate the core machine learning inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture to improve performance, latency, throughput, and efficiency of deployed models. Build tools to identify bottlenecks and sources of instability, and design and implement solutions to address the highest priority issues.
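As a hedged illustration of the bottleneck-finding work (the pipeline stages below are invented placeholders, not the actual serving pipeline), per-stage timing in Python makes it easy to see where a request spends its time:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

stage_totals = defaultdict(float)

@contextmanager
def timed(stage: str):
    """Accumulate wall-clock time per named pipeline stage."""
    start = time.perf_counter()
    try:
        yield
    finally:
        stage_totals[stage] += time.perf_counter() - start

def handle_request(text: str) -> bytes:
    with timed("preprocess"):
        tokens = text.split()        # Placeholder for text normalization.
    with timed("inference"):
        time.sleep(0.02)             # Placeholder for the model forward pass.
        samples = [0.0] * len(tokens)
    with timed("encode_audio"):
        time.sleep(0.005)            # Placeholder for waveform encoding.
        return bytes(len(samples))

for _ in range(50):
    handle_request("hello world this is a test")

for stage, total in sorted(stage_totals.items(), key=lambda kv: -kv[1]):
    print(f"{stage:>14}: {total * 1000:.1f} ms total")
```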