Staff Software Engineer, Cloud Infrastructure
Define scalable, top-down system architectures that unify CPU and AI technologies for next-generation automotive applications. Shape the architectural direction of the automotive and robotics portfolio to meet industry standards for performance, safety, reliability, and security. Lead technical efforts in architectural planning and execution for automotive and robotics SoCs. Collaborate cross-functionally across architecture, software, design verification (DV), and product teams to drive innovation. Communicate technical direction effectively across engineering teams and external partners. Identify future use cases and propose next-generation architectural solutions within a fast-moving, technical environment.
Senior Manager, CX Operations
The AI Outcomes Manager will partner with executive sponsors and end users to identify high-impact use cases and turn them into measurable business outcomes on Glean. They will lead strategic reviews and advise customers on their AI roadmaps to ensure maximum value from Glean's platform. Responsibilities include translating business needs into clear problem statements, success metrics, and practical AI solutions, and collaborating with Product and R&D to shape priorities. They will conduct discovery workshops, scope pilots, and guide rollouts to drive breadth and depth of adoption of the Glean platform. They will also design and build AI agents with and for customers, rethinking and redesigning underlying business processes to maximize impact and usability. The role involves proactively identifying expansion opportunities and driving engagement across teams and functions.
National Security & Technology Policy Fellow
The role involves partnering closely with ML teams, scoping and pitching solutions to top AI labs, and translating research needs related to post-training, evaluations, and alignment into clear product roadmaps and measurable outcomes. The fellow will drive end-to-end delivery, collaborating with AI research teams and core customers to scope, pilot, and iterate on frontier model improvements while coordinating with engineering, operations, and finance to convert cutting-edge research into deployable, high-impact solutions. Responsibilities include working with client-side researchers to build the primitives, data, and tooling needed for post-training and safety/alignment, and partnering with frontier labs on hard, open-ended technical problems related to model improvement and deployment. The fellow will shape and propose model improvement work through technically rigorous proposals and lead the full lifecycle of products, including discovery, prioritization, experimentation, and scaling pilots into repeatable offerings. They will run complex technical working sessions with senior stakeholders, define success metrics, and manage risks. The role also involves collaborating cross-functionally with research, platform, operations, security, and finance teams, and designing and implementing robust evaluation frameworks, including benchmarks and feedback loops.
People Data & Operations Manager
This role involves conducting original research while observing how ideas move through a high-growth startup's Go-To-Market motion to create measurable impact. The researcher will work closely with Snorkel researchers on open-ended projects that produce clear research outputs such as experiments, prototypes, internal writeups, and potentially publications. They will innovate in human-AI interaction by designing new paradigms for distilling human expertise into model behavior, and collaborate with leading labs to develop data strategies that enable next-generation agentic, reasoning, and multi-modal models. Projects include synthetic data generation and filtering; evaluation datasets and benchmarks for LLM/RAG/agent behavior; data-centric methods for improving reliability, calibration, and failure-mode coverage; and evaluating and improving HITL data annotation processes.
Research-Hardware Codesign Engineer
The Research-Hardware Codesign Engineer is responsible for working at the intersection of model research and silicon/system architecture to shape the numerics, architecture, and technology decisions for future OpenAI silicon. Responsibilities include building on the roofline simulator to track workloads and analyze the impact of system architecture decisions, debugging discrepancies between performance simulations and real measurements with clear communication of root causes, writing emulation kernels for low-precision numerics and lossy compression schemes, prototyping numeric modules through RTL synthesis, and occasionally owning an RTL module end-to-end. The engineer will proactively bring in new machine learning workloads to prototype and evaluate opportunities or risks, understand the full scope from ML science to hardware optimization, break down objectives into near-term deliverables, facilitate cross-team collaborations, and clearly communicate design tradeoffs with supporting evidence.
Research Engineer, AI for Science
Design, implement, and improve large-scale distributed machine learning systems; write robust, high-quality machine learning code and contribute to performance-critical components; collaborate closely with researchers to translate ideas into scalable, production-ready systems.
Member of Technical Staff - Open Source Lead
Own the development of Reflection’s open post-training, inference, and deployment ecosystem, creating the standard for how the community customizes and interacts with the models. Build the RL and SFT tooling that external developers use to customize, fine-tune, and align models, and lead inference and deployment efforts. Extend or integrate with existing best-in-class frameworks to meet developers where they are. Create Reflection-native libraries for performance or flexibility, ensuring a clean, powerful, production-grade toolkit for open-weight users. Drive adoption of the models by reducing friction in the fine-tuning process, ensuring adaptation is safe, efficient, and scalable. Engage deeply with the open-source community to incorporate feedback and guide the roadmap of external-facing tools.
Member of Technical Staff - Safety Lead
Own the red-teaming and adversarial evaluation pipeline for Reflection’s models, continuously probing for failure modes across security, misuse, and alignment gaps. Work hand-in-hand with the Alignment team to translate safety findings into concrete guardrails, ensuring models behave reliably under stress and adhere to deployment policies. Validate that every release meets the lab’s risk thresholds before it ships, serving as a critical gatekeeper for open weight releases. Develop scalable, automated safety benchmarks that evolve alongside model capabilities, moving beyond static datasets to dynamic adversarial testing. Research and implement state-of-the-art jailbreaking techniques and defenses to stay ahead of potential vulnerabilities in the wild.
Member of Technical Staff - Alignment Lead
Drive the entire alignment stack, including instruction tuning, RLHF, and RLAIF, to push the model toward high factual accuracy and robust instruction following. Lead research efforts to design next-generation reward models and optimization objectives that improve human preference performance. Curate high-quality training data and design synthetic data pipelines addressing complex reasoning and behavioral gaps. Optimize large-scale reinforcement learning pipelines for stability and efficiency, enabling rapid model iteration cycles. Collaborate closely with pre-training and evaluation teams to create feedback loops that translate alignment research into generalizable model improvements.
Senior Director and AGC, Product Legal (Privacy, IP, Employment)
The role involves partnering closely with ML teams and leading AI research teams and core customers to scope, pilot, and iterate on frontier model improvements. Responsibilities include translating research needs into clear product roadmaps and measurable outcomes, working hands-on with AI teams and frontier labs to tackle complex technical problems relating to model improvement, performance, and deployment. The position requires shaping and proposing model improvement work, translating customer and research objectives into technically rigorous proposals and execution plans, and collaborating on designing data, primitives, and tooling required to improve frontier models in practice. The candidate will own the end-to-end lifecycle of projects, including leading discovery, writing product requirement documents and technical specifications, prioritizing trade-offs, running experiments, shipping initial solutions, and scaling pilots into repeatable offerings. They must lead complex, high-stakes engagements, manage technical working sessions with senior stakeholders, define success metrics, identify risks early, and drive programs to measurable outcomes. Additionally, the role requires collaboration with research, platform, operations, security, and finance teams to deliver production-grade results and building robust evaluation frameworks to improve technical execution across accounts.