Magic’s mission is to build safe AGI that accelerates humanity’s progress on the world’s most important problems. We believe the most promising path to safe AGI lies in automating research and code generation to improve models and solve alignment more reliably than humans can alone. Our approach combines frontier-scale pre-training, domain-specific RL, ultra-long context, and inference-time compute to achieve this goal.
About the role
As a Software Engineer on the Inference & RL Systems team, you will design and operate the distributed systems that serve our models in production and power large-scale post-training workflows.
This role sits at the boundary between model execution and distributed infrastructure. You will work on systems that determine inference latency, throughput, stability, and the reliability of RL and post-training loops.
Magic’s long-context models introduce demanding execution constraints: KV-cache scaling, memory pressure under long sequences, batching trade-offs, long-horizon trajectory rollouts, and sustained throughput under real-world workloads. You will own the infrastructure that makes both production inference and large-scale RL iteration fast and reliable.
What you’ll work on
Design and scale high-performance inference serving systems
Optimize KV-cache management, batching strategies, and scheduling
Improve throughput and latency for long-context workloads
Build and maintain distributed RL and post-training infrastructure
Improve reliability of rollout, evaluation, and reward pipelines
Automate fault detection and recovery for serving and RL systems
Profile and eliminate performance bottlenecks across GPU, networking, and storage layers
Collaborate with Kernels and Research to align execution systems with model architecture
What we’re looking for
Strong software engineering and distributed systems fundamentals
Experience building or operating large-scale inference or training systems
Deep understanding of GPU execution constraints and memory trade-offs
Experience debugging performance issues in production ML systems
Ability to reason about system-level trade-offs between latency, throughput, and cost
Track record of owning critical production infrastructure
Compensation, benefits, and perks (US):
Annual salary range: $225K - $550K
Equity is a significant part of total compensation, in addition to salary
401(k) plan with 6% salary matching
Generous health, dental, and vision insurance for you and your dependents
Unlimited paid time off
Visa sponsorship and relocation stipend to bring you to SF, if possible
A small, fast-paced, highly focused team
Magic strives to be the place where high-potential individuals can do their best work. We value quick learning and grit just as much as skill and experience.
Our culture
Integrity. Words and actions should be aligned
Hands-on. At Magic, everyone is building
Teamwork. We move as one team, not N individuals
Focus. Safely deploy AGI. Everything else is noise
Quality. Magic should feel like magic