Location
San Francisco, United States
Salary
USD 150,000 - 230,000 (Yearly)
Date posted
October 10, 2025
Job type
Full-time
Experience level
Mid level

Job Description

ABOUT BASETEN

Baseten powers inference for the world's most dynamic AI companies, like OpenEvidence, Clay, Mirage, Gamma, Sourcegraph, Writer, Abridge, Bland, and Zed. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. With our recent $150M Series D funding, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction, we’re scaling our team to meet accelerating customer demand.

THE ROLE:

Baseten’s Model Performance (MP) team is responsible for ensuring the models running on our platform are fast, reliable, and cost‑efficient. As part of this team, you’ll focus on Model APIs — the infrastructure powering our hosted API endpoints for the latest open‑source models. This work spans distributed systems, model serving, and developer experience. You’ll join a small, high‑impact team operating at the intersection of product, model performance, and infrastructure, helping to define how developers interact with AI models at scale.

RESPONSIBILITIES:

  • Design, build, and operate the Model APIs surface with a focus on advanced inference capabilities: structured outputs (JSON mode, grammar-constrained generation), tool/function calling, and multi-modal serving.

  • Profile and optimize TensorRT-LLM kernels: analyze CUDA kernel performance, implement custom CUDA operators, tune memory allocation patterns for maximum throughput, and optimize communication patterns across multi-GPU setups.

  • Apply a deep understanding of runtime internals to production serving: speculative decoding implementations, guided generation for structured outputs, and custom scheduling and routing algorithms for high-performance serving.

  • Build comprehensive benchmarking frameworks that measure real-world performance across different model architectures, batch sizes, sequence lengths, and hardware configurations.

  • Productionize performance improvements across runtimes (e.g. TensorRT, TensorRT‑LLM): speculative decoding, quantization, batching, and KV‑cache reuse.

  • Instrument deep observability (metrics, traces, logs) and build repeatable benchmarks to measure speed, reliability, and quality.

  • Implement platform fundamentals: API versioning, validation, usage metering, quotas, and authentication.

  • Collaborate closely with other teams to deliver robust, developer‑friendly model serving experiences.
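Part of this role is platform fundamentals such as quotas and rate limiting. As a rough sketch only (not Baseten's actual implementation), a token-bucket limiter of the kind commonly used to enforce per-customer API quotas might look like this:

```python
import time


class TokenBucket:
    """Illustrative token-bucket rate limiter.

    Tokens refill continuously at `rate` per second, up to `capacity`.
    Each request spends `cost` tokens; requests are denied when the
    bucket runs dry.
    """

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # bucket starts full
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        # Refill based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False


# A bucket with capacity 5 permits a burst of 5 back-to-back requests,
# then denies further requests until tokens refill.
bucket = TokenBucket(rate=1, capacity=5)
results = [bucket.allow() for _ in range(6)]
```

Real API gateways typically layer this with per-key state in a shared store (e.g. Redis) so limits hold across replicas; the in-process version above only illustrates the accounting.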

REQUIREMENTS:

  • 3+ years of experience building and operating distributed systems or large‑scale APIs.

  • Proven track record of owning low‑latency, reliable backend services (rate‑limiting, auth, quotas, metering, migrations).

  • Infra instincts with performance sensibilities: profiling, tracing, capacity planning, and SLO management.

  • Comfortable debugging complex systems, from runtime internals to GPU execution traces.

  • Strong written communication; able to produce clear design docs and collaborate across functions.

NICE TO HAVE:

  • Experience with LLM runtimes (vLLM, SGLang, TensorRT‑LLM, TGI) or contributions to open-source inference engines.

  • Knowledge of Kubernetes, service meshes, API gateways, or distributed scheduling.

  • Background in developer‑facing infrastructure or open‑source APIs.

  • We value infra‑leaning generalists who bring strong engineering fundamentals and curiosity. ML experience is a plus, but not required.

BENEFITS

  • Competitive compensation package.

  • This is a unique opportunity to be part of a rapidly growing startup in one of the most exciting engineering fields of our era.

  • An inclusive and supportive work culture that fosters learning and growth.

  • Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.

Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.


At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.

Apply now
Baseten is hiring a Software Engineer - Model APIs. Apply through Homebase and make the next move in your career!
Company size
101-200 employees
Headquarters
San Francisco, CA, United States
Country
United States
Industry
Computer Software
