Member of Technical Staff - ML Inference Engineer, PyTorch

Location
United States
Salary
Undisclosed
Date posted
July 29, 2025
Job type
Full-time
Experience level
Mid level

Job Description

Work With Us

At Liquid, we’re not just building AI models—we’re redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can’t: on-device, at the edge, under real-time constraints. We’re not iterating on old ideas—we’re architecting what comes next.

We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments—your work will directly shape the frontier of intelligent systems.

This Role Is For You If:

  • You have experience building large-scale production stacks for model serving.

  • You have a solid understanding of ragged batching, dynamic load balancing, KV-cache management, and other multi-tenant serving techniques.

  • You have experience with applying quantization strategies (e.g., FP8, INT4) while safeguarding model accuracy.

  • You have deployed models in both single-GPU and multi-GPU environments and can diagnose performance issues across the stack.
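To make the quantization bullet above concrete: FP8 and INT4 trade numeric precision for memory and bandwidth, and "safeguarding model accuracy" amounts to keeping the reconstruction error of that mapping small. A minimal plain-Python sketch of symmetric per-tensor INT4 quantization (an illustration only, not code from Liquid's stack or any particular framework):

```python
def quantize_int4(weights):
    """Symmetric per-tensor INT4: map each float to an integer in [-8, 7].

    Assumes at least one nonzero weight (otherwise the scale is zero).
    """
    scale = max(abs(w) for w in weights) / 7.0  # 7 = largest positive INT4 value
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from INT4 codes and the shared scale."""
    return [qi * scale for qi in q]

weights = [0.42, -1.31, 0.07, 0.93]
q, scale = quantize_int4(weights)
restored = dequantize(q, scale)
# Per-weight reconstruction error is bounded by scale / 2 (half a quantization step);
# production schemes reduce it further with per-channel or per-group scales.
```

Real serving stacks apply the same idea per-channel or per-group, and FP8 replaces the integer grid with a low-precision float format, but the accuracy question is the same: how much reconstruction error the model tolerates.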

Desired Experience:

  • PyTorch

  • Python

  • Model-serving frameworks (e.g. TensorRT, vLLM, SGLang)

What You'll Actually Do:

  • Optimize and productionize the end-to-end pipeline for GPU model inference around Liquid Foundation Models (LFMs).

  • Facilitate the development of next-generation Liquid Foundation Models through the lens of GPU inference.

  • Profile and harden the stack for different batching and serving requirements.

  • Build and scale pipelines for test-time compute.

What You'll Gain:

  • Hands-on experience with state-of-the-art technology at a leading AI company.

  • Deeper expertise in machine learning systems and efficient large model inference.

  • Opportunity to scale pipelines that directly influence user latency and experience with Liquid's models.

  • A collaborative, fast-paced environment where your work directly shapes our products and the next generation of LFMs.

About Liquid AI

Spun out of MIT CSAIL, we’re a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We’re already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we’re just getting started.

Apply now
Liquid AI is hiring a Member of Technical Staff - ML Inference Engineer, PyTorch. Apply through Homebase and make the next move in your career!
Company size
51-100 employees
Founded in
2023
Headquarters
Cambridge, MA, United States
Country
United States
Industry
Information Services
