Work With Us
At Liquid, we’re not just building AI models—we’re redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can’t: on-device, at the edge, under real-time constraints. We’re not iterating on old ideas—we’re architecting what comes next.
We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're shaping model architectures, scaling our developer platforms, or enabling enterprise deployments, your work will directly advance the frontier of intelligent systems.
This Role Is For You If:
You have experience building large-scale production stacks for model serving.
You have a solid understanding of ragged batching, dynamic load balancing, KV-cache management, and other multi-tenant serving techniques.
You have experience applying quantization strategies (e.g., FP8, INT4) while preserving model accuracy.
You have deployed models in both single-GPU and multi-GPU environments and can diagnose performance issues across the stack.
Desired Experience:
PyTorch
Python
Model-serving frameworks (e.g., TensorRT, vLLM, SGLang)
What You'll Actually Do:
Optimize and productionize the end-to-end GPU inference pipeline for LFMs.
Facilitate the development of next-generation LFMs through the lens of GPU inference.
Profile and harden the stack against varied batching and serving requirements.
Build and scale pipelines for test-time compute.
What You'll Gain:
Hands-on experience with state-of-the-art technology at a leading AI company.
Deeper expertise in machine learning systems and efficient large model inference.
The opportunity to scale pipelines that directly improve latency and user experience for Liquid's models.
A collaborative, fast-paced environment where your work directly shapes our products and the next generation of LFMs.
About Liquid AI
Spun out of MIT CSAIL, we’re a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We’re already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we’re just getting started.