Work With Us
At Liquid, we're not just building AI models; we're redefining the architecture of intelligence itself. Spun out of MIT, we're on a mission to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can't: on-device, at the edge, under real-time constraints. We're not iterating on old ideas; we're architecting what comes next.
We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments, your work will directly advance the frontier of intelligent systems.
This Role Is For You If:
You have extensive experience building distributed training infrastructure for language and multimodal models, with hands-on expertise in frameworks like PyTorch Distributed, DeepSpeed, or Megatron-LM (a minimal sketch of this kind of setup appears after this list)
You're passionate about solving complex systems challenges in large-scale model training—from efficient multimodal data loading to sophisticated sharding strategies to robust checkpointing mechanisms
You have a deep understanding of hardware accelerators and networking topologies, with the ability to optimize communication patterns for different parallelism strategies
You're skilled at identifying and resolving performance bottlenecks in training pipelines, whether they occur in data loading, computation, or communication between nodes
You have experience working with diverse data types (text, images, video, audio) and can build data pipelines that handle heterogeneous inputs efficiently
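To give a concrete flavor of this kind of work, here is a minimal sketch of a single-node data-parallel training loop using PyTorch Distributed and DistributedDataParallel. The model, batch shapes, and hyperparameters are illustrative placeholders rather than our actual stack; a production setup would layer tensor/pipeline parallelism, a real data pipeline, and robust checkpointing on top.

```python
# Minimal sketch: single-node data-parallel training with PyTorch Distributed.
# Everything model- and data-related here is a placeholder, not Liquid's stack.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main() -> None:
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)   # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        batch = torch.randn(32, 1024, device=local_rank)   # placeholder batch
        loss = model(batch).square().mean()
        loss.backward()        # DDP all-reduces gradients across ranks here
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()   # launch with: torchrun --nproc_per_node=8 train.py
```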
Desired Experience:
You've implemented custom sharding techniques (tensor/pipeline/data parallelism) to scale training across distributed GPU clusters of varying sizes
You have experience optimizing data pipelines for multimodal datasets with sophisticated preprocessing requirements
You've built fault-tolerant checkpointing systems that can handle complex model states while minimizing training interruptions (see the checkpointing sketch after this list)
You've contributed to open-source training infrastructure projects or frameworks
You've designed training infrastructure that works efficiently for both parameter-efficient specialized models and massive multimodal systems
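As one concrete illustration of the checkpointing work above, here is a minimal sketch of crash-safe saving in PyTorch: write state to a temporary file, then atomically rename it, so a failure mid-write never corrupts the newest checkpoint. The directory layout, file-naming scheme, and helper names are hypothetical; a system at real training scale would also shard state across ranks and write asynchronously.

```python
# Minimal sketch: crash-safe checkpointing via write-to-temp + atomic rename.
# Paths, naming scheme, and helper names are hypothetical.
import os
import torch

def save_checkpoint(model, optimizer, step: int, ckpt_dir: str) -> None:
    os.makedirs(ckpt_dir, exist_ok=True)
    state = {
        "step": step,
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
    }
    tmp_path = os.path.join(ckpt_dir, f"step_{step}.pt.tmp")
    final_path = os.path.join(ckpt_dir, f"step_{step}.pt")
    torch.save(state, tmp_path)
    # Atomic on POSIX filesystems: a crash mid-save leaves only the .tmp file,
    # so readers never observe a partially written checkpoint.
    os.replace(tmp_path, final_path)

def load_latest(model, optimizer, ckpt_dir: str) -> int:
    """Restore the newest complete checkpoint; return the step to resume from."""
    if not os.path.isdir(ckpt_dir):
        return 0
    ckpts = sorted(
        (f for f in os.listdir(ckpt_dir) if f.endswith(".pt")),
        key=lambda name: int(name.split("_")[1].split(".")[0]),
    )
    if not ckpts:
        return 0
    state = torch.load(os.path.join(ckpt_dir, ckpts[-1]), map_location="cpu")
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    return state["step"]
```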
What You'll Actually Do:
Design and implement high-performance, scalable training infrastructure that efficiently utilizes our GPU clusters for both specialized and large-scale multimodal models
Build robust data loading systems that eliminate I/O bottlenecks and enable training on diverse multimodal datasets (illustrated in the data-loading sketch after this list)
Develop sophisticated checkpointing mechanisms that balance memory constraints with recovery needs across different model scales
Optimize communication patterns between nodes to minimize the overhead of distributed training for long-running experiments
Collaborate with ML engineers to implement new model architectures and training algorithms at scale
Create monitoring and debugging tools to ensure training stability and resource efficiency across our infrastructure
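To make the data-loading point concrete, here is a minimal sketch of keeping a GPU fed with PyTorch's DataLoader: parallel worker processes, pinned memory, and prefetching hide storage and preprocessing latency behind compute. The dataset class and every parameter value are stand-ins; a real multimodal pipeline would decode heterogeneous text, image, video, and audio samples, likely streaming from distributed storage.

```python
# Minimal sketch: keeping the GPU fed with PyTorch's DataLoader. The dataset
# is a stand-in; a multimodal pipeline would decode heterogeneous samples.
import torch
from torch.utils.data import DataLoader, Dataset

class PlaceholderImages(Dataset):
    """Stand-in for a real multimodal dataset (text/image/video/audio)."""
    def __len__(self) -> int:
        return 10_000

    def __getitem__(self, idx: int) -> torch.Tensor:
        return torch.randn(3, 224, 224)   # e.g. one decoded image tensor

loader = DataLoader(
    PlaceholderImages(),
    batch_size=64,
    num_workers=8,            # decode/preprocess in parallel worker processes
    pin_memory=True,          # page-locked buffers allow async host-to-GPU copies
    prefetch_factor=4,        # each worker keeps 4 batches queued ahead of the GPU
    persistent_workers=True,  # avoid re-forking workers every epoch
)

for batch in loader:
    if torch.cuda.is_available():
        batch = batch.cuda(non_blocking=True)   # overlap copy with compute
    # forward/backward pass would go here
```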
What You'll Gain:
The opportunity to solve some of the hardest systems challenges in AI, working at the intersection of distributed systems and cutting-edge multimodal machine learning
Experience building infrastructure that powers the next generation of foundation models across the full spectrum of model scales
The satisfaction of seeing your work directly enable breakthroughs in model capabilities and performance
About Liquid AI
Spun out of MIT CSAIL, we’re a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We’re already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we’re just getting started.