Location: San Francisco, United States
Salary: Undisclosed
Date: December 16, 2025
Job type: Full-time
Experience level: Mid level

Job Description

Work With Us

At Liquid, we’re not just building AI models—we’re redefining the architecture of intelligence itself. Spun out of MIT, we’re on a mission to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can’t: on-device, at the edge, under real-time constraints. We’re not iterating on old ideas—we’re architecting what comes next.

We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments—your work will directly shape the frontier of intelligent systems.

While San Francisco and Boston are preferred, we are open to other remote locations.

This Role Is For You If:

  • You have experience with machine learning at scale

  • You have worked with audio models and understand the effects of architecture choices on runtime, latency, and quality

  • You’re proficient in PyTorch, and familiar with distributed training frameworks like DeepSpeed, FSDP, or Megatron-LM

  • You’ve worked with multimodal data (e.g. audio, text, image, video)

  • You’ve contributed to research papers, open-source projects, or production-grade multimodal model systems

  • You understand how data quality, augmentations, and preprocessing pipelines can significantly impact model performance—and you’ve built tooling to support that

  • You enjoy working in interdisciplinary teams across research, systems, and infrastructure, and can translate ideas into high-impact implementations

Desired Experience:

  • You’ve designed and trained multimodal language models, or specialized audio models (e.g. ASR, TTS, voice conversion, vocoders, diarization)

  • You care deeply about empirical performance, and know how to design, run, and debug large-scale training experiments on distributed GPU clusters

  • You’ve developed audio encoders or decoders, or integrated them into language pretraining pipelines with autoregressive or generative objectives

  • You have experience working with large-scale audio datasets, understand the unique challenges they pose, and can manage massive datasets effectively

  • You have strong programming skills in Python, with an emphasis on writing clean, maintainable, and scalable code

What You'll Actually Do:

  • Invent and prototype new model architectures that optimize inference speed, including on edge devices

  • Build and maintain evaluation suites for multimodal performance across a range of public and internal tasks

  • Collaborate with the data and infrastructure teams to build scalable pipelines for ingesting and preprocessing large audio datasets

  • Work with the infrastructure team to optimize model training across large-scale GPU clusters

  • Contribute to publications, internal research documents, and thought leadership within the team and the broader ML community

  • Collaborate with the applied research and business teams on client-specific use cases

What You'll Gain:

  • A front-row seat in building some of the most capable Speech Language Models

  • Access to world-class infrastructure, a fast-moving research team, and deep collaboration across ML, systems, and product

  • The opportunity to shape multimodal foundation model research with both scientific rigor and real-world impact

About Liquid AI

Spun out of MIT CSAIL, we’re a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We’re already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we’re just getting started.

Apply now
Liquid AI is hiring a Member of Technical Staff - ML Research Engineer; Multi-Modal - Audio. Apply through Homebase and make the next move in your career!
Company size: 51-100 employees
Founded in: 2023
Headquarters: Cambridge, MA, United States
Country: United States
Industry: Information Services

Similar AI jobs

Here are other jobs you might want to apply for.

  • United States: Member of Technical Staff - ML Research Engineer; Multi-Modal - Audio (Full-time, Machine Learning Engineer)

  • Germany: Founding AI Engineer (Full-time, Machine Learning Engineer)

  • Germany: Founding AI/ML Research Engineer (Full-time, Machine Learning Engineer)

  • Germany: Founding Machine Learning Engineer (Full-time, Machine Learning Engineer)

  • United States: Founding AI Engineer - Norm Law (Full-time, Machine Learning Engineer)

  • Germany: Innovation Manager (Full-time, Machine Learning Engineer)