Tech Lead, AI Compute Infrastructure

Location
Los Angeles, Palo Alto, San Francisco, Toronto (United States, Canada)
Salary
Undisclosed
Date posted
October 16, 2025
Job type
Full-time
Experience level
Mid level

Job Description

About HeyGen

At HeyGen, our mission is to make visual storytelling accessible to all. Over the last decade, visual content has become the preferred method of information creation, consumption, and retention. But the ability to create such content, in particular videos, continues to be costly and challenging to scale. Our ambition is to build technology that equips more people with the power to reach, captivate, and inspire audiences.
Learn more at www.heygen.com.

We are seeking a seasoned Technical Leader to build and scale the foundational compute infrastructure that powers our state-of-the-art AI models—from multimodal training data pipelines to high-throughput, low-latency video generation.

Responsibilities

You will be the core engineer responsible for building the robust, efficient, and scalable platform that enables our research and production teams to rapidly iterate on HeyGen's generative video models. Your contributions will directly impact model performance, developer productivity, and the final quality of every AI-generated video.

  • Optimize GPU Utilization: Design and implement mechanisms to aggressively optimize GPU and cluster utilization across thousands of devices for inference, training, data processing, and large-scale deployment of our state-of-the-art video generation models.

  • Develop Large-Scale AI Job Framework: Build highly scalable, reliable frameworks for launching and managing massive, heterogeneous compute jobs, including multi-modal high-volume data ingestion/processing, distributed model training, and continuous evaluation/benchmarking.

  • Enhance Observability: Develop world-class observability, tracing, and visualization tools for our compute cluster to ensure reliability and diagnose performance bottlenecks (e.g., memory, bandwidth, communication).

  • Accelerate Pipelines: Collaborate closely with AI researchers and AI engineers to integrate innovative acceleration techniques (e.g., custom CUDA kernels, distributed training libraries) into production-ready, scalable training and inference pipelines.

  • Infrastructure Management: Champion the adoption and optimization of modern cloud and container technologies (Kubernetes, Ray) for elastic, cost-efficient scaling of our distributed systems.

Minimum Requirements

We are looking for a highly motivated engineer with deep experience operating and optimizing AI infrastructure at scale.

  • Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.

  • 5+ years of full-time industry experience in large-scale MLOps, AI infrastructure, or HPC systems.

  • Experience with data frameworks and standards such as Ray, Apache Spark, and LanceDB.

  • Strong proficiency in Python and a high-performance language such as C++ for developing core infrastructure components.

  • Deep understanding and hands-on experience with modern orchestration and distributed computing frameworks such as Kubernetes and Ray.

  • Experience with core ML frameworks such as PyTorch, TensorFlow, or JAX.

Preferred Qualifications

  • Master's or PhD in Computer Science or a related technical field.

  • Demonstrated Tech Lead experience, driving projects from conceptual design through to production deployment across cross-functional teams.

  • Prior experience building infrastructure specifically for Generative AI models (e.g., diffusion models, GANs, or large language models) where cost and latency are critical.

  • Proven background in building and operating large-scale data infrastructure (e.g., Ray, Apache Spark) to manage petabytes of multi-modal data (video, audio, text).

  • Expertise in GPU acceleration and deep familiarity with low-level compute programming, including CUDA, NCCL, or similar technologies for efficient inter-GPU communication.

What HeyGen Offers

  • Competitive salary and benefits package.
  • Dynamic and inclusive work environment.
  • Opportunities for professional growth and advancement.
  • Collaborative culture that values innovation and creativity.
  • Access to the latest technologies and tools.


HeyGen is an Equal Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Apply now
HeyGen is hiring a Tech Lead, AI Compute Infrastructure. Apply through Homebase and make the next move in your career!
Company size
201-500 employees
Founded in
2020
Headquarters
Palo Alto, CA, United States
Country
United States
Industry
Online Media
