HPC Engineer - Research Infrastructure

Location
Palo Alto, United States
Salary
Undisclosed
Date posted
January 28, 2025
Job type
Full-time
Experience level
Mid level

Job Description

Help Luma build some of the biggest and fastest AI supercomputing clusters in the world! As a High-Performance Computing engineer, you’ll work at the intersection of hardware and software, designing systems that deliver the maximum possible performance for running large-scale AI models. We work at the very cutting edge of speed and scale, bringing the traditions of High-Performance Computing (HPC) into a modern cloud environment.


For this role, it’s important you understand how to combine CPUs, GPUs, and network devices into systems that are then deployed at large scale with peak efficiency. You understand the lowest levels of the software platforms that sit on top of this hardware, including how to best optimize the Linux kernel and user-space code. You are capable of writing code to automate the monitoring and healing of these systems, commanding a large number of servers with few people.
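
To give a flavor of the automation this describes, below is a minimal sketch, not Luma's actual tooling, of a check-and-heal loop: it probes a few hypothetical nodes over SSH and flags unresponsive ones for draining. The hostnames, the probe command, and the drain step are all placeholder assumptions.

```python
import subprocess

# Hypothetical node inventory; a real system would pull this from an inventory service.
NODES = ["gpu-node-001", "gpu-node-002", "gpu-node-003"]


def node_is_healthy(host: str) -> bool:
    """Run a basic liveness probe over SSH; a real check would also inspect GPUs, NICs, and disks."""
    try:
        result = subprocess.run(
            ["ssh", "-o", "ConnectTimeout=5", host, "uptime"],
            capture_output=True,
            text=True,
            timeout=15,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False


def drain(host: str) -> None:
    """Placeholder remediation step, e.g. cordoning the node in the job scheduler."""
    print(f"marking {host} for drain and repair")


if __name__ == "__main__":
    for node in NODES:
        if not node_is_healthy(node):
            drain(node)
```

In practice this logic would plug into the cluster's scheduler and inventory systems rather than a hard-coded node list.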

Responsibilities

  • In this role, you will work closely with and directly accelerate machine learning researchers, but you don't need to be a machine learning expert yourself.

  • We value people who can quickly develop a deep technical understanding of new domains and who enjoy being self-directed and identifying the most important problems to solve.

  • You’ll manage Luma's HPC training clusters, from provisioning to performance tuning.

  • Areas of work include observability, distributed job tracing, GPU diagnostics, software environment management, and additional tooling, as well as work on the underlying code to enable needed features (see the GPU diagnostics sketch after this list).

  • We believe that increasing compute is a huge lever to AI progress. You will have a direct impact on our ability to grow to an unprecedented scale and likewise produce unprecedented results.
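
As a concrete example of the GPU diagnostics work mentioned above, here is a minimal sketch assuming nvidia-smi is installed on each node; the queried fields and the temperature threshold are illustrative assumptions, not a description of Luma's monitoring stack.

```python
import csv
import io
import subprocess

# Fields and threshold here are illustrative; nvidia-smi supports many more query fields.
FIELDS = ["index", "name", "temperature.gpu", "utilization.gpu", "memory.used"]


def query_gpus() -> list[dict]:
    """Collect basic per-GPU metrics from nvidia-smi's CSV output."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=" + ",".join(FIELDS), "--format=csv,noheader,nounits"],
        capture_output=True,
        text=True,
        check=True,
    ).stdout
    reader = csv.reader(io.StringIO(out), skipinitialspace=True)
    return [dict(zip(FIELDS, row)) for row in reader]


def flag_hot_gpus(max_temp_c: float = 85.0) -> list[str]:
    """Return the indices of GPUs above an illustrative temperature threshold."""
    return [g["index"] for g in query_gpus() if float(g["temperature.gpu"]) > max_temp_c]


if __name__ == "__main__":
    print("GPUs over temperature threshold:", flag_hot_gpus())
```

A production version would export these metrics to an observability pipeline (e.g. Prometheus) rather than printing them.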

Experience

  • 8+ years of experience as an infrastructure or DevOps engineer working on large, complex distributed systems.

  • Deep understanding of networking; bonus points for experience with HPC networking.

  • Experience developing high-quality software in a general-purpose programming language, preferably including Python.

  • Excellent problem-solving skills and attention to detail.

  • Experience with GPUs in large-scale clusters is strongly preferred.

  • Strong knowledge of observability and monitoring in distributed systems.

  • Tenacious at troubleshooting hardware and network topology failures in distributed systems.

  • Independently driven and able to own problems and build solutions end to end.

  • Experience with large-scale data center operations, and proficiency in cloud orchestration and systems tools.

Your application is reviewed by real people.

Apply now
Luma AI is hiring an HPC Engineer - Research Infrastructure. Apply through Homebase and make the next move in your career!
Company size
201-500 employees
Founded in
2021
Headquarters
San Francisco, CA, United States
Country
United States
Industry
Software Development

Similar AI jobs

Here are other jobs you might want to apply for.

  • Product Security Engineer – Multimodal & Generative AI (United States, Full-time, MLOps / DevOps Engineer)

  • Engineering Manager - AI Reliability (United States, Full-time, MLOps / DevOps Engineer)

  • Electrical Engineer: AI Hardware (United States, MLOps / DevOps Engineer)

  • Senior Manager - Integrations Specialist (United States, Full-time, MLOps / DevOps Engineer)

  • DevOps Automation Engineer (United States, Full-time, MLOps / DevOps Engineer)

  • Founding DevOps Engineer (United States, Full-time, MLOps / DevOps Engineer)