Product Security Engineer – Multimodal & Generative AI

Location
Palo Alto, United States
Salary
Undisclosed
Date posted
August 15, 2025
Job type
Full-time
Experience level
Mid-level

Job Description

About Luma Labs

At Luma Labs, we’re pioneering the next generation of multimodal generative AI, enabling models to create hyper-realistic videos and images from natural language and other rich input modalities. Our products empower creators, developers, and companies to instantly and intelligently generate content that was previously impossible.

As we scale our AI platform and reach millions of users, we are hiring our first Product Security Engineer to set the foundation for security across everything we build. This is a critical role that blends hands-on security engineering with strategic leadership, ideal for someone who thrives in fast-paced, high-impact environments and wants to shape security from day one.

Role Overview

You will be Luma Labs’ first dedicated security engineering hire. As the Product Security Engineer, you’ll own the security posture of our products, services, and generative systems. You’ll work directly with engineering, ML, infrastructure, and leadership to proactively design and implement secure systems with a strong focus on the unique risks and opportunities in multimodal video and image generation.

This is a leadership-track position with both strategic ownership and deep technical execution.

What You’ll Do

  • Own Product & Application Security: Define and drive Luma’s approach to secure product development from design reviews to automated scanning to runtime protections.

  • Secure GenAI Systems: Analyze and secure the full lifecycle of generative models (image, video, multimodal), including data ingestion, model inference, and API surface.

  • Lead Threat Modeling & Reviews: Run deep security reviews on new features, architectures, and model capabilities, with a focus on abuse prevention, data leakage, and content safety.

  • Build Security Infrastructure: Stand up tools and systems for static analysis, dependency scanning, secrets detection, and CI/CD hardening.

  • Define Misuse & Abuse Guardrails: Partner with ML and product teams to mitigate prompt injection, jailbreaks, adversarial inputs, and misuse of generative outputs.

  • Incident Response & Detection: Lead investigations and forensics for product-related security incidents, vulnerabilities, or model abuse cases.

  • Influence Org-wide Security Culture: Establish best practices, run internal training, and serve as a go-to security expert across Luma’s growing technical teams.

  • Build the Function: Help hire and grow a high-caliber security team as the company scales.

Requirements

Must-Have:

  • 5+ years in security engineering, with deep experience in product/application security.

  • A successful track record of getting products through security certifications.

  • Proven ability to operate as a hands-on engineer and technical leader.

  • Strong understanding of generative AI systems or high-complexity ML applications.

  • Proficient in secure development with Python and experience securing cloud-native environments (AWS/GCP, Docker/K8s).

  • Deep experience with threat modeling, secure design, and modern application security tooling (SAST, DAST, IaC scanning, etc.).

  • Ability to balance pragmatism and rigor: you can make fast, thoughtful decisions and execute in a fast-moving startup environment.

  • Excellent written and verbal communication skills; comfortable collaborating across research, product, infra, and leadership.

Bonus / Nice-to-Have:

  • Hands-on experience with generative models (e.g., diffusion, transformers, vision-language) and related risks (e.g., prompt injection, data leakage).

  • Experience building or leading security teams in an early-stage startup.

  • Exposure to red teaming, adversarial ML, or AI safety frameworks.

  • Public speaking, open-source contributions, or research in security or AI fields.

Why This Role is Unique

  • Greenfield Security: You’ll be defining the security architecture of one of the most advanced generative AI stacks in the world, from the ground up.

  • Cross-Disciplinary Impact: Collaborate directly with ML researchers, creative technologists, infra engineers, and designers.

  • Fast Path to Leadership: This is a founding role with direct access to leadership and influence over future hires and security roadmap.

  • Deep Tech with Real Users: Work on cutting-edge video and image generation tools already in production and scaling fast.

Apply now
Luma AI is hiring a Product Security Engineer – Multimodal & Generative AI. Apply through Homebase and make the next move in your career!
Company size
201-500 employees
Founded in
2021
Headquarters
San Francisco, CA, United States
Country
United States
Industry
Software Development
