AI Jobs in San Francisco

Find top AI jobs in San Francisco across machine learning, generative AI, and data roles. All opportunities are curated and updated hourly from companies hiring nationwide.

Check out 125 new AI opportunities posted on The Homebase

Researcher, Frontier Cybersecurity Risks

New
Top rated
OpenAI
Full-time

As a Researcher for cybersecurity risks, you will design and implement mitigation components for model-enabled cybersecurity misuse that span prevention, monitoring, detection, and enforcement, under the guidance of senior technical and risk leadership. You will integrate safeguards across product surfaces in partnership with product and engineering teams to ensure protections are consistent, low-latency, and scalable with usage and new model capabilities. Additionally, you will evaluate technical trade-offs within the cybersecurity risk domain, propose pragmatic and testable solutions, and collaborate with risk and threat modeling partners to align mitigation design with anticipated attacker behaviors and misuse scenarios. You are expected to execute rigorous testing and red-teaming workflows to stress-test the mitigation stack against evolving threats across different product surfaces and iterate based on the findings.

$295,000 – $445,000 per year (USD)

San Francisco, United States
Maybe global
Onsite

Lead Software Engineer

New
Top rated
Eloquent AI
Full-time

As a Lead Engineer at Eloquent AI, you will lead the development of AI-powered full-stack applications while overseeing and mentoring other engineers. You will remain hands-on across the stack, taking ownership of technical direction, code quality, and delivery standards. Responsibilities include designing and building full-stack applications that power AI-driven workflows for enterprise users; overseeing and reviewing the work of other engineers to ensure high-quality, production-ready code; providing technical guidance, architectural direction, and hands-on support where needed; developing high-performance front-end interfaces for AI agent control, monitoring, and visualization; building scalable backend services that support real-time AI interactions, knowledge retrieval, and automation; working closely with AI researchers and ML engineers to integrate LLMs, RAG, and automation into production-ready systems; establishing engineering best practices across testing, deployment, and performance optimization; and continuously iterating on AI-driven products, balancing speed with robustness.

Undisclosed

San Francisco, United States
Maybe global
Remote

Computational Protein Design

New
Top rated
Talent Labs
Full-time

Leverage proprietary generative AI models to design proteins for experimental validation by analyzing protein design problems based on functional requirements, biochemistry, structural biology, and sequence homology; generate and optimize designs for experimental validation; coordinate with lab-based protein engineers to plan and optimize the design process and validation strategy. Analyze and leverage experimental results to improve designs and increase success rates over validation rounds; collaborate with machine learning scientists to fine-tune and prompt models. Act as an effective interface between machine learning model development and experimental validation; capture bioengineering learnings and feedback to the machine learning unit and vice versa; foster a collaborative and innovative environment by proactively finding opportunities to innovate and create clarity and alignment between different units. Contribute to computational tools by helping improve the use, service, and integration of AI models through feedback to software engineers and the foundational machine learning unit; assist in improving data management systems and workflows. Maintain the highest scientific standards with publication-grade work; stay current on developments in synthetic biology; continue building understanding of generative AI and expanded areas of protein and cell biology; participate in knowledge sharing through organizing and presenting at internal reading groups; attend and present at conferences when relevant.

Undisclosed

San Francisco, United States
Maybe global
Remote

ML Scientist

New
Top rated
Sesame
Full-time

Contribute to the development of ML models across multiple modalities. Work across the ML stack including model architectures, data curation, model evaluation, training and inference infrastructure, research, and experimentation. Select promising approaches from the literature to pursue and create new approaches where necessary to achieve unique goals.

$190,000 – $320,000 per year (USD)

San Francisco, United States
Maybe global
Onsite

Staff Strategic Sourcing Manager (Hardware)

New
Top rated
Together AI
Full-time

Advance inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference. Implement and maintain changes in high-performance inference engines including kernel backends, speculative decoding, and quantization. Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Design and operate reinforcement learning (RL) and post-training pipelines to jointly optimize algorithms and systems where most of the cost is inference. Make RL and post-training workloads more efficient with inference-aware training loops such as asynchronous RL rollouts and speculative decoding. Use these pipelines to train, evaluate, and iterate on frontier models on top of the inference stack. Co-design algorithms and infrastructure to tightly couple objectives, rollout collection, and evaluation with efficient inference, identifying bottlenecks across the training engine, inference engine, data pipeline, and user-facing layers. Run ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, and feed insights back into model, RL, and system design. Own critical systems at production scale by profiling, debugging, and optimizing inference and post-training services under real production workloads. Drive roadmap items requiring engine modification including changing kernels, memory layouts, scheduling logic, and APIs. Establish metrics, benchmarks, and experimentation frameworks for rigorous validation of improvements. Provide technical leadership by setting technical direction for cross-team efforts, and mentor engineers and researchers on full-stack ML systems work and performance engineering.

$200,000 – $280,000 per year (USD)

San Francisco
Maybe global
Onsite

Data Scientist, Integrity Measurement

New
Top rated
OpenAI
Full-time

The data scientist will own measurement and quantitative analysis for a group of severe, actor- and network-based usage harm verticals. They will develop and implement AI-first methods for prevalence measurement and other productionised safety metrics, build metrics suitable for goaling or A/B tests when prevalence or other top line metrics are not appropriate, and own dashboards and metrics reporting for harm verticals. They will conduct analyses and generate insights to inform improvements to review, detection, or enforcement, and influence safety roadmaps. The role involves optimizing LLM prompts for measurement purposes, collaborating with other safety teams to understand key safety concerns and create relevant policies, providing metrics for leadership and external reporting, and developing automation to scale their work using agentic products. The position may involve resolving urgent escalations outside normal work hours and may require working with sensitive content including sexual, violent, or otherwise disturbing material.

$293,000 – $385,000 per year (USD)

San Francisco, United States
Maybe global
Onsite

Principal Engineer, AI Model LifeCycle

New
Top rated
Crusoe
Full-time

The Principal Software Engineer for the Model LifeCycle team is responsible for managing fine-tuning systems for large foundation models, including multi-node orchestration, checkpointing, failure recovery, and cost-efficient scaling. They implement and maintain end-to-end training pipelines for Large Language Models, distillation and reinforcement learning pipelines, and agent execution infrastructure. Additionally, they manage dataset, model, and experiment management including versioning, lineage, evaluation, and reproducible fine-tuning at scale. The role involves close collaboration with product, business, and platform teams to shape core abstractions and APIs, influence long-term architectural decisions around training runtimes, scheduling, storage, and model lifecycle management, and contribute to the open-source LLM ecosystem. This role offers significant ownership in designing and building core systems from first principles.

$260,000 – $326,000 per year (USD)

San Francisco, United States
Maybe global
Onsite

Design Director

New
Top rated
Tenstorrent
Full-time

As an Automotive and Robotics SoC Architect, you will define scalable, top-down system architectures that unify CPU and AI technologies for next-generation automotive applications. This role involves shaping the architectural direction of the automotive and robotics portfolio to ensure products meet the industry's high standards for performance, safety, reliability, and security. The position requires strong technical leadership, systems thinking, and cross-functional collaboration to deliver world-class automotive solutions.

$100,000 – $500,000 per year (USD)

United States
Maybe global
Remote

Machine Learning Engineer, TTS Systems

New
Top rated
Bland
Full-time

As an ML Engineer focused on Text To Speech (TTS), you will own the deployment, optimization, and maintenance of production TTS systems. Responsibilities include deploying and optimizing large-scale TTS models into production environments for reliable, low-latency inference; implementing and refining post-training and modern inference techniques to maximize throughput and audio quality; collaborating with cross-functional teams to ensure seamless rollout, A/B testing, and iterative improvement of production models; maintaining high availability and scalable infrastructure for multi-speaker, expressive, and controllable TTS use cases; and designing and documenting best practices for efficient TTS inference and system reliability.

$160,000 – $250,000 per year (USD)

San Francisco, United States
Maybe global
Hybrid

Machine Learning Researcher, Audio

New
Top rated
Bland
Full-time

As a Machine Learning Researcher at Bland, your responsibilities include building and scaling next-generation text-to-speech (TTS) systems by designing and training large-scale models capable of expressive, controllable, and human-sounding output, developing neural audio codec-based TTS architectures for efficient and high-fidelity generation, improving prosody modeling, question inflection, emotional expression, and multi-speaker robustness, and optimizing for real-time, low-latency inference in production. You will advance speech-to-text modeling by building and fine-tuning large-scale ASR systems robust to accents, noise, telephony artifacts, and code-switching, leveraging self-supervised pretraining and large-scale weak supervision, and improving transcription accuracy for real-world enterprise scenarios including structured extraction and conversational nuance. You will pioneer neural audio codecs by researching and implementing codecs that achieve extreme compression with minimal perceptual loss, exploring discrete and continuous latent representations for scalable speech modeling, and designing codec architectures that enable downstream generative modeling and controllable synthesis. Additionally, you will develop scalable training pipelines by curating and processing massive audio datasets across languages, speakers, and environments, designing staged training curricula and data filtering strategies, and scaling training across distributed GPU clusters with a focus on cost, throughput, and reliability. You will run rigorous experiments by designing ablation studies to isolate the impact of architectural changes, measuring improvements using both objective metrics and perceptual evaluations, and validating ideas quickly through focused experiments that confirm or eliminate hypotheses.

$160,000 – $250,000 per year (USD)

San Francisco, United States
Maybe global
Hybrid

Want to see more AI Engineer jobs?

View all jobs

Access all 4,256 remote & onsite AI jobs.

Join our private AI community to unlock full job access, and connect with founders, hiring managers, and top AI professionals.
(Yes, it’s still free—your best contributions are the price of admission.)

Frequently Asked Questions

Need help with something? Here are our most frequently asked questions.


What types of AI jobs are available in San Francisco?

San Francisco offers diverse AI career paths across startups and established tech firms. Common roles include Machine Learning Engineers building algorithms, AI Engineers developing models and infrastructure, and Lead AI/DevOps Engineers managing deployment pipelines. You'll also find specialized positions like AI Training Specialists working with data annotation, Senior People Partners in R&D teams, and Lead Product Designers focused on AI-powered user experiences. The Bay Area stands out with 42% of tech postings being AI-related, a significant increase from just 20% in mid-2022. This surge aligns with San Francisco capturing approximately 50% of global AI funding.

Are there remote or hybrid AI jobs available in San Francisco?

San Francisco does offer remote and hybrid AI positions, though recent trends show a shift toward office returns. Remote tech job postings have decreased to 10% in the Bay Area, down from 24% in mid-2022, indicating companies are increasingly valuing in-person collaboration for AI development. This office return coincides with the AI industry surge, as companies set up physical spaces to foster innovation. Many listings explicitly mention hybrid arrangements, giving engineers flexibility while maintaining team cohesion. The trend toward office work is further evidenced by strong AI-driven office leasing activity, with 2.8 million square feet of demand expected to reduce vacancy rates by 2025.

What skills are most in demand for AI jobs in San Francisco?

San Francisco employers prioritize a blend of technical expertise and applied AI capabilities. Python programming tops the requirements list, alongside machine learning frameworks and practical experience building AI systems. Specialized skills in data analytics, cloud infrastructure, and A/B testing methodology are frequently requested. Fintech knowledge proves valuable across financial AI applications, while statistical metrics analysis helps quantify model performance. Robotics experience appeals to automation-focused companies. Beyond technical abilities, employers value software design principles and cross-functional collaboration skills to implement AI at scale. Dashboarding capabilities demonstrate your ability to visualize AI insights for stakeholders across technical and business teams.

What is the salary range for AI jobs in San Francisco?

AI salaries in San Francisco reflect the region's competitive tech market and high cost of living. Mid-level AI designers can expect $160K–$200K annually, while senior AI/ML solutions roles command $140K–$277K. Senior Machine Learning Engineers earn premium compensation in the $200K–$290K range. Several factors influence these figures, including specialized expertise in generative AI or automation, company size and funding stage, and whether the position involves team leadership. Venture-backed AI startups like OpenAI and Anthropic (each with over $1B in funding) often offer competitive packages to attract top talent. Experience level creates significant salary differentiation, with senior positions receiving substantially higher compensation.

What experience levels are companies hiring for AI jobs in San Francisco?

San Francisco AI hiring primarily targets mid-to-senior professionals who can immediately contribute to complex projects. Lead and Senior Machine Learning Engineer positions dominate listings, reflecting the industry's maturity and specialized needs. Companies seek candidates who can deploy AI at scale, mentor junior team members, and collaborate across engineering, product, and business functions. While entry-level positions exist, particularly at larger organizations and for AI Training Specialists, the competitive landscape favors experienced practitioners. Startups with substantial funding like OpenAI and Anthropic particularly value experienced AI talent who can navigate cutting-edge challenges in generative AI, reinforcement learning, and responsible AI deployment.

How often are new AI jobs posted in San Francisco?

San Francisco maintains an exceptionally high AI job posting volume, with Q1 2024 data showing 49.3 AI jobs per 100,000 residents, among the highest per-capita rates nationally. The city currently lists over 6,500 AI positions on major job boards, representing about 7.5% of all San Francisco job listings. This momentum shows no signs of slowing, with projections indicating sustained growth through 2025–2026. The frequency reflects San Francisco's position as the epicenter of AI development, capturing approximately half of global AI funding. New opportunities emerge daily across startups, established tech companies, and industries adopting AI, creating a dynamic job market for machine learning professionals.

What is the difference between The Homebase and other job boards?

The Homebase specializes in curating quality AI positions tailored to San Francisco's unique ecosystem, unlike general boards that list thousands of unfiltered results. While platforms like Indeed offer 6,500+ AI listings including tangential roles like AI Training Operators, The Homebase focuses exclusively on core technical positions requiring substantial AI expertise. Our platform provides granular filtering by skills (Python, machine learning, generative AI), experience level, and compensation ranges specific to Bay Area standards. We emphasize transparency with detailed salary information for senior roles ($140K–$290K) and highlight positions at well-funded AI startups like OpenAI and Anthropic that might get lost on broader platforms.