Find AI Work That Works for You
Latest roles in AI and machine learning, reviewed by real humans for quality and clarity.
Latest AI Jobs
Showing 61 – 79 of 79 jobs
Senior Data Analyst
Worth AI
Company size: 11-50
Salary: -
Location: United States
Employment type: Full-time
Remote: Yes
Worth AI, a forward-thinking company in the computer software industry, is looking for a detail-oriented and analytical individual to join their team as a Senior Data Analyst. At Worth AI, we strive to drive impactful decision-making through insights derived from data, while upholding values of one team, extreme ownership and creating raving fans both internally and for our customers.

As a Senior Data Analyst within our fintech organization, you will play a critical role in analyzing and optimizing the data that drives credit decisioning, underwriting, and fraud prevention models. You will work closely with Finance, Risk, Product, and Engineering teams to mine large and complex datasets, build dashboards, write business-critical queries, and help steer the company's AI-driven underwriting strategy. This role requires both business acumen and technical proficiency, including a solid understanding of data engineering principles and the ability to deliver scalable analytics solutions in a regulated environment.

Responsibilities
- Mine and analyze large-scale credit, transactional, and behavioral datasets to identify trends, anomalies, and insights that inform credit risk, underwriting, fraud, and pricing strategies.
- Write performant SQL queries to extract, transform, and analyze data across structured and semi-structured sources (including loan-level, customer, and bureau data).
- Design, build, and maintain interactive dashboards and reporting tools to support Finance, Risk, Credit, and Executive teams in monitoring key performance indicators.
- Partner with Data, Finance, Product and Engineering teams to define and track key metrics for credit performance, loss forecasting, and portfolio health.
- Support the development and ongoing evaluation of AI/ML models by providing exploratory data analysis, historical performance context, and business rule validation.
- Collaborate with data engineering and data science teams to ensure pipelines are reliable, secure, and meet both operational and regulatory standards.
- Automate recurring reports and analyses to improve visibility and reduce manual workflows across the credit and finance lifecycle.
- Contribute to model governance efforts by ensuring documentation, auditability, and transparency of data inputs used in risk models and decision engines.
- Stay informed on regulatory and compliance requirements related to data usage in financial services (e.g., Fair Lending, ECOA, GLBA) and align analytics practices accordingly.

Requirements
- Bachelor's degree in Data Science, Statistics, Finance, Computer Science, or a related technical field.
- 4+ years of experience in a data analytics or business intelligence role, ideally in a fintech, financial services, or credit risk environment.
- Advanced SQL skills and experience working with cloud data platforms (e.g., AWS Redshift, Athena, Snowflake, BigQuery).
- Strong experience with data visualization tools like Tableau, Power BI, or Looker, and the ability to build intuitive dashboards for business stakeholders.
- Solid understanding of credit risk concepts, underwriting processes, and key lending KPIs such as delinquency, charge-off rates, loss curves, and approval funnels.
- Exposure to AI/ML-based decision systems, especially in underwriting, fraud detection, or credit scoring contexts.
- Working knowledge of data engineering fundamentals and modern data stack tools (e.g., dbt, Airflow, ETL pipelines).
- Proficiency in Excel for detailed analysis and cross-functional reporting.
- Bonus: Experience with Python or other scripting languages for data analysis and automation.
- Excellent communication and problem-solving skills, with the ability to present complex data insights to technical and non-technical stakeholders alike.
- High standards for data integrity, governance, and regulatory compliance in a financial environment.

Benefits
- Health Care Plan (Medical, Dental & Vision)
- Retirement Plan (401k, IRA)
- Life Insurance
- Unlimited Paid Time Off
- 9 paid Holidays
- Family Leave
- Work From Home
- Free Food & Snacks (Access to Industrious Co-working Membership!)
- Wellness Resources
Data Analyst
Data Science & Analytics
August 14, 2025
Senior Software Engineer, Core Platform
Casca
Company size: 11-50
Salary: USD 185,000 - 225,000
Location: United States
Employment type: Full-time
Remote: No
Why Casca?
Casca is building AGI for banking. We're replacing decades-old legacy systems with AI-native technology that automates 90% of the manual work humans once had to do.

Role Overview
We're seeking a Senior Software Engineer to spearhead our Core Platform function. In this high-leverage role, you'll design and scale the foundational systems that power our AI-driven lending platform. You'll build scalable, secure, and reliable infrastructure that handles sensitive financial data, integrates AI models for loan processing, and supports high-availability deployments across cloud environments. You thrive on cross-functional collaboration with engineering, product, and compliance teams to deliver velocity, reliability, and compliance in a competitive fintech landscape.

What you'll do:
- Architect and enhance platform features including authentication, auditability, and enterprise integrations across the banking ecosystem.
- Build and maintain high-performance, developer-friendly infrastructure using containerization (Docker), orchestration (Kubernetes, Helm), and cloud-native services across AWS, Azure, and GCP.
- Improve reliability, scalability, and compliance for platform services like permissions management, admin reporting, and customizations.
- Develop and optimize infrastructure as code (AWS CDK, Terraform) and CI/CD pipelines to automate deployments, ensuring high availability, disaster recovery, and efficient resource utilization.
- Collaborate with AI engineers to benchmark & integrate models into the core platform.
- Mentor junior engineers, conduct code reviews, and contribute to best practices for building systems that handle high-volume, mission-critical financial workflows.
- Create and maintain comprehensive documentation for architecture, runbooks, and operational procedures.
- Stay ahead of emerging technologies in cloud-native, fintech, and AI infrastructure to drive innovation in our platform.

What you'll bring:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
- 5+ years of software development experience, particularly in scalable backend and infrastructure systems (e.g., Rust, Node.js, Python, or Go), and modern deployment technologies like Docker, Kubernetes, and Helm.
- Strong experience with major cloud platforms (AWS, Azure, GCP) and their managed services for compute, storage, and networking.
- Proficiency in infrastructure as code tools (e.g., AWS CDK, Terraform, Pulumi) and CI/CD automation (e.g., GitHub Actions, Jenkins).
- Experience with building observability tooling including metrics collection, distributed tracing, logging systems, and monitoring dashboards.
- Experience with enterprise software features like permissions (RBAC), analytics, audit logs, notifications, self-serve onboarding, and provisioning.
- Proven ability to mentor and guide technical teams.

What you'll get:
- Impact & Ownership: A unique opportunity to shape the future of banking through AI, owning end-to-end product initiatives.
- Collaborative Environment: Work alongside a talented and passionate team that values continuous improvement and knowledge sharing.
- Competitive Compensation: Includes salary, benefits, and potential equity in a fast-growing startup.
- Professional Growth: Access to resources and mentorship to expand your skill set, influence strategy, and accelerate your career.
- Culture of Innovation: We encourage risk-taking, learning from failures, and pushing the boundaries of what's possible in fintech.

As an early-stage company building at the frontier of AI, we work with high intensity and commitment. While schedules can vary by role/team, many weeks will demand extra focus, flexibility and time particularly during major launches and high impact sprints. We're seeking those who are aligned to and able to commit to that expectation which includes 5 days per week in our San Francisco Office.
Software Engineer
Software Engineering
August 14, 2025
Product Security Engineer
Casca
Company size: 11-50
Salary: USD 150,000 - 215,000
Location: United States
Employment type: Full-time
Remote: No
Why Casca?
Casca is building AGI for banking. We're replacing decades-old legacy systems with AI-native technology that automates 90% of the manual work humans once had to do.

Role Overview
We're seeking a Product Security Engineer to join our team in securing our AI-driven lending platform. In this critical role, you'll collaborate closely with engineering, product, and compliance teams to embed security into every stage of development, ensuring our platform remains protected in a competitive fintech landscape.

What you'll do:
- Build secure-by-default libraries and tools that make the secure path the easiest and most attractive choice for developers and their AI agents
- Partner closely with engineering teams to incorporate secure design principles at every stage of development
- Review security-critical code and own key parts of the product, including authentication and access control
- Contribute meaningfully to the Casca code base
- Audit the existing codebase for vulnerabilities
- Improve our static analysis and vulnerability management tooling
- Discover vulnerabilities through red team exercises
- Participate in incident response

What you'll bring:
- 2+ years of experience in product security, application security, offensive security, and/or security-focused software engineering
- Proven ability to identify software vulnerabilities, demonstrated through CVEs, bug bounty awards, blog posts, or prior work experience
- Strong expertise in web application security
- Strong communication and collaboration skills, particularly with engineering teams

Bonus points:
- Open source contributions
- Experience red teaming LLMs and AI-native applications
- Experience managing cloud environments (e.g. Azure, GCP, AWS)
- Experience working at or with a small company or a hyper-growth startup

What you'll get:
- Impact & Ownership: A unique opportunity to shape the future of banking through AI, owning end-to-end product initiatives.
- Collaborative Environment: Work alongside a talented and passionate team that values continuous improvement and knowledge sharing.
- Competitive Compensation: Includes salary, benefits, and potential equity in a fast-growing startup.
- Professional Growth: Access to resources and mentorship to expand your skill set, influence strategy, and accelerate your career.
- Culture of Innovation: We encourage risk-taking, learning from failures, and pushing the boundaries of what's possible in fintech.

As an early-stage company building at the frontier of AI, we work with high intensity and commitment. While schedules can vary by role/team, many weeks will demand extra focus, flexibility and time particularly during major launches and high impact sprints. We're seeking those who are aligned to and able to commit to that expectation which includes 5 days per week in our San Francisco Office.
Software Engineer
Software Engineering
August 14, 2025
Software Engineer Manager
Union
Company size: 51-100
Salary: USD 175,000 - 230,000
Location: United States
Employment type: Full-time
Remote: No
About Us
At Union, we are solving one of the hardest challenges in AI infrastructure today: enabling high-velocity iteration while maintaining seamless production-readiness for AI workloads at scale. Flyte, the open-source project we steward, is the emerging standard for modern data and AI orchestration, with numerous leading technology organizations - like LinkedIn, Spotify, and Gojek - running millions of mission-critical workloads on the platform. We have a deep bench of infrastructure veterans from companies in the Big Three and beyond and a technical founding team who originally created Flyte while at Lyft.

The Opportunity
Reporting into the Head of Engineering, we are currently seeking a highly technical, versatile Distributed Systems Engineer with 10+ years of professional experience building, designing and implementing services and solutions to streamline delivery, installation, and orchestration of data and services in a large-scale AI/ML platform based on the Flyte orchestration framework. Successful candidates will have a broad understanding of multiple cloud vendors, Kubernetes, API design, high-volume, low-latency systems, and thrive in a fast paced environment. We value individuals who enjoy tackling challenges head-on, can communicate effectively across technical and non-technical teams, have a knack for creative problem-solving, and can balance short-term priorities with long-term goals—both as a hands-on technical partner across the organization and as a leader building a high-performing engineering team.

In this role, you will:
- Design and build distributed systems backend services (APIs, Kubernetes controllers, etc) and client components to install, manage, and observe Union services in a Kubernetes native environment.
- Lead, mentor, and foster the professional growth of a high-performing, collaborative engineering team through effective coaching and guidance.
- Design, implement, and optimize distribution strategies to facilitate simple and intuitive management of a complex platform in customer controlled environments.
- Work across multiple cloud vendors including AWS, GCP, Azure, and OCI as well as neo-cloud providers.
- Develop and maintain services and tooling to make our systems more reliable, secure, and performant.
- Contribute to architectural decisions and participate in code and design reviews across various teams, ensuring the highest standards of quality and performance.
- Work closely with broader teams including Backend, Frontend, and Support to improve the experience for our customers.

Frontend expertise is a bonus. You will be expected to be in-office.

About you:
- Have 10+ years of experience in deeply technical roles in engineering functions.
- 3-4 years of professional experience leading, managing, growing and coaching a team of engineers.
- Have a deep passion for all things Kubernetes and the broader container orchestration ecosystem.
- Can navigate and pick up new technologies quickly.
- Always think about the big picture and can put yourself in the shoes of the developer and customer.
- You have hands-on experience with backend programming languages (Go, Rust, Python).
- Can own complex projects from planning to completion.
- Bonus: You have a general understanding of building modern web applications using Next.js, React, and Typescript.

You can expect to work with the following tools at Union, however, we're constantly evolving our stack!
- Languages: Golang, Rust, Python
- Infrastructure: AWS, GCP, Azure, OCI, Kubernetes
- CI/CD: Buildkite, ArgoCD, Terraform, Helm

Benefits & Belonging
At Union.ai we know that employees who feel their best can build amazing things and we are proud to offer best in class benefits that will continually evolve and grow as the needs of our employees do. Benefits may vary based on country.
- Excellent medical - We pay 100% of your premiums and 90% for your dependents
- Generous dental and vision plans - We pay 90% of the premiums for you and your dependents
- Meaningful equity in the form of options - all employees are owners here
- Unlimited time off + 12 company holidays
- 401K match - Union.ai matches 100% of contributions up to the first 3%, and 50% up to 5%
- 16 weeks paid parental leave for primary and secondary caregivers
- Flexible work schedule (some restrictions apply)
- For in office employees: Lunch provided onsite and well stocked kitchen with snacks and drinks.

We believe that our differences are what bring us together to achieve truly special outcomes. We strive to be inclusive and focus on building teams that embody that quality too. Union.ai is an equal-opportunity employer and we encourage you to apply, even if your experience doesn't align exactly with our job description.
Software Engineer
Software Engineering
August 14, 2025
Staff Product Manager - Vendor Risk Management
Vanta
Company size: 1001-5000
Salary: USD 221,000 - 260,000
Location: United States
Employment type: Full-time
Remote: Yes
At Vanta, our mission is to help businesses earn and prove trust. We believe that security should be monitored and verified continuously, and we empower companies to practice better security and prove it with ease. Vanta has a kind and talented team, and while some have prior security experience, many have been successful at Vanta without it.

The Vanta Vendor Risk team is developing a next-generation, AI-powered vendor risk management solution that enables larger organizations to effectively evaluate and manage the security and risk associated with third-party suppliers. We're seeking a strategic Staff Product Manager to drive the development of innovative solutions that empower customers to assess and manage their suppliers' security, compliance, and risk. In this role, you will lead the evolution of our VRM core product, focused on building durable, enterprise-grade workflows and automations that streamline the risk review process for growing enterprises.

You'll join a small but growing team of PMs, playing a critical role in shaping both Vanta's product strategy and our product team's culture. If you're passionate about building impactful, customer-focused products and thrive in a high-growth environment, we'd love to connect with you! What we value most is a deep commitment to delivering value, curiosity, and a drive for building solutions that resonate with customers. In this role, you'll work closely with engineering, design, and cross-functional stakeholders to set a differentiated roadmap that maximizes customer impact and strengthens Vanta's market position.

What you'll do as a Staff Product Manager at Vanta:
In this role, you will be at the forefront of Vanta's product strategy, delivering innovative solutions and expanding VRM's capabilities to meet customer needs for integrated, automated, and customized workflows. Key responsibilities include:
- Define and Execute Product Strategy: Develop and implement a strategy and roadmap for the VRM team, prioritizing features that address core customer needs and allow for scalable, flexible, and automated workflow.
- Customer-Focused Discovery: Lead research initiatives to understand the challenges our customers face in vendor risk management. Gather insights from direct outreach and build solutions that address their most pressing issues.
- Enterprise Readiness: Define and deliver Vanta's vision for scaling VRM to the largest enterprises in the world and building a next generation TPRM product. Oversee the full product lifecycle, from initial ideation through launch. Balance immediate needs with long-term strategic goals and navigate complex trade-offs.
- Collaborate Cross-Functionally: Partner with Engineering, Design, and GTM teams to ensure solutions are valuable, feasible, and user-friendly.
- Market Expansion and AI Strategy: Explore and integrate relevant data sources and AI/ML capabilities to enable more proactive, automated risk management.
- Design durable product systems (object models, workflows) that simplify complexity and scale across enterprise programs.
- Drive execution and trade-off decisions with conviction, ensuring progress even in high-ambiguity environments.

How to be successful in this role:
- 10+ Years in Product Management: Proven experience leading product strategy and managing teams in high-growth, B2B SaaS environments.
- Customer Empathy and Discovery Skills: A deeply customer-centric approach, with a proactive attitude toward understanding user needs and market gaps.
- Framework and Systems thinking: Strong ability to navigate ambiguity and make structured decisions to balance complex trade-offs. Experience with building 0-1 products and scaling them into new greenfield areas.
- Project Execution and Prioritization: Strong decision-maker with a bias to action, comfortable driving tough trade-offs in ambiguous, high-stakes environments.
- Interest in Security and AI/ML: Familiarity with or enthusiasm for the security space and a desire to explore how AI/ML can power vendor risk management solutions.

Join us to lead the next phase of Vanta's Vendor Risk Management product and make a meaningful impact on how companies secure their data and grow with confidence.

What you can expect as a Vantan:
- Industry-competitive compensation
- 100% covered medical, dental, and vision benefits with dependents coverage
- 16 weeks fully-paid parental Leave for all new parents
- Health & wellness and remote workplace stipends
- Family planning benefits through Carrot Fertility
- 401(k) matching
- Flexible work hours and location
- Open PTO policy
- 11 paid holidays in the US
- Offices in SF, NYC, London, Dublin, and Sydney

To provide greater transparency to candidates, we share base pay ranges for all US-based job postings regardless of state. We set standard base pay ranges for all roles based on function, level, and country location, benchmarked against similar-stage growth companies. Final offer amounts are determined by multiple factors and may vary based on candidate location, skills, depth of work experience, and relevant licenses/credentials. #LI-remote

At Vanta, we are committed to hiring diverse talent of different backgrounds and as such, it is important to us to provide an inclusive work environment for all. We do not discriminate on the basis of race, gender identity, age, religion, sexual orientation, veteran or disability status, or any other protected class. As an equal opportunity employer, we encourage and welcome people of all backgrounds to apply.

About Vanta
We started in 2018, in the wake of several high-profile data breaches. Online security was only becoming more important, but we knew firsthand how hard it could be for fast-growing companies to invest the time and manpower it takes to build a solid security foundation. Vanta was inspired by a vision to restore trust in internet businesses by enabling companies to improve and prove their security. From our early days automating security monitoring for compliance standards like SOC 2, HIPAA and ISO 27001 to creating the world's leading Trust Management Platform, our vision remains unchanged. Now more than ever, making security continuous—not just a point-in-time check—is essential. Thousands of companies rely on Vanta to build, maintain and demonstrate their trust—all in a way that's real-time and transparent.
Product Manager
Product & Operations
August 14, 2025
Senior Director, Marketer Sales
Firsthand
Company size: 101-200
Salary: -
Location: United States
Employment type: Full-time
Remote: No
About Firsthand
Firsthand has built the first AI-powered Brand Agent platform, transforming the way marketers and publishers engage consumers through their own AI agents, anywhere online. While most AI applications in marketing and advertising focus on back-office automation, the Firsthand Brand Agent Platform™ powers front-line consumer engagement. Operating across both owned properties and paid media, Firsthand's Brand Agents make a company's expertise accessible in real time, adapting to consumers' interests and guiding them towards the information they need to take action. Central to the platform is Lakebed™, the company's AI-first data and knowledge rights management system that ensures brands retain full ownership and control of their expertise.

Firsthand is led by Jon Heller, Michael Rubenstein, and Wei Wei, whose previous ventures helped build the foundations of modern digital advertising. Backed by Radical Ventures, FirstMark Capital, Aperiam Ventures, and Crossbeam Venture Partners, Firsthand is shaping the future of AI-driven consumer engagement. Firsthand is headquartered in NYC, with team members working together in-office three days a week.

Firsthand is looking to hire a Senior Director, Marketer Sales to drive adoption of Brand Agents with senior marketers at the world's leading brands. In this pivotal role, you'll educate, evangelize, and drive adoption of a transformative new marketing model: "agents, not ads." You'll lead strategic sales to brand-side marketing executives, serve as a consultative partner across innovation and media teams, and help fully commercialize Firsthand's marketer offering. You'll also collaborate across Firsthand's buyside motions, supporting publisher campaigns and identifying strategic agency partnerships.

You are a seasoned commercial leader with a strong grasp of marketing, media, and technology. You bring experience in ad networks or programmatic, the ability to navigate complex organizations, and the entrepreneurial mindset to shape the offering as you sell it. You're comfortable engaging both business and technical stakeholders and thrive in early-stage environments where the path isn't fully paved. The ideal candidate will be located in the New York metropolitan area.

Key Responsibilities
- Lead strategic marketer engagements: Act as the point person for existing marketer-direct relationships, advancing relationships and identifying opportunities to deepen impact.
- Drive the direct-to-marketer motion: Engage CMOs, Heads of Innovation, and digital marketing leaders to position Brand Agents as a strategic lever for long-term brand and consumer engagement.
- Develop the marketer offering: Help define and operationalize Firsthand's holistic offering for marketers, contributing to early commercialization strategy, vertical prioritization, product feedback, and business model development (pricing, contracts, implementation).
- Lead strategic pitches: Own and lead marketer-focused pitches to build the 2026 pipeline.
- Collaborate across publisher and agency motions: Partner with internal teams to identify leverage points across publisher demand and emerging agency relationships.
- Run a disciplined pipeline: Forecast accurately, manage multi-stakeholder deals with rigor, and close high-value opportunities.
- Act as a strategic partner: Provide structured insights to product and leadership teams to inform positioning, vertical strategy, and GTM readiness.

Qualifications
- 10+ years of sales experience, ideally across martech, adtech, or media
- Track record of success selling to brand-side marketers at large, matrixed organizations
- Experience working in or closely with ad networks and programmatic platforms; understanding of agency dynamics a plus
- Ability to engage both technical and marketing stakeholders, including product, data, and legal counterparts
- Comfortable in early-stage, pioneering environments; you know how to shape a product while selling it
- Strategic, scrappy, and adaptive, with strong communication skills and executive presence
- Entrepreneurial mindset: excited to help build and refine the offering—not just run the playbook

How to Apply
If you are ready to embark on an exhilarating journey at the forefront of AI, seize this incredible opportunity and apply here. We eagerly anticipate hearing from you!

Note: Compensation and equity will be market-competitive for well-capitalized, early stage startups and will be discussed during the interview process.
Enterprise Sales
Marketing & Sales
August 14, 2025
Product Manager, AI Products
PathAI
Company size: 201-500
Salary: -
Location: United States
Remote: No
Who We Are
PathAI is on a mission to improve patient outcomes with AI-powered pathology. We are transforming traditional pathology methods into powerful, new technologies. These innovations in pathology can help accelerate drug development, improve confidence in the accuracy of diagnosis, and get life-saving therapies to patients more quickly. At PathAI, you'll work with a diverse and talented team of people, who are dedicated to solving complex problems and making a huge impact.

Where You Fit
We're looking for a technical product manager to join our team and help support the execution of PathAI's product roadmap. The AI Product Manager will work closely with machine learning, pathology, biomedical data science, clinical affairs, quality, regulatory, and business operations teams to ensure that product development goals are being met for successful delivery of a product. The successful candidate will be able to identify, build, and champion AI products in key strategic areas for PathAI. This role offers the opportunity to help define the product strategy of a fast growing, dynamic business.

What You'll Do
- Contribute to the development of the roadmap for PathAI algorithm products within a key business unit or disease area, and own the execution of assigned initiatives.
- Collaborate with stakeholders to gather requirements, define priorities, and align on product direction.
- Support the development of business cases for new products, product extensions, or platform capabilities.
- Partner cross-functionally with product designers, engineers, and scientists to deliver high-quality solutions that meet user needs.
- Monitor product performance, gather feedback, and contribute to continuous improvement initiatives.

What You Bring
- 3+ years experience in product management (preferably bioinformatics, machine learning, or data science).
- Launch experience with algorithm-based products in life sciences and/or healthcare.
- Advanced degree in computational biology, biomedical engineering, biology, or related field preferred.
- Experience contributing to the launch of AI-driven or software-based products.
- Strong collaboration skills and ability to work with cross-functional teams across technical and scientific domains.
- Curiosity and willingness to learn about oncology, pathology, and AI-powered products.

We Want To Hear From You
At PathAI, we are looking for individuals who are team players, are willing to do the work no matter how big or small it may be, and who are passionate about everything they do. If this sounds like you, even if you may not match the job description to a tee, we encourage you to apply. You could be exactly what we're looking for.

PathAI is an equal opportunity employer, dedicated to creating a workplace that is free of harassment and discrimination. We base our employment decisions on business needs, job requirements, and qualifications — that's all. We do not discriminate based on race, gender, religion, health, personal beliefs, age, family or parental status, or any other status. We don't tolerate any kind of discrimination or bias, and we are looking for teammates who feel the same way.
Product Manager
Product & Operations
August 14, 2025
Member of Technical Staff, Field Eng
Anyscale
Company size: 201-500
Salary: -
Location: Anywhere
Employment type: Full-time
Remote: Yes
About Anyscale:
At Anyscale, we're on a mission to democratize distributed computing and make it accessible to software developers of all skill levels. We’re commercializing Ray, a popular open-source project that's creating an ecosystem of libraries for scalable machine learning. Companies like OpenAI, Uber, Spotify, Instacart, Cruise, and many more, have Ray in their tech stacks to accelerate the progress of AI applications out into the real world.
With Anyscale, we’re building the best place to run Ray, so that any developer or data scientist can scale an ML application from their laptop to the cluster without needing to be a distributed systems expert.
Proud to be backed by Andreessen Horowitz, NEA, and Addition with $250+ million raised to date.
About the role
As a Member of Technical Staff, Field Eng at Anyscale, you will be the technical face of our company and biggest advocate for our customers. You'll work closely with sales, product management, and engineering to ensure that our product has the right experiences and shape to drive value in customer engagements.

The ideal candidate will be a scrappy executor, able to work independently, and lead complex POVs with our users as they adopt Anyscale and Ray. You'll be on point for demoing our product, scoping POVs, making users successful, and amplifying the voice of our customers. Expect to learn a ton about Ray, Anyscale, and early stage product go-to-market! You'll be fundamental in helping us disrupt what it means to build distributed applications at scale.

Our product is inherently technical, so we're looking for folks that know how to communicate about technical products with multiple audiences – whether it's the C-suite or an individual engineer.

As part of this role, you'll be involved in
- Building awesome product demos, tutorials, reference architectures, and blogs demonstrating the strong capabilities of the Anyscale platform
- Knowing Ray from end-to-end and educating customers on how to use Ray
- Working with customers as they build applications on Ray
- Knowing the ML landscape and toolsets that are at the forefront of cutting edge tech
- Working cross-functionally to be the voice of our customers and delivering impactful solutions alongside our product and engineering teams
- Working directly with customers at every layer of their organization (C Suite to data scientists and developers) and acting as a trusted advisor for our users
- Helping us identify potential improvements and gaps in our product and prioritizing features through the development process
- Efficiently managing stakeholders to ensure that we stay focused on our key objective and metrics in the sales process
- Driving crisp communication about our products and services, both internally and externally
- Working alongside marketing and Go-to-Market to create and deliver high quality content to evangelize Anyscale and Ray

We'd love to hear from you if you have
- 5+ years of pre- and post-sales experience, either as a sales engineer, solutions engineer, architect, customer success engineer/manager, or a consultant
- Strong technical skills. You should have working knowledge of Python fundamentals, as well as the ability to learn Ray and build Ray applications with various libraries
- Experience in enterprise SaaS, infrastructure, open source, and/or ML/AI technologies
- Experience in public cloud and enterprise infrastructure
- Customer success orientation. You are able to demonstrate technical concepts to technical and non-technical audiences, build value with end users, and drive solutions over the line
- Cross-functional collaboration with technical stakeholders, including Engineering and Product

Compensation:
At Anyscale, we take a market-based approach to compensation. We are data-driven, transparent, and consistent. As the market data changes over time, the target salary for this role may be adjusted. This role is also eligible to participate in Anyscale's Equity and Benefits offerings, including the following:
- Stock Options
- Healthcare plans, with premiums covered by Anyscale at 99% for both employees and dependents
- 401k Retirement Plan
- Education & Wellbeing Stipend
- Paid Parental Leave
- Fertility Benefits
- Paid Time Off
- Commute reimbursement
- 100% of in-office meals covered

Anyscale Inc. is an Equal Opportunity Employer. Candidates are evaluated without regard to age, race, color, religion, sex, disability, national origin, sexual orientation, veteran status, or any other characteristic protected by federal or state law.
Anyscale Inc. is an E-Verify company, and you may review the Notice of E-Verify Participation and the Right to Work posters in English and Spanish.
Solutions Architect
Software Engineering
August 14, 2025
Senior Implementation Manager
Observe.AI
Company size: 201-500
Salary: USD 100,000 - 135,000
Location: United States
Employment type: Full-time
Remote: Yes
About Us
Observe.AI enables enterprises to transform how they connect with customers - through AI agents and copilots that engage, assist, and act across every channel. From automating conversations to guiding human agents in real time to uncovering insights that shape strategy, Observe.AI turns every interaction into a driver of loyalty and growth. Trusted by global leaders, we're creating a future where every customer experience is smarter, faster, and more impactful.

Why Join Us
The future of customer engagement is AI-driven, with conversational automation representing a $300B market opportunity. While traditional NLP/NLU-powered solutions have fallen short, Observe.AI is leading the revolution with state-of-the-art LLM-powered Conversational AI technology. Backed by 7 years of experience and insights from 300+ customers, we're perfectly positioned to disrupt the space and unlock massive value for enterprises worldwide.

We are seeking an Implementation Manager who will be responsible for project managing, facilitating stakeholder design workshops, building out customer use-cases discovered during onboarding, and delivering virtual/onsite software training to trainer and end-user audiences. The implementation manager will work with new customers to stand up their Observe.AI programs and existing customers to expand their capabilities with the Observe.AI product suite. This role will need to quickly learn and adapt the customer's business objectives into a successful onboarding process from sales handoff to CSM transition. The ideal candidate applies their SaaS experience to driving implementation projects to completion, engaging key stakeholders to build a strong program foundation, and communicating technical and functional concepts in a clear concise manner.

As a key member of our core team, you'll play a pivotal role in implementing and launching cutting-edge Conversational AI products. You'll work at the forefront of AI innovation, helping to build solutions that will transform how global enterprises adopt and scale AI—delivering real-time customer assistance, smarter automation, and seamless interactions. This is your chance to join an industry leader as we drive the future of Conversational AI and empower enterprises to reach new levels of efficiency and customer satisfaction.

What you'll be doing
- Lead a customer program team through a multi-phase implementation involving business discovery, technical setup, and user training
- Guide and enable customers on their contact center AI journey with our software
- Deliver tailored customer training sessions across a variety of formats (e.g. live, recorded, virtual, in-person, train-the-trainer, end-user)
- Manage client expectations, project timelines, documentation deliverables, and team resources
- Partner with CSMs to create value-based adoption goals that set the Customer up for a successful launch
- Communicate project status, issues, and risks while escalating effectively as appropriate
- Manage multiple customer projects simultaneously
- Collaborate cross-functionally with Sales, Product, Marketing, and Customer Success to improve services delivery and customer experience

What you'll bring to the role
- 5+ years of work experience implementing SaaS software
- Comfort in user-centered communications and facilitating technical problem solving
- Basic understanding of speech analytics and quality management software and processes
- Experience in a customer-facing role with clear deliverables and stakeholder management
- Proficiency in coaching and facilitation skills
- Training experience is highly preferred
- Demonstrable analytical, problem-solving, and time management skills
- Experience in the Software & Platform industry
- Experience managing complex customer implementations

Perks & Benefits
- Competitive compensation including equity
- Excellent medical, dental, and vision insurance options
- Flexible time off
- 10 Company holidays + Winter Break and up to 16-weeks of parental leave
- 401K plan
- Quarterly Lifestyle Spend
- Monthly Mobile + Internet Stipend
- Pre-tax Commuter Benefits

Salary Range
The base salary compensation range targeted for this full-time position is $100,000 - $135,000 per annum. Compensation may vary outside of this range depending on a number of factors, including a candidate's qualifications, skills, competencies and experience. Base pay is one part of the Total Package that is provided to compensate and recognize employees for their work, and this role may be eligible for additional discretionary bonuses/incentives and equity (in the form of options). This salary range is an estimate, and the actual salary may vary based on the Company's compensation practices.

Our Commitment to Inclusion and Belonging
Observe.AI is an Equal Employment Opportunity employer that proudly pursues and hires a diverse workforce. Observe AI does not make hiring or employment decisions on the basis of race, color, religion or religious belief, ethnic or national origin, nationality, sex, gender, gender identity, sexual orientation, disability, age, military or veteran status, or any other basis protected by applicable local, state, or federal laws or prohibited by Company policy. Observe.AI also strives for a healthy and safe workplace and strictly prohibits harassment of any kind. We welcome all people. We celebrate diversity of all kinds and are committed to creating an inclusive culture built on a foundation of respect for all individuals. We seek to hire, develop, and retain talented people from all backgrounds. Individuals from non-traditional backgrounds, historically marginalized or underrepresented groups are strongly encouraged to apply. If you are ambitious, make an impact wherever you go, and you're ready to shape the future of Observe.AI, we encourage you to apply. For more information, visit www.observe.ai. #LI-REMOTE
Implementation Lead
Software Engineering
Project Manager
Product & Operations
August 14, 2025
Senior Manager - Frontier Customer Development
Faculty
Company size: 501-1000
Salary: -
Location: United Kingdom
Employment type: Full-time
Remote: No
About Faculty
At Faculty, we transform organisational performance through safe, impactful and human-centric AI. With more than a decade of experience, we provide over 350 global customers with software, bespoke AI consultancy, and Fellows from our award winning Fellowship programme. Our expert team brings together leaders from across government, academia and global tech giants to solve the biggest challenges in applied AI. Should you join us, you'll have the chance to work with, and learn from, some of the brilliant minds who are bringing Frontier AI to the frontlines of the world.

Why we need you
Frontier is Faculty's proprietary Decision Intelligence platform. The Frontier platform empowers organisations by integrating AI models directly into operational workflows and business decision-making processes using simulation and optimisation technologies. One of the core sectors Frontier is focused on is the Life Sciences space, which is where you'll operate.

In this role you will bridge the gap between our product's advanced AI capabilities and real-world business impact, primarily within our Life Sciences and Healthcare space. You will lead teams of Software Engineers, Data Scientists, and Design specialists, ensuring successful configuration and deployment for enterprise AI transformation programs. Your work will drive meaningful results for our clients, turning their toughest challenges into opportunities for innovation.

What you'll be doing
Your role as Senior Manager will evolve as Frontier grows. You will balance strategic vision and operational excellence, forging deep relationships with customers and delivery partners. You will:
- Lead transformational AI system implementations by scoping solutions that deliver customer value and by navigating complex challenges in partnership with technical colleagues.
- Manage enterprise life sciences customer accounts, including pricing, contract negotiations, resourcing and identifying growth opportunities.
- Build trust with senior stakeholders in global life sciences enterprises through delivery excellence and a deep understanding of how Frontier addresses their unique problems.
- Serve as the customers' advocate within Faculty, providing feedback and insights to the product development team to enhance customer satisfaction.
- Create scalable delivery assets, from playbooks and education guides to process improvements that empower delivery partners and customers alike.
- Collaborate with the business development team to explore novel use cases and strategic growth opportunities for Frontier.
- Lead multidisciplinary teams of engineering, data science, and delivery personnel, driving coordination and collaboration across functions.

Who we're looking for
We don't expect you to know exactly how to do this job when you join! We are looking for someone with the right attitude & skills to take on this challenge and adapt with it as the requirements evolve, doing what's necessary to help Frontier grow. Your consistent high standards and versatility set you apart. You are motivated by solving challenging problems and finding a pragmatic path forwards.

You need these qualifications:
- Extensive experience in technology consulting, product development, or similar customer-facing role.
- A proven track record of leading teams to deliver technically complex projects, particularly involving AI/ML technologies, leveraging technology platforms.
- Strong exposure in the Life Sciences industry.
- Exceptional communication skills, capable of simplifying complex concepts and fostering trust with both technical and business stakeholders.
- Experience managing senior customer relationships and influencing across multiple internal teams.
- A proactive and adaptable mindset, thriving in ambiguity and always finding solutions to drive success.

It would be nice if:
- You have experience delivering deployed software products to complex, enterprise-scale customers.
- You have previously worked in agile development teams and understand how it applies to B2B product development.
- You have experience with clinical trials for drug development at top 20 Pharma companies.

What we can offer you:
The Faculty team is diverse and distinctive, and we all come from different personal, professional and organisational backgrounds. We all have one thing in common: we are driven by a deep intellectual curiosity that powers us forward each day.
Faculty is the professional challenge of a lifetime. You’ll be surrounded by an impressive group of brilliant minds working to achieve our collective goals.
Our consultants, product developers, business development specialists, operations professionals and more all bring something unique to Faculty, and you’ll learn something new from everyone you meet.
Program Manager
Software Engineering
Project Manager
Product & Operations
August 14, 2025
Senior Machine Learning Engineer
Faculty
Company size: 501-1000
Salary: -
Location: United Kingdom
Employment type: Full-time
Remote: No
About Faculty
At Faculty, we transform organisational performance through safe, impactful and human-centric AI. With more than a decade of experience, we provide over 350 global customers with software, bespoke AI consultancy, and Fellows from our award winning Fellowship programme. Our expert team brings together leaders from across government, academia and global tech giants to solve the biggest challenges in applied AI. Should you join us, you'll have the chance to work with, and learn from, some of the brilliant minds who are bringing Frontier AI to the frontlines of the world.

We operate a hybrid way of working, meaning that you'll split your time across client location, Faculty's Old Street office and working from home depending on the needs of the project. For this role, you can expect to be client-side for up to three days per week at times and working either from home or our Old Street office for the rest of your time.

What You'll Be Doing
Working in our Defence business unit, you will design, build, and deploy production-grade software, infrastructure, and MLOps systems that leverage machine learning. The work you do will help our customers solve a broad range of high-impact problems in the defence and national security space - examples of which can be found here.

You are engineering-focused, with a keen interest and working knowledge of operationalised machine learning. You have a desire to take cutting-edge ML applications into the real world. You will develop new methodologies and champion best practices for managing AI systems deployed at scale, with regard to technical, ethical and practical requirements. You will support both technical and non-technical stakeholders to deploy ML to solve real-world problems. To enable this, we work in cross-functional teams with representation from commercial, data science, product management and design specialities to cover all aspects of AI product delivery.

The Machine Learning Engineering team is responsible for the engineering aspects of our customer delivery projects. As a Machine Learning Engineer, you'll be essential to helping us achieve that goal by:
- Building software and infrastructure that leverages Machine Learning;
- Creating reusable, scalable tools to enable better delivery of ML systems;
- Working with our customers to help understand their needs;
- Working with data scientists and engineers to develop best practices and new technologies; and
- Implementing and developing Faculty's view on what it means to operationalise ML software.

We're a rapidly growing organisation, so roles are dynamic and subject to change. Your role will evolve alongside business needs, but you can expect your key responsibilities to include:
- Working in cross-functional teams of engineers, data scientists, designers and managers to deliver technically sophisticated, high-impact systems.
- Leading on the scope and design of projects.
- Offering leadership and management to more junior engineers on the team.
- Providing technical expertise to our customers.
- Technical delivery.

Who We're Looking For
At Faculty, your attitude and behaviour are just as important as your technical skill. We look for individuals who can support our values, foster our culture, and deliver for our organisation. We like people who combine expertise and ambition with optimism -- who are interested in changing the world for the better -- and have the drive and intelligence to make it happen. If you're the right candidate for us, you probably:
- Think scientifically, even if you're not a scientist - you test assumptions, seek evidence and are always looking for opportunities to improve the way we do things.
- Love finding new ways to solve old problems - when it comes to your work and professional development, you don't believe in 'good enough'. You always seek new ways to solve old challenges.
- Are pragmatic and outcome-focused - you know how to balance the big picture with the little details and know a great idea is useless if it can't be executed in the real world.

To succeed in this role, you'll need the following - these are illustrative requirements and we don't expect all applicants to have experience in everything (70% is a rough guide):
- Understanding of and interest in the full machine learning lifecycle, including deploying trained machine learning models developed using common frameworks such as Scikit-learn, TensorFlow, or PyTorch.
- Understanding of the core concepts of probability and statistics and familiarity with common supervised and unsupervised learning techniques.
- Experience in Software Engineering including programming in Python.
- Technical experience of cloud architecture, security, deployment, and open-source tools. Hands-on experience required of at least one major cloud platform.
- Demonstrable experience with containers and specifically Docker and Kubernetes.
- Comfortable in a high-growth startup environment.
- Outstanding verbal and written communication.
- Excitement about working in a dynamic role with the autonomy and freedom you need to take ownership of problems and see them through to execution.

What we can offer you:
The Faculty team is diverse and distinctive, and we all come from different personal, professional and organisational backgrounds. We all have one thing in common: we are driven by a deep intellectual curiosity that powers us forward each day.
Faculty is the professional challenge of a lifetime. You’ll be surrounded by an impressive group of brilliant minds working to achieve our collective goals.
Our consultants, product developers, business development specialists, operations professionals and more all bring something unique to Faculty, and you’ll learn something new from everyone you meet.
Machine Learning Engineer
Data Science & Analytics
MLOps / DevOps Engineer
Data Science & Analytics
Software Engineer
Software Engineering
August 14, 2025
Applied AI Researcher
Yupp
Company size: 11-50
Salary: -
Location: United States
Employment type: Full-time
Remote: No
About Yupp
We are a well-funded, rapidly growing, early-stage AI startup headquartered in Silicon Valley that is building a two-sided product -- one side meant for global consumers and the other side for AI builders and researchers. We work on the cutting edge of AI across the stack. Check out our product that was launched recently, and how it solves the foundational challenge of robust and trustworthy AI model evaluations. Here's more information about us.

Why Join Yupp?
Are you ready to have the ride of a lifetime together with some of the smartest and most seasoned colleagues? You'll work on challenging, large-scale problems at the cutting edge of AI to build novel products that touch millions of users globally, in a massive and growing market opportunity.

Yupp's founding team is highly experienced and comes from companies like Twitter, Google, Coinbase, Microsoft and Paypal. This team is one of the smartest, most fun, cracked top talent you will ever work with. Our work culture provides a high degree of autonomy, ownership and impact. It's intense and isn't for everyone. But if you want to build the future of AI alongside others who are at the top of their game and expect the same from you, there's no better AI startup to be.

At Yupp, you will experience both the excitement of building for a large scale global user base as well as for the deeply technical audience of AI model builders and researchers. You'll get immersed in and learn all about the latest and greatest AI models and agents. You'll interact with AI builders and researchers from other AI labs all around the world. We are a mostly in-person startup, but we are also flexible – you can usually work from home when you need to and come in and leave when you want to. Many employees work from home on average 1 day a week.

Responsibilities
- Research and track emerging trends in GenAI and LLM advancements to identify potential applications within the company.
- Design, build, and maintain LLM-based applications and solutions.
- Optionally manage the full ML lifecycle, including data analysis, preprocessing, model architecture, training, evaluation, and MLOps.
- Collaborate with product engineers, designers, and data scientists to define and deliver cutting-edge AI solutions.
- Convey complex technical information clearly to audiences with varying levels of AI expertise.
- Troubleshoot and debug AI applications to resolve performance and accuracy issues.
- Write clean, maintainable, and well-documented research and optionally production code.

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field (Ph.D. or equivalent experience is a plus).
- Minimum of 3 years of experience in machine learning, with a track record of deploying AI models in real-world applications.
- Strong background in modern LLM architectures and applications, and experience in using GenAI approaches in an applied, production environment.
- Strong programming skills in Python and familiarity with libraries such as PyTorch, TensorFlow, NumPy, and JAX.
- Deep understanding of machine learning algorithms, data structures, and model evaluation methods.
- Excellent communication and presentation skills, with the ability to explain design decisions to both experts and non-experts.
- Strong analytical skills and ability to work independently and collaboratively in fast-paced environments.

Preferred Qualifications
- Authored or co-authored research papers in reputable AI/ML conferences or impactful technical blog posts.
- Active participation in open-source AI/ML projects, Kaggle competitions, or similar initiatives.
- Experience working in startup or small, fast-paced environments.
Machine Learning Engineer
Data Science & Analytics
Research Scientist
Product & Operations
August 14, 2025
Founding (Senior) Product Designer (San Francisco or New York)
Lakera AI
Company size: 51-100
Salary: USD 170,000 - 193,000
Location: United States
Employment type: Full-time
Remote: No
About Lakera
Lakera is on a mission to ensure AI does what we want it to do. We are heading towards a future where AI agents run our businesses and personal lives. Here at Lakera, we're not just dreaming about the future; we're building the security foundation for it. We empower security teams and builders so that their businesses can adopt AI technologies and unleash the next phase of intelligent computing.

We work with Fortune 500 companies, startups, and foundation model providers to protect them and their users from adversarial misalignment. We are also the company behind Gandalf, the world's most popular AI security game. Lakera has offices in San Francisco and Zurich.

We move fast and work with intensity. We act as one team but expect everyone to take substantial ownership and accountability. We prioritize transparency at every level and are committed to always raising the bar in everything we do. We promote diversity of thought as we believe that creates the best outcomes.

Senior Product Designer
We're looking for an experienced Senior Product Designer to lead the design of innovative products at the forefront of securing AI systems in the enterprise. Reporting into the Head of Product, you'll work at the intersection of complex security challenges, intuitive user experiences and cutting edge AI technology. You'll work closely with product managers, engineers, and cross-functional teams to create user-centered designs that address our customers' unique challenges. This is an ideal role for a designer who thrives in complex problem spaces, loves creating intuitive solutions for technical users, and wants to shape how security teams interact with cutting-edge AI protection tools. As the first full-time product designer, you'll set the foundation for product design at Lakera.

What You'll Do & Your Impact:
- Lead design for innovative end-to-end features that streamline workflows for enterprise security teams securing AI systems
- Partner with product managers, engineers, and security experts to translate complex user needs into intuitive, user-centered experiences
- Craft high-fidelity mockups and prototypes that facilitate stakeholder feedback and ensure smooth handoffs to engineering
- Evolve our design system and component library to maintain consistency and delight across all features and platforms
- Conduct user research through interviews and usability testing to identify pain points and opportunities for improvement
- Promote design leadership through championing design thinking throughout Lakera
- Stay current with emerging trends in AI security technology and apply UX best practices to our unique context

Who You Are & What Makes You Qualified:
- 4-6 years of experience as a product designer, with a strong portfolio showcasing user-centered interfaces for complex applications
- Exceptional communication skills with the ability to effectively translate between user needs and technical requirements, with internal and external stakeholders
- Self-starter mentality with proven success thriving in fast-paced environments and managing competing priorities
- Strategic mindset that connects design decisions to user insights, business objectives, and security requirements
- Experience collaborating cross-functionally and adapting designs for different stakeholders and technical contexts
- Expert proficiency in modern design tools (Figma, etc.) and prototyping techniques
- Passion for AI security and improving how security teams protect critical systems
- Experience in Security or SaaS environments is a plus
- Experience leveraging AI in your work a plus

Logistics
Location: San Francisco
Logistics: Hybrid with 2 days per week in the office in downtown San Francisco OR Remote in New York (or east coast)
Deadline to Apply: Applications reviewed on a rolling basis.

Compensation & Benefits
Our total compensation package includes a competitive salary, equity, and benefits:
- Above-market equity grants.
- Health, dental, and vision insurance.
- 401k plan.
- Paid parental leave.
- Unlimited PTO.
- Wellness and commuter benefits.

At Lakera, we are dedicated to offering a highly competitive total rewards package, including cash compensation, equity, and comprehensive benefits. The final compensation for this role will be determined based on a variety of factors, including the scope and complexity of the position, as well as the candidate's experience and qualifications. For roles based in San Francisco, the estimated annual base salary range is $170,000–$193,000 USD.

👉 Let's stay connected! Follow us on LinkedIn, Twitter & Instagram to learn more about what is happening at Lakera.
ℹ️ Join us on Momentum, the slack community for AI Safety and Security everything.
❗To remove your information from our recruitment database, please email privacy@lakera.ai.
Product Designer
Creative & Design
Apply
August 14, 2025
Member of Technical Staff, Infrastructure & Scaling
Parallel
11-50
-
United States
Full-time
Remote
false
At Parallel Web Systems, we are bringing a new web to life: it's built with, by, and for AIs. Our work spans innovations across crawling, indexing, ranking, retrieval, and reasoning systems. Our first product is a set of APIs for AIs to do more with web data. We are a fully in-person team based in Palo Alto, CA. Our organization is flat; our team is small and talent-dense.

We want to talk to you if you are someone who can bring us closer to living our aspirational values:
- Own customer impact - It's on us to ensure real-world outcomes for our customers.
- Obsess over craft - Perfect every detail because quality compounds.
- Accelerate change - Ship fast, adapt faster, and move frontier ideas into production.
- Create win-wins - Creatively turn trade-offs into upside.
- Make high-conviction bets - Try and fail. But succeed an unfair amount.

Job: You will build, operate, and scale our infrastructure, including our infrastructure around large language models, and ensure that our systems are reliable and cost-efficient as we grow. You will anticipate bottlenecks before they appear, ensure that our architecture evolves to meet increasing demands, and build the tools and systems that keep engineering velocity high.

You: Have deep intuition on distributed systems, cloud platforms, performance tuning, and scalable architecture. You like to reason about trade-offs between cost, reliability, and speed of iteration. You care about your work enabling every team to build faster and ship confidently, and about infrastructure that can support products used by millions without breaking a sweat.

Our founder is Parag Agrawal. Previously, he was the CEO and CTO of Twitter. Our investors include First Round Capital, Index Ventures, Khosla Ventures, and many others.
MLOps / DevOps Engineer
Data Science & Analytics
Software Engineer
Software Engineering
Apply
August 14, 2025
Strategic Agent Product Manager
Decagon
101-200
USD
0
240000
-
285000
United States
Full-time
Remote
false
About Decagon
Decagon is the leading conversational AI platform empowering every brand to deliver a concierge customer experience. Our AI agents provide intelligent, human-like responses across chat, email, and voice, resolving millions of customer inquiries across every language and at any time.

Since coming out of stealth, Decagon has experienced rapid growth. We partner with industry leaders like Hertz, Eventbrite, Duolingo, Oura, Bilt, Curology, and Samsara to redefine customer experience at scale. We've raised over $200M from Bain Capital Ventures, Accel, a16z, BOND Capital, A*, Elad Gil, and notable angels such as the founders of Box, Airtable, Rippling, Okta, Lattice, and Klaviyo.

We're an in-office company, driven by a shared commitment to excellence and velocity. Our values—customers are everything, relentless momentum, winner's mindset, and stronger together—shape how we work and grow as a team.

About the Team
Over the past few years, the development of LLMs has evolved at a rapid pace. It's not enough for our customers to just "set it and forget it" when it comes to AI software. Truly successful AI agents require guidance and input throughout the development lifecycle.

The Agent Product Management team drives this journey as Decagon's in-house experts on building, deploying, and scaling AI agents. Agent PMs work directly with customers to bring their AI agents to life, and then grow each agent into a core part of each company's business. As one of our early APMs, you will deploy our technology into some of the world's most influential businesses, driving real-world business impact as one part product manager, one part AI expert.

The APM owns each step in the AI agent build lifecycle. This can include:
- Collaborating with engineering to design a new product feature
- Writing and testing prompt logic for a specific customer use case
- Working with a customer's executive team to define their AI roadmap

You'll partner closely with every team at Decagon: Go-To-Market, Design, Engineering, and across our leadership team. You will own and drive your customer builds and help shape the overall product roadmap by being the voice of the customer.

About the Role
We're looking for a Strategic APM who thrives in a highly autonomous environment. You're product-minded, scrappy, can drive highly complex projects across cross-functional teams, and are comfortable building relationships with some of the largest brands in the world.

Given their size, these strategic accounts often present the challenge of complexity, requiring the navigation of customer stakeholders across org charts. A successful Strat APM will be able to navigate this complexity seamlessly and develop deep, trusted relationships with key stakeholders at all levels of the customer organization (comfortable speaking to CX leaders, product leaders, and operations leaders).

APMs at Decagon own their own portfolio of agents end-to-end and are trusted to make real impact. You'll have the opportunity to dive deep into complex business problems, build elegant solutions, and then scale them out to millions of users, all while being part of the founding Strat APM team. This role is ideal for future founders, general managers, and business unit leaders.

In this role, you will
- Build, design, and optimize enterprise-quality AI agents in collaboration with Decagon's most strategic customers — understanding their workflows, pain points, and goals
- Embed deeply within strategic customers to understand their business challenges and serve as a strategic advisor to their AI roadmap
- Run tight feedback loops into Engineering — influence feature development based on real customer needs
- Represent Decagon externally — working closely with customers and prospects, participating in key deployments
- Collaborate closely with Decagon's C-suite and other executives to continue building the playbook for strategic logos at Decagon

Your background looks something like this
- 8+ years of relevant experience. This includes but is not limited to: senior manager or equivalent at a top-tier consulting or other professional services firm; partner at an investing firm; senior product leader or group product manager
- Deep technical acumen — able to understand and shape AI agent designs
- Strong communication and relationship-building skills
- Comfort working in fast-moving, ambiguous environments where you shape solutions as much as you implement them

Even better
- A Computer Science, Engineering, or Math degree — or equivalent technical experience
- An MBA

Benefits
- Medical, dental, and vision benefits
- Take-what-you-need vacation policy
- Daily lunches, dinners, and snacks in the office to keep you at your best

Compensation
$240K – $285K + equity
Product Manager
Product & Operations
Apply
August 14, 2025
Research Staff, Voice AI Foundations
Deepgram
201-500
USD
0
150000
-
220000
Anywhere
Full-time
Remote
true
Company Overview
Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS) and full speech-to-speech (STS) offerings. 200,000+ developers build with Deepgram's voice-native foundational models – accessed through APIs or as self-managed software – due to our unmatched accuracy, latency and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth across the past 4 years, over 50,000 years of audio processed and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.

The Opportunity
Voice is the most natural modality for human interaction with machines. However, current sequence modeling paradigms based on jointly scaling model and data cannot deliver voice AI capable of universal human interaction. The challenges are rooted in fundamental data problems posed by audio: real-world audio data is scarce and enormously diverse, spanning a vast space of voices, speaking styles, and acoustic conditions. Even if billions of hours of audio were accessible, its inherent high dimensionality creates computational and storage costs that make training and deployment prohibitively expensive at world scale. We believe that entirely new paradigms for audio AI are needed to overcome these challenges and make voice interaction accessible to everyone.
The Role
As a Member of the Research Staff, you will pioneer the development of Latent Space Models (LSMs), a new approach that aims to solve the fundamental data, scale, and cost challenges associated with building robust, contextualized voice AI. Your research will focus on solving one or more of the following problems:
- Build next-generation neural audio codecs that achieve extreme low-bit-rate compression and high-fidelity reconstruction across a world-scale corpus of general audio.
- Pioneer steerable generative models that can synthesize the full diversity of human speech from the codec latent representation, from casual conversation to highly emotional expression to complex multi-speaker scenarios with environmental noise and overlapping speech.
- Develop embedding systems that cleanly factorize the codec latent space into interpretable dimensions of speaker, content, style, environment, and channel effects -- enabling precise control over each aspect and the ability to massively amplify an existing seed dataset through "latent recombination".
- Leverage latent recombination to generate synthetic audio data at previously impossible scales, unlocking joint model and data scaling paradigms for audio.
- Endeavor to train multimodal speech-to-speech systems that can 1) understand any human irrespective of their demographics, state, or environment and 2) produce empathic, human-like responses that achieve conversational or task-oriented objectives.
- Design model architectures, training schemes, and inference algorithms that are adapted for hardware at the bare metal, enabling cost-efficient training on billion-hour datasets and powering real-time inference for hundreds of millions of concurrent conversations.
(A toy sketch of the discrete-codec idea appears at the end of this section.)

The Challenge
We are seeking researchers who:
- See "unsolved" problems as opportunities to pioneer entirely new approaches
- Can identify the one critical experiment that will validate or kill an idea in days, not months
- Have the vision to scale successful proofs-of-concept 100x
- Are obsessed with using AI to automate and amplify their own impact

If you find yourself energized rather than daunted by these expectations—if you're already thinking about five ideas to try while reading this—you might be the researcher we need. This role demands obsession with the problems, creativity in approach, and relentless drive toward elegant, scalable solutions. The technical challenges are immense, but the potential impact is transformative.
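To make the codec problem above concrete, here is a minimal, hypothetical sketch of the discrete-bottleneck idea behind neural audio codecs, in the spirit of the VQ-VAE and SoundStream papers cited later in this posting. It is an illustration only, not Deepgram's actual LSM design; the codebook size, latent dimension, and commitment weight are arbitrary choices for the example.

```python
# Minimal VQ-style bottleneck: map continuous latents to a learned codebook.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Snaps continuous encoder outputs to the nearest entry of a learned codebook."""
    def __init__(self, num_codes: int = 1024, dim: int = 64, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)  # learned code vectors
        self.beta = beta                              # commitment-loss weight

    def forward(self, z_e: torch.Tensor):
        # z_e: (batch, time, dim) continuous latents from some encoder
        flat = z_e.reshape(-1, z_e.shape[-1])
        # squared distance from every latent vector to every codebook entry
        d = (flat.pow(2).sum(1, keepdim=True)
             - 2 * flat @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))
        idx = d.argmin(dim=1)                         # nearest-code indices
        z_q = self.codebook(idx).view_as(z_e)         # quantized latents
        # VQ-VAE codebook + commitment loss terms
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        # straight-through estimator so gradients reach the encoder
        z_q = z_e + (z_q - z_e).detach()
        return z_q, idx, loss

# Tiny smoke test on random "latents"
vq = VectorQuantizer()
z_q, codes, loss = vq(torch.randn(2, 50, 64))
print(z_q.shape, codes.shape, loss.item())
```

The straight-through trick is what lets the encoder receive gradients despite the non-differentiable nearest-neighbor lookup; a real codec would wrap this bottleneck between convolutional encoder/decoder stacks and add reconstruction and adversarial losses.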
It's Important to Us That You Have
- Strong mathematical foundation in statistical learning theory, particularly in areas relevant to self-supervised and multimodal learning
- Deep expertise in foundation model architectures, with an understanding of how to scale training across multiple modalities
- Proven ability to bridge theory and practice—someone who can both derive novel mathematical formulations and implement them efficiently
- Demonstrated ability to build data pipelines that can process and curate massive datasets while maintaining quality and diversity
- Track record of designing controlled experiments that isolate the impact of architectural innovations and validate theoretical insights
- Experience optimizing models for real-world deployment, including knowledge of hardware constraints and efficiency techniques
- History of open-source contributions or research publications that have advanced the state of the art in speech/language AI
How We Generated This Job Description
This job description was generated in two parts. The "Opportunity", "Role", and "Challenge" sections were written by a human using Claude-3.5-sonnet as a writing partner. The objective of these sections is to clearly state the problem that Deepgram is attempting to solve, how we intend to solve it, and some guidelines to help you decide if Deepgram is right for you. Therefore, it was important that these sections were articulated by a human. The "It's Important to Us" section was automatically derived from a multi-stage LLM analysis (using o1) of key foundational deep learning papers related to our research goals. This work was completed as an experiment to test the hypothesis that the traits of highly productive and impactful researchers are reflected directly in their work. The analysis focused on understanding how successful researchers approach problems, from mathematical foundations through to practical deployment. The problems Deepgram aims to solve are immensely difficult and span multiple disciplines and specialties. As such, we chose seminal papers that we believe reflect the pioneering work and exemplary human characteristics needed for success. The LLM analysis culminates in an "Ideal Researcher Profile", which is reproduced below along with the list of foundational papers.
Ideal Researcher Profile
An ideal researcher, as evidenced by the recurring themes across these foundational papers, excels in five key areas: (1) Statistical & Mathematical Foundations, (2) Algorithmic Innovation & Implementation, (3) Data-Driven & Scalable Systems, (4) Hardware & Systems Understanding, and (5) Rigorous Experimental Design. Below is a synthesis of how each paper highlights these qualities, with references illustrating why they matter for building robust, impactful deep learning models.
1. Statistical & Mathematical Foundations
Mastery of Core Concepts
Many papers, like Scaling Laws for Neural Language Models and Neural Discrete Representation Learning (VQ-VAE), reflect the importance of power-law analyses, derivation of novel losses, or adaptation of fundamental equations (e.g., in VQ-VAE's commitment loss or rectified flows in Scaling Rectified Flow Transformers). Such mathematical grounding clarifies why models converge or suffer collapse. (A toy power-law fit is sketched after this section.)
Combining Existing Theories in Novel Ways
Papers such as Moshi (combining text modeling, audio codecs, and hierarchical generative modeling) and Finite Scalar Quantization (FSQ's adaptation of classic scalar quantization to replace vector-quantized representations) show how reusing but reimagining known techniques can yield breakthroughs. Many references (e.g., the structured state-space duality in Transformers are SSMs) underscore how unifying previously separate research lines can reveal powerful algorithmic or theoretical insights.
Logical Reasoning and Assumption Testing
Across all papers—particularly in the problem statements of Whisper or Rectified Flow Transformers—the authors present assumptions (e.g., "scaling data leads to zero-shot robustness" or "straight-line noise injection improves sample efficiency") and systematically verify them with thorough empirical results. An ideal researcher similarly grounds new ideas in well-formed, testable hypotheses.
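As a concrete illustration of the power-law analyses mentioned above (in the spirit of Scaling Laws for Neural Language Models), the snippet below fits L(N) ≈ (N_c / N)^α to a handful of loss measurements by linear regression in log-log space. The loss values are synthetic, invented purely for the example; they are not real training results.

```python
# Toy power-law fit: validation loss vs. model size on made-up data.
import numpy as np

# model sizes (parameters) and "observed" losses (synthetic)
N = np.array([1e6, 1e7, 1e8, 1e9])
L = 2.0 * (1e9 / N) ** 0.076 + np.random.default_rng(0).normal(0, 0.01, N.size)

# fit L(N) ~ (Nc / N)^alpha  <=>  log L = alpha*log(Nc) - alpha*log(N)
slope, intercept = np.polyfit(np.log(N), np.log(L), 1)
alpha = -slope
Nc = np.exp(intercept / alpha)
print(f"fitted alpha ≈ {alpha:.3f}, Nc ≈ {Nc:.2e}")
```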
2. Algorithmic Innovation & Implementation
Creative Solutions to Known Bottlenecks
Each paper puts forth a unique algorithmic contribution—Rectified Flow Transformers redefines standard diffusion paths, FSQ proposes simpler scalar quantizations contrasted with VQ (a toy version is sketched after this section), phi-3-mini relies on curated data and blocksparse attention, and Mamba-2 merges SSM speed with attention concepts.
Turning Theory into Practice
Whether it's the direct preference optimization (DPO) for alignment in phi-3 or the residual vector quantization in SoundStream, these works show that bridging design insights with implementable prototypes is essential.
Clear Impact Through Prototypes & Open Source
Many references (Whisper, Neural Discrete Representation Learning, Mamba-2) highlight releasing code or pretrained models, enabling the broader community to replicate and build upon new methods. This premise of collaboration fosters faster progress.
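To illustrate the FSQ idea contrasted with VQ above, here is a toy, hypothetical version of finite scalar quantization: each latent dimension is bounded and rounded to a few fixed levels, so no codebook lookup or commitment loss is needed. The level count is arbitrary and this is not the paper's exact formulation.

```python
# Toy finite scalar quantization (FSQ-style) with a straight-through estimator.
import torch

def fsq(z: torch.Tensor, levels: int = 5) -> torch.Tensor:
    """Bound each dimension to (-1, 1), then snap it to `levels` evenly spaced values."""
    half = (levels - 1) / 2
    bounded = torch.tanh(z)                         # keep values in (-1, 1)
    quantized = torch.round(bounded * half) / half  # e.g. {-1, -0.5, 0, 0.5, 1}
    # straight-through estimator so gradients pass to the encoder
    return bounded + (quantized - bounded).detach()

z = torch.randn(2, 10, 4, requires_grad=True)       # (batch, time, latent dims)
z_q = fsq(z)
print(z_q.detach().unique())                        # at most `levels` distinct values
```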
3. Data-Driven & Scalable Systems
Emphasis on Large-Scale Data and Efficient Pipelines
Papers such as Robust Speech Recognition via Large-Scale Weak Supervision (Whisper) and BASE TTS demonstrate that collecting and processing hundreds of thousands of hours of real-world audio can unlock new capabilities in zero-shot or low-resource domains. Meanwhile, the phi-3 Technical Report shows that filtering and curating data at scale (e.g., a "data optimal regime") can yield high performance even in smaller models.
Strategic Use of Data for Staged Training
A recurring strategy is to vary sources of data or the order of tasks. Whisper trains on multilingual tasks, BASE TTS uses subsets/stages for pretraining on speech tokens, and phi-3 deploys multiple training phases (web data, then synthetic data). This systematic approach to data underscores how an ideal researcher designs training curricula and data filtering protocols for maximum performance.
4. Hardware & Systems Understanding
Efficient Implementations at Scale
Many works illustrate how researchers tune architectures for modern accelerators: the In-Datacenter TPU paper exemplifies domain-specific hardware design for dense matrix multiplications, while phi-3 leverages blocksparse attention and custom Triton kernels to run advanced LLMs on resource-limited devices.
Real-Time & On-Device Constraints
SoundStream shows how to compress audio in real time on a smartphone CPU, demonstrating that knowledge of hardware constraints (latency, limited memory) drives design choices. Similarly, Moshi's low-latency streaming TTS and phi-3-mini's phone-based inference highlight that an ideal researcher must adapt algorithms to resource limits while maintaining robustness.
Architectural & Optimization Details
Papers like Transformers are SSMs (Mamba-2) and the In-Datacenter TPU work show how exploiting specialized matrix decompositions, custom memory hierarchies, or quantization approaches can lead to breakthroughs in speed or energy efficiency.
5. Rigorous Experimental Design
Controlled Comparisons & Ablations
Nearly all papers—Whisper, FSQ, Mamba-2, BASE TTS—use systematic ablations to isolate the impact of individual components (e.g., ablation on vector quantization vs. scalar quantization in FSQ, or the size of codebooks in VQ-VAEs). This approach reveals which design decisions truly matter.
Multifold Evaluation Metrics
From MUSHRA listening tests (SoundStream, BASE TTS) to FID in image synthesis (Scaling Rectified Flow Transformers, FSQ) to perplexity or zero-shot generalization in language (phi-3, Scaling Laws for Neural Language Models), the works demonstrate the value of comprehensive, carefully chosen metrics.
Stress Tests & Edge Cases
Whisper's out-of-distribution speech benchmarks, SoundStream's evaluation on speech + music, and Mamba-2's performance on multi-query associative recall demonstrate the importance of specialized challenge sets. Researchers who craft or adopt rigorous benchmarks and "red-team" their models (as in phi-3's safety alignment) are better prepared to address real-world complexities.
Summary
Overall, an ideal researcher in deep learning consistently demonstrates:
- A solid grounding in theoretical and statistical principles
- A talent for proposing and validating new algorithmic solutions
- The capacity to orchestrate data pipelines that scale and reflect real-world diversity
- Awareness of hardware constraints and system-level trade-offs for efficiency
- Thorough and transparent experimental practices

These qualities surface across research on speech (Whisper, BASE TTS), language modeling (Scaling Laws, phi-3), specialized hardware (TPU, Transformers are SSMs), and new representation methods (VQ-VAE, FSQ, SoundStream). By balancing these attributes—rigorous math, innovative algorithms, large-scale data engineering, hardware-savvy optimizations, and reproducible experimentation—researchers can produce impactful, trustworthy advancements in foundational deep learning.
Foundational Papers
This job description was generated through analysis of the following papers:
- Robust Speech Recognition via Large-Scale Weak Supervision (arXiv:2212.04356)
- Moshi: a speech-text foundation model for real-time dialogue (arXiv:2410.00037)
- Scaling Rectified Flow Transformers for High-Resolution Image Synthesis (arXiv:2403.03206)
- Scaling Laws for Neural Language Models (arXiv:2001.08361)
- BASE TTS: Lessons from building a billion-parameter Text-to-Speech model on 100K hours of data (arXiv:2402.08093)
- In-Datacenter Performance Analysis of a Tensor Processing Unit (arXiv:1704.04760)
- Neural Discrete Representation Learning (arXiv:1711.00937)
- SoundStream: An End-to-End Neural Audio Codec (arXiv:2107.03312)
- Finite Scalar Quantization: VQ-VAE Made Simple (arXiv:2309.15505)
- Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone (arXiv:2404.14219)
- Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality (arXiv:2405.21060)

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $85 million in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!

Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.

We are happy to provide accommodations for applicants who need them.
Research Scientist
Product & Operations
Machine Learning Engineer
Data Science & Analytics
Apply
August 14, 2025
Research Staff, LLMs
Deepgram
201-500
USD
0
150000
-
220000
United States
Full-time
Remote
true
Company Overview
Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS) and full speech-to-speech (STS) offerings. 200,000+ developers build with Deepgram's voice-native foundational models – accessed through APIs or as self-managed software – due to our unmatched accuracy, latency and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth across the past 4 years, over 50,000 years of audio processed and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.

The Opportunity
Voice is the most natural modality for human interaction with machines. However, current sequence modeling paradigms based on jointly scaling model and data cannot deliver voice AI capable of universal human interaction. The challenges are rooted in fundamental data problems posed by audio: real-world audio data is scarce and enormously diverse, spanning a vast space of voices, speaking styles, and acoustic conditions. Even if billions of hours of audio were accessible, its inherent high dimensionality creates computational and storage costs that make training and deployment prohibitively expensive at world scale. We believe that entirely new paradigms for audio AI are needed to overcome these challenges and make voice interaction accessible to everyone.
The Role
Deepgram is currently looking for an experienced researcher who has worked extensively with Large Language Models (LLMs) and has a deep understanding of transformer architecture to join our Research Staff. As a Member of the Research Staff, this individual should have extensive experience working on the hard technical aspects of LLMs, such as data curation, distributed large-scale training, optimization of transformer architecture, and Reinforcement Learning (RL) training.

The Challenge
We are seeking researchers who:
- See "unsolved" problems as opportunities to pioneer entirely new approaches
- Can identify the one critical experiment that will validate or kill an idea in days, not months
- Have the vision to scale successful proofs-of-concept 100x
- Are obsessed with using AI to automate and amplify their own impact

If you find yourself energized rather than daunted by these expectations—if you're already thinking about five ideas to try while reading this—you might be the researcher we need. This role demands obsession with the problems, creativity in approach, and relentless drive toward elegant, scalable solutions. The technical challenges are immense, but the potential impact is transformative.

What You'll Do
- Brainstorming and collaborating with other members of the Research Staff to define new LLM research initiatives
- Broad surveying of literature, evaluating, classifying, and distilling current methods
- Designing and carrying out experimental programs for LLMs
- Driving transformer (LLM) training jobs successfully on distributed compute infrastructure and deploying new models into production (a toy illustration of the underlying training objective follows below)
- Documenting and presenting results and complex technical concepts clearly for a target audience
- Staying up to date with the latest advances in deep learning and LLMs, with a particular eye toward their implications and applications within our products

You'll Love This Role if You
- Are passionate about AI and excited about working on state-of-the-art LLM research
- Have an interest in producing and applying new science to help us develop and deploy large language models
- Enjoy building from the ground up and love to create new systems
- Have strong communication skills and are able to translate complex concepts clearly
- Are highly analytical and enjoy delving into detailed analyses when necessary
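The training-related bullets above reference the auto-regressive objective at the heart of LLM work. The sketch below is a deliberately tiny, hypothetical causal-LM training step in PyTorch; it is illustrative only, not Deepgram's training stack, and omits positional embeddings, distributed data parallelism, mixed precision, and checkpointing.

```python
# Toy causal language model and one next-token-prediction training step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCausalLM(nn.Module):
    def __init__(self, vocab: int = 1000, dim: int = 128, heads: int = 4, layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)          # positional embeddings omitted for brevity
        block = nn.TransformerEncoderLayer(dim, heads, dim_feedforward=4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, num_layers=layers)
        self.lm_head = nn.Linear(dim, vocab)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # causal mask so position t can only attend to positions <= t
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.blocks(self.embed(tokens), mask=mask)
        return self.lm_head(h)                          # (batch, seq, vocab) logits

model = TinyCausalLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

tokens = torch.randint(0, 1000, (8, 64))               # fake token batch
logits = model(tokens[:, :-1])                          # predict each next token
loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
opt.zero_grad()
loss.backward()
opt.step()
print(f"next-token loss: {loss.item():.3f}")
```

In practice this loop would be wrapped in a distributed framework such as torch.distributed or FSDP to scale across many accelerators.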
It's Important to Us That You Have
- 3+ years of experience in applied deep learning research, with a solid understanding of the applications and implications of different neural network types, architectures, and loss mechanisms
- Proven experience working with large language models (LLMs), including data curation, distributed large-scale training, optimization of transformer architecture, and RL training
- Strong experience coding in Python and working with PyTorch
- Experience with various transformer architectures (auto-regressive, sequence-to-sequence, etc.)
- Experience with distributed computing and large-scale data processing
- Prior experience conducting experimental programs and using the results to optimize models

It Would Be Great if You Had
- Deep understanding of transformers, causal LMs, and their underlying architecture
- Understanding of distributed training and distributed inference schemes for LLMs
- Familiarity with RLHF labeling and training pipelines
- Up-to-date knowledge of recent LLM techniques and developments
- Published papers in deep learning research, particularly related to LLMs and deep neural networks

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $85 million in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!

Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.

We are happy to provide accommodations for applicants who need them.
Research Scientist
Product & Operations
Machine Learning Engineer
Data Science & Analytics
Apply
August 14, 2025
Research Staff, Data Science
Deepgram
201-500
USD
0
150000
-
220000
Anywhere
Full-time
Remote
true
Company Overview
Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS) and full speech-to-speech (STS) offerings. 200,000+ developers build with Deepgram's voice-native foundational models – accessed through APIs or as self-managed software – due to our unmatched accuracy, latency and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth across the past 4 years, over 50,000 years of audio processed and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.

The Opportunity
Voice is the most natural modality for human interaction with machines. However, current sequence modeling paradigms based on jointly scaling model and data cannot deliver voice AI capable of universal human interaction. The challenges are rooted in fundamental data problems posed by audio: real-world audio data is scarce and enormously diverse, spanning a vast space of voices, speaking styles, and acoustic conditions. Even if billions of hours of audio were accessible, its inherent high dimensionality creates computational and storage costs that make training and deployment prohibitively expensive at world scale. We believe that entirely new paradigms for audio AI are needed to overcome these challenges and make voice interaction accessible to everyone.
The Role
Deepgram is currently looking for seasoned data scientists with demonstrated experience solving hard data problems while exploring research frontiers to join our Research Staff. Conversational audio presents incredibly rich scientific, engineering, and infrastructure challenges that are orders of magnitude harder than working with text. As a Member of the Research Staff, you will help us build an industrial "data factory" that will be used to power the next generation of voice AI systems - unlocking the creation of models that go beyond basic transcription and comprehension, capturing nuanced meanings in complex conversations, adapting robustly to diverse speech patterns, and generating empathic responses with human-like, contextualized speech. You will collaborate closely with our product, engineering, and data teams to build and deploy models in the most scalable voice API on the planet. We look forward to you bringing your expertise, sharing insights from your latest experiments, and collaborating with us to push the boundaries of AI and voice technology.

The Challenge
We are seeking Research Staff who:
- See "unsolved" problems as opportunities to pioneer entirely new approaches
- Can identify the one critical experiment that will validate or kill an idea in days, not months
- Have the vision to scale successful proofs-of-concept 100x
- Are obsessed with using AI to automate and amplify their own impact

If you find yourself energized rather than daunted by these expectations—if you're already thinking about five ideas to try while reading this—you might be the researcher we need. This role demands obsession with the problems, creativity in approach, and relentless drive toward elegant, scalable solutions. The technical challenges are immense, but the potential impact is transformative.

What You'll Do
- Drive high-performance data acquisition, preparation, and synthesis pipelines to generate data for the next generation of speech and language AI foundation models
- Develop advanced characterizations of complex conversational audio utilizing a diverse toolkit of signal processing techniques and deep learning models (a small illustrative example follows this section)
- Collaborate with DataOps and Engineering to create automated systems which scale the ability of human annotators to label high-value data and provide critical feedback on model outputs
- Build advanced benchmarking methodologies and curated datasets for evaluating conversational voice systems
- Document and present results of data experiments and analysis for internal and external audiences

You'll Love This Role If You
- Are obsessed with making sense out of complex and/or messy data
- Enjoy building from the ground up and love to create new systems from scratch
- Are passionate about AI and interested in leveraging data to solve hard problems
- Are motivated by the prospect of scaling yourself using automation and AI models
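As a toy example of the audio-characterization work described above, the snippet below computes a few basic per-clip statistics (duration, RMS level, and a crude energy-based activity ratio) on a synthetic waveform. Real pipelines at this scale would rely on far richer signal features and learned models; treat this purely as an illustration.

```python
# Basic "characterization" stats for one audio clip, on a synthetic waveform.
import numpy as np

sr = 16_000                                   # sample rate (Hz)
t = np.arange(0, 3.0, 1 / sr)
audio = 0.1 * np.sin(2 * np.pi * 220 * t)     # stand-in for a loaded waveform
audio[:sr] = 0.0                              # pretend the first second is silence

frame = 400                                   # 25 ms frames at 16 kHz
frames = audio[: len(audio) // frame * frame].reshape(-1, frame)
rms = np.sqrt((frames ** 2).mean(axis=1))     # per-frame RMS energy

stats = {
    "duration_s": len(audio) / sr,
    "rms_dbfs": 20 * np.log10(np.sqrt((audio ** 2).mean()) + 1e-12),
    "active_ratio": float((rms > 0.01).mean()),  # crude energy-based activity detector
}
print(stats)
```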
It's Important to Us That You Have
- Experience building data processing pipelines from a blank page and owning the entire data stack, including data acquisition, characterization, cleaning, serving, and transformation
- Experience and expertise applying statistical methods and deep learning models to understand complex data
- Strong communication skills and the ability to translate complex concepts in simple terms, depending on the target audience
- Strong software engineering skills, with particular emphasis on developing clean, modular code in Python and working with PyTorch

Nice to Haves
- Background in physics, mechanical engineering, or language processing
- Experience building models
- Speech and audio experience

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $85 million in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!

Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.

We are happy to provide accommodations for applicants who need them.
Data Scientist
Data Science & Analytics
Machine Learning Engineer
Data Science & Analytics
Apply
August 14, 2025
Research Staff, Machine Learning Engineer
Deepgram
201-500
USD
0
150000
-
220000
Anywhere
Full-time
Remote
true
Company Overview
Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS) and full speech-to-speech (STS) offerings. 200,000+ developers build with Deepgram's voice-native foundational models – accessed through APIs or as self-managed software – due to our unmatched accuracy, latency and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth across the past 4 years, over 50,000 years of audio processed and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.

The Opportunity
Voice is the most natural modality for human interaction with machines. However, current sequence modeling paradigms based on jointly scaling model and data cannot deliver voice AI capable of universal human interaction. The challenges are rooted in fundamental data problems posed by audio: real-world audio data is scarce and enormously diverse, spanning a vast space of voices, speaking styles, and acoustic conditions. Even if billions of hours of audio were accessible, its inherent high dimensionality creates computational and storage costs that make training and deployment prohibitively expensive at world scale. We believe that entirely new paradigms for audio AI are needed to overcome these challenges and make voice interaction accessible to everyone.
The Role
Deepgram is seeking a highly skilled and versatile Machine Learning Engineer to join our Research Staff team. As a Member of the Research Staff, this role focuses on scaling training systems for speech-related technologies, building internal tools, and driving innovation in data strategies. You'll work at the intersection of machine learning, data infrastructure, and internal tooling to support our mission of building world-class speech recognition and synthesis systems.

Key Responsibilities
Scalable Model Training: Architect and manage horizontally scalable training systems for speech-to-text (STT) and text-to-speech (TTS) models across diverse domains, including but not limited to non-English languages, new use cases, and customer-centric models. These systems include data preparation and management, training pipelines, and automated evaluation tooling.
Tooling & Accessibility: Design and implement internal UIs and tools that make ML systems and workflows accessible to non-technical stakeholders across the company. These UIs should be designed to provide transparency and flexibility for internally built tooling.
Infrastructure & Tools: Oversee and manage training tooling, job orchestration, experiment tracking, and data storage.

The Challenge
We are seeking Members of the Research Staff who:
- See "unsolved" problems as opportunities to pioneer entirely new approaches
- Can identify the one critical experiment that will validate or kill an idea in days, not months
- Have the vision to scale successful proofs-of-concept 100x
- Are obsessed with using AI to automate and amplify their own impact

If you find yourself energized rather than daunted by these expectations—if you're already thinking about five ideas to try while reading this—you might be the researcher we need. This role demands obsession with the problems, creativity in approach, and relentless drive toward elegant, scalable solutions. The technical challenges are immense, but the potential impact is transformative.
It's Important to Us That You Have
- Strong experience in training large-scale machine learning systems, particularly in STT or related speech domains
- Proficiency with orchestration and infrastructure tools like Kubernetes, Docker, and Prefect
- Familiarity with ML lifecycle tools such as MLflow (a small experiment-tracking sketch follows below)
- Experience building internal tools or dashboards for non-technical users
- Hands-on experience with data engineering practices for unstructured audio and text data
- Comfort working in cross-functional teams that include researchers, engineers, and product stakeholders
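As a small, hypothetical illustration of the experiment-tracking workflow implied by the MLflow requirement above, the snippet below logs parameters and metrics for a fake training run. The experiment name, parameters, and metric values are all invented for the example.

```python
# Minimal MLflow experiment-tracking sketch with made-up values.
import random
import mlflow

mlflow.set_experiment("stt-training-demo")         # assumed experiment name

with mlflow.start_run(run_name="tiny-demo-run"):
    mlflow.log_params({"lr": 3e-4, "batch_size": 64, "language": "es"})
    loss = 2.5
    for step in range(10):
        loss *= 0.9 + random.uniform(-0.02, 0.02)   # stand-in for real training progress
        mlflow.log_metric("train_loss", loss, step=step)
    mlflow.log_metric("wer", 0.12)                  # final (fake) word error rate
```

By default the run is written to a local ./mlruns directory; pointing MLflow at a shared tracking server is what makes runs comparable across a team.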
Nice to Have
- Deep understanding of evaluation metrics and benchmarking techniques for ASR and/or TTS systems

Why Join Deepgram?
At Deepgram, you'll help shape the future of human–machine communication. Our research culture prioritizes ownership, experimentation, and real-world impact. As a Member of the Research Staff, you'll be empowered to build tools and systems that accelerate ML research and product deployment at scale.

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $85 million in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!

Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.

We are happy to provide accommodations for applicants who need them.
Machine Learning Engineer
Data Science & Analytics
Apply
August 14, 2025