Top AI Engineer Job Openings in 2025
Looking for opportunities as an AI Engineer? This curated list features the latest AI Engineer job openings from AI-native companies. Whether you're an experienced professional or just entering the field, you'll find roles that match your expertise, from startups to global tech leaders. Updated every day.
Medical AI Specialist
Heidi Health · 201-500 employees · Australia · Full-time · Remote: false
Who are Heidi?
Heidi is building an AI Care Partner that supports clinicians every step of the way, from documentation to delivery of care. We exist to double healthcare's capacity while keeping care deeply human. In 18 months, Heidi has returned more than 18 million hours to clinicians and supported over 73 million patient visits. Today, more than two million patient visits each week are powered by Heidi across 116 countries and over 110 languages.
Founded by clinicians, Heidi brings together clinicians, engineers, designers, scientists, creatives, and mathematicians, working with a shared purpose: to strengthen the human connection at the heart of healthcare.
Backed by nearly $100 million in total funding, Heidi is expanding across the USA, UK, Canada, and Europe, partnering with major health systems including the NHS, Beth Israel Lahey Health, MaineGeneral, and Monash Health, among others.
We move quickly where it matters and stay grounded in what's proven, shaping healthcare's next era. Ready for the challenge?
The Role
We're looking for a qualified doctor or clinician with AI/ML expertise to join our Medical Knowledge (MK) team as a Medical AI Specialist. You'll bring your clinical expertise, as well as technical expertise in software, AI, and machine learning, into a fast-paced, hyper-growth startup environment.
In this role you'll:
- Help shape our AI products by infusing medical insights with deep technical knowledge
- Collaborate across teams to turn clinician needs into reliable, high-impact AI features
What you'll do:
- Own the clinical integrity of Heidi: advise clinicians, troubleshoot edge cases, and validate outputs
- Partner with product, engineering, and design teams to engineer prompts, build agentic AI, create model evaluation frameworks, and fine-tune models
- Build cutting-edge AI features within Heidi using the latest tools and frameworks
- Plan and collaborate on research studies that prove our AI scribe's clinical accuracy and user impact
- Advise leadership on clinical quality and product roadmap priorities
- Mentor colleagues and help build a culture of clinical excellence and collaboration
What we will look for:
- Clinician with AI product experience: previous hands-on experience building scalable AI products or features for healthcare settings
- LLM & foundation-model expertise: deep experience with large language models and provider APIs (OpenAI, Claude, Gemini), building LLM-powered healthcare products and features
- Prompt engineering & safety: proven track record crafting, iterating, and testing prompts; mitigating bias and hallucinations; embedding clinical safety guardrails
- Cross-functional collaboration: translate clinical needs into technical specs, partner closely with engineering and product teams, and embed quality controls for clinician trust
- Performance optimisation: optimise AI features for accuracy, latency, scalability, and cost ("fast, safe, cheap")
- RAG pipelines & evaluation: build and maintain retrieval-augmented generation pipelines and automated evaluation workflows (traditional ML metrics, LLM-as-judge, A/B tests, clinician feedback loops)
- AI agent design & fine-tuning: design, build, and orchestrate multi-step AI agents; fine-tune or adapt models on medical datasets
- Semantic search & vector DBs: hands-on experience with embeddings, vector databases, and agent frameworks like LangChain
- Software engineering & ML fundamentals: fundamental understanding of Python, JavaScript, SQL; experience with cloud platforms and ML frameworks
What do we believe in?
We create unconventional solutions to difficult problems and we build them fast. We want you to set impossible goals and make them happen: think landing a rocket, but the medical version.
You'll be surrounded by a world-class team of engineers, medicos, and designers to do your best work, inspired by our shared beliefs:
- We will stop at nothing to improve patient care across the world.
- We design user experiences for joy and ship them fast.
- We make decisions in a flat hierarchy that prioritises the truth over rank.
- We provide the resources for people to succeed and give them the freedom to do it.
Why you will flourish with us 🚀
- Flexible hybrid working environment, with 3 days in the office
- Additional paid day off for your birthday
- Special corporate rates at Anytime Fitness in Melbourne (Sydney TBC)
- A generous personal development budget of $500 per annum
- Learn from some of the best engineers and creatives, joining a diverse team
- Become an owner, with shares (equity) in the company: if Heidi wins, we all win
- The rare chance to create a global impact as you immerse yourself in one of Australia's leading healthtech startups
- If you have an impact quickly, the opportunity to fast-track your startup career!
Help us reimagine and change the face of healthcare around the world.
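The RAG-pipeline skills listed above can be illustrated with a deliberately minimal sketch: retrieve the most relevant documents for a query, then ground the answer in that context. Everything here (the toy bag-of-words embedding, the sample corpus, the stubbed answer step) is an assumption for illustration; a production pipeline would use learned embeddings, a vector database, and a real LLM call.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would use a learned model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank the corpus by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def answer(query: str, docs: list[str]) -> str:
    # Ground the generation step in retrieved context; the "LLM call"
    # is stubbed out as a formatted string for illustration.
    context = " | ".join(retrieve(query, docs))
    return f"[answer grounded in: {context}]"

corpus = [
    "metformin is first line therapy for type 2 diabetes",
    "aspirin reduces platelet aggregation",
    "insulin dosing depends on carbohydrate intake",
]
print(answer("first line treatment for type 2 diabetes", corpus))
```

The evaluation half of that bullet (LLM-as-judge, clinician feedback loops) would then score such grounded answers against reference outputs rather than trusting the retrieval step alone.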
Apply
October 31, 2025
Staff AI Engineer (Orchestration)
Heidi Health · 201-500 employees · Australia · Full-time · Remote: false
Who are Heidi?
Heidi is building an AI Care Partner that supports clinicians every step of the way, from documentation to delivery of care. We exist to double healthcare's capacity while keeping care deeply human. In 18 months, Heidi has returned more than 18 million hours to clinicians and supported over 73 million patient visits. Today, more than two million patient visits each week are powered by Heidi across 116 countries and over 110 languages.
Founded by clinicians, Heidi brings together clinicians, engineers, designers, scientists, creatives, and mathematicians, working with a shared purpose: to strengthen the human connection at the heart of healthcare.
Backed by nearly $100 million in total funding, Heidi is expanding across the USA, UK, Canada, and Europe, partnering with major health systems including the NHS, Beth Israel Lahey Health, MaineGeneral, and Monash Health, among others.
We move quickly where it matters and stay grounded in what's proven, shaping healthcare's next era. Ready for the challenge?
The Role
You will operate as a Staff+ AI scientist/engineer on the Orchestration team. You will own the design and delivery of a clinician-grade retrieval and question-answering stack across data ingestion, indexing, ranking, grounding, and safe deployment. You will set technical direction, establish quality bars, and lead cross-functional execution with engineering, product, and clinical experts. You will move between research and production, turning prototypes into reliable services with clear SLAs, traceable outputs, and unit/acceptance metrics that matter in clinical contexts.
What you'll do:
- Define the end-to-end architecture for literature and guideline ingestion, normalization, metadata extraction, de-duplication, and versioning.
- Build hybrid search and retrieval: lexical + vector + re-ranking, with tight latency budgets and cost controls.
- Design grounding and answer synthesis that cite sources, preserve provenance, and expose confidence and abstention.
- Lead model work across prompting, fine-tuning, distillation, and tool use to improve faithfulness, coverage, and utility.
- Stand up gold-standard evaluation: offline IR metrics (nDCG, MAP, recall), factuality/faithfulness audits, and human review with adjudication.
- Run online experiments at scale. Define guardrails and KPIs, and ship A/B tests to measure impact on clinician workflows.
- Productionize services with observability, tracing, canaries, rollbacks, and incident playbooks.
- Set data governance for medical content: access control, PHI handling, audit logs, and retention policies.
- Partner with clinicians to define intents, schemas, and acceptance criteria. Convert ambiguous questions into testable specs.
- Coach engineers and scientists. Raise the technical bar through design docs, reviews, and reusable components.
What we will look for:
- Staff-level track record shipping search, NLP, or LLM systems that serve real users at scale.
- Mastery of Python and SQL. Strong software engineering fundamentals, testing strategy, and API/service design.
- Depth in modern IR/NLP: embeddings, ANN indexes, re-rankers, retrieval-augmented generation, and prompt/program synthesis.
- Experience building data pipelines: parsing PDFs/HTML, OCR when needed, metadata extraction, and content hashing/versioning.
- Familiarity with PyTorch, plus distributed training/inference patterns.
- MLOps and reliability: containers, Kubernetes, feature/model registries, experiment tracking, monitoring, and alerting.
- Evidence of rigorous evaluation design: offline metrics, human-in-the-loop judging, power analysis for online tests.
- Clear thinking on safety: hallucination controls, calibration, abstention, red-teaming, and privacy/security by design.
- Ability to lead cross-functional initiatives and make crisp decisions with incomplete information.
Bonus:
- Search relevance expertise for long-form, citation-heavy domains.
- Knowledge of biomedical ontologies and standards (e.g., SNOMED CT, UMLS, ICD, RxNorm, FHIR).
- Prior work with literature and guideline corpora, de-duplication, and document lineage tracking.
- Experience with hybrid retrieval stacks (e.g., BM25 + ANN) and learned re-rankers.
- Familiarity with clinical evaluation methods, EBM hierarchies, and annotation workflows.
- Strong cost-performance tuning for LLM inference, caching, and batching in production.
What do we believe in?
We create unconventional solutions to difficult problems and we build them fast. We want you to set impossible goals and make them happen: think landing a rocket, but the medical version.
You'll be surrounded by a world-class team of engineers, medicos, and designers to do your best work, inspired by our shared beliefs:
- We will stop at nothing to improve patient care across the world.
- We design user experiences for joy and ship them fast.
- We make decisions in a flat hierarchy that prioritizes the truth over rank.
- We provide the resources for people to succeed and give them the freedom to do it.
Why you will flourish with us 🚀
- Flexible hybrid working environment, with 3 days in the office
- Additional paid day off for your birthday, plus wellness days
- Special corporate rates at Anytime Fitness in Melbourne (Sydney TBC)
- A generous personal development budget of $500 per annum
- Learn from some of the best engineers and creatives, joining a diverse team
- Become an owner, with shares (equity) in the company: if Heidi wins, we all win
- The rare chance to create a global impact as you immerse yourself in one of Australia's leading healthtech startups
- If you have an impact quickly, the opportunity to fast-track your startup career!
Help us reimagine primary care and change the face of healthcare in Australia and then around the world.
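As a concrete illustration of the "lexical + vector + re-ranking" work this role describes, one common way to merge a BM25 list and an ANN list before re-ranking is reciprocal rank fusion (RRF). The document ids below are made up for the sketch, and k=60 is the conventional default from the original RRF paper; a real stack would fuse results from actual lexical and vector indexes over a shared corpus.

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    # Each ranking is a best-first list of doc ids from one retriever.
    # A document's fused score is the sum of 1/(k + rank) over all lists
    # it appears in, so agreement between retrievers is rewarded.
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["guideline_12", "trial_07", "review_03"]   # e.g. BM25 order
vector = ["trial_07", "case_21", "guideline_12"]      # e.g. ANN order
print(rrf([lexical, vector]))
```

The fused list would then feed a learned re-ranker, keeping the latency budget tight because only a short merged candidate set is re-scored.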
Apply
October 31, 2025
AI Engineer, London
Eloquent AI · 11-50 employees · United Kingdom · Full-time · Remote: false
Meet Eloquent AI
At Eloquent AI, we're building the next generation of AI Operators: multimodal, autonomous systems that execute complex workflows across fragmented tools with human-level precision. Our technology goes far beyond chat: it sees, reads, clicks, types, and makes decisions, transforming how work gets done in regulated, high-stakes environments.
We're already powering some of the world's leading financial institutions and insurers, fundamentally changing how millions of people manage their finances every day. From automating compliance reviews to handling customer operations, our Operators are quietly replacing repetitive, manual tasks with intelligent, end-to-end execution.
Headquartered in San Francisco with a global footprint, Eloquent AI is a fast-growing company backed by top-tier investors. Join us to work alongside world-class talent in AI, engineering, and product as we redefine the future of financial services.
As an AI Engineer at Eloquent AI, you will be at the forefront of building and scaling AI-powered applications, transforming how enterprises interact with intelligent agents. You'll work across the entire stack, developing high-performance front-end experiences and scalable back-end systems that power real-time AI-driven workflows.
This role requires strong software engineering skills, a deep understanding of full-stack development, and the ability to work in a fast-paced, AI-first environment. You'll collaborate with engineers, AI researchers, and product teams to build enterprise-grade solutions that enable seamless AI-human interactions.
You will:
- Design and build full-stack applications that power AI-driven workflows for enterprise users.
- Develop high-performance front-end interfaces for AI agent control, monitoring, and visualization.
- Build scalable backend services that support real-time AI interactions, knowledge retrieval, and automation.
- Optimize AI-powered UIs to ensure seamless and intuitive user experiences.
- Work closely with AI researchers and ML engineers to integrate LLMs, Retrieval-Augmented Generation (RAG), and automation into production-ready applications.
- Ship robust, minimal-dependency code that performs efficiently in enterprise environments.
- Continuously iterate and refine AI-driven products, balancing user needs with technical feasibility.
Requirements
- 2-5 years of hands-on experience building full-stack production applications.
- Proficiency in modern web technologies, including React, TypeScript, and Node.js.
- Experience with backend development using Python, Go, or similar languages.
- Strong knowledge of cloud infrastructure (AWS, GCP, or Azure) and scalable architectures.
- Understanding of AI-powered applications, including LLMs, chat interfaces, and agentic workflows.
- Ability to work in a fast-paced, high-autonomy environment, solving complex problems and delivering impact.
- Strong collaboration skills, with experience working in cross-functional teams with engineers, designers, and AI researchers.
Bonus Points If…
- You have experience building AI-powered applications with LLM integrations.
- You've worked in high-performance startups or enterprise AI environments.
- You have a sharp eye for UI/UX design and have built intuitive, AI-driven interfaces.
- You have experience with GraphQL, WebSockets, or real-time data streaming.
- You've contributed to open-source projects or have built developer tools for AI.
Apply
October 27, 2025
Implementation Engineer
Decagon · 101-200 employees · United States · Full-time · Remote: false · USD 185,000-275,000
About Decagon
Decagon is the leading conversational AI platform empowering every brand to deliver a concierge customer experience. Our AI agents provide intelligent, human-like responses across chat, email, and voice, resolving millions of customer inquiries in every language and at any time.
Since coming out of stealth, Decagon has experienced rapid growth. We partner with industry leaders like Hertz, Eventbrite, Duolingo, Oura, Bilt, Curology, and Samsara to redefine customer experience at scale. We've raised over $200M from Bain Capital Ventures, Accel, a16z, BOND Capital, A*, Elad Gil, and notable angels such as the founders of Box, Airtable, Rippling, Okta, Lattice, and Klaviyo.
We're an in-office company, driven by a shared commitment to excellence and velocity. Our values (customers are everything, relentless momentum, winner's mindset, and stronger together) shape how we work and grow as a team.
About the Role
You'll be part of a global group assisting our breadth of customers with the technical details of how they use Decagon. Working directly with leaders across industries such as finance, healthcare, and hospitality, you'll design reliable, intuitive AI agents that solve real-world user needs.
This is a highly technical role, married with the ability to deeply comprehend our customers' workflows. You're responsible both for solving our customers' thorniest technical issues as they arise and for implementing new functionality and growing Decagon usage within our existing customer base.
In this role, you will:
- Design and build AI agents that outperform human agents in managing complex customer interactions and driving customer retention
- Develop mastery of Decagon's products and their integrations with other platforms
- Understand challenging business cases and build workflows and connectors to our customers' APIs
- Identify cross-customer trends to guide the evolution of Decagon's agent-building platform and research efforts
You'll be a good fit if you:
- Have 2+ years of industry experience in software engineering
- Are proficient in Python
- Are eager to deeply understand our technologies and how they interface with our customers
Benefits
- Medical, dental, and vision benefits
- Take-what-you-need vacation policy
- Daily lunches, dinners, and snacks in the office to keep you at your best
Compensation
$185K – $275K + equity
Tags: Machine Learning Engineer, Data Science & Analytics, Software Engineer, Software Engineering
Apply
October 27, 2025
Staff AI Software Engineer
Fiddler AI · 101-200 employees · United States · Full-time · Remote: false · USD 190,000-300,000
Our Purpose
At Fiddler, we understand the implications of AI and the impact that it has on human lives. Our company was born with the mission of building trust into AI. The rise of generative AI and agents has unlocked generalized intelligence, but it has also widened the risk aperture and made it harder to ensure that AI applications are working well. Fiddler enables organizations to get ahead of these issues by helping them deploy trustworthy, transparent AI solutions. Fiddler partners with AI-first organizations to help build a long-term framework for responsible AI practices, which, in turn, builds trust with their user base. AI engineers, data scientists, and business teams use Fiddler AI to monitor, evaluate, secure, analyze, and improve their AI solutions to drive better outcomes. Our platform enables engineering teams and business stakeholders alike to understand the "what", "why", and "how" behind AI outcomes.
Our Founders
Fiddler AI was founded by Krishna Gade (engineering leader at Facebook, Pinterest, Twitter, and Microsoft) and Amit Paka (product leader at Microsoft, Samsung, and PayPal, and a two-time founder). We are backed by Insight Partners, Lightspeed Venture Partners, and Lux Capital.
Why Join Us
Our team is motivated to build trust into AI so that society can harness its power. Joining us means you get to make an impact by ensuring that AI applications at production scale across industries have operational transparency and security. We are an early-stage startup with a rapidly growing team of intelligent and empathetic doers, thinkers, creators, builders, and everyone in between. The AI and ML industry has a rapid pace of innovation, and the learning opportunities here are monumental. This is your chance to be a trailblazer.
Fiddler is recognized as a pioneer in the field of AI observability and has received numerous accolades, including: the 2022 a16z Data50 list, the 2021 CB Insights AI 100 most promising startups, 2020 WEF Technology Pioneer, the 2020 Forbes AI 50 most promising startups, and a 2019 Gartner Cool Vendor in Enterprise AI Governance and Ethical Response. By joining our brilliant (at least we think so) team, you will help pave the way in the AI observability space.
👩🏽🚀 The Mission:
Our AI engineers make a real impact on the safety and ROI of large language models and agentic applications across different verticals and domains. You will work on the cutting edge of envisioning and building new types of tools and algorithms to monitor, explain, and improve such applications, and in turn empower our customers.
🪐 About The Team:
Our engineering team is a dynamic group of builders and thinkers dedicated to solving some of the most cutting-edge challenges in AI safety and reliability. We work on an exciting and expansive range of topics, from the responsible deployment of machine learning models and large language models (LLMs) to complex agentic applications. Our projects are inherently cross-disciplinary, requiring expertise in systems engineering, product engineering, and data science to build robust, scalable solutions. We thrive in a collaborative environment where continuous learning is at the forefront, ensuring every team member stays on their toes with the latest advancements in AI. Joining our team means you'll have the opportunity to make a tangible impact on how AI evolves for the benefit of humanity.
🚀 What You'll Do:
- Design and build core services and components of a world-class cloud platform to help enterprises develop, monitor, and improve their full suite of AI-based applications (covering predictive models, LLMs, GenAI models, and agentic applications)
- Lead the design and implementation of distributed systems, microservices, and applications that compute, persist, and expose new ML and agentic observability metrics (e.g., response relevancy, hallucination scores) from raw trace data
- Spearhead the development of new types of metrics and evaluation capabilities to satisfy evolving customer needs around agentic applications; take part in customer conversations around discovery and support
- Develop in-house AI agents and GenAI capabilities to augment the Fiddler observability products
- Define and evolve the operational maturity (reliability, observability, SLOs) of core services and components; establish best practices and champion improvements across the team
- Team and culture building: take an active role in building a world-class engineering team and actively participate in the talent acquisition process through interviewing, candidate evaluation, and coaching
🎯 What We're Looking For:
- Master's or Bachelor's degree in Computer Science or a related field, combined with 7+ years of industry experience and a demonstrated solid foundation in software development
- Experience deploying and working with ML/LLM models in production
- Experience building, deploying, and monitoring agentic applications using common frameworks such as LangChain, Google ADK, Amazon Strands, and OpenAI, and building and integrating MCP servers
- Hands-on experience with OpenTelemetry, distributed tracing, and LLM-as-a-judge techniques
- Deep proficiency with Python and a strong command of essential backend technologies such as Postgres, Redis, Kafka, RabbitMQ, and Ray, including the ability to design, build, and debug complex, large-scale systems
- Adaptability and ownership: proven ability to thrive in ambiguity and a fast-paced environment. We need a self-motivated initiator who can take ownership of projects with a high degree of autonomy, confidently filling in the gaps when the full picture isn't available
- System design and optimization: a strong grasp of distributed systems and the capacity to troubleshoot production issues
- Technical leadership and collaboration: demonstrated ability to plan, execute, and deliver projects by effectively breaking down complex problems into manageable tasks and guiding a small team of engineers. Must be adept at cross-functional collaboration across a geographically distributed team, working closely with product managers, designers, frontend developers, and data scientists to ensure alignment and successful project outcomes
- Coaching and mentorship: you should be an excellent collaborator and a mentor to other team members, raising the technical bar for the entire team and regularly engaging in code and design reviews
- Ability to work in our Palo Alto office 3 days a week
🫱🏼🫲🏾 Compensation: $190,000 - $300,000 + equity + benefits
The posted range represents the expected salary range for this job requisition and does not include any other potential components of the compensation package and perks previously outlined. Ultimately, in determining pay, we'll consider your experience, leveling, location, and other job-related factors.
Fiddler is proud to be an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. If you require special accommodations in order to complete the interviews or perform job duties, please inform the recruiter at the beginning of the process.
Beware of job scam fraud. Our recruiters use @fiddler.ai email addresses exclusively. In the US, we do not conduct interviews via text or instant message, or ask for sensitive personal information such as bank account or social security numbers.
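The idea of computing observability metrics (such as response relevancy) from raw trace data, mentioned in the role above, can be sketched in miniature. The span field names and the token-overlap scoring rule below are assumptions for illustration only; a platform like the one described would derive such scores with learned models or LLM-as-a-judge rather than lexical overlap.

```python
def relevancy(span: dict) -> float:
    # Crude "response relevancy" proxy: Jaccard overlap between the
    # tokens of a span's input (prompt) and its output (response).
    prompt = set(span["input"].lower().split())
    response = set(span["output"].lower().split())
    union = prompt | response
    return len(prompt & response) / len(union) if union else 0.0

# Hypothetical raw trace: one on-topic span and one off-topic span.
trace = [
    {"span_id": "s1", "input": "summarize the refund policy",
     "output": "the refund policy allows returns within 30 days"},
    {"span_id": "s2", "input": "summarize the refund policy",
     "output": "our office is closed on weekends"},
]
metrics = {s["span_id"]: round(relevancy(s), 2) for s in trace}
print(metrics)
```

In a real pipeline these per-span scores would be persisted alongside the trace and aggregated for monitoring and alerting, which is the "compute, persist, and expose" loop the role describes.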
Tags: Machine Learning Engineer, Data Science & Analytics, Software Engineer, Software Engineering
Apply
October 25, 2025
AI Engineer - Agent Team
FurtherAI · 11-50 employees · United States · Full-time · Remote: false · USD 150,000-250,000
About Us
At FurtherAI, we're building the next generation of AI agents for the insurance industry, a trillion-dollar market ready for transformation.
We've raised more than $30M from top investors (Andreessen Horowitz, YC, Nexus, South Park Commons, Converge) and have grown 10x in revenue this year alone. Our customers include some of the largest names in insurance (we recently closed a top-5 insurance company in the world), and our team combines deep AI expertise with proven company-building experience.
Now, we're looking for an exceptional AI Engineer to join our agent team and help shape our product.
Why Join Us
- Rocketship growth: post-PMF, with revenue growing at an exceptional pace.
- Elite team: includes ex-Apple AI Research, 4 YC founders, and 6 ex-founders. Engineers have prior experience at Apple, Microsoft, Google, and Amazon.
- Massive market impact: insurance is the backbone of global commerce; our work reshapes how this industry operates.
- Founder's mindset: a perfect fit if you want to own big pieces of the product and possibly start your own company in the future.
What You'll Do
- Set up end-to-end evals to measure and improve agent performance.
- Experiment with new agentic techniques (e.g., multi-agent systems, reasoning-from-feedback, RFT).
- Build lightweight tools, servers, and orchestration layers (e.g., MCP servers) that enable agents to operate reliably in production.
- Stay on top of emerging research and blogs on LLM/AI agents and bring ideas into production experiments.
What We're Looking For
- An amazing ability to speak with LLMs: Occam's razor in prompting.
- Strong experience with Python; 3+ years building in ML/AI.
- Clear communicator, both in person and in writing.
- Bonus: background in B2B SaaS and 0-to-1 experience.
- Above all: drive, grit, and ownership. If you excel here, other requirements are secondary.
Note that this is not a model-training role; you'll be building orchestration and reasoning systems on top of existing LLMs (think Claude Code over the Claude model).
At FurtherAI, we set a high bar. We're not looking for someone who just wants a job. We're looking for someone who wants to build something transformative. If you thrive in environments where the pace is fast, expectations are high, and the rewards are outsized, you'll love it here.
Tags: Machine Learning Engineer, Data Science & Analytics, Software Engineer, Software Engineering
Apply
October 23, 2025
AI Engineer
Hiya · 201-500 employees · United States · Full-time · Remote: false · USD 134,000-262,000
About UsAt Hiya, we’re revolutionizing voice communication. Our mission is to modernize voice with intelligence for security and productivitySince 2015, when we introduced the first mobile caller ID and spam-blocking apps, we’ve been at the forefront of voice intelligence innovation. In 2016, we partnered with Samsung and AT&T to launch Hiya Protect, the first network-based spam-blocking solution. In 2019, we introduced Hiya Connect, a branded call SaaS platform that helps businesses reach more customers by phone.Today, our Voice Intelligence Platform supports over 500 million users globally. By using adaptive AI and audio intelligence, it delivers smarter, safer, and more productive voice calls across networks, apps, and devices. Our network & solution partners have grown to include British Telecom, EE, Virgin Media O2, Ericsson, Rogers, Bell Canada, MasMovil, Telenor, FICO, Twilio, and more.About the PositionAI Engineers are responsible for developing and integrating AI solutions into Hiya’s products, focusing on rapid iteration, prompt engineering, and practical application. 
You'll fine-tune and optimize foundation models, craft sophisticated multi-agent systems, and invent novel solutions to power the next generation of voice intelligence.What You’ll DoIntegrate AI solutions into existing products and workflowsCollaborate with cross-functional teams to understand business requirements and translate them into technical solutionsConduct model evaluations, prompt engineering, and fine-tuning of large language models (LLMs)Implement and manage AI orchestration, including agent-based systemsParticipate in the design and implementation of AI-powered applications and interfacesHelp shape the technical direction and best practices for LLM application developmentStay at the forefront of AI research and incorporate state-of-the-art techniquesWhat You’ll Need to SucceedProficiency in programming languages such as Python, JavaScript, or TypeScriptExperience working with foundational model APIs and pre-trained open source modelsStrong understanding of machine learning workflows, including model evaluations and LLM fine-tuningFamiliarity with AI orchestration and agent-based systems and best practices (LangChain, AutoGen, n8n)Excellent problem-solving skills and the ability to work independently and collaboratively.Strong communication skills and the ability to translate technical concepts to non-technical stakeholdersThe person in this role must embody Hiya’s key values of Serving our customers, Doing rather than observing, Improving ourselves and our business, Owning and holding ourselves accountable for success, and Leading by showing up with a point of view, engaging in open discussion, listening respectfully to others opinions and committing to decisions. 
You will have a fast start if you have experience:Experience with cloud platforms such as AWS, Google Cloud, or AzureKnowledge of Kubernetes and containerization technologiesExperience with data science and ML engineeringFamiliarity with retrieval-augmented generation (RAG)The requirements listed in the job descriptions are guidelines. You don’t have to satisfy every requirement or meet every qualification listed. If your skills are transferable we would still love to hear from you.More DetailsThe salary range for this role is between $134,000-$262,000. When determining compensation, a number of factors will be considered: skills, experience, job scope, location, and competitive compensation market data.Start Date: ImmediatelyStatus: Full-time Type: HybridLocation: Seattle, WA Travel Requirements: Department: EngineeringReports to: VP of Engineering BenefitsEquity compensation401K program with 3% match through Fidelity InvestmentsSelf managed vacation plan 15 Paid holidays including Recharge Days100% covered medical, dental, and vision for the employee and 50% coverage for dependentsFlexible spending, health savings accounts and Pretax dependent day care savings planPaid parental leaveVoluntary Life and AD&D, and Accident insurance optionsEmployer-paid life insuranceEmployer-paid long-term disability coverage (in qualifying states)Donation Matching for a charity of your choice (up to $1,000/ year)$1,000/year reimbursement in Professional Development fundsThis position is based in Seattle, WA, USA.We are building a team with a variety of perspectives, identities, and professional experiences. We evaluate great candidates through a business lens and we strongly believe that diversity and unique perspectives make our company stronger, more dynamic, and a great place to build a career.Our team has won various awards over the last 4 years from Built-in Seattle and Seattle Business Week to #86 on Deloitte Technology Fast 500 and Forbes #1 Startup Employer. 
Here at Hiya, we are a people-centric company focused on helping each and every one of our employees grow both personally and professionally. We believe that a team culture of support and empowerment to challenge the status quo results in an energized team that is passionate about its work. You'll love working here if you are looking for an innovative challenge at a company that is disrupting an industry. Come join us!
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering
Apply
October 22, 2025
AI Deployment Engineer
Bland
51-100
USD
175000
120000
-
175000
United States
Full-time
Remote
false
AI Deployment Engineer

About Bland AI
We're a Series B startup and have raised $65 million from Emergence Capital, Y Combinator, and the founders of PayPal and Twilio. We have grown to an 80+ person team, and we serve customers like Better.com by delivering the most friendly, helpful, and human-like AI phone agents in the world.

Why This Role Exists
Every customer is different: the problems they're solving, the data they have, and the impact they want to create with AI. We're looking for someone who thrives in that ambiguity, who can take fuzzy goals and transform them into working, production-ready agents that deliver real value. You'll work directly with customers to understand their workflows, build tailored agents, and help shape how Bland is adopted across their organization.

What You'll Do
- Design and deploy AI solutions. Work closely with customers to translate their challenges into functional agents, integrating APIs and data sources to automate real business processes.
- Prototype fast, iterate faster. Build the first version quickly, get it into production, and refine based on real-world feedback.
- Collaborate deeply. Partner with customer teams across engineering, product, and operations to ensure the agent performs, scales, and delivers measurable outcomes.
- Own end-to-end delivery. From discovery call to deployment, you'll lead the technical build, testing, and iteration, ensuring the experience feels natural, human, and on-brand.
- Drive adoption and expansion. Share results, train teams, and embed yourself within the customer organization to uncover new opportunities for automation and scale.
- Be the face of Bland. You are the customer's champion, their best employee, and you treat them with unreasonable hospitality. You travel on-site, get to know our customers on a human level, and develop real relationships with our champions and other stakeholders, going above and beyond to host training sessions and dinners.
Must-Have Qualifications
- 1–5 years of experience in full-stack, AI, or solutions engineering roles where you owned builds from concept to production
- Proven experience building and integrating AI or automation features into real-world applications
- Hands-on experience integrating LLMs or AI SDKs into web applications
- Strong comfort working with REST/JSON, scripting languages (Python or JavaScript), and modern dev tools (Git, NPM/PNPM)
- Track record of ownership and grit: moments where you built something from scratch, solved hard problems, or exceeded expectations
- Excellent communication skills: you can explain complex AI concepts clearly to both technical and non-technical audiences
- Thrives in a fast-moving, high-intensity environment, motivated by challenge, curiosity, and the pursuit of great work
- Willingness to work in person frequently (SF HQ and customer sites) to collaborate and accelerate learning
Nice-to-Haves
- Hands-on experience experimenting with or deploying LLM-powered tools or agents
- Prior startup or founder experience: you know what it takes to build without a playbook
- Curiosity about AI agent design, orchestration, and automation systems
- Experience working directly with customers to identify pain points and turn them into shipped solutions
- A portfolio of personal or side projects that showcase creativity, technical depth, and persistence
Exceptional new grads with strong ownership and ambition are welcome to apply.
Why you'll love working here
You'll be joining one of the fastest-moving AI teams in the world, where speed, creativity, and ownership are the default. You'll ship real agents used by real customers, working directly with teams that are transforming how humans and AI collaborate. This role is perfect for someone who loves building, learning, and pushing boundaries: a hands-on candidate who wants to see their work in the wild, solving real problems for real people.

Relentlessness is the most important quality
If you think you're missing relevant experience but you're a fast learner who's excited for a new challenge, and you have the intangibles our team is looking for, please reach out. As long as you're resourceful and a fast learner (and you can prove it to our team), we would love to meet you.

Compensation & Perks
Salary: $120k – $175k base + meaningful equity + benefits.
Gorgeous office in Jackson Square, San Francisco (rooftop views & great coffee shops nearby).
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering
Apply
October 15, 2025
Deployment Engineer, AI Inference
Cerebras Systems
501-1000
0
0
-
0
United States
Canada
Full-time
Remote
false
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In 2024, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.
About The Role
We are seeking a highly skilled Deployment Engineer to build and operate our cutting-edge inference clusters. These clusters offer the opportunity to work with the world's largest computer chip, the Wafer-Scale Engine (WSE), and the systems that harness its unparalleled power. You will play a critical role in ensuring reliable, efficient, and scalable deployment of AI inference workloads across our global infrastructure. On the operational side, you'll own the rollout of new software versions and AI replica updates, along with capacity reallocations across our custom-built, high-capacity datacenters.

Beyond operations, you'll drive improvements to our telemetry, observability, and fully automated deployment pipeline. This role involves working with advanced allocation strategies to maximize utilization of large-scale compute fleets. The ideal candidate combines hands-on operational rigor with strong systems engineering skills and thrives on building resilient pipelines that keep pace with cutting-edge AI models. This role does not require 24/7 on-call rotations.
Responsibilities
- Deploy AI inference replicas and cluster software across multiple datacenters
- Operate across heterogeneous datacenter environments undergoing rapid 10x growth
- Maximize capacity allocation and optimize replica placement using constraint-solver algorithms
- Operate bare-metal inference infrastructure while supporting the transition to a K8s-based platform
- Develop and extend telemetry, observability, and alerting solutions to ensure deployment reliability at scale
- Develop and extend a fully automated deployment pipeline to support fast software updates and capacity reallocation at scale
- Translate technical and customer needs into actionable requirements for the Dev Infra, Cluster, Platform, and Core teams
- Stay up to date with the latest advancements in AI compute infrastructure and related technologies

Skills And Requirements
- 2-5 years of experience operating on-prem compute infrastructure (ideally in Machine Learning or High-Performance Computing), or in developing and managing complex AWS infrastructure for hybrid deployments
- Strong proficiency in Python for automation, orchestration, and deployment tooling
- Solid understanding of Linux-based systems and command-line tools
- Extensive knowledge of Docker containers and container orchestration platforms like K8s
- Familiarity with spine-leaf (Clos) networking architecture
- Proficiency with telemetry and observability stacks such as Prometheus, InfluxDB, and Grafana
- Strong ownership mindset and accountability for complex deployments
- Ability to work effectively in a fast-paced environment

Location
SF Bay Area; Toronto

Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business.
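To make "optimize replica placement using constraint-solver algorithms" concrete, here is a toy greedy placer, an illustrative sketch only, not Cerebras' actual solver. Real placement adds many more constraints (network topology, failure domains, model affinity), and the replica names and capacities below are invented for the example.

```python
# Toy replica placement: largest replicas first, each assigned to the
# datacenter with the most remaining free capacity (greedy bin packing).
def place_replicas(replicas, datacenters):
    """replicas: list of (name, size); datacenters: dict name -> capacity.
    Returns a dict mapping each replica name to a datacenter."""
    free = dict(datacenters)                  # remaining capacity per DC
    placement = {}
    for name, size in sorted(replicas, key=lambda r: -r[1]):
        dc = max(free, key=free.get)          # DC with the most headroom
        if free[dc] < size:
            raise RuntimeError(f"no capacity for {name}")
        free[dc] -= size
        placement[name] = dc
    return placement

print(place_replicas(
    replicas=[("llama-70b", 8), ("llama-8b", 2), ("whisper", 1)],
    datacenters={"dc-west": 10, "dc-east": 6},
))
```

A production system would replace the greedy loop with an actual constraint solver (ILP or CP) so that objectives like balanced utilization and co-location rules can be expressed declaratively.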
Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open-source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Our simple, non-corporate work culture that respects individual beliefs.
Read our blog: Five Reasons to Join Cerebras in 2025. Apply today and become part of the forefront of groundbreaking advancements in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth, and support of those around them.

This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
DevOps Engineer
Data Science & Analytics
Apply
October 15, 2025
Software Engineer, Gen AI Platform
Abridge
201-500
USD
234000
162000
-
234000
United States
Full-time
Remote
false
About Abridge
Abridge was founded in 2018 with the mission of powering deeper understanding in healthcare. Our AI-powered platform was purpose-built for medical conversations, improving clinical documentation efficiencies while enabling clinicians to focus on what matters most: their patients.

Our enterprise-grade technology transforms patient-clinician conversations into structured clinical notes in real time, with deep EMR integrations. Powered by Linked Evidence and our purpose-built, auditable AI, we are the only company that maps AI-generated summaries to ground truth, helping providers quickly trust and verify the output. As pioneers in generative AI for healthcare, we are setting the industry standards for the responsible deployment of AI across health systems.

We are a growing team of practicing MDs, AI scientists, PhDs, creatives, technologists, and engineers working together to empower people and make care make more sense. We have offices located in the Mission District in San Francisco, the SoHo neighborhood of New York, and East Liberty in Pittsburgh.

The Role
Our generative AI-powered products are making a huge impact in the healthcare industry. As an AI Platform Engineer, you will collaborate closely with a cross-functional team of researchers, clinical scientists, and product engineers.
You will design and build the runtime, orchestration engine, and evaluation platform for agentic orchestration and LLM-driven workflows.

What You'll Do
- Design and build GenAI systems that turn LLMs into composable, dependable tools, leveraging retrieval, tool use, agentic reasoning, and structured outputs.
- Design and implement a highly reliable and scalable agent runtime: orchestration, shared state and memory, tool-calling interfaces, and scheduling for cost, latency, and quality.
- Build secure, sandboxed execution for agent actions and code; optimize cold start, isolation, and observability.
- Ship unified interfaces for multiple model sizes and providers; integrate with open tool ecosystems such as MCP-style connectors for data and actions.
- Develop an evaluation platform for online and offline assessments, A/B tests, safety checks, and regression gates that improve agent reliability over time.
- Partner with Research to deliver new agent capabilities end to end, from prototype to production.

What You'll Bring
- Experience building agent applications with tool-calling, context engineering, or open connector integrations.
- Fluency with LLM APIs, prompting strategies, and orchestration patterns (e.g., LangChain, LlamaIndex, or custom pipelines).
- Experience with retrieval systems (e.g., semantic and lexical retrieval, vector DBs, efficient kNN), function calling, tool use, or agentic workflows.
- Strong coding skills in one or more of Python, Java, or Go; comfortable with service design, APIs, and data models for high-throughput systems.
- Working knowledge of containers and Kubernetes concepts.
- Familiarity with metrics, tracing, on-call rotations, and incident response practices.
- Self-motivated, with a willingness to take ownership.
- Strong communication skills and the ability to work collaboratively in a team environment.

Bonus Points If...
- Prior work on agent orchestration pipelines: task routing, planning, memory graphs, vector search, or browser automation.
- Experience with evaluations, preference optimization, or RL to improve LLM reliability.

Why Work at Abridge?
At Abridge, we're transforming healthcare delivery experiences with generative AI, enabling clinicians and patients to connect in deeper, more meaningful ways. Our mission is clear: to power deeper understanding in healthcare. We're driving real, lasting change, with millions of medical conversations processed each month.

Joining Abridge means stepping into a fast-paced, high-growth startup where your contributions truly make a difference. Our culture requires extreme ownership: every employee has the ability (and is expected) to make an impact on our customers and our business.

Beyond individual impact, you will have the opportunity to work alongside a team of curious, high-achieving people in a supportive environment where success is shared, growth is constant, and feedback fuels progress. At Abridge, it's not just what we do, it's how we do it. Every decision is rooted in empathy, always prioritizing the needs of clinicians and patients.

We're committed to supporting your growth, both professionally and personally. Whether it's flexible work hours, an inclusive culture, or ongoing learning opportunities, we are here to help you thrive and do the best work of your life. If you are ready to make a meaningful impact alongside passionate people who care deeply about what they do, Abridge is the place for you.
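The retrieval systems this role mentions (semantic retrieval, vector DBs, efficient kNN) all build on one primitive: nearest-neighbor search over embedding vectors. A brute-force sketch, with made-up three-dimensional "embeddings" standing in for real learned ones:

```python
import math

# Brute-force cosine-similarity kNN over toy embedding vectors.
# Production systems use learned embeddings and approximate-NN
# indexes (vector DBs); this shows only the core ranking step.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def knn(query_vec, corpus, k=2):
    """Return the k document ids most similar to query_vec."""
    scored = sorted(corpus.items(), key=lambda kv: -cosine(query_vec, kv[1]))
    return [doc_id for doc_id, _ in scored[:k]]

corpus = {
    "note-1": [0.9, 0.1, 0.0],   # e.g. a cardiology note
    "note-2": [0.1, 0.9, 0.1],   # e.g. a medication list
    "note-3": [0.8, 0.2, 0.1],   # another cardiology note
}
print(knn([1.0, 0.0, 0.0], corpus))  # -> ['note-1', 'note-3']
```

Brute force is O(n) per query; at scale, vector DBs trade a little recall for speed with approximate indexes such as HNSW or IVF.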
How we take care of Abridgers:Generous Time Off: 13 paid holidays, flexible PTO for salaried employees, and accrued time off for hourly employees.Comprehensive Health Plans: Medical, Dental, and Vision plans for all full-time employees. Abridge covers 100% of the premium for you and 75% for dependents. If you choose a HSA-eligible plan, Abridge also makes monthly contributions to your HSA. Paid Parental Leave: 16 weeks paid parental leave for all full-time employees.401k and Matching: Contribution matching to help invest in your future.Pre-tax Benefits: Access to Flexible Spending Accounts (FSA) and Commuter Benefits.Learning and Development Budget: Yearly contributions for coaching, courses, workshops, conferences, and more.Sabbatical Leave: 30 days of paid Sabbatical Leave after 5 years of employment.Compensation and Equity: Competitive compensation and equity grants for full time employees.... and much more!Equal Opportunity EmployerAbridge is an equal opportunity employer and considers all qualified applicants equally without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability.Staying safe - Protect yourself from recruitment fraudWe are aware of individuals and entities fraudulently representing themselves as Abridge recruiters and/or hiring managers. Abridge will never ask for financial information or payment, or for personal information such as bank account number or social security number during the job application or interview process. Any emails from the Abridge recruiting team will come from an @abridge.com email address. You can learn more about how to protect yourself from these types of fraud by referring to this article. Please exercise caution and cease communications if something feels suspicious about your interactions.
Software Engineer
Software Engineering
Machine Learning Engineer
Data Science & Analytics
Apply
October 7, 2025
Forward Deployed Engineer
FurtherAI
11-50
USD
200000
140000
-
200000
United States
Full-time
Remote
false
Role Details
Location: San Francisco, CA. This role is based out of our San Francisco HQ and is not eligible for full-time remote work.

About Us
At FurtherAI, we're building the next generation of AI agents for the insurance industry, a trillion-dollar market ready for transformation. We've raised a $5M seed round from top investors (YC, Nexus, South Park Commons, Converge) and have grown 10x in revenue this year alone. Our customers include some of the largest names in insurance (we recently closed a top-5 insurance company in the world), and our team combines deep AI expertise with proven company-building experience. Now, we're looking for an exceptional Forward Deployed Engineer to join our early team and help shape both our product and culture.

Why Join Us
- Rocketship growth: post-PMF, with revenue growing at an exceptional pace.
- Elite team: the founding team includes ex-Apple AI Research, 4 ex-YC founders, and 6 ex-founders. Engineers have prior experience at Apple, Microsoft, Google, and Amazon.
- Technical depth: work alongside a staff research scientist and a team of 7 engineers solving cutting-edge problems in AI and scalable backend systems.
- Massive market impact: insurance is the backbone of global commerce; our work reshapes how this industry operates.
- Founder's mindset: a perfect fit if you want to own big pieces of the product and possibly start your own company in the future.

What You'll Do
- Build enterprise-grade AI agents from the ground up
- Work closely with our CTO, Sashank, who brings experience from building speech and language models at Apple's AI/ML org
- Talk to customers, understand their flows, and translate those into robust agentic solutions
- Implement cutting-edge AI capabilities in production
- Work in person from our San Francisco office (5-day week)

What We're Looking For
- 3+ years of consumer-facing engineering experience
- Strong experience with backend systems (Python preferred)
- Clear communicator, both in person and in writing
- Above all: drive, grit, and ownership. If you excel here, other requirements are secondary.

At FurtherAI, we set a high bar. We're not looking for someone who just wants a job; we're looking for someone who wants to build something transformative. If you thrive in environments where the pace is fast, expectations are high, and the rewards are outsized, you'll love it here.
Software Engineer
Software Engineering
Machine Learning Engineer
Data Science & Analytics
Apply
October 7, 2025
Deployment Engineer, AI Inference
Cerebras Systems
501-1000
0
0
-
0
United States
Canada
Full-time
Remote
false
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In 2024, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.
About The Role
We are seeking a highly skilled and experienced Deployment Engineer to build and operate our cutting-edge inference clusters. These clusters offer the opportunity to work with the world's largest computer chip, the Wafer-Scale Engine (WSE), and the systems that harness its unparalleled power. You will play a critical role in ensuring reliable, efficient, and scalable deployment of AI inference workloads across our global infrastructure. On the operational side, you'll own the rollout of new software versions and AI replica updates, along with capacity reallocations across our custom-built, high-capacity datacenters.

Beyond operations, you'll drive improvements to our telemetry, observability, and fully automated deployment pipeline. This role involves working with advanced allocation strategies to maximize utilization of large-scale compute fleets. The ideal candidate combines hands-on operational rigor with strong systems engineering skills and thrives on building resilient pipelines that keep pace with cutting-edge AI models. This role does not require 24/7 on-call rotations.
Responsibilities
- Deploy AI inference replicas and cluster software across multiple datacenters
- Operate across heterogeneous datacenter environments undergoing rapid 10x growth
- Maximize capacity allocation and optimize replica placement using constraint-solver algorithms
- Operate bare-metal inference infrastructure while supporting the transition to a K8s-based platform
- Develop and extend telemetry, observability, and alerting solutions to ensure deployment reliability at scale
- Develop and extend a fully automated deployment pipeline to support fast software updates and capacity reallocation at scale
- Translate technical and customer needs into actionable requirements for the Dev Infra, Cluster, Platform, and Core teams
- Stay up to date with the latest advancements in AI compute infrastructure and related technologies

Skills And Requirements
- 5-7 years of experience operating on-prem compute infrastructure (ideally in Machine Learning or High-Performance Computing), or in developing and managing complex AWS infrastructure for hybrid deployments
- Strong proficiency in Python for automation, orchestration, and deployment tooling
- Solid understanding of Linux-based systems and command-line tools
- Extensive knowledge of Docker containers and container orchestration platforms like K8s
- Familiarity with spine-leaf (Clos) networking architecture
- Proficiency with telemetry and observability stacks such as Prometheus, InfluxDB, and Grafana
- Strong ownership mindset and accountability for complex deployments
- Ability to work effectively in a fast-paced environment

Location
SF Bay Area; Toronto, Canada

Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business.
Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open-source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Our simple, non-corporate work culture that respects individual beliefs.
Read our blog: Five Reasons to Join Cerebras in 2025. Apply today and become part of the forefront of groundbreaking advancements in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth, and support of those around them.

This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
DevOps Engineer
Data Science & Analytics
Apply
October 4, 2025
AI Engineer - FDE (Forward Deployed Engineer)
Databricks
5000+
0
0
-
0
France
Full-time
Remote
true
AI Engineer - FDE (Forward Deployed Engineer) (ALL LEVELS)
Req ID: CSQ326R220
Recruiter: Dina Hussain

Mission
The AI Forward Deployed Engineering (AI FDE) team is a highly specialised customer-facing AI team at Databricks. We deliver professional services engagements to help our customers build and productionize first-of-their-kind AI applications. We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams. We view our team as an ensemble: we look for individuals with strong, unique specialisations to improve the overall strength of the team. This team is the right fit for you if you love working with customers and teammates, and fueling your curiosity for the latest trends in GenAI, LLMOps, and ML more broadly. We welcome remote applicants located near our offices. The preferred locations (in priority order) are London (UK), Madrid (Spain), Paris (France), and Amsterdam (NL).
Reporting to: Senior Manager - AI FDE, EMEA

The impact you will have:
- Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI research to solve customer problems
- Own production rollouts of consumer- and internally-facing GenAI applications
- Serve as a trusted technical advisor to customers across a variety of domains
- Present at conferences such as Data + AI Summit, recognized as a thought leader internally and externally
- Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap

What we look for:
- Experience building GenAI applications, including RAG, multi-agent systems, Text2SQL, and fine-tuning, with tools such as HuggingFace, LangChain, and DSPy
- Expertise in deploying production-grade GenAI applications, including evaluation and optimization
- Extensive hands-on industry data science experience, leveraging common machine learning and data science tools, e.g. pandas, scikit-learn, PyTorch
- Experience building production-grade machine learning deployments on AWS, Azure, or GCP
- Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) or equivalent practical experience
- Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike
- Passion for collaboration, life-long learning, and driving business value through AI
- [Preferred] Experience using the Databricks Intelligence Platform and Apache Spark™ to process large-scale distributed datasets
- Fluency in English is required; we welcome candidates who also speak French, Spanish, Dutch, or German

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.
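The RAG pattern this role asks for reduces to two steps: retrieve relevant context, then assemble a grounded prompt. A toy sketch, with naive keyword-overlap retrieval standing in for embedding search and invented document contents; a real pipeline would send the prompt to a model API rather than print it:

```python
# Toy RAG: retrieve by keyword overlap, then build a grounded prompt.
# Document names and contents are hypothetical examples.
DOCS = {
    "pricing.md": "Databricks pricing is based on DBUs consumed per workload.",
    "spark.md": "Apache Spark jobs can be scheduled as Databricks workflows.",
}

def retrieve(question, docs, k=1):
    """Rank documents by the number of shared lowercase words."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: -len(q_words & set(docs[d].lower().split())))
    return ranked[:k]

def build_prompt(question, docs):
    # Instruct the model to answer only from the retrieved context,
    # which is what keeps RAG answers grounded in source documents.
    context = "\n".join(docs[d] for d in retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How is Databricks pricing calculated?", DOCS))
```

Frameworks such as LangChain or DSPy package the same retrieve-then-prompt flow behind higher-level abstractions, with embedding-based retrievers replacing the keyword overlap used here.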
Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.
Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering
DevOps Engineer
Data Science & Analytics
Apply
September 5, 2025
AI Engineer - FDE (Forward Deployed Engineer)
Databricks
5000+
0
0
-
0
Spain
Full-time
Remote
true
AI Engineer - FDE (Forward Deployed Engineer) (ALL LEVELS)
Req ID: CSQ326R220
Recruiter: Dina Hussain

Mission
The AI Forward Deployed Engineering (AI FDE) team is a highly specialised customer-facing AI team at Databricks. We deliver professional services engagements to help our customers build and productionize first-of-their-kind AI applications. We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams. We view our team as an ensemble: we look for individuals with strong, unique specialisations to improve the overall strength of the team. This team is the right fit for you if you love working with customers and teammates, and fueling your curiosity for the latest trends in GenAI, LLMOps, and ML more broadly. We welcome remote applicants located near our offices. The preferred locations (in priority order) are London (UK), Madrid (Spain), Paris (France), and Amsterdam (NL).
Reporting to: Senior Manager - AI FDE, EMEA The impact you will have: Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI research to solve customer problems Own production rollouts of consumer and internally facing GenAI applications Serve as a trusted technical advisor to customers across a variety of domains Present at conferences such as Data + AI Summit, recognized as a thought leader internally and externally Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap What we look for: Experience building GenAI applications, including RAG, multi-agent systems, Text2SQL, fine-tuning, etc., with tools such as HuggingFace, LangChain, and DSPy Expertise in deploying production-grade GenAI applications, including evaluation and optimization Several years of hands-on industry data science experience with common machine learning and data science tools, e.g. pandas, scikit-learn, PyTorch Experience building production-grade machine learning deployments on AWS, Azure, or GCP Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) or equivalent practical experience Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike Passion for collaboration, life-long learning, and driving business value through AI [Preferred] Experience using the Databricks Intelligence Platform and Apache Spark™ to process large-scale distributed datasets We require fluency in English and welcome candidates who also speak French, Spanish, Dutch, or German About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.
Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering
Data Scientist
Data Science & Analytics
Apply
September 5, 2025
AI Engineer & Researcher - Coding Agents
X AI
5000+
USD
0
0
-
0
United States
Full-time
Remote
false
About xAI xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All engineers and researchers are expected to have strong communication skills. They should be able to concisely and accurately share knowledge with their teammates.About the Team The Grok Code Team at xAI focuses on pushing the boundaries of software engineering, with AI front and center throughout the full developer loop. About the Role In this role you will: Build best-in-class coding agents and a best-in-class software engineering stack that is AI-first and AI-native. Work across the LLM stack (reasoning RL, product building, dataset generation, evals) to deliver the best product experience for users. Work closely with the research team to drive model development and feedback loops that optimize both model behavior and user experience and satisfaction Exceptional candidates may have: Experience in developer tooling with an AI-first mindset for a changing world. Exceptional engineering skills and the ability to iterate quickly on data processing and product features Strong understanding of large language models and how best to leverage them for AI-assisted software development Deep knowledge and “taste” in software developer tooling Location We hire engineers in Palo Alto. Our team usually works from the office five days a week but allows work-from-home days when required.
Candidates are expected to be located near Palo Alto or open to relocation. Interview Process After submitting your application, the team reviews your CV and statement of exceptional work. If your application passes this stage, you will be invited to a 15-minute interview (“phone interview”) during which a member of our team will ask some basic questions. If you clear the initial phone interview, you will enter the main process, which consists of four technical interviews: Coding assessment in a language of your choice. 2x post-training technical sessions: These sessions will test your ability to formulate, design, and solve concrete problems in training data for post-training. Meet the Team: Present your past exceptional work and your vision for xAI to a small audience. Our goal is to finish the main process within one week. All interviews will be conducted via Google Meet. Annual Salary Range $180,000 - $440,000 USD. xAI is an equal opportunity employer. California Consumer Privacy Act (CCPA) Notice
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering
Apply
July 26, 2025
Forward Deployed Engineer
Clarion
11-50
USD
225000
150000
-
225000
United States
Full-time
Remote
false
About ClarionAt Clarion, we're rebuilding how healthcare communicates in the age of AI. Today, clinics miss 30-40% of patient calls while staff drowns in administrative tasks. We believe AI agents should handle these workflows—scheduling, billing, prescription refills—so healthcare teams can focus on actual patient care.We're building the communication infrastructure that modern healthcare desperately needs. Our AI agents don't just answer calls; they complete entire workflows end-to-end, giving providers back their time and ensuring patients never go unheard. We've already handled hundreds of thousands of patient interactions across virtual care companies, health systems, and a $5B health insurance company.Founded by a Stanford/Harvard-trained physician who built Two Chairs and Ophelia, and an ex-Amazon Alexa engineer who led AI/ML at Salesforce, we uniquely understand both the clinical and technical challenges of transforming healthcare communication.We've raised $5.4M from Accel, Y Combinator, Sequoia (scout), and healthcare founders from Ophelia, Medallion, and Counsel Health. 
We're an in-person team based in New York, moving fast to ensure no patient call goes unanswered.Why Join Us?Early-stage impact with proven traction: Dozens of paying customers, rapidly growing revenue, and strong market signals—all in the high-impact environment of an early-stage startup.Tackling a massive healthcare challenge: Address a critical problem in communication that affects millions of patients and providers daily, using technology that truly transforms the patient experience.Technical founding team: Collaborate directly with experienced founders who understand healthcare operations and technical development, enabling faster decisions and better outcomes.Cutting-edge tech frontier: Harness the latest generative AI models—LLMs, voice synthesis, and real-time transcription—to build conversational agents that push healthcare innovation beyond current limits.In-person collaboration advantage: We've prioritized in-person work from day one for faster progress, stronger team bonds, and a cohesive culture.ResponsibilitiesEmbed with customer teams to deploy and optimize AI agents in live healthcare environments.Collaborate with engineering and product teams to customize solutions based on client feedback.Troubleshoot and iterate on healthcare integrations to ensure seamless performance.Drive customer success through onsite training, support, and relationship management.Contribute to product roadmap by identifying real-world challenges faced by customersRequirementsTechnical excellence with customer focus: 3+ years of engineering experience, with a proven track record of shipping production systems while working directly with customers.Healthcare systems experience: Familiarity with healthcare workflows, EHR integrations, or regulated environments (HIPAA compliance is a plus).Full-stack engineering skills: Proficiency in our tech stack (Node.js, TypeScript, React, PostgreSQL), with the ability to build reliable integrations and debug complex systems.Travel 
flexibility: Willingness to travel to customer sites 15-25% of the time for onsite deployments and relationship building.High agency and ownership: Comfortable in ambiguous situations, making independent decisions, and taking full responsibility for customer success.Interesting Technical ChallengesHealthcare-specific AI tuning: Configure voice agents to handle medical terminology, clinical context, and patient communication with empathy and accuracy.Complex workflow orchestration: Map and automate multi-step processes spanning scheduling, insurance verification, prescription management, and clinical documentation.Real-time system reliability: Maintain sub-second response times and 99.9% uptime for mission-critical healthcare communications.Secure, compliant integrations: Build robust connections with legacy systems while ensuring HIPAA compliance and protecting sensitive patient data.Scale personalization: Deploy AI agents that adapt to each organization's unique protocols, terminology, and style while maintaining consistent quality.What we offerDirect mentorship: Work closely with our founding team and experienced engineering leaders who will invest in your professional developmentMeaningful equity: Early employee stock options with significant ownership potentialComprehensive benefits: 100% covered healthcare, flexible time off, commuter benefits, daily team lunchesTeam culture: Quarterly retreats and monthly team events that build real connections in our close-knit NYC teamImpact at scale: Your work directly affects healthcare access for millions—every provider you bring onboard helps hundreds of patients get the care they needInterview ProcessAt each stage, we decide within 24 hours and update you shortly after:First Chat (30 min) [Virtual]: Discuss your relevant experience and technical strengths.System Design Screen (1 hr) [Virtual]: Exercise focused on designing LLM-based applications.Applied Coding Exercise (1 hr) [Virtual]: Hands-on challenge for building 
and optimizing AI agents in healthcare.Onsite (Half-day) [In-Person]: Visit our New York office to meet the team, explore our platform, and solve technical problems collaboratively.If you're ready to transform how millions of patients connect with their healthcare providers, we'd love to hear from you.
No items found.
Apply
July 20, 2025
AI Engineer
Hiya
201-500
0
0
-
0
United Kingdom
Full-time
Remote
true
About UsAt Hiya, we’re revolutionizing voice communication to make it more secure and productive. Our mission is to protect against spam and fraud, connect businesses with customers, and secure global telecommunications.Since 2015, when we introduced the first mobile caller ID and spam-blocking apps, we’ve been at the forefront of voice intelligence innovation. In 2016, we partnered with Samsung and AT&T to launch Hiya Protect, the first network-based spam-blocking solution. In 2019, we introduced Hiya Connect, a branded call SaaS platform that helps businesses reach more customers by phone.Today, our Voice Intelligence Platform powers over 500 million users worldwide. Leveraging adaptive AI and audio intelligence, it delivers smarter, safer, and more productive voice interactions across networks, apps, and devices. Our network & solution partners have grown to include British Telecom, EE, Virgin Media O2, Ericsson, Rogers, Bell Canada, MasMovil, Telenor, HP, KPMG, and more.About the PositionAI Engineers are responsible for developing and integrating AI solutions into Hiya’s products, focusing on rapid iteration, prompt engineering, and practical application. 
You'll fine-tune and optimize foundation models, craft sophisticated multi-agent systems, and invent novel solutions to power the next generation of voice intelligence.What You’ll DoIntegrate AI solutions into existing products and workflowsCollaborate with cross-functional teams to understand business requirements and translate them into technical solutionsConduct model evaluations, prompt engineering, and fine-tuning of large language models (LLMs)Implement and manage AI orchestration, including agent-based systemsParticipate in the design and implementation of AI-powered applications and interfacesHelp shape the technical direction and best practices for LLM application developmentStay at the forefront of AI research and incorporate state-of-the-art techniquesWhat You’ll Need to SucceedProficiency in programming languages such as Python, JavaScript, or TypeScriptExperience working with foundation model APIs and pre-trained open-source modelsStrong understanding of machine learning workflows, including model evaluations and LLM fine-tuningFamiliarity with AI orchestration, agent-based systems, and best practices (LangChain, AutoGen, n8n)Excellent problem-solving skills and the ability to work independently and collaborativelyStrong communication skills and the ability to translate technical concepts to non-technical stakeholdersThe person in this role must embody Hiya’s key values of Serving our customers, Doing rather than observing, Improving ourselves and our business, Owning and holding ourselves accountable for success, and Leading by showing up with a point of view, engaging in open discussion, listening respectfully to others' opinions, and committing to decisions.
You will have a fast start if you have:Experience with cloud platforms such as AWS, Google Cloud, or AzureKnowledge of Kubernetes and containerization technologiesExperience with data science and ML engineeringFamiliarity with retrieval-augmented generation (RAG)The requirements listed in the job description are guidelines. You don’t have to satisfy every requirement or meet every qualification listed. If your skills are transferable, we would still love to hear from you.More DetailsStart Date: ImmediatelyStatus: Full-time Type: HybridLocation: London, UKTravel Requirements: Department: EngineeringReports to: VP of Engineering Benefits25 holiday days plus bank holidaysOpt-in salary sacrifice pension scheme (company contributes the full 4% of basic salary)Paid parental leave Private medical insurance through Vitality (including dental & vision)Employer-paid life insurance at 2x base salaryDonation matching for a charity of your choice (up to $1,000/year)WFH equipment stipend $1,000/year in Professional Development fundsLunch provided on in-office daysThis position is based in London, UK. Office post code: W1F 8WEWe are building a team with a variety of perspectives, identities, and professional experiences. We evaluate great candidates through a business lens, and we strongly believe that diversity and unique perspectives make our company stronger, more dynamic, and a great place to build a career.Our team has won various awards over the last 4 years, from Built In Seattle and Seattle Business Week to #86 on the Deloitte Technology Fast 500 and Forbes #1 Startup Employer. Here at Hiya, we are a people-centric company focused on helping each and every one of our employees grow both personally and professionally. We feel that creating a team culture of support and empowerment to challenge the status quo results in an energized team that is continuously challenged and passionate about the work it is doing.
You'll love working here if you are looking for an innovative challenge that is disrupting an industry. Come join us!
AI Engineer
Software Engineering
Apply
June 11, 2025
AI Engineer
Videcode
1-10
-
United States
Full-time
Remote
false
ABOUT VIBECODEVibe Code is a high-growth U.S. startup reimagining how everyday people create and communicate — not through videos or images, but through custom software. We believe personalized apps are the next major creative medium, allowing people to share a new type of experience. To make this possible, we're building a mobile-first platform where anyone can describe an idea and instantly turn it into a working product/experience.We're not just targeting developers — we're empowering non-technical creators, entrepreneurs, and dreamers to build software the same way they might create a TikTok or design a Canva post.We’ve raised close to $10M in venture funding as a seed-stage company, putting us in the top 0.1% of startups, backed by some of the most iconic names in tech:💥 776 (Alexis Ohanian)🚀 Long Journey (Cyan Banister, Arielle Zuckerberg)🔮 Neo (Ali Partovi, Suzanne Xie)Logan Kilpatrick, Head @ Google DeepmindAnd many other amazing angels, founders, and investors...What we offer:💸 $150K–$400K USD base salary, depending on experience📈 Equity in a breakout company redefining software creation🌎 Full H1B/O1 visa sponsorship and relocation support👉 Note: This is an in-person role based in San Francisco. We believe the best products are built shoulder-to-shoulder and at high speed.
ABOUT THE ROLE
Want to be at the forefront of making easy-to-use consumer AI agents? This is the perfect role for you. You'll be testing, iterating, deploying, designing, and architecting our AI coding agent. This could involve writing evals, designing tools, modifying agentic architecture, fine-tuning models, developing models, etc.
AI Engineer
Software Engineering
Apply
June 7, 2025
Senior AI Engineer
Air Ops
1-10
USD
250000
200000
-
250000
United States
Full-time
Remote
true
About AirOpsToday thousands of leading brands and agencies use AirOps to win the battle for attention with content that both humans and agents love.We’re building the platform and profession that will empower a million marketers to become modern leaders — not spectators — as AI reshapes how brands reach their audiences.We’re backed by awesome investors, including Unusual Ventures, Wing VC, Founder Collective, XFund, Village Global, and Alt Capital, and we’re building a world-class team with in-person hubs in San Francisco, New York, and Montevideo, Uruguay.About the RoleWe're looking for a product-minded AI engineer to help us rapidly define and ship features that make all AirOps customers 10x content engineers.In this role, you'll drive the development of advanced AI systems, including our SEO Strategy Agent and Workflow Builder Copilot. You'll lead the creation of intelligent solutions that empower businesses to optimize their SEO strategy and streamline workflow creation with AI-powered assistance.Joining at a critical juncture, you'll collaborate closely with product management to shape our AI roadmap and technical architecture.
ResponsibilitiesDevelop and integrate LLM-powered features such as AI-assisted workflow automation, code generation, and content strategy.Optimize AI performance through prompt engineering, retrieval-augmented generation (RAG), and evaluation frameworks.Build and scale AI infrastructure, ensuring low-latency responses, caching, and cost-efficient model usage.Implement AI observability and safeguards, monitoring quality, security, and compliance.Collaborate with product and engineering teams to deliver intuitive, AI-driven user experiences.Stay ahead of AI advancements, continuously improving our AI-powered capabilities.Collaborate with early adopters to optimize model performance and usability.Qualifications3+ years of experience in machine learning engineering, AI/LLM integration, or applied NLPProven track record of building LLM-powered applicationsStrong experience with foundation models (GPT-4o, Claude, etc.) and advanced prompt engineeringExperience with embedding models (e.g., OpenAI Ada, Cohere, or local vector stores like pgvector, Weaviate, Pinecone)Deep understanding of retrieval-augmented generation (RAG) and contextual AI response optimizationFamiliarity with LangChain or similar frameworks for orchestrating LLM-powered applicationsStrong programming skills in Python (experience with AI frameworks like Hugging Face, LangChain, or OpenAI SDK)Experience in evaluating LLM performance, running A/B tests, and implementing feedback loops for AI refinementSolid understanding of caching, rate limiting, and cost optimization strategies for AI workloadsAbility to work cross-functionally with engineers, product managers, and end users to develop impactful AI solutionsOur Guiding PrinciplesExtreme OwnershipQualityCuriosity and PlayMake Our Customers HeroesRespectful CandorBenefitsEquity in a fast-growing startupCompetitive benefits package tailored to your locationFlexible time off policyGenerous parental leaveA fun-loving and (just a bit) nerdy team that loves to
move fast!
DevOps Engineer
Data Science & Analytics
Apply
June 4, 2025
Agent Architect
Hippocratic AI
201-500
0
0
-
0
United States
Full-time
Remote
false
About Us:Hippocratic AI is building a safety-focused large language model (LLM) for the healthcare industry. Our team, comprising ex-researchers from Microsoft, Meta, Nvidia, Apple, Stanford, Johns Hopkins, and HuggingFace, is reinventing the next generation of foundation model training and alignment to create AI-powered conversational agents for real-time patient-AI interactions.About the Role:We are looking for an Agent Architect to design, develop, and innovate the next generation of agentic systems that drive our healthcare-focused AI platform. This individual will serve as a central architect in shaping how our agents think, act, and interact across diverse clinical use cases.You will be responsible for selecting the right agentic paradigms—ranging from tool use, retrieval-augmented generation (RAG), and prompt engineering, to model training—and defining the underlying architecture for intelligent, safe, and responsive agents. You will work closely with our research, engineering, and evaluation teams to integrate cutting-edge techniques and continuously push the boundaries of agent capabilities.This role blends deep technical knowledge with strategic thinking and experimentation.
It’s ideal for those who thrive at the intersection of LLM system design, product innovation, and applied AI research.Responsibilities:Architect and design new AI agents across a variety of clinical and operational use casesEvaluate and select the optimal agentic paradigm for each scenario (e.g., tools, engines, prompting, RAG, model training)Choose the appropriate models from our internal model library for specific tasksCollaborate with the research team to fine-tune models and optimize agent behaviorDefine and iterate on evaluation protocols in partnership with the evaluation teamDevelop new agent patterns and workflows to enable novel capabilities and interactionsRapidly incorporate state-of-the-art techniques from the latest scientific literature and open-source developmentsTrack and integrate capabilities from emerging foundational models to improve system performance and scopeRequired Qualifications:5+ years in a technical field such as software engineering, machine learning, data science, or AI product developmentDeep understanding of agentic system design and language model behaviorsProficiency with Python and modern ML toolingExperience building and evaluating non-deterministic AI/ML systemsStrong analytical and problem-solving skills, with an experimental mindsetFamiliarity with LLM paradigms such as prompting, RAG, fine-tuning, and tool usePreferred Skills:Experience designing agentic workflows or orchestration frameworks for LLMsBackground in AI research or exposure to cutting-edge developments in NLPAbility to translate complex technical ideas into scalable architecturesInterest in healthcare applications, patient interaction design, and safety-critical systemsExcellent written and verbal communication skills, with the ability to clearly document design decisions and evaluationsCandidate Background:We recognize that agent architecture is a novel and rapidly evolving field. 
Ideal candidates may come from varied backgrounds such as applied ML engineering, AI product design, prompt engineering, or even NLP-focused research roles. If you are excited about designing intelligent systems from the ground up and want to shape how LLMs interact with the world in a safe and impactful way, we encourage you to apply.Why Join Our Team:Innovative Mission: We are developing a safe, healthcare-focused large language model (LLM) designed to revolutionize health outcomes on a global scale.Visionary Leadership: Hippocratic AI was co-founded by CEO Munjal Shah, alongside a group of physicians, hospital administrators, healthcare professionals, and artificial intelligence researchers from leading institutions, including El Camino Health, Johns Hopkins, Stanford, Microsoft, Google, and NVIDIA.Strategic Investors: We have raised a total of $278 million in funding, backed by top investors such as Andreessen Horowitz, General Catalyst, Kleiner Perkins, NVIDIA’s NVentures, Premji Invest, SV Angel, and six health systems.World-Class Team: Our team is composed of leading experts in healthcare and artificial intelligence, ensuring our technology is safe, effective, and capable of delivering meaningful improvements to healthcare delivery and outcomes.For more information, visit www.HippocraticAI.com.We value in-person teamwork and believe the best ideas happen together. Our team is expected to be in the office five days a week in Palo Alto, CA unless explicitly noted otherwise in the job description.
DevOps Engineer
Data Science & Analytics
Apply
May 29, 2025
No job found
There is no job in this category at the moment. Please try again later.