
Find AI Work That Works for You

Latest roles in AI and machine learning, reviewed by real humans for quality and clarity.


New AI Opportunities


Python / PyTorch Developer — Frontend Inference Compiler – Dubai

Cerebras Systems
United Arab Emirates
Remote: No
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications without the hassle of managing hundreds of GPUs or TPUs.

Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About the Role: Would you like to help create the fastest generative-model inference in the world? Join the Cerebras Inference Team to develop the unique combination of software and hardware that delivers the best inference characteristics on the market while running the largest models available. The Cerebras wafer-scale inference platform runs generative models with unprecedented speed thanks to a unique hardware architecture that provides the fastest access to local memory, ultra-fast interconnect, and a huge amount of available compute. You will be part of the team that works with the latest open and closed generative AI models to optimize them for the Cerebras inference platform. Your responsibilities will include working on the model representation, optimization, and compilation stack to produce the best results on Cerebras' current and future platforms.

Responsibilities:
- Analyze new generative AI models and understand their impact on the compilation stack
- Develop and maintain the frontend compiler infrastructure that ingests PyTorch models and produces an intermediate representation (IR)
- Extend and optimize PyTorch FX / TorchScript / TorchDynamo-based tooling for graph capture, transformation, and analysis
- Work with ML and compiler teams to ensure fidelity and performance parity with native PyTorch
- Collaborate with other teams throughout feature implementation
- Research new methods for model optimization to improve Cerebras inference

Qualifications:
- Degree in Engineering, Computer Science, or equivalent experience and evidence of exceptional ability
- Strong Python programming skills and in-depth experience with PyTorch internals (e.g., TorchScript, FX, or Dynamo)
- Solid understanding of computational graphs, tensor operations, and model tracing
- Experience building or extending compilers, interpreters, or ML graph optimization frameworks
- Familiarity with C++ extensions, LLVM, MLIR, or other IR-based compiler infrastructures
- Experience working with PyTorch and the Hugging Face Transformers library
- Knowledge of and experience working with Large Language Models (Transformer architecture variations, the generation cycle, etc.)
- Knowledge of an MLIR-based compilation stack is a plus

Preferred Qualifications:
- Prior experience contributing to PyTorch, TensorFlow XLA, TVM, ONNX, or similar compiler stacks
- Knowledge of hardware accelerators, quantization, or runtime scheduling
- Experience with multi-target inference compilation (e.g., CPU, GPU, custom ASICs)
- Understanding of numerical precision trade-offs and operator lowering
- Contributions to open-source ML compiler projects

Why Join Cerebras: People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU
- Publish and open-source their cutting-edge AI research
- Work on one of the fastest AI supercomputers in the world
- Enjoy job stability with startup vitality
- A simple, non-corporate work culture that respects individual beliefs

Read our blog: Five Reasons to Join Cerebras in 2025. Apply today and become part of the forefront of groundbreaking advancements in AI! Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth, and support of those around them. This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering

GTM Manager - Public Cloud Utilization Strategy

Lambda AI
United States
Full-time
Remote: No
Lambda, The Superintelligence Cloud, builds gigawatt-scale AI factories for training and inference. Lambda's mission is to make compute as ubiquitous as electricity and give every person access to artificial intelligence. One person, one GPU. If you'd like to build the world's best deep learning cloud, join us.

*Note: This position requires presence in our San Francisco or San Jose office location 4 days per week; Lambda's designated work-from-home day is currently Tuesday.

What You'll Do:
- Develop and own the end-to-end strategy for public cloud utilization, driving maximum ROI, revenue efficiency, and gross margin improvement.
- Lead execution of utilization initiatives by aligning cross-functional stakeholders across GTM, Product, Engineering, Marketing, Partnerships, and FP&A.
- Diagnose and address root causes of underutilization through targeted solutions, including pricing strategies, GTM motions, sales plays, partnerships, or product enhancements.
- Translate utilization strategies into actionable GTM plays and customer engagement plans to accelerate adoption and optimize cloud usage.
- Partner with senior leadership to prioritize initiatives, secure alignment, and ensure measurable business outcomes.
- Serve as the primary owner and voice of public cloud utilization strategy in executive forums, shaping business direction with data-driven insights.
- Prepare and deliver executive-level briefings on utilization performance, pipeline alignment, and financial impact.
- Collaborate with C-level leadership to ensure cloud utilization strategy aligns with company growth and operational goals.
- Build and maintain dashboards and reporting to provide visibility into public cloud utilization, availability, and revenue impact.
- Partner with capacity planning, cloud operations, and FP&A to analyze revenue, cost structures, margin opportunities, and forecast accuracy.
- Metrics owned: % public cloud utilization and associated revenue growth, revenue per unit of cloud capacity consumed, Net Revenue Retention (NRR) tied to cloud services, customer base performance (active, growth, churn), gross margin impact of utilization initiatives, and utilization forecast accuracy (planned vs. actual).

You:
- Have 7+ years of experience in GTM strategy, business operations, or cross-functional leadership roles within public cloud, SaaS, or infrastructure organizations.
- Possess a strong ability to influence senior executives and communicate actionable insights in board-level and C-suite settings.
- Demonstrate a proven track record of developing strategies, securing buy-in, and leading execution with measurable outcomes.
- Bring strong financial and analytical acumen, with the ability to link utilization metrics directly to growth, margin, and profitability.
- Have cross-functional leadership experience, collaborating effectively with GTM, Product, Engineering, and Finance teams.
- Are comfortable driving complex, high-impact programs across multiple stakeholder groups.
- (Preferred) Hold an MBA or equivalent experience in cloud business strategy and operations.
- (Plus) Bring product or program management experience in a cloud infrastructure context.

Nice to Have:
- Experience in the machine learning or computer hardware industry

Salary Range Information: The annual salary range for this position has been set based on market data and other factors. However, a salary higher or lower than this range may be appropriate for a candidate whose qualifications differ meaningfully from those listed in the job description.

About Lambda:
- Founded in 2012, ~400 employees (2025) and growing fast
- We offer generous cash & equity compensation
- Our investors include Andra Capital, SGW, Andrej Karpathy, ARK Invest, Fincadia Advisors, G Squared, In-Q-Tel (IQT), KHK & Partners, NVIDIA, Pegatron, Supermicro, Wistron, Wiwynn, US Innovative Technology, Gradient Ventures, Mercato Partners, SVB, 1517, and Crescent Cove
- We are experiencing extremely high demand for our systems, with quarter-over-quarter, year-over-year profitability
- Our research papers have been accepted into top machine learning and graphics conferences, including NeurIPS, ICCV, SIGGRAPH, and TOG
- Health, dental, and vision coverage for you and your dependents
- Wellness and commuter stipends for select roles
- 401k plan with 2% company match (USA employees)
- Flexible paid time off plan that we all actually use

A Final Note: You do not need to match all of the listed expectations to apply for this position. We are committed to building a team with a variety of backgrounds, experiences, and skills.

Equal Opportunity Employer: Lambda is an Equal Opportunity employer. Applicants are considered without regard to race, color, religion, creed, national origin, age, sex, gender, marital status, sexual orientation and identity, genetic information, veteran status, citizenship, or any other factors prohibited by local, state, or federal law.
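As a rough illustration of the metrics this role owns, the formulas below are plausible definitions, not Lambda's actual ones:

```python
def utilization(consumed_gpu_hours, available_gpu_hours):
    """% public cloud utilization: share of available capacity actually consumed."""
    return consumed_gpu_hours / available_gpu_hours

def revenue_per_unit(revenue, consumed_gpu_hours):
    """Revenue per unit of cloud capacity consumed."""
    return revenue / consumed_gpu_hours

def forecast_accuracy(planned, actual):
    """Utilization forecast accuracy: 1 minus the relative error of planned vs. actual."""
    return 1 - abs(planned - actual) / actual

# Hypothetical quarter: 7,200 of 10,000 GPU-hours sold for $18,000
print(round(utilization(7_200, 10_000), 2))         # 0.72
print(round(revenue_per_unit(18_000.0, 7_200), 2))  # 2.5
print(round(forecast_accuracy(0.75, 0.72), 3))      # 0.958
```

In practice these would be computed from capacity-planning and billing data rather than scalars, but the ratios are the same.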
Product Manager
Product & Operations

Big Data Architect

Databricks
Germany
Full-time
Remote: No
CSQ426R239. We have 2 open positions based in our Germany offices.

As a Big Data Solutions Architect (Resident Solutions Architect) in our Professional Services team, you will work with clients on short- to medium-term customer engagements, tackling their big data challenges using the Databricks Data Intelligence Platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data. RSAs are billable and know how to complete projects according to specification with excellent customer service. You will report to the regional Manager/Lead.

The impact you will have:
- Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-tos, and productionalizing customer use cases
- Work with engagement managers to scope a variety of professional services work with input from the customer
- Guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications
- Consult on architecture and design; bootstrap or implement customer projects, leading to the customer's successful understanding, evaluation, and adoption of Databricks
- Provide an escalated level of support for customer operational issues
- Work with the Databricks technical team, Project Manager, Architect, and customer team to ensure the technical components of the engagement are delivered to meet the customer's needs
- Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues

What we look for:
- Proficiency in data engineering, data platforms, and analytics, with a strong track record of successful projects and in-depth knowledge of industry best practices
- Comfortable writing code in either Python or Scala
- Enterprise data warehousing experience (Teradata, Synapse, Snowflake, or SAP)
- Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one
- Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals
- Familiarity with CI/CD for production deployments
- Working knowledge of MLOps
- Design and deployment of performant end-to-end data architectures
- Experience with technical project delivery, including managing scope and timelines
- Documentation and whiteboarding skills
- Experience working with clients and managing conflicts
- Ability to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects
- Willingness to travel up to 10%, more at peak times
- Databricks Certification

About Databricks: Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the Lakehouse, Apache Spark™, Delta Lake, and MLflow. To learn more, follow Databricks on Twitter, LinkedIn, and Facebook.

Benefits: At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion: At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance: If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Solutions Architect
Software Engineering
Data Engineer
Data Science & Analytics
DevOps Engineer
Data Science & Analytics

Applied Data Center Design Engineer

Cerebras Systems
Canada
Full-time
Remote: No
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications without the hassle of managing hundreds of GPUs or TPUs.

Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About the Role: As an Applied Data Center Design Engineer, you'll own the "last mile" of cluster architecture: transforming high-level design specifications into efficient, real-world deployment blueprints for servers, storage, networking, and cabling. You'll be responsible for customizing data center and rack-level designs based on specific cluster requirements, adapting layouts, power, and connectivity to optimize performance, scalability, and reliability. When real-world constraints like space, power, or supply chain limitations arise, you'll make smart trade-offs to deliver practical, deployable solutions. This role combines hands-on problem solving with automation and tooling; you'll also help design and build the frameworks that make each new deployment iteration faster, smarter, and more consistent across sites. It's a great opportunity for someone early in their career who enjoys working at the intersection of hardware, software, and operations, and wants to shape the foundation of large-scale compute infrastructure.

Responsibilities:
- Translate cluster and rack-level design specifications into deployable blueprints for servers, storage, networking, and cabling
- Customize rack-level designs to meet unique cluster requirements, ensuring power, thermal, and network connectivity are optimized for each deployment
- Collaborate with the operations team to validate and adapt designs based on site-specific constraints (e.g., power, cooling, space, logistics)
- Identify and implement automation and tooling to streamline BOM generation and design validation
- Participate in data center deployment reviews, ensuring alignment between design intent and implementation
- Support issue triage and root cause analysis for deployment-related or physical integration problems

Skills & Qualifications:
- Bachelor's or Master's degree in Computer Engineering, Electrical Engineering, Computer Science, or a related field, or equivalent practical experience
- 1–3 years of experience in infrastructure engineering, data center design, or systems deployment, creating rack elevations, bills of materials (BOMs), and port/cable maps
- Familiarity with server, networking, and storage hardware
- Basic proficiency in scripting or automation (e.g., Python, PowerShell, or Bash)
- Strong analytical and problem-solving skills with attention to detail
- Excellent communication and teamwork skills across multiple engineering disciplines

Why Join Cerebras: People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU
- Publish and open-source their cutting-edge AI research
- Work on one of the fastest AI supercomputers in the world
- Enjoy job stability with startup vitality
- A simple, non-corporate work culture that respects individual beliefs

Read our blog: Five Reasons to Join Cerebras in 2025. Apply today and become part of the forefront of groundbreaking advancements in AI! Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth, and support of those around them. This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
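The posting mentions automation to streamline BOM generation. A minimal sketch of that kind of tooling, with an entirely hypothetical rack spec and device names, might aggregate device counts and derive cable counts from a rack definition:

```python
from collections import Counter

# Hypothetical rack spec: (device_type, count, uplink_ports_per_device)
RACK_SPEC = [
    ("compute-node", 8, 2),
    ("storage-node", 2, 4),
    ("tor-switch", 1, 48),
]

def bill_of_materials(spec):
    """Aggregate a BOM: device counts, plus one cable per uplink port
    connecting every non-switch device to the top-of-rack switch."""
    bom = Counter()
    cables = 0
    for device, count, ports in spec:
        bom[device] += count
        if device != "tor-switch":
            cables += count * ports
    bom["qsfp-cable"] = cables
    return dict(bom)

print(bill_of_materials(RACK_SPEC))
# {'compute-node': 8, 'storage-node': 2, 'tor-switch': 1, 'qsfp-cable': 24}
```

A real version would also validate switch port capacity and emit port/cable maps per rack elevation, but the core idea is the same: derive the BOM from a single declarative spec instead of maintaining it by hand.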
DevOps Engineer
Data Science & Analytics

AI Engineer - Evaluations

Hippocratic AI
United States
Full-time
Remote: No
About Us: Hippocratic AI has developed a safety-focused Large Language Model (LLM) for healthcare. The company believes that a safe LLM can dramatically improve healthcare accessibility and health outcomes in the world by bringing deep healthcare expertise to every human. No other technology has the potential to have this level of global impact on health.

Why Join Our Team:
- Innovative Mission: We are developing a safe, healthcare-focused large language model (LLM) designed to revolutionize health outcomes on a global scale.
- Visionary Leadership: Hippocratic AI was co-founded by CEO Munjal Shah, alongside a group of physicians, hospital administrators, healthcare professionals, and artificial intelligence researchers from leading institutions, including El Camino Health, Johns Hopkins, Stanford, Microsoft, Google, and NVIDIA.
- Strategic Investors: We have raised a total of $278 million in funding, backed by top investors such as Andreessen Horowitz, General Catalyst, Kleiner Perkins, NVIDIA's NVentures, Premji Invest, SV Angel, and six health systems.
- World-Class Team: Our team is composed of leading experts in healthcare and artificial intelligence, ensuring our technology is safe, effective, and capable of delivering meaningful improvements to healthcare delivery and outcomes.

For more information, visit www.HippocraticAI.com. We value in-person teamwork and believe the best ideas happen together. Our team is expected to be in the office five days a week in Palo Alto, CA, unless explicitly noted otherwise in the job description.

About the Role: As an AI Engineer – Evaluations at Hippocratic AI, you'll define and build the systems that measure, validate, and improve the intelligence, safety, and empathy of our voice-based generative healthcare agents. Evaluation sits at the heart of our model improvement loop: it informs architecture choices, training priorities, and launch decisions for every patient-facing agent. You'll design LLM-based auto-evaluators, agent harnesses, and feedback pipelines that ensure each model interaction is clinically safe, contextually aware, and grounded in healthcare best practices. You'll collaborate closely with research, product, and clinical teams, working across the stack, from backend data pipelines and evaluation frameworks to tooling that surfaces insights for model iteration. Your work will directly shape how our agents behave, accelerating both their reliability and their real-world impact.

What You'll Do:
- Design and build evaluation frameworks and harnesses that measure the performance, safety, and trustworthiness of Hippocratic AI's generative voice agents.
- Prototype and deploy LLM-based evaluators to assess reasoning quality, empathy, factual correctness, and adherence to clinical safety standards.
- Build feedback pipelines that connect evaluation signals directly to model improvement and retraining loops.
- Partner with AI researchers and product teams to turn qualitative gaps into clear, defensible, and reproducible metrics.
- Develop reusable systems and tooling that enable contributions from across the company, steadily raising the quality bar for model behavior and user experience.

What You Bring

Must Have:
- 3+ years of software or ML engineering experience with a track record of shipping production systems end-to-end.
- Proficiency in Python and experience building data pipelines, evaluation frameworks, or ML infrastructure.
- Familiarity with LLM evaluation techniques, including prompt testing, multi-agent workflows, and tool-using systems.
- Understanding of deep learning fundamentals and how offline datasets, evaluation data, and experiments drive model reliability.
- Excellent communication skills with the ability to partner effectively across engineering, research, and clinical domains.
- Passion for safety, quality, and real-world impact in AI-driven healthcare products.

Nice-to-Have:
- Experience developing agent harnesses or simulation environments for model testing.
- Background in AI safety, healthcare QA, or human feedback evaluation (RLHF).
- Familiarity with reinforcement learning, retrieval-augmented evaluation, or long-context model testing.

If you're excited by the challenge of building trusted, production-grade evaluation systems that directly shape how AI behaves in the real world, we'd love to hear from you. Join Hippocratic AI and help define the standard for clinically safe, high-quality AI evaluation in healthcare.

***Be aware of recruitment scams impersonating Hippocratic AI. All recruiting communication will come from @hippocraticai.com email addresses. We will never request payment or sensitive personal information during the hiring process.
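The evaluation-harness pattern this role describes can be sketched in a few lines. This is an illustrative toy, not Hippocratic AI's system: the rubric dimensions, threshold, and keyword-based judge are all assumptions, where a real harness would call an LLM judge instead:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalResult:
    transcript_id: str
    scores: dict   # rubric dimension -> score in [0, 1]
    passed: bool

def evaluate(transcripts, judge: Callable[[str, str], float],
             rubric=("safety", "empathy", "accuracy"), threshold=0.8):
    """Score each transcript on every rubric dimension with a judge
    function, and fail a transcript if any dimension falls below threshold."""
    results = []
    for tid, text in transcripts.items():
        scores = {dim: judge(text, dim) for dim in rubric}
        results.append(EvalResult(tid, scores, min(scores.values()) >= threshold))
    return results

# Stub judge for illustration only; a production evaluator would prompt an LLM.
def keyword_judge(text, dim):
    return 1.0 if dim != "safety" or "consult your doctor" in text else 0.4

results = evaluate(
    {"t1": "Please consult your doctor before changing doses.",
     "t2": "Just double the dose."},
    keyword_judge,
)
print([(r.transcript_id, r.passed) for r in results])  # [('t1', True), ('t2', False)]
```

The useful property of this shape is that the judge is pluggable: the same harness can run a cheap keyword check in CI and an LLM-based evaluator offline, feeding both into the same pass/fail pipeline.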
Machine Learning Engineer
Data Science & Analytics

Senior Software Engineer, Agent Platform

Magical
Canada
Full-time
Remote: No
About Magical: Magical is an agentic automation platform bringing state-of-the-art AI to healthcare, delivering AI agents that actually work in production. We're building "AI employees" that automate the repetitive, time-consuming workflows that slow teams down. Our focus is healthcare, a $4 trillion industry buried in administrative complexity, where we automate claims processing, prior authorizations, and eligibility checks, enabling providers to focus on patient care.

Our Traction: The shift to agentic automation in healthcare is inevitable, and we're leading it:
• Scaled from $1M to $4M ARR in the first 6 months of our new agentic platform
• Accelerating growth with customers expanding into new workflows before renewal
• 7-day proofs of concept that demonstrate real value fast
• Self-healing automations with production-grade reliability

Unlike many AI companies making bold promises, we ship reliable solutions that deliver measurable results. We're backed by Greylock, Coatue, and Lightspeed with $41M raised. Our founder, Harpaul Sambhi, is a second-time founder who successfully sold his first company to LinkedIn.

About the Role: As a Senior Backend Engineer on the Agent Platform team, you'll build the foundational systems that power our AI agents, from agent orchestration and state management to model integration and evaluation pipelines. This is platform engineering: building the infrastructure that enables our team to ship reliable, production-grade agentic automation. You'll work at the cutting edge of multi-agent systems, designing how agents collaborate to solve complex healthcare workflows. You'll own critical backend services end-to-end and build the verification systems and evaluations that guarantee our agents do the right thing every time.

In this role, you will:
• Design and build core platform systems for agent orchestration, memory, and state management
• Develop robust infrastructure for integrating and evaluating the latest AI models at scale
• Build APIs and abstractions that enable teams to quickly ship new agent capabilities
• Own critical backend services from architecture through deployment and monitoring
• Work directly with the founding team to shape the architecture of our agentic platform

Your background looks something like this:
• 4+ years of backend engineering experience, with strong proficiency in TypeScript/Node
• Deep understanding of distributed systems, databases, and asynchronous architectures
• Strong bias towards action; you embody "show > tell," shipping systems and iterating quickly
• High degree of agency: you effectively prioritize, unblock yourself, and drive projects forward without much outside input
• You value taking ownership and responsibility for your systems and infrastructure
• You studied Computer Science (or a related field), or dropped out to build stuff
• Located in SF or willing to relocate

Even better:
• Prior experience building platform or infrastructure systems at scale
• Experience integrating LLMs or AI models into production backend systems
• Background in real-time systems, event-driven architectures, or workflow engines
• Track record of building foundational systems that enabled teams to move faster

We're building the best self-serve agentic automation platform for the healthcare industry, and we're just getting started. Come join us.
Software Engineer
Software Engineering

Senior/Staff Frontend Engineer

Cohere
Canada
Full-time
Remote: Yes
Who are we? Our mission is to scale intelligence to serve humanity. We're training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI. We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what's best for our customers. Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products. Join us on our mission and shape the future!

About North: North is Cohere's cutting-edge AI workspace platform, designed to revolutionize the way enterprises utilize AI. It offers a secure and customizable environment, allowing companies to deploy AI while maintaining control over sensitive data. North integrates seamlessly with existing workflows, providing a trusted platform that connects AI agents with workplace tools and applications.

As a Senior/Staff Frontend Engineer, you will:
- Build and ship features for North, our AI workspace platform
- Develop autonomous agents that talk to sensitive enterprise data
- Write and ship minimal code that runs in low-resource environments and has highly stringent deployment mechanisms
- Because security and privacy are paramount, sometimes reinvent the wheel: you won't always be able to use the most popular libraries or tooling
- Collaborate with researchers to productionize state-of-the-art models and techniques

You may be a good fit if:
- You have shipped (lots of) frontend code (React, TypeScript) in production
- You excel in fast-paced environments and can execute while priorities and objectives are a moving target
- You've worked in both large enterprises and startups
- You have strong coding abilities and are comfortable working across the stack
- You're able to read, understand, and even fix issues outside of the main code base
- You have built and deployed extremely performant client-side or server-side RAG/agentic applications to millions of users
- Bonus: You love art and have an eye for design and a creative mindset; a plus if you have a portfolio, GitHub, or other showcases of your projects

If some of the above doesn't line up perfectly with your experience, we still encourage you to apply! We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.

Full-Time Employees at Cohere enjoy these Perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work closely with a team on the cutting edge of AI research
🍽 Weekly lunch stipend, in-office lunches & snacks
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% parental leave top-up for up to 6 months
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, with offices in Toronto, New York, San Francisco, London, and Paris, as well as a co-working stipend
✈️ 6 weeks of vacation (30 working days!)
Software Engineer
Software Engineering

Principal Competitive Analyst (UK, FR, DE, NL)

Dataiku
France
Germany
Netherlands
Remote
true
Dataiku is The Universal AI Platform™, giving organizations control over their AI talent, processes, and technologies to unleash the creation of analytics, models, and agents. Providing no-, low-, and full-code capabilities, Dataiku meets teams where they are today, allowing them to begin building with AI using their existing skills and knowledge.

Are you passionate about the AI market and highly skilled in competitive analysis? Dataiku is seeking a strategic, analytical, and action-focused Principal Competitive Analyst to join our Competitive Intelligence team within Corporate Strategy. In this critical role, you'll help us develop a deep understanding of the competition, drive market differentiation, shape strategic battle plans, and create enterprise-wide impact.

Join us in shaping the future of enterprise AI! This is your opportunity to play a key role in one of the most transformative industries of the last 20+ years. Building on our established CI foundation, you'll have strategic cross-functional impact in the rapidly evolving Agentic AI market.

How you'll make an impact:
• Analyze competitive moves and market dynamics, transforming critical insights into actionable strategies that inform market scenarios.
• Present strategic analysis and actionable recommendations to senior leadership, Technology, Marketing, and Revenue teams.
• Support our go-to-market teams with competitive strategies that help us win more at scale. Directly support deals where needed.
• Relentlessly collect field intelligence, including through active monitoring of the market, participation in conferences, developing an extensive network of field connections, and our win-loss program.
• Work closely with Sales Enablement to scale the distribution of effective competitive training resources. Ensure all CI content meets high-quality standards.
• Independently lead research projects supporting our corporate strategy function.
What you'll need to be successful:
• Bachelor's degree in a technology-related field or equivalent experience
• 7+ years in competitive intelligence, management consulting, product marketing, or related strategic roles, with 5+ years focused specifically on competitive analysis
• Excellent analytical skills and a solid understanding of technology markets, with the ability to detect, analyze, and proactively respond to competitive threats that impact sales performance
• Proven track record of supporting sales success through competitive assets, deal support, and sales team coaching
• Excellent written and verbal communication skills, with the ability to present effectively to diverse audiences, including account executives, sales engineers, technical product managers, and executives
• Strong organizational skills and ability to manage multiple priorities in a dynamic environment
• Self-driven with a strong bias for action and a drive to enhance market positioning
• Willingness to travel up to 20% for conferences, sales events, and meetings in our New York and Paris offices

What will make you stand out:
• Passion for Competitive Intelligence: genuine enthusiasm for CI work, market dynamics, and using competitive insights to drive business impact.
• Deep experience in the data science and ML platform ecosystem and/or modern data stack technologies, plus knowledge of the emerging agentic AI market and competitive dynamics across the data-to-insights value chain.
• Tenacity and drive: a relentless commitment to help us differentiate in the market and win more deals.
• Strong team player: collaborates effectively with international and cross-functional teams, contributing to a positive and productive team environment.
• Intellectual curiosity and critical mindset: willingness to continuously learn about market trends, Dataiku, and competitor platforms, and ability to dig under the product marketing surface.
• Flexibility and agility: thrive in the fast-paced Enterprise AI market, quickly adjusting strategies and creating new competitive playbooks as needed.
• Innovative problem solver: bring creative solutions to complex challenges.

What are you waiting for? At Dataiku, you'll be part of a journey to shape the ever-evolving world of AI. We're not just building a product; we're crafting the future of AI. If you're ready to make a significant impact in a company that values innovation, collaboration, and your personal growth, we can't wait to welcome you to Dataiku! And if you'd like to learn even more about working here, you can visit our Dataiku LinkedIn page.

Our practices are rooted in the idea that everyone should be treated with dignity, decency and fairness. Dataiku also believes that a diverse identity is a source of strength and allows us to optimize across the many dimensions that are needed for our success. Therefore, we are proud to be an equal opportunity employer. All employment practices are based on business needs, without regard to race, ethnicity, gender identity or expression, sexual orientation, religion, age, neurodiversity, disability status, citizenship, veteran status or any other aspect which makes an individual unique or protected by laws and regulations in the locations where we operate. This applies to all policies and procedures related to recruitment and hiring, compensation, benefits, performance, promotion and termination and all other conditions and terms of employment. If you need assistance or an accommodation, please contact us at: reasonable-accommodations@dataiku.com

Protect yourself from fraudulent recruitment activity: Dataiku will never ask you for payment of any type during the interview or hiring process. Other than our video-conference application, Zoom, we will also never ask you to make purchases or download third-party applications during the process.
If you experience something out of the ordinary or suspect fraudulent activity, please review our page on identifying and reporting fraudulent activity here.
Product Manager
Product & Operations

Network Engineer - Cluster Architecture

Cerebras Systems
United States
Canada
Remote
false
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About The Role
As a Network Engineer on the Cluster Architecture team, you will work closely with vendors, internal networking teams, and industry peers to develop best-in-class interconnect architectures for current and future generations of Cerebras AI clusters. You will be responsible for developing proofs of concept of new network designs and features, enabling a resilient and reliable network for AI workloads. The role requires cross-functional collaboration and interaction with diverse hardware components (e.g., network devices and the Wafer-Scale Engine) as well as software at several layers of the stack, from host-side networking to cluster-level coordination. The role also requires an understanding of network monitoring systems and network debugging methodologies.

Responsibilities
• Design AI/ML and HPC clusters with a focus on network technology. Identify and address performance or efficiency bottlenecks, ensuring high resource utilization, low latency, and high-throughput communication.
• Stay current on emerging networking technologies: evaluate new hardware, fabrics, and protocols to improve cluster performance, scalability, and cost efficiency.
• Drive technical projects involving multiple teams and various software and hardware components coming together to realize advanced networking technologies. Bring effective communication skills.
• Collaborate with vendors and industry peers to drive the network hardware and feature roadmap.
• Pre-deployment readiness and port mapping: build and validate rack/row and patch-panel port maps and cabling plans, if required in rare cases.
• Bring-up and rare deployment debugging: assist with lab/staging validation, packet captures, link-level diagnostics, and synthetic traffic tests.

Skills & Qualifications
• Ph.D. in Computer Science or Electrical Engineering plus 10 years of industry experience, or Master's in CS or EE plus 15 years of industry experience.
• 5+ years of experience in large-scale network design in WAN or datacenter environments.
• Extensive experience debugging networking issues in large distributed-systems environments with multiple networking platforms and protocols.
• Experience managing and leading multi-phase, multi-team projects.
• Networking platforms such as Juniper, Arista, Cisco, and open-box architectures (SONiC, FBOSS).
• Networking protocols such as RoCE, BGP, DCQCN, PFC, and streaming telemetry.
• Familiarity with automation languages like Python or Go.
• Familiarity with network visibility and management systems.

Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
• Build a breakthrough AI platform beyond the constraints of the GPU.
• Publish and open-source their cutting-edge AI research.
• Work on one of the fastest AI supercomputers in the world.
• Enjoy job stability with startup vitality.
• Our simple, non-corporate work culture that respects individual beliefs.
Read our blog: Five Reasons to Join Cerebras in 2025. Apply today and become part of the forefront of groundbreaking advancements in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them. This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
DevOps Engineer
Data Science & Analytics

Senior Machine Learning Scientist, Advertising Incrementality

Appier
Taiwan
Full-time
Remote
false
About Appier
Appier is a software-as-a-service (SaaS) company that uses artificial intelligence (AI) to power business decision-making. Founded in 2012 with a vision of democratizing AI, Appier's mission is turning AI into ROI by making software intelligent. Appier now has 17 offices across APAC, Europe and the U.S., and is listed on the Tokyo Stock Exchange (ticker number: 4180). Visit www.appier.com for more information.

The Impact You'll Make at Appier
Appier is seeking a Senior Machine Learning Scientist to join our Advertising Cloud Optimization team, which leads the development of the core machine learning algorithms driving campaign efficiency and advertiser ROI. Our programmatic advertising platform operates at massive scale, handling millions of queries per second (QPS), all powered by our proprietary deep learning models for bidding, pricing, and personalized content delivery. In this role, you'll measure ad incrementality across different types of traffic, creatives, and users, and improve campaign efficiency on combinations with good incrementality through machine learning models and AI automation.

What You'll Work On
• Leverage scientific methods to improve ads measurement.
• Drive online experiments to continuously improve ads effectiveness.
• Own projects independently.
• Partner with product stakeholders and backend and frontend engineers to provide measurement solutions for our customers.

What We're Looking For
• Master's or PhD degree in Computer Science, Machine Learning, Statistics, Econometrics, or a related field.
• 3+ years of industry experience in causal inference, incrementality, marketing mix modeling, or other measurement solutions in digital-advertising-related fields, with an understanding of the assumptions and differences across methods.
• Proficiency in Python and SQL; experience with modern ML frameworks (PyTorch, TensorFlow, etc.).
• Strong ownership and collaboration skills.

#LI-AK1
Machine Learning Engineer
Data Science & Analytics
Data Scientist
Data Science & Analytics

Deputy Business Development Lead, GCC

Tenstorrent
United Arab Emirates
Remote
false
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch, and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.

At Tenstorrent, we build computers for AI, and for the developers shaping its future. Our high-performance RISC-V CPUs, modular chiplets, and scalable compute systems give developers full control at every layer of the stack, at any scale from single-node experimentation to data center-scale deployment. We believe in an open future. Our architecture and software are designed to be edited, forked, and owned. Our team of engineers, dreamers, and first-principles thinkers is redefining how hardware and software converge to accelerate innovation.

The Deputy BD Lead will support the amplification of this sovereign AI strategy by identifying opportunities, managing client relationships, and supporting go-to-market execution across the Gulf Cooperation Council (GCC) region. Reporting to the BD Lead for the GCC region, this position will work closely with both regional leadership and global counterparts to build an Open Future.

This role is hybrid, based out of Dubai, UAE or Abu Dhabi, UAE. We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.

Who You Are
• Passionate about sovereign AI solutions and open source software.
• Commercially minded with strong business development and client engagement skills.
• Thrive in fast-paced, high-growth environments and can balance strategic thinking with hands-on execution.
• A confident communicator, able to engage and collaborate effectively with senior stakeholders, partners, and customers.
• Results-oriented, organized, and capable of managing multiple opportunities simultaneously.

What We Need
• Experience in the AI hardware or software market, with an understanding of ecosystem dynamics and emerging trends, including enterprise, government, and strategic partner landscapes.
• Strong relationships with GCC-based clients, system integrators (SIs), and independent software vendors (ISVs).
• Proven track record in pipeline development, client relationship management, and closing commercial opportunities.
• Experience working with cross-border teams and managing multi-stakeholder initiatives.
• Fluency in English is required. Proficiency in Arabic is highly preferred.

What You Will Learn
• To throw out the old playbook and drive strategic growth in the rapidly evolving AI hardware and software ecosystem.
• Go-to-market strategies across the GCC, from start to finish.
• Negotiating strategic partnerships with enterprises, governments, and ecosystem partners.
• Building a regional network while collaborating with global BD, product, and technical teams.
• Deeper expertise in market analysis, sales operations, and deal execution.

Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer. This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E:1, and E:2).
These requirements apply to persons located in the U.S. and all countries outside the U.S.  As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency.  If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
Business Development
Marketing & Sales

Senior Software Engineer, Agent Platform

Magical
United States
Full-time
Remote
false
About Magical
Magical is an agentic automation platform bringing state-of-the-art AI to healthcare—delivering AI agents that actually work in production. We're building "AI employees" that automate the repetitive, time-consuming workflows that slow teams down. Our focus is healthcare—a $4 trillion industry buried in administrative complexity—where we automate claims processing, prior authorizations, and eligibility checks, enabling providers to focus on patient care.

Our Traction
The shift to agentic automation in healthcare is inevitable, and we're leading it:
• Scaled from $1M to $4M ARR in the first 6 months of our new agentic platform
• Accelerating growth, with customers expanding into new workflows before renewal
• 7-day proof-of-concepts that demonstrate real value fast
• Self-healing automations with production-grade reliability
Unlike many AI companies making bold promises, we ship reliable solutions that deliver measurable results. We're backed by Greylock, Coatue, and Lightspeed with $41M raised. Our founder, Harpaul Sambhi, is a second-time founder who successfully sold his first company to LinkedIn.

About the Role
As a Senior Backend Engineer on the Agent Platform team, you'll build the foundational systems that power our AI agents—from agent orchestration and state management to model integration and evaluation pipelines. This is platform engineering: building the infrastructure that enables our team to ship reliable, production-grade agentic automation. You'll work at the cutting edge of multi-agent systems, designing how agents collaborate to solve complex healthcare workflows. You'll own critical backend services end-to-end and build the verification systems and evaluations that guarantee our agents do the right thing every time.

In this role, you will
• Design and build core platform systems for agent orchestration, memory, and state management
• Develop robust infrastructure for integrating and evaluating the latest AI models at scale
• Build APIs and abstractions that enable teams to quickly ship new agent capabilities
• Own critical backend services from architecture through deployment and monitoring
• Work directly with the founding team to shape the architecture of our agentic platform

Your background looks something like this
• 4+ years of backend engineering experience, with strong proficiency in TypeScript/Node
• Deep understanding of distributed systems, databases, and asynchronous architectures
• Strong bias towards action and embody "show > tell"—you ship systems and iterate quickly
• High degree of agency: you effectively prioritize, unblock yourself, and drive projects forward without much outside input
• Value taking ownership and responsibility for your systems and infrastructure
• Studied Computer Science (or a related field), or dropped out to build stuff
• Located in SF or willing to relocate

Even better
• Prior experience building platform or infrastructure systems at scale
• Experience integrating LLMs or AI models into production backend systems
• Background in real-time systems, event-driven architectures, or workflow engines
• Track record of building foundational systems that enabled teams to move faster

We're building the best self-serve agentic automation platform for the healthcare industry—and we're just getting started. Come join us.
Software Engineer
Software Engineering

Member of Technical Staff - Image / Video Applications

Black Forest Labs
Germany
Full-time
Remote
false
At Black Forest Labs, we're on a mission to advance the state of the art in generative deep learning for media, building powerful, creative, and open models that push what's possible. Born from foundational research, we continuously create advanced infrastructure to transform ideas into images and videos. Our team pioneered Latent Diffusion, Stable Diffusion, and FLUX.1 – milestones in the evolution of generative AI. Today, these foundations power millions of creations worldwide, from individual artists to enterprise applications.

We are looking for an Applied Researcher to develop precise control mechanisms for our image and video generation models, enabling users to direct outputs through practical controls like color palettes, transparency channels, and other production-ready features.

Role and Responsibilities
• Training large-scale diffusion (transformer) models with advanced control mechanisms (hex color control, transparency generation, custom aspect ratios, etc.)
• Developing conditioning mechanisms for practical production requirements in image and video generation
• Rigorously ablating design choices for applied controls and communicating results and decisions with the broader team
• Reasoning about the speed and quality trade-offs of control architectures for real-world applications

What we look for:
• Experience training large-scale diffusion models for image and video data
• Fine-tuning diffusion models for image and video applications, such as image and video upscalers and inpainting/outpainting models
• Deep understanding of how to effectively evaluate image and video generative models
• Strong proficiency in PyTorch, transformer models, and other NN architectures
• Deep understanding of training techniques such as FSDP, low-precision training, and model parallelism

Nice to have:
• Experience writing forward and backward Triton kernels and ensuring their correctness while accounting for floating-point error
• Profiling, debugging, and optimizing single- and multi-GPU operations using tools such as Nsight or stack trace viewers
Machine Learning Engineer
Data Science & Analytics
Computer Vision Engineer
Software Engineering

Operations Associate, Helix

Figure AI
United States
Full-time
Remote
false
Figure is an AI robotics company developing autonomous general-purpose humanoid robots. The goal of the company is to ship humanoid robots with human-level intelligence. Its robots are engineered to perform a variety of tasks in the home and commercial markets. Figure is headquartered in San Jose, CA.

We are looking for a Home Operations Associate for the Helix Fleet Ops team. The team is responsible for orchestrating Figure's large-scale home data collection and deployment operations.

Responsibilities:
You will help drive successful in-home robot data collection operations by supporting project prioritization and execution under the guidance of the Helix Fleet Operations Manager. Examples include:
• Supporting the execution of offsite deployments — i.e., all activities related to the successful deployment of humanoids and collection operations in residential environments (logistics, homeowner interface, etc.)
• Leading cross-functional planning and coordination across Engineering and Pilot teams to achieve the critical goals of Helix data collection and model development
• Designing and tracking key data collection metrics, performing analyses, and developing tools, processes, and dashboards to improve performance
• Defining and refining data collection methodologies to meet the evolving needs of the Helix model

Requirements:
• 2-4 years in operations strategy, consulting, startup project management, or similar roles
• Excellent problem-solving and decision-making abilities
• Excellent communication skills, especially using data
• Able to work well under pressure while managing competing, time-sensitive demands
• Proficiency in Google Workspace (e.g., Sheets) and operational management tools
• Low ego, team player with a can-do attitude

Bonus Qualifications:
• Experience with robotics or AI data collection
• A passion for helping scale the deployment of learning humanoid robots
The US base salary range for this full-time position is between $90,000 - $120,000 annually. The pay offered for this position may vary based on several individual factors, including job-related knowledge, skills, and experience. The total compensation package may also include additional components/benefits depending on the specific role. This information will be shared if an employment offer is extended. 
Project Manager
Product & Operations

Forward Deployed Engineer (Lead)

Reflection
United States
Full-time
Remote
false
Our Mission
Reflection's mission is to build open superintelligence and make it accessible to all. We're developing open-weight models for individuals, agents, enterprises, and even nation states. Our team of AI researchers and company builders come from DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic and beyond.

Role Overview
We're looking for a founding member of Reflection's Applied AI team, responsible for building our Forward Deployed Engineering function. This team plays a critical role in bridging our cutting-edge AI research with real-world enterprise deployments. As a founding Forward Deployed Engineer, you will own the end-to-end technical strategy, execution, and delivery of complex agentic applications, from early pre-sales discovery through production deployment.

Key Responsibilities
• Partner with Deployment Strategists and Sales to understand enterprise customer needs, architect solutions, and develop transformative agentic applications.
• Build agentic solutions leveraging state-of-the-art models, orchestrating complex LLM workflows, integrating with enterprise infrastructure, and deploying robust production systems.
• Collaborate with research teams to adapt and fine-tune models for customer-specific needs, contributing to our internal codebase for inference, fine-tuning, and evaluation.
• Lead end-to-end deployment across hybrid environments (public cloud, VPC, or on-premises), ensuring scalability, performance, and reliability.
• Shape the Forward Deployed Engineering organization by defining playbooks, processes, best practices, and technical foundations to support the team's growth.

Qualifications
• Strong software engineering background with experience shipping production-grade systems (Python, TypeScript)
• Proven track record of deploying enterprise software in cloud or hybrid environments using modern DevOps practices (Docker, Kubernetes, and CI/CD)
• Deep understanding of machine learning concepts and hands-on experience with modern AI stacks, including vector databases, RAG pipelines, agent orchestration, evaluations, and fine-tuning
• 6+ years of software engineering experience, including 2+ years in a technical leadership capacity delivering AI-driven enterprise solutions (e.g., Lead Forward Deployed Engineer, Tech Lead, or Engineering Manager)
• Demonstrated ability and interest to work in customer-facing environments, understanding user needs, architecting solutions for real business problems, and delivering tangible outcomes
• Self-starter with high agency and ownership, excelling in fast-paced startup environments where playbooks are still being written

What We Offer
We believe that to build superintelligence that is truly open, you need to start at the foundation. Joining Reflection means building from the ground up as part of a small, talent-dense team. You will help define our future as a company, and help define the frontier of open foundational models. We want you to do the most impactful work of your career with the confidence that you and the people you care about most are supported.
• Top-tier compensation: salary and equity structured to recognize and retain the best talent globally.
• Health & wellness: comprehensive medical, dental, vision, life, and disability insurance.
• Life & family: fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.
• Benefits & balance: paid time off when you need it, relocation support, and more perks that optimize your time.
• Opportunities to connect with teammates: lunch and dinner are provided daily. We have regular off-sites and team celebrations.
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering
DevOps Engineer
Data Science & Analytics

Engineer II/Senior Engineer, Systems Test (R3255)

Shield AI
United States
Full-time
Remote
false
The Hivemind Solutions Test Department is seeking an Engineer II or Senior Test Engineer (Software & Simulation Focus) to join our DC-based team. Shield AI's mission is to protect service members and civilians, and the Hivemind Solutions Test Department makes this a reality by validating the reliability, safety, and performance of our world-class autonomy software. You'll lead simulation-driven testing, automation, and software quality efforts for Hivemind, building and executing test frameworks that ensure our autonomy software performs reliably across simulated and real-world mission environments. Shield AI is growing quickly, and we're looking for someone who thrives in fast-paced environments, embraces learning, and brings a high-ownership, mission-driven mindset. This is a chance to have a real impact—advancing autonomy that protects lives—while working with a collaborative, high-performing team.

What You'll Do:
• Develop automated regression, integration, and system-level tests to ensure Hivemind software quality and stability across releases
• Identify, reproduce, and diagnose software issues, collaborating closely with development teams to drive fixes and improvements
• Create and execute test plans and documentation for autonomy software in simulated environments, ensuring mission and safety requirements are met
• Design, implement, and maintain simulation-based test frameworks to validate Hivemind autonomy behavior and performance
• Build tools and utilities in Python to automate testing, data processing, and analysis workflows
• Develop test harnesses and data validation tools for Software-in-the-Loop (SIL), Hardware-in-the-Loop (HIL), and Vehicle-in-the-Loop (VIL) simulation environments
• Contribute to the definition of testing methodologies, metrics, and performance benchmarks for autonomy software validation
• Support the scaling and reliability of simulation infrastructure used for large-scale software validation
• Maintain detailed and accurate test documentation, ensuring traceability and knowledge sharing across teams
• Work with developers to integrate test coverage directly into the development pipeline (CI/CD)

Required Qualifications:
• Typically requires a minimum of 3-5 years of related experience with a Bachelor's degree; or 2-4 years and a Master's degree; or 2 years with a PhD; or equivalent work experience
• Proficiency in Python
• Experience in robotics, small UAS, or related hands-on engineering projects
• Strong analytical, problem-solving, and debugging skills
• Passion for intelligent aircraft systems
• Excellent communication and collaboration skills, with the ability to work effectively with cross-functional software, hardware, and mechanical engineering teams
• Adaptability and a willingness to learn new technologies and methodologies quickly in a fast-paced environment
• The ability to obtain and maintain a SECRET clearance. US citizenship is required, as only US citizens are eligible for a security clearance.

Preferred Qualifications:
• Experience testing autonomous systems, robotics software, or flight autonomy algorithms
• Familiarity with C++ and software/hardware integration concepts
• Experience with HIL or VIL test environments
• Exposure to machine learning or AI-driven autonomy systems
• Background in simulation infrastructure design, distributed testing, or test data management
• Quality Assurance (QA) experience, including process definition, metrics, and validation tracking
• Experience with aerospace, defense, or mission-critical software
• Active SECRET clearance preferred
Software Engineer
Software Engineering
DevOps Engineer
Data Science & Analytics
Robotics Engineer
Software Engineering

Senior/Staff Full-Stack Engineer

Cohere
Canada
Full-time
Remote: true
Who are we?
Our mission is to scale intelligence to serve humanity. We're training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.

We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what's best for our customers.

Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.

Join us on our mission and shape the future!

About North:
North is Cohere's cutting-edge AI workspace platform, designed to revolutionize the way enterprises utilize AI. It offers a secure and customizable environment, allowing companies to deploy AI while maintaining control over sensitive data. North integrates seamlessly with existing workflows, providing a trusted platform that connects AI agents with workplace tools and applications.

As a Senior Full-Stack Engineer, you will:
- Build and ship features for North, our AI workspace platform
- Develop autonomous agents that talk to sensitive enterprise data
- Write and ship minimal code that runs in low-resource environments and has highly stringent deployment mechanisms
- Sometimes re-invent the wheel: as security and privacy are paramount, you won't always be able to use the most popular libraries or tooling
- Collaborate with researchers to productionize state-of-the-art models and techniques

You may be a good fit if you:
- Have shipped (lots of) full-stack code (Python and React) in production
- Excel in fast-paced environments and can execute while priorities and objectives are a moving target
- Have worked in both large enterprises and startups
- Have strong coding abilities and are comfortable working across the stack
- Can read, understand, and even fix issues outside of the main code base
- Have built and deployed extremely performant client-side or server-side RAG/agentic applications for millions of users
- Bonus: love art and have an eye for design and a creative mindset. A portfolio, GitHub, or other showcase of your projects is a plus.

If some of the above doesn't line up perfectly with your experience, we still encourage you to apply! We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.

Full-Time Employees at Cohere enjoy these Perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work closely with a team on the cutting edge of AI research
🍽 Weekly lunch stipend, in-office lunches & snacks
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% Parental Leave top-up for up to 6 months
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, offices in Toronto, New York, San Francisco, London and Paris, as well as a co-working stipend
✈️ 6 weeks of vacation (30 working days!)
Software Engineer
Software Engineering

Full Stack Software Engineer, Growth

OpenAI
United States
Full-time
Remote: false
About the Team
The ChatGPT team works across research, engineering, product, and design to bring OpenAI's technology to the world. We seek to learn from deployment and broadly distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely. We aim to make our innovative tools globally accessible, transcending geographic, economic, or platform barriers. Our commitment is to facilitate the use of AI to enhance lives, fostered by rigorous insights into how people use our products.

About the Role
We are looking for an experienced full-stack engineer to join our new ChatGPT Growth team to spearhead high-impact projects that grow the base of ChatGPT users and Plus subscribers. Your role will include projects such as optimizing account access, notifications, SEO, fostering value discovery, and virality. As we are in the nascent stages of growth at OpenAI, we will rely on you to discover pivotal areas where strategic bets or incremental efforts can catalyze significant impact. We value engineers who are impact-driven, autonomous, adept at discerning crucial insights from experimental results, and have a strong intuition for how to remove barriers to unlocking the magic of ChatGPT.

In this role, you will:
- Drive long-term growth of ChatGPT through a combination of data analysis, product ideation, and experimentation to optimize product experiences
- Plan and deploy backend APIs necessary to power these product experiences
- Execute on projects by working closely with research, product, design, data science, and other members of product teams to land impact on product goals
- Create a diverse and inclusive culture that makes all feel welcome while enabling radical candor and the challenging of group-think

You might thrive in this role if you:
- Have shipped features on web that optimize the user funnel, such as landing pages, product pages, purchase flows, search flows, etc.
- Are highly analytical and have experience designing and implementing A/B tests, with a scientific approach to data-based experiments. You know exactly what business metrics and KPIs to track, and how.
- Have a voracious and intrinsic desire to learn and fill in missing skills, and an equally strong talent for sharing that information clearly and concisely with others
- Are comfortable with ambiguity and rapidly changing conditions. You view changes as an opportunity to add structure and order when necessary.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Software Engineer
Software Engineering

AI Engineer

Hippocratic AI
United States
Full-time
Remote: false
About Us
Hippocratic AI has developed a safety-focused Large Language Model (LLM) for healthcare. The company believes that a safe LLM can dramatically improve healthcare accessibility and health outcomes in the world by bringing deep healthcare expertise to every human. No other technology has the potential to have this level of global impact on health.

Why Join Our Team
- Innovative Mission: We are developing a safe, healthcare-focused large language model (LLM) designed to revolutionize health outcomes on a global scale.
- Visionary Leadership: Hippocratic AI was co-founded by CEO Munjal Shah, alongside a group of physicians, hospital administrators, healthcare professionals, and artificial intelligence researchers from leading institutions, including El Camino Health, Johns Hopkins, Stanford, Microsoft, Google, and NVIDIA.
- Strategic Investors: We have raised a total of $278 million in funding, backed by top investors such as Andreessen Horowitz, General Catalyst, Kleiner Perkins, NVIDIA's NVentures, Premji Invest, SV Angel, and six health systems.
- World-Class Team: Our team is composed of leading experts in healthcare and artificial intelligence, ensuring our technology is safe, effective, and capable of delivering meaningful improvements to healthcare delivery and outcomes.

For more information, visit www.HippocraticAI.com.

We value in-person teamwork and believe the best ideas happen together. Our team is expected to be in the office five days a week in Palo Alto, CA, unless explicitly noted otherwise in the job description.

About the Role
As an AI Engineer at Hippocratic AI, you'll play a pivotal role in shaping the future of voice-based generative AI in healthcare. You'll design and build the intelligent systems that power our clinically safe healthcare agents, working at the intersection of large language models, real-time voice, and human-centered product design. This is a deeply hands-on and cross-functional role, partnering closely with AI researchers, product managers, and clinical experts to bring advanced language and speech models into production. From building scalable RAG and multi-agent pipelines to conversational interactions, your work will directly influence how patients and providers safely interact with generative AI at scale.

We're looking for experienced engineers who are passionate about AI in the real world: people who love taking cutting-edge research, turning it into robust products, and advancing the frontier of what's possible in safe, agentic healthcare systems.

What You'll Do:
- Design, build, and optimize production-grade AI pipelines that power our voice-based generative healthcare agents, from retrieval-augmented generation (RAG) to multi-step reasoning systems
- Collaborate cross-functionally with product, clinical, and engineering teams to translate healthcare workflows into safe, scalable, and human-centered AI experiences
- Prototype and deploy zero-to-one features using state-of-the-art LLMs, retrieval systems, and streaming architectures, balancing innovation with reliability
- Develop and refine AI-native workflows that support real-time, conversational, and long-running interactions across diverse healthcare contexts
- Drive continuous improvement in model evaluation, safety testing, and observability, ensuring every agent interaction meets clinical safety standards

What You Bring
Must Have:
- 3+ years of professional experience in software, ML, or AI engineering
- Proven track record building and shipping AI- or ML-powered products in production environments
- Strong programming skills in Python, with experience in distributed systems, APIs, and data pipelines
- Deep understanding of prompt engineering, vector databases, retrieval systems (RAG), and voice agents, or a willingness to learn rapidly
- Experience with cloud environments (AWS/GCP/Azure) and modern DevOps practices (Terraform, CI/CD, monitoring)
- Excellent communication, cross-functional collaboration, and an ability to move fast in high-impact domains

Nice-to-Have:
- Experience building or deploying LLM-based or multi-agent systems at scale
- Hands-on work with speech recognition, text-to-speech, or streaming architectures for real-time AI experiences
- Prior exposure to healthcare, safety-critical domains, or regulated product development

If you're an AI engineer who thrives at the intersection of cutting-edge research and real-world product impact, we'd love to hear from you. Join Hippocratic AI and help shape the future of clinically safe, voice-enabled AI systems that are already transforming patient care at scale.

***Be aware of recruitment scams impersonating Hippocratic AI. All recruiting communication will come from @hippocraticai.com email addresses. We will never request payment or sensitive personal information during the hiring process. If anything
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering

Technical Program Manager

Hippocratic AI
United States
Full-time
Remote: false
About Us
Hippocratic AI is developing the first safety-focused Large Language Model (LLM) for healthcare. Our mission is to dramatically improve healthcare accessibility and outcomes by bringing deep healthcare expertise to every person. No other technology has the potential for this level of global impact on health.

Why Join Our Team
- Innovative mission: We are creating a safe, healthcare-focused LLM that can transform health outcomes on a global scale.
- Visionary leadership: Hippocratic AI was co-founded by CEO Munjal Shah alongside physicians, hospital administrators, healthcare professionals, and AI researchers from top institutions including El Camino Health, Johns Hopkins, Washington University in St. Louis, Stanford, Google, Meta, Microsoft, and NVIDIA.
- Strategic investors: We have raised a total of $278 million in funding, backed by top investors such as Andreessen Horowitz, General Catalyst, Kleiner Perkins, NVIDIA's NVentures, Premji Invest, SV Angel, and six health systems.
- Team and expertise: We are working with top experts in healthcare and artificial intelligence to ensure the safety and efficacy of our technology.

For more information, visit www.HippocraticAI.com.

We value in-person teamwork and believe the best ideas happen together. Our team is expected to be in the office five days a week in Palo Alto, CA, unless explicitly noted otherwise in the job description.

About the Role
As a Technical Program Manager (TPM) at Hippocratic AI, you'll play a pivotal role in bringing cutting-edge AI research into production and launching new products and markets for our generative, voice-based healthcare agents. You'll work at the intersection of AI research, engineering, and product, turning experimental systems into clinically safe, scalable, and regulated products used by healthcare professionals and patients around the world. This means driving complex, cross-functional programs, from early prototyping to product launch, while ensuring reliability, compliance, and real-world impact.

This role is ideal for someone who loves translating frontier AI research into production systems, thrives in ambiguity, and enjoys building the connective tissue between teams to deliver ambitious outcomes quickly and safely.

What You'll Do:
- Drive productionization of AI research: translate emerging LLM and agentic system prototypes into deployable, clinically safe products
- Lead cross-functional programs spanning research, engineering, product, and clinical operations to bring new AI capabilities to market
- Define and own program plans, success metrics, and execution timelines across multiple product areas and markets
- Build lightweight processes and tools (using AI, automation, or no-code platforms) to accelerate research-to-product velocity
- Ensure launch excellence: oversee validation, safety reviews, go-to-market readiness, and feedback loops for post-launch learning

What You Bring
Must Have:
- 3+ years of experience in technical program management, product engineering, or applied AI roles
- Proven track record shipping AI- or ML-powered products from prototype to production, ideally in a regulated or safety-critical domain
- Strong technical background and the ability to engage deeply with AI researchers, ML engineers, and infrastructure teams
- Exceptional program design, prioritization, and communication skills, with the ability to align diverse stakeholders
- Comfort operating in fast-paced, ambiguous environments where both rigor and speed matter

Nice-to-Have:
- Experience with LLM, RAG, or multi-agent systems and an understanding of model evaluation or deployment workflows
- Familiarity with healthcare, compliance, or enterprise SaaS launches
- Experience using no-code / low-code platforms (Airtable, Retool, Zapier) or AI automation tools to streamline operations
- Prior work managing international product launches or new market expansion for AI technologies

If you're excited about turning AI breakthroughs into real-world products that improve patient care and safety, this is your chance to lead at the frontier. Join Hippocratic AI and help bring clinically safe generative AI to healthcare at scale.
Program Manager
Software Engineering
DevOps Engineer
Data Science & Analytics