
AI Software Engineer Jobs

The latest AI Software Engineer roles, reviewed by real humans for quality and clarity.

All Jobs

Showing 79 jobs

Software Engineering Intern

GPTZero
Locations: Canada, United States
Type: Intern
Remote: No
About GPTZero
GPTZero is on a mission to restore trust and transparency on the internet. As the leading AI detection platform, we empower educators, students, journalists, marketers, and writers to navigate the evolving landscape of AI-generated content. With millions of users and institutions relying on us, we're building a category-defining company at the intersection of AI and information integrity. Our team comes from high-performing engineering cultures, including Uber, Meta, Amazon, Affirm, and leading AI research labs, including Princeton, Caltech, MILA, and Vector.

What we're looking for
In this role, you'll build the next-gen platform to verify the origin, quality, and factuality of the world's information. The ideal intern is a voracious learner with a history of building applications from the ground up and the ability to break down complex challenges. You'll work on a fast-paced team of passionate builders, partnering closely with our ML and design teams to create industry-defining software that has attracted over 2M users globally. Past intern projects have been the focus of demos to VCs and state-level policy leaders.

Responsibilities
- Build and deploy high-impact, intuitive, AI-driven web apps in React, Node, and Tailwind
- Develop top-requested features from our users on our dashboard, Chrome extensions, and API
- Use product analytics to make data-driven product decisions
- Collaborate with our ML, design, and business teams to develop new product initiatives
- Wear multiple hats and work across the product stack

Qualifications
- Comfortable building end-to-end experiences from backend to CSS
- 2+ years of experience with modern web frameworks (Express, Next.js, TypeScript, and React)
- 1+ years of experience with databases (PostgreSQL, AWS RDS)
- Highly motivated to make a positive societal impact
- Able to work with 5 hours of overlap with Eastern Time

Bonus
- Strong open-source portfolio
- Experience working in an early-stage startup environment
- Experience as a peer-edited writer

Who you'll be joining
Our Team
You will be working directly with:
- Alex (our CTO): R&D at Uber's self-driving division and Facebook, 3 patents in ML
- Edmond (our full-stack lead): Obama Foundation Scholar at Columbia University and Editor-in-Chief of The Andela Way
- Olivia (our head of design): translating your research into outputs for millions of users
- Edward (our CEO; ex-Bellingcat, Microsoft, BBC investigative journalism): crafting the messages we send to our community and shaping the GPTZero brand

Additionally, you will be working with an experienced (e.g. ex-Google, Meta, Microsoft, Bloomberg ML, Uber, Vector, MILA), diverse (e.g. an engineering team with both Y Combinator and Obama scholarship recipients, and a designer with art featured in the Met), and driven (e.g. an operator who has scaled a company to 100M+ revenue and is committed to doing it again) group of individuals, described by one investor as one of the strongest founding teams they have seen in their career. Together, we are committed to making a permanent impact on the future of writing, and on humanity.

Our Angels and Advisors
- Tom Glocer (legendary Reuters CEO)
- Mark Thompson (legendary NYT CEO and current CNN chief executive)
- Jack Altman (CEO of Lattice, brother of Sam Altman)
- Karthik Narasimhan (Princeton NLP professor, co-author of OpenAI's original GPT paper)
- Emad Mostaque (CEO of Stability AI)
- Doug Herrington (CEO of Worldwide Amazon Stores)
- Brad Smith (President of Microsoft)
- Tripp Jones (Partner at Uncork Capital)
- Ali Partovi (co-founder of Code.org, early investor in Dropbox and Airbnb)
- Russ Heddleston (CEO of DocSend)
- Alex Mashrabov (Snapchat, Director of AI)
- Faizan Mehdi (Affinity, Director of Demand Generation)

Our Perks
🏥 Health, dental, and mental health benefits
💻 Hybrid work in Toronto and NYC offices
🚀 Competitive salary
🏝 Flexible PTO
🎉 Regular company retreats
💡 Mentorship opportunities with our world-class team, advisors, and investors
🙌 Wellness and learning stipend
Categories: Software Engineer, Software Engineering

Human Frontier Collective Fellow - Medical

Scale AI
Locations: European Union, Canada, Australia
Type: Contractor
Remote: Yes
About the Program
The Human Frontier Collective (HFC) brings together domain experts across research, clinical practice, and advanced professional work to contribute to high-impact projects in technology and reasoning systems. As a Medical Fellow, you will apply your clinical training to help develop and evaluate tools that interact with complex medical content. Your work will directly inform how emerging systems reason through diagnosis, triage, and patient care scenarios.

PLEASE NOTE: This is a remote, contract-based role. We welcome international applicants based in the EU, Canada, Australia, and New Zealand. Engagements will run for approximately six months with flexible scheduling.

What You'll Do
- Design Complex Clinical Scenarios: Create detailed case studies and clinical reasoning tasks across specialties to test how systems interpret symptoms, assess risks, and prioritize care.
- Evaluate Decision-Making Approaches: Review and critique structured outputs for clinical safety, accuracy, and sound judgment. Identify errors in logic or inappropriate treatment suggestions.
- Shape Reasoning Frameworks: Provide structured, example-driven feedback to improve how models understand differentials, guidelines, and clinical narratives.
- Work Across Specialized Projects: Contribute to targeted workstreams in internal medicine, emergency care, radiology, psychiatry, and other domains aligned with your background.

Who Should Apply
- Graduates of a licensed medical school (MD or DO) who are board certified and in residency, post-residency, practicing, or with extensive practice
- Recent or current clinical experience in inpatient, outpatient, or academic settings
- Strong writing and clinical reasoning skills, with comfort assessing edge cases and nuanced scenarios
- Backgrounds welcome from internal medicine, emergency medicine, radiology, surgery, psychiatry, and related fields
- Nurse practitioners, physician assistants, and registered nurses with deep clinical experience may also be considered
- Fluency in written English is required

Why Join the HFC?
- Contribute to High-Impact Work: Apply your medical expertise to problems that require real judgment, not rote recall. Your input will directly influence how complex reasoning tasks are handled.
- Flexible Remote Work + Competitive Compensation: Set your own schedule and contribute from anywhere. Most participants engage between 5-20 hours per week.
- Pathways for Growth: High-impact contributors may be invited to join additional review projects, advisory roles, or research collaborations.

How to Apply
Submit your CV and a brief summary of your clinical background. Selected candidates will be invited to complete a short trial task and interview.

PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants.

About Us: At Scale, we believe that the transition from traditional software to AI is one of the most important shifts of our time. Our mission is to make that happen faster across every industry, and our team is transforming how organizations build and deploy AI. Our products power the world's most advanced LLMs, generative models, and computer vision models. We are trusted by generative AI companies such as OpenAI, Meta, and Microsoft, government agencies like the U.S. Army and U.S. Air Force, and enterprises including GM and Accenture. We are expanding our team to accelerate the development of AI applications.

We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity or Veteran status. We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information. We comply with the United States Department of Labor's Pay Transparency provision.

PLEASE NOTE: We collect, retain and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants' needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.

Applied AI Engineering Manager, Enterprise

Scale AI
Salary: $212,000 - $254,400 USD
Location: United States
Type: Full-time
Remote: No
AI is becoming vitally important in every function of our society. At Scale, our mission is to accelerate the development of AI applications. For 8 years, Scale has been the leading AI data foundry, helping fuel the most exciting advancements in AI, including generative AI, defense applications, and autonomous vehicles. With our recent Series F round, we're accelerating the usage of frontier data and models by building complex agents for enterprises around the world through our Scale Generative AI Platform (SGP).

The SGP ML team works on the front lines of this AI revolution. We interface directly with clients to build cutting-edge products using the arsenal of proprietary research and resources developed at Scale. As an AAI Engineering Manager, you'll manage a team of high-calibre Applied AI Engineers and MLEs who work with clients to train ML models that satisfy their business needs. Your team's work will range from training next-generation AI cybersecurity firewall LLMs to training foundation agentic action models that make predictions about business-saving outcomes. You will guide your team towards using data-driven experiments to provide key insights into model strengths and inefficiencies in an effort to improve products. If you are excited about shaping the future of the modern AI movement, we would love to hear from you!

You will:
- Train state-of-the-art models, developed both internally and from the community, in production to solve problems for our enterprise customers.
- Manage a team of 5+ Applied AI Engineers / ML Engineers.
- Work with product and research teams to identify opportunities for ongoing and upcoming services.
- Explore approaches that integrate human feedback and assisted evaluation into existing product lines.
- Create state-of-the-art techniques to integrate tool-calling into production-serving LLMs.
- Work closely with customers - some of the most sophisticated ML organizations in the world - to quickly prototype and build new deep learning models targeted at multi-modal content understanding problems.

Ideally you'd have:
- At least 3 years of model training, deployment and maintenance experience in a production environment
- At least 1-2 years of management or tech leadership experience
- Strong skills in NLP, LLMs and deep learning
- Solid background in algorithms, data structures, and object-oriented programming
- Experience working with a cloud technology stack (e.g. AWS or GCP) and developing machine learning models in a cloud environment
- Experience building products with LLMs, including knowing the ins and outs of evaluation, experimentation, and designing solutions to get the most out of the models
- PhD or Master's in Computer Science or a related field

Nice to haves:
- Experience in dealing with large-scale AI problems, ideally in the generative-AI field
- Demonstrated expertise in large vision-language models for diverse real-world applications, e.g. classification, detection, question-answering, etc.
- Published research in areas of machine learning at major conferences (NeurIPS, ICML, EMNLP, CVPR, etc.) and/or journals
- Strong high-level programming skills (e.g. Python) and experience with frameworks and tools such as DeepSpeed, PyTorch Lightning, Kubeflow, TensorFlow, etc.
- Strong written and verbal communication skills to operate in a cross-functional team environment
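One of the bullets above is integrating tool-calling into production-serving LLMs. Purely as an illustration of the core mechanic (this is not Scale's SGP code; the tool registry, the schema, and the model output below are hypothetical stand-ins), a model emits a structured tool call and a thin dispatch layer routes it to a registered function:

```python
# Purely illustrative sketch: a minimal tool-calling dispatch loop.
# The registry, schema, and `fake_model_output` are hypothetical stand-ins,
# not Scale's actual SGP interfaces.
import json
from typing import Callable, Dict

TOOLS: Dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register a plain Python function as a callable tool."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("lookup_order_status")
def lookup_order_status(order_id: str) -> str:
    # Stand-in for a call into an enterprise system of record.
    return f"Order {order_id} is in transit."

def dispatch(model_output: str) -> str:
    """Route a model-emitted JSON tool call to the registered function."""
    call = json.loads(model_output)          # e.g. {"tool": "...", "arguments": {...}}
    fn = TOOLS[call["tool"]]
    return fn(**call.get("arguments", {}))

if __name__ == "__main__":
    fake_model_output = json.dumps(
        {"tool": "lookup_order_status", "arguments": {"order_id": "A-1042"}}
    )
    print(dispatch(fake_model_output))       # Order A-1042 is in transit.
```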
Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Directors approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant. You'll also receive benefits including, but not limited to: comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend. Please reference the job posting's subtitle for where this position will be located. For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, and Seattle is: $212,000 - $254,400 USD.

PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants.

About Us: At Scale, we believe that the transition from traditional software to AI is one of the most important shifts of our time. Our mission is to make that happen faster across every industry, and our team is transforming how organizations build and deploy AI. Our products power the world's most advanced LLMs, generative models, and computer vision models. We are trusted by generative AI companies such as OpenAI, Meta, and Microsoft, government agencies like the U.S. Army and U.S. Air Force, and enterprises including GM and Accenture. We are expanding our team to accelerate the development of AI applications.

We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity or Veteran status. We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information. We comply with the United States Department of Labor's Pay Transparency provision.

PLEASE NOTE: We collect, retain and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants' needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.
Categories: Machine Learning Engineer, Data Science & Analytics, Software Engineer, Software Engineering

Storage Engineering Manager

Lambda AI
Salary: $330,000 - $495,000 USD
Location: United States
Type: Full-time
Remote: No
Lambda is the #1 GPU Cloud for ML/AI teams training, fine-tuning and inferencing AI models, where engineers can easily, securely and affordably build, test and deploy AI products at scale. Lambda's product portfolio includes on-prem GPU systems, hosted GPUs across public & private clouds and managed inference services, servicing government, researchers, startups and enterprises worldwide. If you'd like to build the world's best deep learning cloud, join us.

*Note: This position requires presence in our San Jose office location 4 days per week; Lambda's designated work-from-home day is currently Tuesday.

Engineering at Lambda is responsible for building and scaling our cloud offering. Our scope includes the Lambda website, cloud APIs and systems, as well as internal tooling for system deployment, management and maintenance.

In the world of distributed AI, raw GPU and CPU horsepower is just a part of the story. High-performance networking and storage are the critical components that enable and unite these systems, making groundbreaking AI training and inference possible. The Lambda Infrastructure Engineering organization forges the foundation of high-performance AI clusters by welding together the latest in AI storage, networking, GPU and CPU hardware.

Our expertise lies at the intersection of:
- High-Performance Distributed Storage Solutions and Protocols: We engineer the protocols and systems that serve massive datasets at the speeds demanded by modern clustered GPUs.
- Dynamic Networking: We design advanced networks that provide multi-tenant security and intelligent routing without compromising performance, using the latest in AI networking hardware.
- Compute Virtualization: We enable cutting-edge virtualization and clustering that allows AI researchers and engineers to focus on AI workloads, not AI infrastructure, unleashing the full compute bandwidth of clustered GPUs.

About the Role:
We are seeking a seasoned Storage Engineering Manager with experience in the specification, evaluation, deployment, and management of HPC storage solutions across multiple datacenters to build out a world-class team. You will hire and guide a team of storage engineers in building storage infrastructure that serves our AI/ML infrastructure products, ensuring the seamless deployment and operational excellence of both the physical and logical storage infrastructure (including proprietary and open-source solutions). Your role is not just to manage people, but to serve as the ultimate technical and operational authority for our high-performance, petabyte-scale storage solutions. Your leadership will be pivotal in ensuring our systems are not just high-performing, but also reliable, scalable, and manageable as we grow toward exascale. This is a unique opportunity to work at the intersection of large-scale distributed systems and the rapidly evolving field of artificial intelligence infrastructure, and to have a significant impact on the future of AI. You will be building the foundational infrastructure that powers some of the most advanced AI research and products in the world.

What You'll Do
Team Leadership & Management:
- Grow, hire, lead, and mentor a top-talent team of high-performing storage engineers delivering HPC, petabyte-scale storage solutions.
- Foster a high-velocity culture of innovation, technical excellence, and collaboration.
- Conduct regular one-on-one meetings, provide constructive feedback, and support career development for team members.
- Drive outcomes by managing project priorities, deadlines, and deliverables using Agile methodologies.

Technical Strategy & Execution:
- Drive the technical vision and strategy for Lambda distributed storage solutions.
- Lead storage vendor selection criteria, vendor selection, and vendor relationship management (support, installation, scheduling, specification, procurement).
- Manage the team in storage lifecycle management (installation, cabling, capacity upgrades, service, RMA, updating both hardware and software components as needed).
- Guide choices around optimization of storage pools, sharding, and tiering/caching strategies.
- Lead the team in tasks related to multi-tenant security, tenant provisioning, metering integration, storage protocol interconnection, and customer data migration.
- Guide Storage SREs in the development of scripting and automation tools for configuration management, monitoring, and operational tasks.
- Guide the team in problem identification, requirements gathering, solution ideation, and stakeholder alignment on engineering RFCs.
- Lead the team in supporting customers.

Cross-Functional Collaboration:
- Collaborate with the HPC Architecture team on drive selection, capacity determination, storage networking, cache placement, and rack layouts.
- Work closely with the storage software and networking teams to execute on cross-functional infrastructure initiatives and new data-center deployments, including integration of storage protocols across a variety of on-prem storage solutions.
- Work with procurement, data-center operations, and fleet engineering teams to deploy storage solutions into new and existing data centers.
- Work with vendors to troubleshoot customer performance, reliability, and data-integrity issues.
- Work closely with the Networking, Compute, and Storage Software Engineering teams to deploy high-performance distributed storage solutions to serve AI/ML workloads.
- Partner with the fleet engineering team to ensure seamless deployment, monitoring, and maintenance of the distributed storage solutions.

Innovation & Research:
- Stay current with the latest trends and research in AI and HPC storage technologies and vendor solutions.
- Guide the team in investigating strategies for using NVIDIA SuperNIC DPUs for storage edge-caching, offloading, and GPUDirect Storage capabilities.
- Work with the Lambda product team to uncover new trends in the AI inference and training product category that will inform emerging storage solutions.
- Encourage and support the team in exploring new technologies and approaches to improve system performance and efficiency.

You
Experience:
- 10+ years of experience in storage engineering, with at least 5+ years in a management or lead role.
- Demonstrated experience leading a team of storage engineers and storage SREs on complex, cross-functional projects in a fast-paced startup environment.
- Extensive hands-on experience in designing, deploying, and maintaining distributed storage solutions in a CSP (Cloud Service Provider), NCP (Neo-Cloud Provider), HPC-infrastructure integrator, or AI-infrastructure company.
- Experience with storage solutions serving storage volumes at a scale greater than 20PB.
- Strong project management skills, leading high-confidence planning, project execution, and delivery of team outcomes on schedule.
- Extensive experience with storage site reliability engineering.
- Experience with one or more of the following in an HPC or AI infrastructure environment: Vast, DDN, Pure Storage, NetApp, Weka.
- Experience deploying Ceph at a scale greater than 25PB.

Technical Skills:
- Experience in serving one or more of the following storage protocols: object storage (e.g., S3), block storage (e.g., iSCSI), or file storage (e.g., NFS, SMB, Lustre).
- Professional individual contributor experience as a storage engineer or storage SRE.
- Familiarity with modern storage technologies (e.g., NVMe, RDMA, DPUs) and their role in optimizing performance.

People Management:
- Experience building a high-performance team through deliberate hiring, upskilling, planned skills redundancy, performance management, and expectation setting.

Nice to Have
Experience:
- Experience driving cross-functional engineering management initiatives (coordinating events, strategic planning, coordinating large projects).
- Experience with NVIDIA SuperNIC DPUs for edge-caching (such as implementing GPUDirect Storage).

Technical Skills:
- Deep experience with Vast, Weka and/or NetApp in an HPC or AI infrastructure environment.
- Deep experience implementing Ceph in an HPC or AI infrastructure environment at a scale greater than 100PB.

People Management:
- Experience driving organizational improvements (processes, systems, etc.).
- Experience training, or managing, managers.
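The technical strategy bullets above mention tiering/caching strategies for storage pools. Purely as a sketch of the underlying eviction idea (not Lambda's stack; real tiering lives in the storage fabric rather than application Python), a minimal LRU "hot tier" looks like this:

```python
# Purely illustrative: a tiny LRU hot tier showing the eviction idea behind
# cache tiering. Real HPC storage tiering is implemented in the storage
# fabric, not in application-level Python.
from collections import OrderedDict
from typing import Optional

class LRUCacheTier:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._store: "OrderedDict[str, bytes]" = OrderedDict()

    def get(self, key: str) -> Optional[bytes]:
        if key not in self._store:
            return None               # cache miss: caller falls back to the cold tier
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key: str, value: bytes) -> None:
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict the least recently used entry

cache = LRUCacheTier(capacity=2)
cache.put("blk-1", b"...")
cache.put("blk-2", b"...")
cache.get("blk-1")
cache.put("blk-3", b"...")            # evicts blk-2, the least recently used
print(cache.get("blk-2"))             # None
```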
Salary Range Information
The annual salary range for this position has been set based on market data and other factors. However, a salary higher or lower than this range may be appropriate for a candidate whose qualifications differ meaningfully from those listed in the job description.

About Lambda
- Founded in 2012, ~400 employees (2025) and growing fast
- We offer generous cash & equity compensation
- Our investors include Andra Capital, SGW, Andrej Karpathy, ARK Invest, Fincadia Advisors, G Squared, In-Q-Tel (IQT), KHK & Partners, NVIDIA, Pegatron, Supermicro, Wistron, Wiwynn, US Innovative Technology, Gradient Ventures, Mercato Partners, SVB, 1517, and Crescent Cove
- We are experiencing extremely high demand for our systems, with quarter-over-quarter, year-over-year profitability
- Our research papers have been accepted into top machine learning and graphics conferences, including NeurIPS, ICCV, SIGGRAPH, and TOG
- Health, dental, and vision coverage for you and your dependents
- Wellness and commuter stipends for select roles
- 401(k) plan with 2% company match (USA employees)
- Flexible Paid Time Off plan that we all actually use

A Final Note:
You do not need to match all of the listed expectations to apply for this position. We are committed to building a team with a variety of backgrounds, experiences, and skills.

Equal Opportunity Employer
Lambda is an Equal Opportunity employer. Applicants are considered without regard to race, color, religion, creed, national origin, age, sex, gender, marital status, sexual orientation and identity, genetic information, veteran status, citizenship, or any other factors prohibited by local, state, or federal law.
Categories: DevOps Engineer, Data Science & Analytics, Software Engineer, Software Engineering

Storage Protocols Engineering Manager

Lambda AI
Salary: $330,000 - $495,000 USD
Location: United States
Type: Full-time
Remote: No
Lambda is the #1 GPU Cloud for ML/AI teams training, fine-tuning and inferencing AI models, where engineers can easily, securely and affordably build, test and deploy AI products at scale. Lambda's product portfolio includes on-prem GPU systems, hosted GPUs across public & private clouds and managed inference services, servicing government, researchers, startups and enterprises worldwide. If you'd like to build the world's best deep learning cloud, join us.

*Note: This position requires presence in our San Francisco office location 4 days per week; Lambda's designated work-from-home day is currently Tuesday.

Engineering at Lambda is responsible for building and scaling our cloud offering. Our scope includes the Lambda website, cloud APIs and systems, as well as internal tooling for system deployment, management and maintenance.

In the world of distributed AI, raw GPU and CPU horsepower is just a part of the story. High-performance networking and storage are the critical components that enable and unite these systems, making groundbreaking AI training and inference possible. The Lambda Infrastructure Engineering organization forges the foundation of high-performance AI clusters by welding together the latest in AI storage, networking, GPU and CPU hardware.

Our expertise lies at the intersection of:
- High-Performance Distributed Storage Solutions and Protocols: We engineer the protocols and systems that serve massive datasets at the speeds demanded by modern clustered GPUs.
- Dynamic Networking: We design advanced networks that provide multi-tenant security and intelligent routing without compromising performance, using the latest in AI networking hardware.
- Compute Virtualization: We enable cutting-edge virtualization and clustering that allows AI researchers and engineers to focus on AI workloads, not AI infrastructure, unleashing the full compute bandwidth of clustered GPUs.

About the Role:
We are seeking an experienced Software Engineering Manager with a history in the development of storage protocols and distributed storage systems to lead a team of Storage Software Engineers and Distributed Systems Engineers in the design, development, and optimization of cutting-edge distributed storage solutions. Your team will be responsible for building high-performance, scalable, and reliable implementations of object, block, and file protocols, specifically tailored to serve performance-demanding AI training and inference workloads. This is a unique opportunity to work at the intersection of large-scale distributed systems and the rapidly evolving field of artificial intelligence infrastructure. You will be building the foundational infrastructure that powers some of the most advanced AI research and products in the world.

What You'll Do
Team Leadership & Management:
- Grow, hire, lead, and mentor a top-talent team of high-performing software engineers focused on delivering distributed storage protocols.
- Foster a high-velocity culture of innovation, technical excellence, and collaboration.
- Conduct regular one-on-one meetings, provide constructive feedback, and support career development for team members.
- Drive outcomes by managing project priorities, deadlines, and deliverables using Agile methodologies.

Technical Strategy & Execution:
- Drive the technical vision and strategy for our distributed storage protocols (e.g., S3, NFS, iSCSI) and their underlying distributed systems.
- Oversee the development of highly optimized storage solutions designed to meet the performance demands of AI/ML workloads (e.g., high throughput, low latency, optimization for AI workload access patterns).
- Lead the team in tackling complex distributed systems challenges, including concurrency, consistency, fault tolerance, and data durability across multiple data centers.
- Guide the engineering team in problem identification, requirements gathering, solution ideation, and stakeholder alignment on engineering RFCs.
- Deeply understand the performance bottlenecks of existing storage systems and guide the team in developing innovative solutions to overcome them.
- Lead the team in supporting customers.

Cross-Functional Collaboration:
- Work closely with AI/ML research and product teams to understand customers' storage needs and translate them into technical requirements.
- Work closely with the product engineering team to deliver high-quality products that meet customers' unique needs.
- Collaborate with product management to define the product roadmap and prioritize features.
- Work closely with the HPC Architecture, Networking, Compute, and Storage Engineering teams to deploy high-performance distributed storage protocols to serve AI/ML workloads.
- Partner with the fleet engineering and platforms teams to ensure seamless deployment, monitoring, and maintenance of the distributed storage protocols.
- Work in lock-step with the Storage Engineering team to provide reliable storage products on top of a variety of physical storage solutions.

Innovation & Research:
- Stay current with the latest trends and research in distributed systems, storage technologies, and AI/ML hardware/software advancements.
- Work with the Lambda product team to uncover new trends in the AI inference and training product category.
- Encourage and support the team in exploring new technologies and approaches to improve system performance and efficiency.

You
Experience:
- 10+ years of experience in software development, with at least 5+ years in a management or lead role in storage software engineering.
- Demonstrated experience leading a team of software engineers on complex, cross-functional projects in a fast-paced startup environment.
- Extensive hands-on experience in designing and implementing distributed storage systems.
- Experience with storage protocols serving storage volumes at a scale greater than 20PB.
- Experience developing and tuning distributed storage protocols across scaling challenges using namespacing, sharding, and caching strategies.
- Familiarity with deploying and running applications on Kubernetes or other container orchestration systems (e.g., AWS ECS, HashiCorp Nomad).
- Strong project management skills, leading high-confidence planning, project execution, and delivery of team outcomes on schedule.

Technical Skills:
- Knowledge of one or more of the following storage protocols: object storage (e.g., S3), block storage (e.g., iSCSI), or file storage (e.g., NFS, SMB, Lustre).
- Professional individual contributor experience in languages such as C++, Go, Rust, or Python.
- Familiarity with modern storage technologies (e.g., NVMe, RDMA) and their role in optimizing performance.
- Experience with containerization technologies (e.g., Docker, Kubernetes) and their integration with storage solutions.

Distributed Systems Knowledge:
- Solid understanding of distributed systems concepts, including consensus algorithms (e.g., Raft, Paxos), distributed caching, failure recovery, consistency models (e.g., eventual consistency), fault tolerance, data replication, and load balancing.

People Management:
- Experience building a high-performance team through deliberate hiring, upskilling, planned skills redundancy, performance management, and expectation setting.

Nice to Have
Experience:
- Demonstrated delivery of distributed storage protocols in a CSP (Cloud Service Provider), NCP (Neo-Cloud Provider), HPC-infrastructure integrator, or AI-infrastructure company.
- Experience with storage protocols serving storage volumes at a scale greater than 100PB.
- Implementation of distributed storage protocols backed by a variety of storage solutions, performance-tuned for AI/ML workloads.
- Experience driving cross-functional engineering management initiatives (coordinating deployments, strategic planning, coordinating large projects).

Technical Skills:
- Deep expertise in one or more of the following storage protocols: object storage (e.g., S3), block storage (e.g., iSCSI), or file storage (e.g., NFS, SMB, Lustre).
- Strong programming skills in languages such as C++, Go, Rust, or Python.
- In-depth knowledge of operating system internals, including file systems, caching, and I/O scheduling.

AI/ML Domain Knowledge:
- Experience working with AI/ML training and inference frameworks (e.g., TensorFlow, PyTorch).
- Understanding of the unique data access patterns and performance requirements of AI workloads.

Distributed Systems Knowledge:
- Proven ability to design and debug highly concurrent and fault-tolerant systems.

People Management:
- Experience driving organizational improvements (processes, systems, etc.).
- Experience training, or managing, managers.

Salary Range Information
The annual salary range for this position has been set based on market data and other factors. However, a salary higher or lower than this range may be appropriate for a candidate whose qualifications differ meaningfully from those listed in the job description.
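The experience list above mentions scaling storage protocols with namespacing, sharding, and caching strategies. Purely as an illustration of one classic sharding technique (not Lambda's design; the node names and vnode count below are made up), consistent hashing assigns keys to nodes so that adding a node remaps only a small slice of keys:

```python
# Purely illustrative: consistent hashing as one way to shard object keys
# across storage nodes. Node names and parameters are hypothetical.
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes, vnodes: int = 64):
        self._ring = []                      # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):          # virtual nodes smooth the distribution
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        idx = bisect.bisect(self._keys, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["storage-a", "storage-b", "storage-c"])
print(ring.node_for("bucket/object-0001"))   # deterministic node assignment
```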
About Lambda
- Founded in 2012, ~400 employees (2025) and growing fast
- We offer generous cash & equity compensation
- Our investors include Andra Capital, SGW, Andrej Karpathy, ARK Invest, Fincadia Advisors, G Squared, In-Q-Tel (IQT), KHK & Partners, NVIDIA, Pegatron, Supermicro, Wistron, Wiwynn, US Innovative Technology, Gradient Ventures, Mercato Partners, SVB, 1517, and Crescent Cove
- We are experiencing extremely high demand for our systems, with quarter-over-quarter, year-over-year profitability
- Our research papers have been accepted into top machine learning and graphics conferences, including NeurIPS, ICCV, SIGGRAPH, and TOG
- Health, dental, and vision coverage for you and your dependents
- Wellness and commuter stipends for select roles
- 401(k) plan with 2% company match (USA employees)
- Flexible Paid Time Off plan that we all actually use

A Final Note:
You do not need to match all of the listed expectations to apply for this position. We are committed to building a team with a variety of backgrounds, experiences, and skills.

Equal Opportunity Employer
Lambda is an Equal Opportunity employer. Applicants are considered without regard to race, color, religion, creed, national origin, age, sex, gender, marital status, sexual orientation and identity, genetic information, veteran status, citizenship, or any other factors prohibited by local, state, or federal law.
Categories: Software Engineer, Software Engineering, DevOps Engineer, Data Science & Analytics

Backend Engineer - Enterprise Agent (London)

X AI
Salary: $180,000 - $440,000 USD
Location: United Kingdom
Remote: No
About xAI
xAI's mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company's mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All engineers are expected to have strong communication skills; they should be able to concisely and accurately share knowledge with their teammates.

About the team
The Enterprise Agents Engineering Team at xAI is a team of builders solving the hardest problems in applied AI: integrating with messy systems, orchestrating complex workflows, and harnessing AI to transform business operations. We build full-stack solutions with ownership of end-to-end product execution from ideation to deployment, collaborating closely with our research team to incorporate cutting-edge AI advancements into robust and reliable user-centric solutions. Through an iterative development process, we work hand-in-hand with customers to gather feedback, refine features, and evolve products to address real-world challenges.

About the role
An ideal candidate meets at least the following requirements:
- Expert knowledge of either Rust or C++
- Experience in designing, implementing, and maintaining reliable and horizontally scalable distributed systems
- Knowledge of service observability and reliability best practices
- Experience in operating commonly used databases such as PostgreSQL, ClickHouse, and CockroachDB

Additionally, any of the below points will help a candidate stand out:
- Expert knowledge of Python
- Experience with Docker, Kubernetes, and containerized applications
- Expert knowledge of TypeScript
- Expert knowledge of gRPC (unary, response streaming, bi-directional streaming, REST mapping)
- Hands-on experience with LLM APIs, embeddings, or RAG patterns
- Track record of delivering user-facing software at scale

Interview process
After submitting your application, the team reviews your CV and statement of exceptional work. If your application passes this stage, you will be invited to a 15-minute interview ("phone interview") during which a member of our team will ask some basic technical questions. If you clear the initial phone interview, you will enter the main process, which consists of at least two technical interviews:
- Coding interview in Rust or C++
- Distributed systems design interview

Benefits
Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short & long-term disability insurance, life insurance, and various other discounts and perks.

Annual Salary Range
$180,000 - $440,000 USD

xAI is an equal opportunity employer.

California Consumer Privacy Act (CCPA) Notice
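One of the stand-out skills above is hands-on experience with LLM APIs, embeddings, or RAG patterns. Purely as an illustration (toy vectors, not xAI's systems; a real setup would use an embedding model and a vector store), the retrieval step of a RAG pipeline reduces to ranking documents by cosine similarity against a query embedding:

```python
# Purely illustrative: the smallest embedding-retrieval step behind a RAG
# pattern. The vectors below are toy stand-ins, not real embeddings.
import numpy as np

def top_k(query: np.ndarray, docs: np.ndarray, k: int = 2) -> np.ndarray:
    """Return indices of the k documents most similar to the query."""
    q = query / np.linalg.norm(query)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    scores = d @ q                           # cosine similarity per document
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
doc_vectors = rng.normal(size=(5, 8))        # 5 "documents", 8-dim embeddings
query_vector = doc_vectors[3] + 0.05 * rng.normal(size=8)
print(top_k(query_vector, doc_vectors))      # document 3 should rank first
```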
Categories: Software Engineer, Software Engineering, Machine Learning Engineer, Data Science & Analytics

Lead/Senior Software Engineer, front end

Synthesia
Salary: from EUR 100,000
Locations: United Kingdom, Switzerland, Anywhere
Type: Full-time
Remote: Yes
Welcome to the video-first world
From your everyday PowerPoint presentations to Hollywood movies, AI will transform the way we create and consume content. Today, people want to watch and listen, not read - both at home and at work. If you're reading this and nodding, check out our brand video. Despite the clear preference for video, communication and knowledge sharing in the business environment are still dominated by text, largely because high-quality video production remains complex and challenging to scale - until now.

Meet Synthesia
We're on a mission to make video easy for everyone. Born in an AI lab, our AI video communications platform simplifies the entire video production process, making it easy for everyone, regardless of skill level, to create, collaborate, and share high-quality videos. Whether it's for delivering essential training to employees and customers or marketing products and services, Synthesia enables large organizations to communicate and share knowledge through video quickly and efficiently. We're trusted by leading brands such as Heineken, Zoom, Xerox, McDonald's and more. Read stories from happy customers and what 1,200+ people say on G2. In February 2024, G2 named us the fastest-growing company in the world. Today, we're at a $2.1bn valuation and we recently raised our Series D. This brings our total funding to over $330M from top-tier investors, including Accel, Nvidia, Kleiner Perkins, Google, and top founders and operators including Stripe, Datadog, Miro, Webflow, and Facebook.

What you'll do at Synthesia:
- Work end-to-end, contributing to our client application written in React and our monolithic backend written in Python, and managing the safe release of your features to our customer base.
- Take sole ownership of projects that span months, requiring you to break a problem down into small steps that can be delivered and validated iteratively.
- Work directly with the product manager responsible for your project, meaning that you will be expected to ideate and focus on the commercial problem you're solving, and you will have the opportunity to shape the direction of the product.
- Evaluate your own work, leveraging our data pipeline and the frameworks we have established to understand the impact your features have on our commercial objectives, and pivot where necessary.
- Consider the long-term direction of the team, making sure that we are developing the engineering capabilities that will allow us to stay ahead of the challenges we are likely to encounter in 6-12 months' time.

What we're looking for:
- At least seven (7) years of experience as a software engineer, with at least 3 at the senior/lead level.
- Experience in a high-performing engineering team operating at scale. This could come from a scale-up environment or a more established organization recognised for building and shipping with a great engineering culture.
- An ability to work across the stack, with deep knowledge of the client side; experience implementing complex UI interactions is ideal. Frontend-only experience is OK if you are happy to occasionally help out with the backend.
- Relevant engineering experience for a team building an enterprise-grade SaaS product delivering AI-powered video generation: billing systems, experimentation platforms, video delivery systems, online editors, real-time collaboration, and so on.
- Strong alignment with commercial success.
- Previous leadership experience with smaller teams is a plus.

Why join us?
We're living the golden age of AI. The next decade will yield the next iconic companies, and we dare to say we have what it takes to become one. Here's why:

Our culture
At Synthesia we're passionate about building, not talking, planning or politicising. We strive to hire the smartest, kindest and most unrelenting people and let them do their best work without distractions. Our work principles serve as our charter for how we make decisions, give feedback and structure our work to empower everyone to go as fast as possible. You can find out more about these principles here.

Serving 50,000+ customers (and 50% of the Fortune 500)
We're trusted by leading brands such as Heineken, Zoom, Xerox, McDonald's and more. Read stories from happy customers and what 1,200+ people say on G2.

Proprietary AI technology
Since 2017, we've been pioneering advancements in generative AI. Our AI technology is built in-house by a team of world-class AI researchers and engineers. Learn more about our AI Research Lab and the team behind it.

AI Safety, Ethics and Security
AI safety, ethics, and security are fundamental to our mission. While the full scope of artificial intelligence's impact on our society is still unfolding, our position is clear: People first. Always. Learn more about our commitments to AI Ethics, Safety & Security.

The hiring process:
- 30-40 min call with our technical recruiter
- 45 min call with engineers about your past projects
- Take-home assignment (no alternative is offered) - it has no deadline and is syntax-agnostic, so you're welcome to use the tools and languages you're most comfortable with. That said, we strongly prefer contributions using our core stack: React/TypeScript on the frontend and Python/Flask on the backend.
- 60 min technical discussion
- 30 min call with leadership

The process does not need to take long - we can be done in seven working days.

Other important info:
- This is a remote role from an EU country, the UK or Switzerland.
- The salary starts at EUR/GBP/CHF 100,000 base + stock option plan.
- This is full-time employment only - no contractors - usually through OysterHR.
- Everyone at Synthesia gets 25 days of leave + local holidays (no extra paid or unpaid leave possible).
- We only sponsor visas if you are already in the UK or an EU country and need support - we do not relocate people.
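The role above includes managing the safe release of features to the customer base. Purely as an illustration of one common pattern (not Synthesia's actual release tooling; the flag name and helper below are hypothetical), a deterministic percentage rollout keyed on user ID looks like this:

```python
# Purely illustrative: deterministic percentage rollout for gradual feature
# releases. The flag name and helper are hypothetical, not a real API.
import hashlib

def is_enabled(flag: str, user_id: str, rollout_percent: float) -> bool:
    """Bucket a user deterministically into [0, 100) and compare to the rollout."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10_000 / 100.0   # 0.00 .. 99.99
    return bucket < rollout_percent

enabled = sum(is_enabled("new-editor", f"user-{i}", 10.0) for i in range(10_000))
print(enabled)   # roughly 1,000 of 10,000 users see the feature
```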
Categories: Software Engineer, Software Engineering

Senior RTB Product Manager (Taiwan)

Appier
Location: Taiwan
Type: Full-time
Remote: No
About Appier
Appier is a software-as-a-service (SaaS) company that uses artificial intelligence (AI) to power business decision-making. Founded in 2012 with a vision of democratizing AI, Appier's mission is turning AI into ROI by making software intelligent. Appier now has 17 offices across APAC, Europe, and the U.S., and is listed on the Tokyo Stock Exchange (ticker number: 4180). Visit www.appier.com for more information.

About the role
[The seniority/title is determined by job-related skills, experience, and evaluation after the interview.]
We are looking for a data-driven and collaborative individual to join our team as an RTB Advertising Optimization PM. In this role, you will work closely with data scientists and campaign managers to improve RTB (Real-Time Bidding) campaign performance through model enhancements, strategic insights, and cross-team collaboration. You will have the opportunity to run large-scale A/B testing experiments on billions of ad requests daily, identify key signals that drive performance, and shape the optimization direction based on real-time data and industry trends. This is an ideal opportunity for someone who is passionate about advertising technology, machine learning applications, and performance marketing.

Key Responsibilities:
- Partner with data scientists and campaign managers to improve RTB model and algorithm performance.
- Analyze competitor strategies and market trends to identify optimization opportunities.
- Drive data-informed decision-making to improve campaign outcomes.
- Design and evaluate A/B experiments at scale using billions of daily auction requests.
- Research and test end-to-end RTB ad serving behaviors to uncover areas for improvement.
- Explore and evaluate potential new signals to enhance model effectiveness.
- Collaborate with the SSP partnership team to optimize traffic quality and supply-side performance.
- Coordinate cross-functional requirements with strong communication and alignment across teams.

Qualifications:
- Education: Background in Engineering, Data Science, Marketing, or related fields is a plus.
- Availability: Must be available full-time; based in Taiwan preferred.
- Language Skills: Proficient in English and Mandarin; able to clearly articulate ideas and findings to both technical and non-technical audiences.
- Bonus: Experience in digital advertising campaign operations, media buying, or performance optimization, especially within RTB environments, is a strong plus.

#LI-AK1
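The role above centres on A/B experiments over billions of daily ad requests. Purely as an illustration of the read-out step (the counts below are made up, not Appier data; real experiments also need guardrails such as sequential-testing corrections), a two-proportion z-test on click-through rates looks like this:

```python
# Purely illustrative: two-proportion z-test for an A/B experiment read-out
# (e.g. CTR of a new bidding model vs. the incumbent). Counts are made up.
from math import sqrt
from statistics import NormalDist

def ab_ztest(clicks_a: int, imps_a: int, clicks_b: int, imps_b: int):
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
    return p_b - p_a, z, p_value

uplift, z, p = ab_ztest(clicks_a=51_200, imps_a=10_000_000,
                        clicks_b=52_400, imps_b=10_000_000)
print(f"uplift={uplift:.6f}  z={z:.2f}  p={p:.4f}")
```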
Categories: Product Manager, Product & Operations

ML Engineer - Speech (m/f/d)

Voize
Location: Germany
Type: Full-time
Remote: Yes
🎤 Why voize? Because we're more than just a job!
At voize, we're revolutionizing the healthcare industry with AI: nurses simply speak their documentation into their smartphones, and our AI automatically generates the correct entries. This saves each nurse an average of 39 minutes per day, improves the quality of documentation, and makes their daily work much more rewarding. voize is Y Combinator-funded, already in use at over 600 senior care homes, and has grown by 100% in the last 90 days. Our customers save over 3.5 million hours annually - time spent on people instead of paperwork. But this is just the beginning. With our self-developed voize AI, we're transforming not only the healthcare industry but also have the potential to create value in many other sectors - from healthcare to inspections.

💡 Your Mission
If you're a Machine Learning Engineer experienced with speech recognition and are excited to work at the cutting edge of product design, applied ML research, and MLOps, then go ahead and apply! With us, you'll build products with direct user feedback, train AI models with real data, and ship new features to production every day.

🤝 Your Skillset - What you bring to the table
- Several years of hands-on experience in deep learning for speech recognition, including developing and optimising ASR systems (not just academic research)
- Excellent foundation in STT (speech-to-text) system development with a focus on real-world applications
- Experience owning the ML process end-to-end: from concept and exploration to model productionization, maintenance, monitoring and optimization
- Shipped ML models to production with Python and PyTorch
- Trained new models from scratch, not just fine-tuned existing ones

🚀 Your Daily Business - No two days are alike
- Take ownership of the design, training, evaluation, and deployment of our deep learning models in the space of speech recognition
- The models you build and refine are at the heart of our applications and directly impact the end user
- You'll get to engineer large self-supervised trainings as well as fast inference for mobile devices and hosted environments

🎯 Our Success Mindset - How we work at voize
- Resilience is one of your strengths - you see challenges as opportunities, not obstacles
- Iterative working suits you - you test, learn, and improve constantly instead of waiting for perfection
- Communication & feedback come naturally to you - you openly address issues and both give and receive constructive feedback

🌱 Growing together - what you can expect at voize
- Become a co-creator of our success with virtual stock options
- Our office is in Berlin, and we offer remote work
- We provide flexible working hours because you know best when you work most efficiently!
- Access to various learning platforms (e.g., Blinkist, Audible, etc.)
- We have an open culture and organize regular work weeks and team events to collaborate and bond
- We are a fast-growing startup, so you'll encounter various challenges, providing the perfect foundation for rapid personal growth
- Your work will make a real impact, helping alleviate the workload for healthcare professionals
- Free Germany Ticket and Urban Sports Club membership
- 30 days of vacation - plus your birthday off

✨ Ready to talk? Apply now! 🚀
We look forward to your application and can't wait to meet you - no matter who you are or what background you have!
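The role above is about ASR models and fast inference. Purely as an illustration of one small piece of that pipeline (hard-coded toy frame outputs, not voize's models; production systems typically use a trained acoustic model and beam search), greedy CTC decoding collapses per-frame predictions into text:

```python
# Purely illustrative: greedy CTC decoding -- take the argmax per frame,
# collapse repeats, drop blanks. The frame outputs and vocabulary are toys.
BLANK = 0
VOCAB = {1: "h", 2: "i"}

def greedy_ctc_decode(frame_argmax: list[int]) -> str:
    out, prev = [], BLANK
    for token in frame_argmax:
        if token != prev and token != BLANK:   # collapse repeats, skip blanks
            out.append(VOCAB[token])
        prev = token
    return "".join(out)

# Frames: h h blank h i i  ->  "hhi" (the blank separates the two h's)
print(greedy_ctc_decode([1, 1, 0, 1, 2, 2]))   # "hhi"
```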
Categories: Machine Learning Engineer, Data Science & Analytics, Computer Vision Engineer, Software Engineering

Senior Data Scientist

Faculty
Location: United Kingdom
Type: Full-time
Remote: No
About Faculty
At Faculty, we transform organisational performance through safe, impactful and human-centric AI. With more than a decade of experience, we provide over 350 global customers with software, bespoke AI consultancy, and Fellows from our award-winning Fellowship programme. Our expert team brings together leaders from across government, academia and global tech giants to solve the biggest challenges in applied AI. Should you join us, you'll have the chance to work with, and learn from, some of the brilliant minds who are bringing Frontier AI to the frontlines of the world.

We operate a hybrid way of working, meaning that you'll split your time across the client location, Faculty's Old Street office and working from home, depending on the needs of the project. For this role, you can expect to be client-side for up to three days per week at times, and working either from home or our Old Street office for the rest of your time.

What you'll be doing:
As a Senior Data Scientist in our Defence business unit you will lead project teams that deliver bespoke algorithms to our clients across the defence and national security sector. You will be responsible for conceiving the data science approach, for designing the associated software architecture, and for ensuring that best practices are followed throughout.

You will help our excellent commercial team build strong relationships with clients, shaping the direction of both current and future projects. Particularly in the initial stages of commercial engagements, you will guide the process of defining the scope of projects to come, with an emphasis on technical feasibility. We consider this work fundamental to ensuring that Faculty can continue to deliver high-quality software within the allocated timeframes.

You will play an important role in the development of others at Faculty by acting as the designated mentor of a small number of data scientists, and by supporting the professional growth of data scientists on the project team. The latter includes giving targeted support where needed, and providing step-up opportunities where helpful.

Faculty has earned wide recognition as a leader in practical data science. You will actively contribute to the growth of this reputation by delivering courses to high-value clients, by talking at major conferences, by participating in external roundtables, or by contributing to large-scale open-source projects. You will also have the opportunity to teach on the Fellowship about topics that range from basic statistics to reinforcement learning, and to mentor the Fellows through their 6-week project.

Thanks to the Faculty platform, you will have access to powerful computational resources, and you will enjoy the comforts of fast configuration, secure collaboration and easy deployment. Because your work in data science will inform the development of our AI products, you will often collaborate with software engineers and designers from our dedicated product team.

Who we're looking for:
- Senior experience in either a professional data science position or a quantitative academic field
- Strong programming skills, as evidenced by earlier work in data science or software engineering. Although your programming language of choice (e.g. R, MATLAB or C) is not important, we do require the ability to become a fluent Python programmer in a short timeframe
- An excellent command of the basic libraries for data science (e.g. NumPy, Pandas, Scikit-Learn) and familiarity with a deep-learning framework (e.g. TensorFlow, PyTorch, Caffe)
- A high level of mathematical competence and proficiency in statistics
- A solid grasp of essentially all of the standard data science techniques, for example supervised/unsupervised machine learning, model cross-validation, Bayesian inference, time-series analysis, simple NLP, effective SQL database querying, or using/writing simple APIs for models. We regard the ability to develop new algorithms when an innovative solution is needed as a fundamental skill
- A leadership mindset focussed on growing the technical capabilities of the team; a caring attitude towards the personal and professional development of other data scientists; enthusiasm for nurturing a collaborative and dynamic culture
- An appreciation for the scientific method as applied to the commercial world; a talent for converting business problems into a mathematical framework; resourcefulness in overcoming difficulties through creativity and commitment; a rigorous mindset in evaluating the performance and impact of models upon deployment
- Some commercial experience, particularly if this involved client-facing work or project management; eagerness to work alongside our clients; business awareness and an ability to gauge the commercial value of projects; outstanding written and verbal communication skills; persuasiveness when presenting to a large or important audience
- Experience leading a team of data scientists (to deliver innovative work according to a strict timeline), as well as experience in composing a project plan, in assessing its technical feasibility, and in estimating the time to delivery

What we can offer you:
The Faculty team is diverse and distinctive, and we all come from different personal, professional and organisational backgrounds. We all have one thing in common: we are driven by a deep intellectual curiosity that powers us forward each day. Faculty is the professional challenge of a lifetime. You'll be surrounded by an impressive group of brilliant minds working to achieve our collective goals. Our consultants, product developers, business development specialists, operations professionals and more all bring something unique to Faculty, and you'll learn something new from everyone you meet.
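Among the standard techniques listed above is model cross-validation. Purely as an illustration (synthetic data, not Faculty code or client data), a k-fold cross-validated baseline with scikit-learn looks like this:

```python
# Purely illustrative: 5-fold cross-validation of a simple classifier on
# synthetic data. This is a generic scikit-learn sketch, nothing proprietary.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000)

scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```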
Data Scientist
Data Science & Analytics
Intangible.jpg

General Interest

Intangible
-
US.svg
United States
Full-time
Remote
true
About Intangible
Intangible is a spatial intelligence AI company for creatives in film, advertising, events, marketing and interactive media. We make constructing 3D experiences dramatically faster, cheaper and more collaborative for creative, distributed teams. We are an interdisciplinary team of engineers, artists and designers who have built and shipped multiple end-to-end products at companies like Apple, Unity, Pixar, ILM and Electronic Arts. Intangible is seed-funded by a16z Speedrun, Crosslink Capital and prominent angels.
We are a design-obsessed team shipping a groundbreaking product — and we're looking for someone who shares our attention to detail and love of craft in building fun things. If you've been looking for something with actual creative freedom and real impact, I promise you: this is it.

General Interest
We're always excited to meet talented, driven individuals who want to shape the future with us. If you don't see an open role that matches your expertise but believe you'd be a great fit for our team, we'd love to hear from you! Submit your application and we'll keep your details on file. When the right opportunity comes up, we'll be sure to reach out.
Liquid AI.jpg

Member of Technical Staff - ML Research Engineer, Foundation Model Data

Liquid AI
-
US.svg
United States
Full-time
Remote
true
Work With Us
At Liquid, we're not just building AI models—we're redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can't: on-device, at the edge, under real-time constraints. We're not iterating on old ideas—we're architecting what comes next.
We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments—your work will directly shape the frontier of intelligent systems.

This Role Is For You If:
You want to play a critical role in our foundation model development process, focusing on consolidating, gathering, and generating high-quality text data for pretraining, midtraining, SFT, and preference optimization

Required Experience:
Experience Level: B.S. + 5 years of experience, M.S. + 3 years of experience, or Ph.D. + 1 year of experience
Dataset Engineering: Expertise in data curation, cleaning, augmentation, and synthetic data generation techniques
Machine Learning Expertise: Ability to write and debug models in popular ML frameworks, and experience working with LLMs
Software Development: Strong programming skills in Python, with an emphasis on writing clean, maintainable, and scalable code

Desired Experience:
M.S. or Ph.D. in Computer Science, Electrical Engineering, Math, or a related field
Experience fine-tuning or customizing LLMs
First-author publications in top ML conferences (e.g. NeurIPS, ICML, ICLR)
Contributions to popular open-source projects

What You'll Actually Do:
Create and maintain data cleaning, filtering, and selection pipelines that can handle >100TB of data
Watch for the release of public datasets on Hugging Face and other platforms
Create crawlers to gather datasets from the web where public data is lacking
Write and maintain synthetic data generation pipelines
Run ablations to assess new datasets and judging pipelines

What You'll Gain:
Hands-on experience with state-of-the-art technology at a leading AI company
A collaborative, fast-paced environment where your work directly shapes our products and the next generation of LFMs

About Liquid AI
Spun out of MIT CSAIL, we're a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We're already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we're just getting started.
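As a hedged illustration of the pipeline work described under "What You'll Actually Do", the sketch below streams a public web-text corpus through a cheap quality filter using the Hugging Face datasets library; the dataset name, thresholds and heuristics are illustrative assumptions, not Liquid's actual pipeline.

# Hypothetical sketch of a streaming text-filtering step (not Liquid's pipeline).
# Assumes the Hugging Face `datasets` library; dataset name and thresholds are illustrative.
from datasets import load_dataset

def looks_clean(example):
    """Cheap quality heuristics: length bounds and a minimum alphabetic ratio."""
    text = example["text"]
    if not (200 <= len(text) <= 50_000):
        return False
    alpha_ratio = sum(c.isalpha() for c in text) / max(len(text), 1)
    return alpha_ratio > 0.6

# Stream so the full corpus never has to fit on disk or in memory.
stream = load_dataset("allenai/c4", "en", split="train", streaming=True)
filtered = stream.filter(looks_clean)

for i, example in enumerate(filtered.take(3)):
    print(i, example["text"][:80])

In practice a pretraining pipeline chains many such stages (deduplication, language ID, quality scoring) and runs them distributed, but the streaming pattern keeps any single stage from needing the full >100TB corpus at once.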
Machine Learning Engineer
Data Science & Analytics
Data Engineer
Data Science & Analytics
Hive.jpg

Analyst, Enterprise Analytics

Hive
-
US.svg
United States
Full-time
Remote
false
About Hive
Hive is the leading provider of cloud-based AI solutions to understand, search, and generate content, and is trusted by hundreds of the world's largest and most innovative organizations. The company empowers developers with a portfolio of best-in-class, pre-trained AI models, serving billions of customer API requests every month. Hive also offers turnkey software applications powered by proprietary AI models and datasets, enabling breakthrough use cases across industries. Together, Hive's solutions are transforming content moderation, brand protection, sponsorship measurement, context-based ad targeting, and more. Hive has raised over $120M in capital from leading investors, including General Catalyst, 8VC, Glynn Capital, Bain & Company, Visa Ventures, and others. We have over 250 employees globally in our San Francisco, Seattle, and Delhi offices. Please reach out if you are interested in joining the future of AI!

Analyst, Enterprise Analytics
We are looking for talented candidates to join our Hive Media team, a unit of the business focused on serving media companies, agencies, and advertisers with AI-powered products. As an Analyst on our Enterprise Analytics team, you will work closely with the Hive Media team to support delivery and growth of our media business. You will be expected to keep up with multiple projects at a time and apply your strong quantitative skills to analyze priorities, metrics, and client solutions and strategies. This is a great opportunity to be an integral part of a fast-growing startup, and you'll be able to learn the ins and outs of how to support a team of innovative leaders who relentlessly pursue the best experience possible for all of our clients.

Responsibilities
Conduct internal and external analysis on Hive offerings to refine solutions and commercialization strategies
Communicate outcomes of various projects and analytics to ensure growth and development of business goals
Collaborate with machine learning and engineering teams in developing working solutions that benefit the client
Enhance awareness in the targeted business community of Hive and our products/services
Maintain awareness of industry best practices for data maintenance handling as it relates to your role
Adhere to policies, guidelines and procedures pertaining to the protection of information assets
Report actual or suspected security and/or policy violations/breaches to an appropriate authority

Requirements
You have a Bachelor's degree
Preferred: You have 0-1 years of work experience
Excellent written and verbal communication skills
You have demonstrated success in a competitive environment
You are highly self-motivated and ambitious in achieving goals
Strong team player, but can work and execute independently
You're driven; no one needs to push you to excel; that's just who you are
You are hungry to learn and you actively look for opportunities to contribute
You are highly organized and detail-oriented; you can handle multiple projects and dynamic priorities without missing a beat
Data Analyst
Data Science & Analytics
Mindrift.jpg

Freelance Ecology / Environment Science - AI Trainer

Mindrift
USD
0
0
-
50
US.svg
United States
Part-time
Remote
true
This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates.

About the Company
At Mindrift, innovation meets opportunity. We believe in using the power of collective intelligence to ethically shape the future of AI.

What we do
The Mindrift platform connects specialists with AI projects from major tech innovators. Our mission is to unlock the potential of Generative AI by tapping into real-world expertise from across the globe.

About the Role
GenAI models are improving very quickly, and one of our goals is to make them capable of addressing specialized questions and achieving complex reasoning skills. If you join the platform as an AI Tutor in Ecology / Environment Science, you'll have the opportunity to collaborate on these projects. Although every project is unique, you might typically:
Generate prompts that challenge AI.
Define comprehensive scoring criteria to evaluate the accuracy of the AI's answers.
Correct the model's responses based on your domain-specific knowledge.

How to get started
Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring technology benefits everyone.

Requirements
You have a Bachelor's degree plus 6 years of relevant experience in Ecology, Environmental Science, or a related field.
You hold a Master's or PhD in Ecology, Environmental Science, or a related field, along with 3 years of relevant work experience.
Your level of English is advanced (C1) or above.
You are ready to learn new methods, able to switch between tasks and topics quickly, and can sometimes work with challenging, complex guidelines.
Our freelance role is fully remote, so you just need a laptop, an internet connection, available time and enthusiasm to take on a challenge.

Benefits
Why this freelance opportunity might be a great fit for you:
Get paid for your expertise, with rates that can go up to $50/hour depending on your skills, experience, and project needs.
Take part in a part-time, remote, freelance project that fits around your primary professional or academic commitments.
Work on advanced AI projects and gain valuable experience that enhances your portfolio.
Influence how future AI models understand and communicate in your field of expertise.
Machine Learning Engineer
Data Science & Analytics
Liquid AI.jpg

Member of Technical Staff - ML Inference Engineer, PyTorch

Liquid AI
-
US.svg
United States
Full-time
Remote
true
Work With Us
At Liquid, we're not just building AI models—we're redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can't: on-device, at the edge, under real-time constraints. We're not iterating on old ideas—we're architecting what comes next.
We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments—your work will directly shape the frontier of intelligent systems.

This Role Is For You If:
You have experience building large-scale production stacks for model serving.
You have a solid understanding of ragged batching, dynamic load balancing, KV-cache management, and other multi-tenant serving techniques.
You have experience with applying quantization strategies (e.g., FP8, INT4) while safeguarding model accuracy.
You have deployed models in both single-GPU and multi-GPU environments and can diagnose performance issues across the stack.

Desired Experience:
PyTorch
Python
Model-serving frameworks (e.g. TensorRT, vLLM, SGLang)

What You'll Actually Do:
Optimize and productionize the end-to-end pipeline for GPU model inference around Liquid Foundation Models (LFMs).
Facilitate the development of next-generation Liquid Foundation Models from the lens of GPU inference.
Profile and robustify the stack for different batching and serving requirements.
Build and scale pipelines for test-time compute.

What You'll Gain:
Hands-on experience with state-of-the-art technology at a leading AI company.
Deeper expertise in machine learning systems and efficient large model inference.
Opportunity to scale pipelines that directly influence user latency and experience with Liquid's models.
A collaborative, fast-paced environment where your work directly shapes our products and the next generation of LFMs.

About Liquid AI
Spun out of MIT CSAIL, we're a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We're already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we're just getting started.
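To ground the serving-stack responsibilities above, here is a minimal offline-batching sketch with vLLM, one of the frameworks the listing names; the model identifier and sampling settings are placeholder assumptions, and a production deployment would sit behind an asynchronous server with continuous batching rather than a one-off script.

# Minimal offline-batching sketch with vLLM (a framework named in the listing).
# The model identifier and sampling settings are placeholder assumptions.
from vllm import LLM, SamplingParams

prompts = [
    "Summarize the benefits of on-device inference in one sentence.",
    "Explain KV-cache reuse to a new engineer.",
]
params = SamplingParams(temperature=0.7, max_tokens=128)

# vLLM handles continuous batching and KV-cache management internally.
llm = LLM(model="facebook/opt-125m")
for output in llm.generate(prompts, params):
    print(output.outputs[0].text.strip())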
Machine Learning Engineer
Data Science & Analytics
DevOps Engineer
Data Science & Analytics
Loop.jpg

Forward Deployed Architect - Support

Loop
USD
0
130000
-
150000
US.svg
United States
Full-time
Remote
false
About Loop
Loop is on a mission to unlock profits trapped in the supply chain (https://loop.com/article/unlock-profit-trapped-in-your-supply-chain) and lower costs for consumers. Bad data and inefficient workflows create friction that limits working capital and raises costs for every supply chain stakeholder. Loop's modern audit and pay platform uses our domain-driven AI to harness the complexity of supply chain data and documentation. We improve transportation spend visibility so companies can control their costs and power profit. That is why industry leaders like J.P. Morgan Chase, Great Dane, Emerge, and Loadsmart work with Loop. Our investors include J.P. Morgan, Index Ventures, Founders Fund, 8VC, Susa Ventures, Flexport, and 50 industry-leading angel investors. Our team brings subject matter expertise from companies like Uber, Google, Flexport, Meta, Samsara, Intuit, Rakuten, and long-standing industry leaders like C.H. Robinson.

About the Role
We are seeking a technically adept and proactive team member to join our Post-Production Support team as a Technical Support Architect. This role is critical in ensuring the ongoing stability, performance, and continuous improvement of deployed integration solutions between Loop and our clients' TMS, FPA, WMS, YMS, ERP, and BI systems. The ideal candidate has hands-on experience with cloud architecture, integration methods (APIs, flat files, EDI), and development in Python or JavaScript. A strong problem-solving mindset, the ability to triage and resolve technical issues, and a focus on optimizing support processes are essential.

Key Responsibilities
Own post-go-live technical support for all deployed integrations, ensuring system uptime, data integrity, and seamless operation between Loop and client systems (TMS, ERP, BI, etc.).
Triage, diagnose, and resolve integration incidents and service requests—acting as the technical escalation point for complex issues reported by clients or internal teams.
Monitor integration health using observability tools and dashboards; proactively identify and address performance bottlenecks, data syncing issues, or integration failures.
Lead root cause analysis for major incidents; document findings, recommend preventive measures, and implement fixes to reduce recurrence.
Develop and maintain runbooks, knowledge base articles, and support documentation to empower Tier 1/Tier 2 teams and improve resolution efficiency.
Collaborate with Product and Engineering teams to communicate recurring issues, gaps, or enhancement requests—ensuring client feedback directly informs the product roadmap.
Deliver targeted training and enablement for client teams and internal support staff on integration best practices, troubleshooting, and system usage.
Manage the technical aspects of minor enhancements and patches post-launch, coordinating testing and release with minimal disruption to client operations.
Participate in cross-functional incident management processes; lead technical bridge calls during critical outages and ensure transparent, timely client communication.
Track and report key support metrics (e.g., MTTR, incident volume, client satisfaction); use data to identify trends and drive continuous improvement in support delivery.
Maintain a strong understanding of industry standards and emerging technologies in logistics, ERP, and integration platforms to recommend relevant upgrades or innovations.

Qualifications
Hands-on experience with cloud-based integration development (Python, JavaScript) and modern DevOps practices.
Deep familiarity with AWS, Azure, GCP, and integration platforms.
Proven ability to troubleshoot complex integration issues across APIs, flat files, and EDI.
Strong organizational skills to manage multiple concurrent support cases and prioritize based on business impact.
Excellent communication skills to liaise between technical teams, clients, and business stakeholders.
Proactive, customer-focused mindset with a commitment to operational excellence and continuous improvement.

Success Measures
Reduction in incident volume and resolution time for integration-related issues.
High client satisfaction scores and low escalation rates.
Effective knowledge transfer to Tier 1/Tier 2 support teams.
Identification and implementation of process improvements that reduce support costs or enhance system reliability.

Benefits & Perks
Premium Medical, Dental, and Vision Insurance plans, premiums covered 100% for you
401k plan, FSA, Commuter benefits
Unlimited PTO
Generous professional development budget to feed your curiosity
Physical and Mental fitness subsidies for yoga, meditation, gym, or ski membership
Salary range based on experience and skills: $130,000 - $150,000

Why You Should Join Loop
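As one hedged example of the integration-health monitoring described above, the sketch below polls a hypothetical client-status endpoint with retries and flags a stale data sync; the URL, response fields and staleness threshold are illustrative assumptions, not Loop's actual API.

# Hypothetical integration health-check sketch (not Loop's actual API or tooling).
# Endpoint URL, response fields, and the staleness threshold are illustrative assumptions.
from datetime import datetime, timedelta, timezone

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=3, backoff_factor=1.0, status_forcelist=[502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))

resp = session.get("https://api.example.com/integrations/acme/status", timeout=10)
resp.raise_for_status()
status = resp.json()

# Assumes the response carries an ISO-8601 timestamp with a timezone offset.
last_sync = datetime.fromisoformat(status["last_sync"])
if datetime.now(timezone.utc) - last_sync > timedelta(hours=6):
    print("ALERT: integration sync is stale; escalate per runbook")
else:
    print("Integration healthy:", status.get("records_synced", "n/a"), "records in last sync")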
Solutions Architect
Software Engineering
Liquid AI.jpg

Member of Technical Staff - Training Infrastructure Engineer

Liquid AI
-
US.svg
United States
Full-time
Remote
true
Work With Us
At Liquid, we're not just building AI models—we're redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can't: on-device, at the edge, under real-time constraints. We're not iterating on old ideas—we're architecting what comes next.
We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments—your work will directly shape the frontier of intelligent systems.

This Role Is For You If:
You have extensive experience building distributed training infrastructure for language and multimodal models, with hands-on expertise in frameworks like PyTorch Distributed, DeepSpeed, or Megatron-LM
You're passionate about solving complex systems challenges in large-scale model training—from efficient multimodal data loading to sophisticated sharding strategies to robust checkpointing mechanisms
You have a deep understanding of hardware accelerators and networking topologies, with the ability to optimize communication patterns for different parallelism strategies
You're skilled at identifying and resolving performance bottlenecks in training pipelines, whether they occur in data loading, computation, or communication between nodes
You have experience working with diverse data types (text, images, video, audio) and can build data pipelines that handle heterogeneous inputs efficiently

Desired Experience:
You've implemented custom sharding techniques (tensor/pipeline/data parallelism) to scale training across distributed GPU clusters of varying sizes
You have experience optimizing data pipelines for multimodal datasets with sophisticated preprocessing requirements
You've built fault-tolerant checkpointing systems that can handle complex model states while minimizing training interruptions
You've contributed to open-source training infrastructure projects or frameworks
You've designed training infrastructure that works efficiently for both parameter-efficient specialized models and massive multimodal systems

What You'll Actually Do:
Design and implement high-performance, scalable training infrastructure that efficiently utilizes our GPU clusters for both specialized and large-scale multimodal models
Build robust data loading systems that eliminate I/O bottlenecks and enable training on diverse multimodal datasets
Develop sophisticated checkpointing mechanisms that balance memory constraints with recovery needs across different model scales
Optimize communication patterns between nodes to minimize the overhead of distributed training for long-running experiments
Collaborate with ML engineers to implement new model architectures and training algorithms at scale
Create monitoring and debugging tools to ensure training stability and resource efficiency across our infrastructure

What You'll Gain:
The opportunity to solve some of the hardest systems challenges in AI, working at the intersection of distributed systems and cutting-edge multimodal machine learning
Experience building infrastructure that powers the next generation of foundation models across the full spectrum of model scales
The satisfaction of seeing your work directly enable breakthroughs in model capabilities and performance

About Liquid AI
Spun out of MIT CSAIL, we're a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We're already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we're just getting started.
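For a concrete, if toy-sized, picture of the distributed-training plumbing this role centres on, the sketch below wraps a model in PyTorch DistributedDataParallel and writes a resumable checkpoint; the single-process gloo setup and the toy objective are assumptions for illustration, and real jobs would launch via torchrun across many GPUs with sharded checkpoints.

# Toy sketch of distributed-training plumbing (not Liquid's stack): a model wrapped
# in DistributedDataParallel plus a basic resumable checkpoint. Runs as a single
# "gloo" process purely for illustration.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = DDP(torch.nn.Linear(512, 512))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

for step in range(10):
    x = torch.randn(32, 512)
    loss = model(x).pow(2).mean()   # placeholder objective for the sketch
    optimizer.zero_grad()
    loss.backward()                 # DDP all-reduces gradients across ranks here
    optimizer.step()

# Save model and optimizer state so training can resume after a preemption.
torch.save({"model": model.module.state_dict(),
            "optimizer": optimizer.state_dict(),
            "step": step}, "checkpoint.pt")
dist.destroy_process_group()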
DevOps Engineer
Data Science & Analytics
Machine Learning Engineer
Data Science & Analytics
Liquid AI.jpg

Member of Technical Staff - ML Research Engineer, Performance Optimization

Liquid AI
-
US.svg
United States
Full-time
Remote
true
Work With Us
At Liquid, we're not just building AI models—we're redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can't: on-device, at the edge, under real-time constraints. We're not iterating on old ideas—we're architecting what comes next.
We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments—your work will directly shape the frontier of intelligent systems.

This Role Is For You If:
You have experience writing high-performance, custom GPU kernels for training or inference.
You have an understanding of low-level profiling tools and how to tune kernels with such tools.
You have experience integrating GPU kernels into frameworks like PyTorch, bridging the gap between high-level models and low-level hardware performance.
You have a solid understanding of the memory hierarchy and have optimized for compute- and memory-bound workloads.
You have implemented fine-grained optimizations for target hardware, e.g. targeting tensor cores.

Desired Experience:
CUDA
CUTLASS
C/C++
PyTorch/Triton

What You'll Actually Do:
Write high-performance GPU kernels for inference workloads.
Optimize alternative architectures used at Liquid across all model parameter sizes.
Implement the latest techniques and ideas from research into low-level GPU kernels.
Continuously monitor, profile, and improve the performance of our inference pipelines.

What You'll Gain:
Hands-on experience with state-of-the-art technology at a leading AI company.
Deeper expertise in machine learning systems and performance optimization.
Opportunity to bridge the gap between theoretical improvements in research and effective gains in practice.
A collaborative, fast-paced environment where your work directly shapes our products and the next generation of LFMs.

About Liquid AI
Spun out of MIT CSAIL, we're a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We're already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we're just getting started.
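As an illustration of the kernel work named above, here is a teaching-size Triton kernel (one of the listed toolchains) that fuses an elementwise multiply-add and is launched from PyTorch; it is not one of Liquid's production kernels and assumes a CUDA-capable GPU.

# Teaching-size Triton kernel sketch (not a Liquid production kernel): an elementwise
# fused multiply-add launched from PyTorch. Requires a CUDA GPU and the triton package.
import torch
import triton
import triton.language as tl

@triton.jit
def fma_kernel(x_ptr, y_ptr, out_ptr, n_elements, scale, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements              # guard the ragged tail block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x * scale + y, mask=mask)

def fused_fma(x: torch.Tensor, y: torch.Tensor, scale: float = 2.0) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    fma_kernel[grid](x, y, out, n, scale, BLOCK=1024)
    return out

if __name__ == "__main__":
    a = torch.randn(1 << 20, device="cuda")
    b = torch.randn(1 << 20, device="cuda")
    assert torch.allclose(fused_fma(a, b), a * 2.0 + b, atol=1e-5)

One design note: the mask guards the partial final block, and BLOCK is a compile-time constant that a real kernel would autotune against the target hardware.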
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering
ada CX.jpg

Senior Software Engineer, Voice

Ada
USD
135000
-
170000
CA.svg
Canada
Full-time
Remote
true
About Us
Ada is an AI customer service company whose mission is to make customer service extraordinary for everyone. We're driven to raise a new standard of quality customer service at scale, enabling enterprise companies to deliver experiences that people love–instant, proactive, personalized, and effortless. Ada is an AI transformation platform and partner—combining strategic expertise with powerful AI agent management technology to accelerate businesses' AI maturity and keep them ahead of the curve. With Ada, 83% of customer conversations—and counting—are effortlessly resolved through automation, giving teams more time back, companies more resources to focus on growth, and customers more life to focus on what matters most to them. Established in 2016, Ada is a Canadian company that has powered over 5.5 billion interactions for leading brands like Square, YETI, Canva, and Monday.com, saving millions of hours of human effort. Backed with over $250M in funding from tier-one investors including Accel, Bessemer, FirstMark, Spark, and Version One Ventures, Ada is a pioneer in the management and application of AI in customer service. At Ada, we see growth as a reflection of each individual owner's personal growth. That's why our values are rooted in driving progress and continuous improvement. If you're ambitious and eager to grow, Ada could be the place for you. Learn more at www.ada.cx.

Engineering at Ada
Everyone at Ada is an "Ada Owner": this refers not just to the fact that you share in the equity, but also to the expectation we have of you, and the agency you are given, to influence and shape our technical roadmap. True to our value of "Always Be Improving", we are a team of folks who have the drive to grow themselves, and others around them, resulting in a challenging but fulfilling environment. With AI and LLMs central to our business, staying proficient and evolving alongside advancements in this field is crucial for success.

Our Role
As a Senior Software Engineer on our Voice team, you will work with internal and external stakeholders, product managers, and designers to execute Ada's roadmap to deliver a best-in-class voice experience for our customers. Our team is focused on making it as easy as possible for Ada to be deployed in all the channels where our customers and their end users are having conversations.

About You
5+ years of experience as a Software Engineer and proficient with backend technologies (with a strong preference for Python)
Understanding of databases such as MongoDB, PostgreSQL, ElasticSearch and in-memory stores such as Redis
Experience with 3rd-party LLMs, such as GPT, Azure, Anthropic, etc.
A good understanding of and direct experience with CCaaS, SIP, WebRTC, IVR, and Telephony ecosystems, including knowledge of VoIP protocols and contact centre technologies, is a benefit
Experience deploying code and strong developer operations practices
Experience working with public APIs to build reliable third-party integrations
A collaborative leadership style to help the team grow and deliver high-quality code

Outcomes
Execute on our ambitious product roadmap
Provide your perspective and ideas to help level up our current development practices
Review the team's code, provide insightful feedback, foster a collaborative community and teach everyone something new
Ensure that the team is providing the best AI Agent platform experience to our internal developers and external partners
Participate in an on-call rotation for the services the team owns, triaging and addressing production issues

The expected salary range for this position is $135,000 - $170,000. Actual pay will be determined based on several factors such as past experience and qualifications, geographic location, and other job-related factors permitted by law.

Benefits & Perks
At Ada, you'll not only build extraordinary products but also thrive in an environment designed for your success. We prioritize your well-being, growth, and work-life balance. Here's what we offer:

Benefits
Unlimited Vacation: Recharge when you need to.
Comprehensive Benefits: Extended health coverage, dental, vision, travel, and life insurance.
Wellness Account: Empowering you to invest in your overall well-being and lifestyle.
Employee & Family Assistance Plan: Resources to support you and your loved ones.

Perks
Flexible Work Schedule: Balance your work and personal life.
Remote-First, In-Person Friendly: Options to work from home or at our local hub.
Learning & Development Budget: Invest in your long-term growth goals and skills.
Work from Home Budget: Equipping you with the tools and support for a seamless remote work experience.
Access to Cutting-Edge AI Tools: Work with the best AI tech stack in the industry.
Hands-On with LLMs: Enhance your expertise in leveraging large language models.
A Thriving Industry: Join the forefront of innovation in AI, shaping the future of technology.

The above Benefits and Perks only apply to full-time, permanent employees. Thank you for your interest in joining us at Ada. Due to the high volume of applications, we will only contact candidates whose qualifications match closely to the requirements of the position. We appreciate the time you have invested in learning more about us.
Software Engineer
Software Engineering
Helsing.jpg

Avionics Systems Engineer - Mission Systems

Helsing
-
GE.svg
Germany
Remote
false
Who we are
Helsing is a defence AI company. Our mission is to protect our democracies. We aim to achieve technological leadership, so that open societies can continue to make sovereign decisions and control their ethical standards. As democracies, we believe we have a special responsibility to be thoughtful about the development and deployment of powerful technologies like AI. We take this responsibility seriously. We are an ambitious and committed team of engineers, AI specialists and customer-facing programme managers. We are looking for mission-driven people to join our European teams – and apply their skills to solve the most complex and impactful problems. We embrace an open and transparent culture that welcomes healthy debates on the use of technology in defence, its benefits, and its ethical implications.

The role
As an Avionics Systems Engineer for Mission Systems, you will be responsible for the development and delivery of mission computing hardware, working closely with industry partners. You will be responsible not only for defining the system, but also for driving it through all development stages and ultimately onto the flying platform. In achieving this goal you will work with a diverse, high-calibre team of experts, strategists and partners.

The day-to-day
Lead the technical development of high-performance Mission Computer Hardware in close collaboration with internal teams and partners
Plan and lead the execution of the systems engineering lifecycle, from requirements development to qualification
Plan and lead the preparation of technical deliverables and documentation
Plan test activities and support the development of test facilities
Create the certification and compliance documentation and ensure adherence to aviation and regulatory standards throughout the development lifecycle
Work closely with internal and external stakeholders
Provide regular reporting on project status and technical changes to Chief Engineering and Programme Management

You should apply if you
Hold a relevant degree such as a Bachelor's or Master's in Aerospace Engineering, Electrical Engineering, Computer Science, or a related field
Have led the design and development of Integrated Modular Avionics (IMA) computing platforms for airborne mission computers, including tasks related to systems engineering, certification, and avionics integration
Are a subject matter expert in military open standards for avionics, for example MOSA, SOSA, FACE and Pyramid, and their application to hardware development
Have deep knowledge of complex avionics HW & SW development and experience in the application of the DO-178C, DO-254 and DO-297 standards
Have operational experience with mission systems (an advantage)
Possess excellent communication skills and the ability to report and present results clearly and effectively to both internal and external stakeholders

Note: We operate in an industry where women, as well as other minority groups, are systematically under-represented. We encourage you to apply even if you don't meet all the listed qualifications; ability and impact cannot be summarised in a few bullet points.

Join Helsing and work with world-leading experts in their fields
Helsing's work is important. You'll be directly contributing to the protection of democratic countries while balancing both ethical and geopolitical concerns.
The work is unique. We operate in a domain that has highly unusual technical requirements and constraints, and where robustness, safety, and ethical considerations are vital. You will face unique engineering and AI challenges that make a meaningful impact in the world.
Our work frequently takes us right up to the state of the art in technical innovation, be it reinforcement learning, distributed systems, generative AI, or deployment infrastructure.
The defence industry is entering the most exciting phase of the technological development curve. Advances in our field are not incremental: Helsing is part of, and often leading, historic leaps forward.
In our domain, success is a matter of order-of-magnitude improvements and novel capabilities. This means we take bets, aim high, and focus on big opportunities. Despite being a relatively young company, Helsing has already been selected for multiple significant government contracts.
We actively encourage healthy, proactive, and diverse debate internally about what we do and how we choose to do it. Teams and individual engineers are trusted (and encouraged) to practise responsible autonomy and critical thinking, and to focus on outcomes, not conformity. At Helsing you will have a say in how we (and you!) work, the opportunity to engage on what does and doesn't work, and to take ownership of aspects of our culture that you care deeply about.

What we offer
A focus on outcomes, not time-tracking
Competitive compensation and stock options
Relocation support
Social and education allowances
Regular company events and all-hands to bring together employees as one team across Europe
A hands-on onboarding program (affectionately labelled "Infraduction"), in which you will be building tooling and applications to be used across the company. This is your opportunity to learn our tech stack, explore the company, and learn how we get things done - all whilst working with other engineering teams from day one

Helsing is an equal opportunities employer. We are committed to equal employment opportunity regardless of race, religion, sexual orientation, age, marital status, disability or gender identity. Please do not submit personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, data concerning your health, or data concerning your sexual orientation.

Helsing's Candidate Privacy and Confidentiality Regime can be found here.
Software Engineer
Software Engineering