Top Machine Learning Engineer Jobs Openings in 2025

Looking for Machine Learning Engineer opportunities? This curated list features the latest Machine Learning Engineer job openings from AI-native companies. Whether you're an experienced professional or just entering the field, find roles that match your expertise, from startups to global tech leaders. Updated every day.


Machine Learning Engineer

Krea
United States
Full-time
Remote: No
About Krea
At Krea, we are building next-generation AI creative tools. We are dedicated to making AI intuitive and controllable for creatives. Our mission is to build tools that empower human creativity, not replace it. We believe AI is a new medium that allows us to express ourselves through various formats: text, images, video, sound, and even 3D. We're building better, smarter, and more controllable tools to harness this medium.

This job
We're looking for machine learning engineers who can work on large-scale image and video model training experiments.

Some stuff you can do:
- Train foundation diffusion models for image and video generation.
- Train controllability modules such as IP-Adapters or ControlNets.
- Develop novel research techniques and put them into production.
- Conduct large-scale experiments on high-performance computing clusters, optimizing data pipelines for massive image datasets.

Example experience and skills we're looking for:
- A proven track record of working with image or video models at scale (publications or open-source contributions a plus).
- A strong background in deep learning frameworks and distributed training paradigms.
- The ability to iterate rapidly and propose creative research directions.

A bit more about us
We've raised over $83M and are backed by world-class Silicon Valley investors such as Andreessen Horowitz, the cofounder of the Meta AI Research laboratory (formerly Facebook AI Research), and founding members of OpenAI.
Machine Learning Engineer
Data Science & Analytics
Apply

Director, Machine Learning

Metropolis
USD 210,000 - 245,000
United States
Full-time
Remote: No
The Company
Metropolis is an artificial intelligence company that uses computer vision technology to enable frictionless, checkout-free experiences in the real world. Today, we are reimagining parking to enable millions of consumers to just "drive in and drive out." We envision a future where people transact in the real world with a speed, ease, and convenience that is unparalleled, even online. Tomorrow, we will power checkout-free experiences anywhere you go to make the everyday experiences of living, working, and playing remarkable, giving us back our most valuable asset: time.

The Role
Metropolis is seeking a Director of Machine Learning to guide the research, development, and deployment of our portfolio of machine learning (ML) systems. We are developing a broad suite of ML technologies that enable "AI for the Real World," including but not limited to computer vision systems, edge and IoT devices, business analysis tools, and user-facing applications. In this role, you will lead multiple ML teams spanning multiple disciplines, including data engineering and data science, as well as the ML algorithms and ML applications teams that power Orion, our computer vision platform. As a senior leader within the Advanced Technologies Group (ATG), you will develop and execute a robust machine learning roadmap that balances new initiatives with improvement of mission-critical production systems. You will be expected to engage directly with the technical details of the team's efforts, getting your hands dirty when needed, while also leading the team in a fast-paced, highly distributed environment.

The right candidate will have a strong background in machine learning systems and ML team leadership. You will have professional experience in a variety of machine learning domains, a deep understanding of the entire machine learning development lifecycle, and demonstrated experience taking ML systems from concept to production. Teams you have led will have been high performing, included team members from multiple technical disciplines, and, most importantly, fostered an inclusive culture. You must be able to thrive and succeed in an entrepreneurial environment, working collaboratively at a fast pace with multiple stakeholders. You won't be afraid to break new technological ground at Metropolis and are more than willing to roll up your sleeves, dig in, and get the job done.

Responsibilities
- Lead the machine learning teams in the research, design, development, testing, and deployment of a variety of machine learning, computer vision, and autonomous systems.
- Guide the teams technically, engaging in troubleshooting, architecture definition, and implementation of best practices.
- Bring a broad skill set spanning data science, machine learning, and engineering, comfortably moving between disciplines to ensure well-designed, supportable, and scalable solutions are delivered.
- Work closely with cross-functional leaders in Hardware, Edge/Platform, Data, and Product engineering, as well as vendors.
- Develop resourcing plans, coordinate across teams to ensure development schedules are aligned and communicated, and drive engineering efforts to completion.
- Invest in the career development of team members, develop future leaders, and create a culture of cohesion and teamwork.
- Participate in talent acquisition processes to ensure that we have world-class engineers across all skill and experience levels.
- Establish metrics to measure the productivity of the team, hold people accountable, and identify people issues early.
- Communicate ideas effectively, verbally and in writing, to a wide range of technical and non-technical audiences, including Metropolis leadership.

Qualifications
- BS, MS, or PhD in a relevant engineering discipline.
- 10+ years of experience, with at least 4+ years leading and managing machine learning or related teams.
- 2+ years of experience as a hands-on senior, staff, or principal engineer before transitioning into managing teams.
- Experience as a technical lead or manager designing machine learning systems that have been deployed at scale.
- Demonstrated expertise in implementing and deploying machine learning algorithms related to computer vision, specifically object detection/recognition, tracking algorithms, metric learning, and re-identification.
- Experience with deep learning frameworks such as TensorFlow, PyTorch, or MXNet.
- Experience with parallel computing and accelerator architectures, including the CUDA, cuDNN, and TensorRT libraries.
- Experience with large-scale datasets, data pipelines, and database tools/libraries.
- Experience deploying ML services to the cloud, including API development and design for scalability, performance, and reliability.
- Experience in modern software design, development, version control, refactoring, and testing.
- Working knowledge of at least one modern programming language such as Python or C/C++.
- Knowledge of professional software engineering practices across the SDLC.
- Demonstrated project management skills to ensure timely delivery of features while maintaining high-quality products.
- A demonstrated track record of developing engineers through various career stages and building high-performance teams.
- Excellent written and verbal communication skills, with a proven ability to present complex technical information clearly and concisely to a variety of audiences.
- Previous experience working inside innovative, high-growth environments.
- Role is hybrid on-site in either Seattle, WA or Santa Monica, CA.

When you join Metropolis, you'll join a team of world-class product leaders and engineers building an ecosystem of technologies at the intersection of parking, mobility, and real estate. Our goal is to build an inclusive culture where everyone has a voice and the best idea wins. You will play a key role in building and maintaining this culture as our organization grows.

The anticipated base salary for this position is $210,000.00 to $245,000.00 annually. The actual base salary offered is determined by a number of variables, including, as appropriate, the applicant's qualifications for the position, years of relevant experience, distinctive skills, level of education attained, certifications or other professional licenses held, and the location of residence and/or place of employment. Base salary is one component of Metropolis's total compensation package, which may also include access to or eligibility for healthcare benefits, a 401(k) plan, short-term and long-term disability coverage, basic life insurance, a lucrative stock option plan, bonus plans, and more.

Metropolis Technologies is an equal opportunity employer. We make all hiring decisions based on merit, qualifications, and business needs, without regard to race, color, religion, sex (including gender identity, sexual orientation, or pregnancy), national origin, disability, veteran status, or any other protected characteristic under federal, state, or local law.
Machine Learning Engineer
Data Science & Analytics
Computer Vision Engineer
Software Engineering
Head of Engineering
Software Engineering
Apply

Applied AI Engineering Manager, Enterprise

Scale AI
USD 212,000 - 254,400
United States
Full-time
Remote: No
AI is becoming vitally important in every function of our society. At Scale, our mission is to accelerate the development of AI applications. For 8 years, Scale has been the leading AI data foundry, helping fuel the most exciting advancements in AI, including generative AI, defense applications, and autonomous vehicles. With our recent Series F round, we're accelerating the usage of frontier data and models by building complex agents for enterprises around the world through our Scale Generative AI Platform (SGP).

The SGP ML team works on the front lines of this AI revolution. We interface directly with clients to build cutting-edge products using the arsenal of proprietary research and resources developed at Scale. As an Applied AI (AAI) Engineering Manager, you'll manage a team of high-calibre Applied AI Engineers and MLEs who work with clients to train ML models that satisfy their business needs. Your team's work will range from training next-generation AI cybersecurity firewall LLMs to training foundation agentic action models that make predictions about business-saving outcomes. You will guide your team toward using data-driven experiments to provide key insights into model strengths and inefficiencies in an effort to improve products. If you are excited about shaping the future of the modern AI movement, we would love to hear from you!

You will:
- Train state-of-the-art models, developed both internally and from the community, in production to solve problems for our enterprise customers.
- Manage a team of 5+ Applied AI Engineers / ML Engineers.
- Work with product and research teams to identify opportunities for ongoing and upcoming services.
- Explore approaches that integrate human feedback and assisted evaluation into existing product lines.
- Create state-of-the-art techniques to integrate tool calling into production-serving LLMs.
- Work closely with customers, some of the most sophisticated ML organizations in the world, to quickly prototype and build new deep learning models targeted at multi-modal content understanding problems.

Ideally you'd have:
- At least 3 years of model training, deployment, and maintenance experience in a production environment.
- At least 1-2 years of management or tech leadership experience.
- Strong skills in NLP, LLMs, and deep learning.
- A solid background in algorithms, data structures, and object-oriented programming.
- Experience working with a cloud technology stack (e.g., AWS or GCP) and developing machine learning models in a cloud environment.
- Experience building products with LLMs, including knowing the ins and outs of evaluation, experimentation, and designing solutions to get the most out of the models.
- A PhD or Master's in Computer Science or a related field.

Nice to haves:
- Experience dealing with large-scale AI problems, ideally in the generative-AI field.
- Demonstrated expertise in large vision-language models for diverse real-world applications, e.g., classification, detection, question answering, etc.
- Published research in areas of machine learning at major conferences (NeurIPS, ICML, EMNLP, CVPR, etc.) and/or journals.
- Strong high-level programming skills (e.g., Python) and experience with frameworks and tools such as DeepSpeed, PyTorch Lightning, Kubeflow, TensorFlow, etc.
- Strong written and verbal communication skills to operate in a cross-functional team environment.

Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Directors approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process and confirm whether the hired role will be eligible for an equity grant. You'll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.

Please reference the job posting's subtitle for where this position will be located. For pay transparency purposes, the base salary range for this full-time position in San Francisco, New York, or Seattle is $212,000 to $254,400 USD.

PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants.

About Us:
At Scale, we believe that the transition from traditional software to AI is one of the most important shifts of our time. Our mission is to make that happen faster across every industry, and our team is transforming how organizations build and deploy AI. Our products power the world's most advanced LLMs, generative models, and computer vision models. We are trusted by generative AI companies such as OpenAI, Meta, and Microsoft, government agencies like the U.S. Army and U.S. Air Force, and enterprises including GM and Accenture. We are expanding our team to accelerate the development of AI applications.

We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity, or Veteran status. We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information. We comply with the United States Department of Labor's Pay Transparency provision.

PLEASE NOTE: We collect, retain, and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing it with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants' needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering
Apply

Member of Technical Staff - ML Research Engineer; Multi-Modal - Vision

Liquid AI
United States
Full-time
Remote: No
Work With Us
At Liquid, we're not just building AI models; we're redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can't: on-device, at the edge, under real-time constraints. We're not iterating on old ideas; we're architecting what comes next.

We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments, your work will directly shape the frontier of intelligent systems.

This Role Is For You If:
- You have experience with machine learning at scale.
- You're proficient in PyTorch and familiar with distributed training frameworks like DeepSpeed, FSDP, or Megatron-LM.
- You've worked with multimodal data (e.g., image-text, video, visual documents, audio).
- You've contributed to research papers, open-source projects, or production-grade multimodal model systems.
- You understand how data quality, augmentations, and preprocessing pipelines can significantly impact model performance, and you've built tooling to support that.
- You enjoy working in interdisciplinary teams across research, systems, and infrastructure, and can translate ideas into high-impact implementations.

Desired Experience:
- You've designed and trained Vision Language Models.
- You care deeply about empirical performance and know how to design, run, and debug large-scale training experiments on distributed GPU clusters.
- You've developed vision encoders or integrated them into language pretraining pipelines with autoregressive or generative objectives.
- You have experience working with large-scale video or document datasets, understand the unique challenges they pose, and can manage massive datasets effectively.
- You've built tools for data deduplication, image-text alignment, or vision tokenizer development.

What You'll Actually Do:
- Investigate and prototype new model architectures that optimize inference speed, including on edge devices.
- Lead or contribute to ablation studies and benchmark evaluations that inform architecture and data decisions.
- Build and maintain evaluation suites for multimodal performance across a range of public and internal tasks.
- Collaborate with the data and infrastructure teams to build scalable pipelines for ingesting and preprocessing large vision-language datasets.
- Work with the infrastructure team to optimize model training across large-scale GPU clusters.
- Contribute to publications, internal research documents, and thought leadership within the team and the broader ML community.
- Collaborate with the applied research and business teams on client-specific use cases.

What You'll Gain:
- A front-row seat in building some of the most capable Vision Language Models.
- Access to world-class infrastructure, a fast-moving research team, and deep collaboration across ML, systems, and product.
- The opportunity to shape multimodal foundation model research with both scientific rigor and real-world impact.

About Liquid AI
Spun out of MIT CSAIL, we're a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale, from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We're already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we're just getting started.
Machine Learning Engineer
Data Science & Analytics
Computer Vision Engineer
Software Engineering
Research Scientist
Product & Operations
Apply

Member of Technical Staff - Applied ML Engineer

Liquid AI
United States
Full-time
Remote: No
Work With Us
At Liquid, we're not just building AI models; we're redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can't: on-device, at the edge, under real-time constraints. We're not iterating on old ideas; we're architecting what comes next.

We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments, your work will directly shape the frontier of intelligent systems.

This Role Is For You If:
- You have hands-on experience optimizing and deploying local LLMs: running models like Llama, Mistral, or other open-source LLMs locally through tools like vLLM, Ollama, or LM Studio.
- You're passionate about customizing ML models to solve real customer problems; from fine-tuning foundation models to optimizing them for specific use cases, you know how to make models work for unique requirements.
- You have a knack for lightweight ML deployment and can architect solutions that work efficiently in resource-constrained environments, whether that's optimizing inference on CPUs, working with limited memory budgets, or deploying to edge devices.
- You have a sharp eye for data quality and know what makes data effective: you can spot ineffective patterns in sample data, help design targeted synthetic datasets, and craft prompts that unlock the full potential of foundation models for specific use cases.

Desired Experience:
- You have customized an existing product for a customer.
- You're versatile across deployment scenarios; whether it's containerized cloud deployments, on-premise installations with strict security requirements, or optimized edge inference, you can make models work anywhere.

What You'll Actually Do:
- Own the complete deployment journey, from model customization to serving infrastructure, ensuring our solutions work flawlessly in variable customer environments.
- Deploy AI systems to solve use cases others cannot, implementing solutions that push beyond what base LFMs can deliver and redefine what's possible with our technology.
- Work alongside our core engineering team to leverage and enhance our powerful toolkit of Liquid infrastructure.

What You'll Gain:
- The ability to shape how the world's most influential organizations adopt and deploy LFMs; you'll be hands-on building solutions for customers who are reimagining entire industries.
- Ownership of the complete journey of delivering ML solutions that matter, from model customization to deployment architecture to seeing your work drive real customer impact.

About Liquid AI
Spun out of MIT CSAIL, we're a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale, from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We're already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we're just getting started.
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering
Apply

ML Engineer - Speech (m/f/d)

Voize
Germany
Full-time
Remote: Yes
🎤 Why voize? Because we're more than just a job!
At voize, we're revolutionizing the healthcare industry with AI: nurses simply speak their documentation into their smartphones, and our AI automatically generates the correct entries. This saves each nurse an average of 39 minutes per day, improves the quality of documentation, and makes their daily work much more rewarding. voize is Y Combinator-funded, already in use at over 600 senior care homes, and has grown by 100% in the last 90 days. Our customers save over 3.5 million hours annually: time spent on people instead of paperwork. But this is just the beginning. With our self-developed voize AI, we're transforming not only the healthcare industry but also have the potential to create value in many other sectors, from healthcare to inspections.

💡 Your Mission:
If you're a Machine Learning Engineer experienced with speech recognition and are excited to work at the cutting edge of product design, applied ML research, and MLOps, then go ahead and apply! With us, you'll build products with direct user feedback, train AI models with real data, and ship new features to production every day.

🤝 Your Skillset – What you bring to the table
- Several years of hands-on experience in deep learning for speech recognition, including developing and optimizing ASR systems (not just academic research).
- An excellent foundation in STT (speech-to-text) system development with a focus on real-world applications.
- Experience owning the ML process end-to-end: from concept and exploration to model productionization, maintenance, monitoring, and optimization.
- Shipped ML models to production with Python and PyTorch.
- Trained new models from scratch, not just fine-tuned existing ones.

🚀 Your Daily Business – No two days are alike
- Take ownership of the design, training, evaluation, and deployment of our deep learning models in the space of speech recognition.
- The models you build and refine are at the heart of our applications and directly impact the end user.
- You'll get to engineer large self-supervised training runs as well as fast inference for mobile devices and hosted environments.

🎯 Our Success Mindset – How we work at voize
- Resilience is one of your strengths: you see challenges as opportunities, not obstacles.
- Iterative working suits you: you test, learn, and improve constantly instead of waiting for perfection.
- Communication and feedback come naturally to you: you openly address issues and both give and receive constructive feedback.

🌱 Growing together – what you can expect at voize
- Become a co-creator of our success with virtual stock options.
- Our office is in Berlin, and we offer remote work.
- We provide flexible working hours because you know best when you work most efficiently!
- Access to various learning platforms (e.g., Blinkist, Audible, etc.).
- We have an open culture and organize regular work weeks and team events to collaborate and bond.
- We are a fast-growing startup, so you'll encounter various challenges, providing the perfect foundation for rapid personal growth.
- Your work will make a real impact, helping alleviate the workload of healthcare professionals.
- Free Germany Ticket and Urban Sports Club membership.
- 30 days of vacation, plus your birthday off.

✨ Ready to talk? Apply now! 🚀
We look forward to your application and can't wait to meet you, no matter who you are or what background you have!
Machine Learning Engineer
Data Science & Analytics
Computer Vision Engineer
Software Engineering
Apply

Member of Technical Staff - ML Research Engineer, Foundation Model Data

Liquid AI
United States
Full-time
Remote: Yes
Work With Us
At Liquid, we're not just building AI models; we're redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can't: on-device, at the edge, under real-time constraints. We're not iterating on old ideas; we're architecting what comes next.

We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments, your work will directly shape the frontier of intelligent systems.

This Role Is For You If:
- You want to play a critical role in our foundation model development process, focusing on consolidating, gathering, and generating high-quality text data for pretraining, midtraining, SFT, and preference optimization.

Required Experience:
- Experience Level: B.S. plus 5 years of experience, M.S. plus 3 years of experience, or Ph.D. plus 1 year of experience.
- Dataset Engineering: Expertise in data curation, cleaning, augmentation, and synthetic data generation techniques.
- Machine Learning Expertise: Ability to write and debug models in popular ML frameworks, and experience working with LLMs.
- Software Development: Strong programming skills in Python, with an emphasis on writing clean, maintainable, and scalable code.

Desired Experience:
- M.S. or Ph.D. in Computer Science, Electrical Engineering, Math, or a related field.
- Experience fine-tuning or customizing LLMs.
- First-author publications in top ML conferences (e.g., NeurIPS, ICML, ICLR).
- Contributions to popular open-source projects.

What You'll Actually Do:
- Create and maintain data cleaning, filtering, and selection pipelines that can handle >100 TB of data.
- Watch for releases of public datasets on Hugging Face and other platforms.
- Create crawlers to gather datasets from the web where public data is lacking.
- Write and maintain synthetic data generation pipelines.
- Run ablations to assess new datasets and judging pipelines.

What You'll Gain:
- Hands-on experience with state-of-the-art technology at a leading AI company.
- A collaborative, fast-paced environment where your work directly shapes our products and the next generation of LFMs.

About Liquid AI
Spun out of MIT CSAIL, we're a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale, from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We're already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we're just getting started.
Machine Learning Engineer
Data Science & Analytics
Data Engineer
Data Science & Analytics
Apply

Member of Technical Staff - Edge Inference Engineer

Liquid AI
United States
Full-time
Remote: Yes
Work With Us
At Liquid, we’re not just building AI models—we’re redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can’t: on-device, at the edge, under real-time constraints. We’re not iterating on old ideas—we’re architecting what comes next.
We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments—your work will directly shape the frontier of intelligent systems.
This Role Is For You If:
- You are a highly skilled engineer with extensive experience in inference on embedded hardware and a deep understanding of CPU, NPU, and GPU architectures. Proficiency in building and enhancing edge inference stacks is essential.
- Strong ML experience: proficiency in Python and PyTorch to effectively interface with the ML team at a deeply technical level.
- Hardware awareness: an understanding of modern hardware architecture, including cache hierarchies and memory access patterns, and their impact on performance.
- Proficient in coding: expertise in Python, C++, or Rust for AI-driven real-time embedded systems.
- Optimization of low-level primitives: responsible for optimizing core primitives to ensure efficient model execution.
- Self-guided ownership: able to independently take a PyTorch model and inference requirements and deliver a fully optimized edge inference stack with minimal guidance.
Desired Experience:
- Experience with mobile development and cache-aware algorithms will be highly valued.
What You'll Actually Do:
- Optimize inference stacks tailored to each platform as we prepare to deploy our models across various edge device types, including CPUs, embedded GPUs, and NPUs.
- Take our models, dive deep into the task, and return with a highly optimized inference stack, leveraging existing frameworks like llama.cpp, ExecuTorch, and TensorRT to deliver exceptional throughput and low latency.
What You'll Gain:
- Hands-on experience with state-of-the-art technology at a leading AI company.
- A collaborative, fast-paced environment where your work directly shapes our products and the next generation of LFMs.
About Liquid AI
Spun out of MIT CSAIL, we’re a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We’re already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we’re just getting started.
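To make the "optimizing core primitives" work concrete: a staple primitive in edge inference stacks is weight quantization, which trades a little precision for much smaller, faster models on CPU/NPU targets. The sketch below is illustrative only (pure Python, not Liquid's actual stack or any specific framework's API) and shows symmetric per-tensor INT8 quantization with its bounded reconstruction error.

```python
# Illustrative sketch, not production code: symmetric per-tensor INT8
# quantization, a core primitive edge inference stacks use to shrink
# model weights for CPU/NPU deployment.

def quantize_int8(values):
    """Map floats to int8 codes plus a scale used for dequantization."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    codes = [max(-127, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize_int8(codes, scale):
    return [c * scale for c in codes]

weights = [0.5, -1.2, 0.03, 0.99]
codes, scale = quantize_int8(weights)
restored = dequantize_int8(codes, scale)

# Reconstruction error is bounded by half a quantization step.
assert all(abs(w - r) <= scale / 2 + 1e-9 for w, r in zip(weights, restored))
```

Real stacks refine this idea with per-channel scales, zero points for asymmetric ranges, and calibration data, but the storage win is the same: one byte per weight instead of four.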
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering
Apply
Hidden link
mindrift_ai_logo

Freelance Ecology / Environment Science - AI Trainer

Mindrift
USD
0
0
-
50
US.svg
United States
Part-time
Remote
true
This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates.
About the Company
At Mindrift, innovation meets opportunity. We believe in using the power of collective intelligence to ethically shape the future of AI.
What we do
The Mindrift platform connects specialists with AI projects from major tech innovators. Our mission is to unlock the potential of Generative AI by tapping into real-world expertise from across the globe.
About the Role
GenAI models are improving very quickly, and one of our goals is to make them capable of addressing specialized questions and achieving complex reasoning skills. If you join the platform as an AI Tutor in Ecology / Environment Science, you’ll have the opportunity to collaborate on these projects. Although every project is unique, you might typically:
- Generate prompts that challenge AI.
- Define comprehensive scoring criteria to evaluate the accuracy of the AI’s answers.
- Correct the model’s responses based on your domain-specific knowledge.
How to get started
Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you’ll help shape the future of AI while ensuring technology benefits everyone.
Requirements
- You have a Bachelor’s degree plus 6 years of relevant experience in Ecology, Environmental Science, or a related field, or you hold a Master’s or PhD in one of those fields along with 3 years of relevant work experience.
- Your level of English is advanced (C1) or above.
- You are ready to learn new methods, able to switch between tasks and topics quickly, and sometimes work with challenging, complex guidelines.
- The role is fully remote, so you just need a laptop, an internet connection, available time, and enthusiasm to take on a challenge.
Benefits
Why this freelance opportunity might be a great fit for you:
- Get paid for your expertise, with rates that can go up to $50/hour depending on your skills, experience, and project needs.
- Take part in a part-time, remote, freelance project that fits around your primary professional or academic commitments.
- Work on advanced AI projects and gain valuable experience that enhances your portfolio.
- Influence how future AI models understand and communicate in your field of expertise.
Machine Learning Engineer
Data Science & Analytics
Apply
Hidden link
liquid_ai_inc_logo

Member of Technical Staff - ML Inference Engineer, Pytorch

Liquid AI
-
US.svg
United States
Full-time
Remote
true
Work With Us
At Liquid, we’re not just building AI models—we’re redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can’t: on-device, at the edge, under real-time constraints. We’re not iterating on old ideas—we’re architecting what comes next.
We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments—your work will directly shape the frontier of intelligent systems.
This Role Is For You If:
- You have experience building large-scale production stacks for model serving.
- You have a solid understanding of ragged batching, dynamic load balancing, KV-cache management, and other multi-tenant serving techniques.
- You have experience applying quantization strategies (e.g., FP8, INT4) while safeguarding model accuracy.
- You have deployed models in both single-GPU and multi-GPU environments and can diagnose performance issues across the stack.
Desired Experience:
- PyTorch
- Python
- Model-serving frameworks (e.g. TensorRT, vLLM, SGLang)
What You'll Actually Do:
- Optimize and productionize the end-to-end pipeline for GPU model inference around Liquid Foundation Models (LFMs).
- Facilitate the development of next-generation Liquid Foundation Models from the lens of GPU inference.
- Profile and harden the stack for different batching and serving requirements.
- Build and scale pipelines for test-time compute.
What You'll Gain:
- Hands-on experience with state-of-the-art technology at a leading AI company.
- Deeper expertise in machine learning systems and efficient large-model inference.
- The opportunity to scale pipelines that directly influence user latency and experience with Liquid's models.
- A collaborative, fast-paced environment where your work directly shapes our products and the next generation of LFMs.
About Liquid AI
Spun out of MIT CSAIL, we’re a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We’re already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we’re just getting started.
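For readers unfamiliar with the serving techniques this role names: a KV cache stores each request's past attention keys and values so autoregressive decoding only computes new-token state, and "ragged batching" means batching requests whose caches have different lengths. The sketch below is a hedged, framework-free illustration of that bookkeeping (names are invented for the example; real servers like vLLM manage cache memory in paged blocks).

```python
# Illustrative sketch of per-request KV caching in a multi-tenant server.
# Not any real serving framework's API; class and dict names are invented.

class KVCache:
    """Holds past attention keys/values for one request."""
    def __init__(self):
        self.keys, self.values = [], []

    def append(self, k, v):
        # One decode step adds exactly one (key, value) pair.
        self.keys.append(k)
        self.values.append(v)

    def __len__(self):
        return len(self.keys)

# Each tenant (request) owns its own cache; the server batches across them
# even though they are at different sequence lengths ("ragged" batching).
caches = {"req-a": KVCache(), "req-b": KVCache()}
for step in range(3):
    caches["req-a"].append(k=[float(step)], v=[float(step)])
caches["req-b"].append(k=[0.0], v=[0.0])

assert len(caches["req-a"]) == 3  # one cache entry per decoded token
assert len(caches["req-b"]) == 1  # shorter request coexists in the batch
```

The engineering difficulty in production is everything this toy omits: bounding total cache memory across tenants, evicting or offloading finished requests, and keeping GPU kernels efficient when sequence lengths diverge.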
Machine Learning Engineer
Data Science & Analytics
MLOps / DevOps Engineer
Data Science & Analytics
Apply
Hidden link
liquid_ai_inc_logo

Member of Technical Staff - ML Research Engineer, Performance Optimization

Liquid AI
-
US.svg
United States
Full-time
Remote
true
Work With Us
At Liquid, we’re not just building AI models—we’re redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can’t: on-device, at the edge, under real-time constraints. We’re not iterating on old ideas—we’re architecting what comes next.
We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments—your work will directly shape the frontier of intelligent systems.
This Role Is For You If:
- You have experience writing high-performance, custom GPU kernels for training or inference.
- You understand low-level profiling tools and how to tune kernels with them.
- You have experience integrating GPU kernels into frameworks like PyTorch, bridging the gap between high-level models and low-level hardware performance.
- You have a solid understanding of the memory hierarchy and have optimized for compute- and memory-bound workloads.
- You have implemented fine-grained optimizations for target hardware, e.g. targeting tensor cores.
Desired Experience:
- CUDA
- CUTLASS
- C/C++
- PyTorch/Triton
What You'll Actually Do:
- Write high-performance GPU kernels for inference workloads.
- Optimize the alternative architectures used at Liquid across all model parameter sizes.
- Implement the latest techniques and ideas from research into low-level GPU kernels.
- Continuously monitor, profile, and improve the performance of our inference pipelines.
What You'll Gain:
- Hands-on experience with state-of-the-art technology at a leading AI company.
- Deeper expertise in machine learning systems and performance optimization.
- The opportunity to bridge the gap between theoretical improvements in research and effective gains in practice.
- A collaborative, fast-paced environment where your work directly shapes our products and the next generation of LFMs.
About Liquid AI
Spun out of MIT CSAIL, we’re a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We’re already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we’re just getting started.
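The "memory hierarchy" requirement above usually comes down to tiling (blocking): restructuring a loop nest so a small tile of data is reused while it is resident in fast memory (shared memory or registers on a GPU, cache on a CPU). The sketch below illustrates the loop structure in plain Python rather than CUDA; tile size and names are arbitrary, and the point is the access pattern, not the speed of the Python itself.

```python
# Illustrative tiling sketch: a blocked matrix multiply whose loop order
# reuses each tile of A and B while it is "hot", the same memory-hierarchy
# idea GPU kernels exploit with shared memory and tensor cores.

def matmul_tiled(A, B, tile=2):
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    # Outer loops walk tiles; inner loops stay within one tile, so the
    # working set per inner iteration is tile*tile elements of A and B.
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for p0 in range(0, k, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        for p in range(p0, min(p0 + tile, k)):
                            C[i][j] += A[i][p] * B[p][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert matmul_tiled(A, B) == [[19.0, 22.0], [43.0, 50.0]]
```

In a real CUDA or Triton kernel, each thread block owns one (i0, j0) tile, stages the A and B tiles into shared memory, and accumulates in registers; the Python above only mirrors the iteration structure.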
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering
Apply
Hidden link
anthropicresearch_logo

Anthropic AI Safety Fellow, UK

Anthropic
GBP
67600
67600
-
67600
GB.svg
United Kingdom
Contractor
Remote
true
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Note: this is our UK job posting. You can find our US and Canada job postings on our careers page. Please apply by August 17!
Responsibilities:
The Anthropic Fellows Program is an external collaboration program focused on accelerating progress in AI safety research by providing promising talent with an opportunity to gain research experience. The program will run for about 2 months, with the possibility of extension for another 4 months, based on how well the collaboration is going. Our goal is to bridge the gap between industry engineering expertise and the research skills needed for impactful work in AI safety. Fellows will use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission). Fellows will receive substantial support - including mentorship from Anthropic researchers, funding, compute resources, and access to a shared workspace - enabling them to develop the skills to contribute meaningfully to critical AI safety research. We aim to onboard our next cohort of Fellows in October 2025, with later start dates being possible as well.
What To Expect
- Direct mentorship from Anthropic researchers
- Connection to the broader AI safety research community
- Weekly stipend of 1,300 GBP and access to benefits (benefits vary by country but include medical, dental, and vision insurance)
- Funding for compute and other research expenses
- Shared workspaces in Berkeley, California and London, UK
This role will be employed by our third-party talent partner, and may be eligible for benefits through the employer of record.
Mentors & Research Areas
Fellows will undergo a project selection and mentor matching process. Potential mentors include: Ethan Perez, Jan Leike, Emmanuel Ameisen, Jascha Sohl-Dickstein, Sara Price, Samuel Marks, Joe Benton, Akbir Khan, Fabien Roger, Alex Tamkin, Nina Panickssery, Collin Burns, Jack Lindsey, Trenton Bricken, and Evan Hubinger.
Our mentors will lead projects in select AI safety research areas, such as:
- Scalable Oversight: developing techniques to keep highly capable models helpful and honest, even as they surpass human-level intelligence in various domains.
- Adversarial Robustness and AI Control: creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.
- Model Organisms: creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.
- Model Internals / Mechanistic Interpretability: advancing our understanding of the internal workings of large language models to enable more targeted interventions and safety measures.
- AI Welfare: improving our understanding of potential AI welfare and developing related evaluations and mitigations.
For a full list of representative projects for each area, please see these blog posts: Introducing the Anthropic Fellows Program for AI Safety Research, Recommendations for Technical AI Safety Research Directions.
You may be a good fit if you:
- Are motivated by reducing catastrophic risks from advanced AI systems
- Are excited to transition into full-time empirical AI safety research and would be interested in a full-time role at Anthropic (please note: we do not guarantee that we will make any full-time offers to Fellows; however, strong performance during the program may indicate that a Fellow would be a good fit here at Anthropic, and external collaborations have historically provided our teams with substantial evidence that someone might be a good hire)
- Have a strong technical background in computer science, mathematics, physics, or related fields
- Have strong programming skills, particularly in Python and machine learning frameworks
- Can work full-time on the fellowship for at least 2 months, and ideally 6 months
- Have or can obtain US, UK, or Canadian work authorisation, and are able to work full-time out of Berkeley or London (or remotely if in Canada); while we are not able to sponsor visas, we are able to support Fellows on F-1 visas who are eligible for full-time OPT/CPT
- Are comfortable programming in Python
- Thrive in fast-paced, collaborative environments
- Can execute projects independently while incorporating feedback on research direction
We’re open to all experience levels and backgrounds that meet the above criteria – you do not, for example, need prior experience with AI safety or ML. We particularly encourage applications from underrepresented groups in tech.
Strong candidates may also have:
- Experience with empirical ML research projects
- Experience working with Large Language Models
- Experience in one of the research areas (e.g. Interpretability)
- Experience with deep learning frameworks and experiment management
- A track record of open-source contributions
Candidates need not have:
- 100% of the skills needed to perform the job
- Formal certifications or education credentials
Interview process:
We aim to onboard our next cohort of Fellows in October 2025, with the possibility of later start dates for some fellows. Please note that if you are accepted into the October cohort, we expect that you will be available for several hours of mentor matching in October, although you may start the full-time program later. To ensure we can start onboarding Fellows in October 2025, we will complete interviews on a rolling basis until August 17, after which we will conduct interviews at specific timeslots on pre-specified days. We will also set hard cut-off dates for each stage - if you are not able to make that stage’s deadline, we unfortunately will not be able to proceed with your candidacy. We've outlined the interviewing process below, but this may be subject to change.
Initial Application and References
Submit your application below by August 17! In the application, we’ll also ask you to provide references who can speak to what it’s like to work with you.
Technical Assessment
You will complete a 90-minute coding screen in Python. As a quick note - we know most auto-screens are pretty bad. We think this one is unusually good and, for some teams, gives as much signal as an interview. It’s a bunch of reasonably straightforward coding that involves refactoring and adapting to new requirements, without any highly artificial scenarios or clichéd algorithms you’d gain an advantage by having memorized. We'll simultaneously collect written feedback from your references during this stage.
Technical Interview
You'll schedule time to do a coding-based technical interview that does not involve any machine learning (55 minutes).
Final Interviews
The final interviews consist of two interviews:
- Research Discussion (15 minutes) – a brainstorming session with an Alignment Science team lead to explore research ideas and approaches
- Take-Home Project (5-hour work period + 30-minute review) – a research-focused project that demonstrates your technical and analytical abilities
In parallel, we will conduct reference calls.
Offer Decisions
We aim to extend all offers by early October, and finalize our cohort shortly after. We will extend offers on a rolling basis and set an offer deadline of 1 week. However, if you need more time for the offer decision, please feel free to ask for it! After we select our initial cohort, we will kick off mentor matching and project selection in mid/late October (the first week of the program). This will involve several project discussion sessions and follow-up discussions. We'll extend decisions about extensions in mid-December. Extended fellowships will end in mid/late April.
Compensation (GBP)
This role is not a full-time role with Anthropic, and will be hired via our third-party talent partner. The expected base pay for this role is £1,300/week, with an expectation of 40 hours per week.
Role-Specific Location Policy
While we currently expect all staff to be in one of our offices at least 25% of the time, this role is exempt from that policy and can be done remotely from anywhere in the UK. However, we strongly prefer candidates who can be based in London and make use of the shared workspace we've secured for our Fellows.
Please note: the logistics below do not apply to this job posting (for example, we are not able to sponsor visas for Fellows).
The expected salary range for this position is:
Annual Salary: £67,600—£67,600 GBP
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How We're Different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science.
We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come Work With Us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Machine Learning Engineer
Data Science & Analytics
Research Scientist
Product & Operations
Apply
Hidden link
woebot_health_logo

Lead AI Engineer

Woebot Health
USD
0
180000
-
225000
US.svg
United States
Full-time
Remote
false
About Woebot Health
We’re a mission-driven startup reinventing the way people find peace and inspiration through dazzling digital experiences. Our team blends engineering, design, and product minds with a shared passion for building human-centered technology. Using the latest advances in language models, real-time interactions, and conversational design, we’re creating a next-generation digital companion that helps people feel seen, supported, and empowered - wherever they are, whenever they need it. With backing from top-tier investors and a bold vision for the future, we’re moving fast, learning constantly, and designing for real-world impact at scale. We believe the future of generative AI lies not in novelty, but in precision, nuance, and emotional resonance. If you’re driven by solving hard problems in applied AI and want to shape systems that genuinely serve people, we’d love to meet you.
Why Join Us?
Our company was founded by Alison Darcy, a clinical research psychologist and digital health entrepreneur. Among our earliest and most strategic backers is AI pioneer Andrew Ng, founder of AI Fund and deeplearning.ai. Backed by top-tier investors and powered by a hands-on team of passionate creators, we’re turning bold ideas into transformative digital care. This role will report to and collaborate closely with our founder, shaping the architecture and capabilities of our AI companion from the ground up. Joining our small but mighty core team means making an outsized and immediate impact on people’s wellness.
The Role
Our Lead AI Engineer will help architect and evolve the “cognitive engine” of our platform. This role sits at the intersection of model tuning, inference optimization, and intelligent orchestration, focused on building adaptive systems that can shift modes fluidly depending on user needs. You’ll partner closely with our psychology, product, and infrastructure teams to:
- Lead fine-tuning of foundational models using efficient training techniques and custom datasets
- Design and implement model orchestration logic that determines when to retrieve, route, generate, or escalate across different conversational contexts
- Build and iterate on eval frameworks for long-form, multi-turn interactions - prioritizing emotional coherence and user outcomes over token accuracy
- Stay on top of rapid developments in LLMs, fine-tuning frameworks, and inference efficiency, translating that knowledge into action
- Champion best practices for scaling training workflows, experimenting safely, and continuously learning from real-world feedback
While the company currently operates in a remote environment, we aspire to build a hybrid presence in the Bay Area, which may require relocation down the line. While we may prioritize talent based in the Bay Area, we are open to hiring top talent anywhere in the US.
What You’ll Do
As a Lead AI Engineer, you will be responsible for architecting, optimizing, and evolving the “cognitive engine” of our AI companion. Your work will combine deep model training expertise with real-world experimentation, striking a balance between precision, nuance, and adaptability.
You’ll collaborate across AI, product, and design to translate emotional and behavioral intent into reliable, scalable machine intelligence, and help define our technical roadmap in collaboration with the founder and engineering team. You will:
Prompt Engineering & Optimization
- Design, iterate, and evaluate prompt strategies for complex multi-turn interactions using frameworks like DSPy
- Build prompt libraries and A/B test variants to optimize for safety, clarity, and on-brand responsiveness
- Leverage prompt engineering as a short-term strategy where fine-tuning is not yet appropriate, with a clear view on the trade-offs
Agentic Reasoning & Orchestration
- Evaluate and integrate modular orchestration strategies (e.g., LangGraph, LlamaIndex, Letta, PydanticAI), forming a perspective on their relevance and scalability
- Design systems that can switch between reflection, coaching, or directive states based on context, using either routing logic or learned behavior
- Collaborate with the product team to define how tools, memory, and reasoning modules interact without overcomplicating the user experience
Model Fine-Tuning & Optimization
- Own parameter-efficient fine-tuning pipelines (e.g., LoRA, QLoRA) to adapt foundational models to brand-specific voice, tone, and emotional range
- Curate high-quality datasets and design eval metrics tailored to coherence, empathy, and state consistency across sessions
- Explore model compression, quantization, and inference optimization for low-latency voice and mobile interactions
Experimental Thinking & Evaluation
- Design lightweight experiments to validate technical approaches and measure outcomes beyond accuracy (e.g., trust, emotional congruence)
- Partner with domain experts to implement human-in-the-loop annotation systems where automation falls short
- Ship prototypes and production features rapidly, with a build-learn-refine approach
What We’re Looking For
Deep Technical Fluency
- 5+ years in AI/ML engineering, with at least 2 years of hands-on experience fine-tuning large language models
- Demonstrated expertise in applied AI within early-stage startups or product teams - you can speak to the lived experience of leading teams and projects through rapid growth and production
- A strong understanding of the trade-offs between fine-tuning, tool invocation, prompt orchestration, and hybrid approaches
- Proficiency with model training workflows, scalable data pipelines, and LLM evaluation techniques
- Practical experience with low-latency inference environments and model optimization strategies (quantization, compression, routing logic)
- Comfort in Python and modern ML tooling; experience deploying models to production environments
Product-Oriented Thinker
- Ability to translate product or psychological intent into model architecture or training strategy
- Prior work in behavioral health, mental wellness, or adjacent domains, demonstrating sensitivity to emotionally resonant interactions
- An experimental mindset with a bias toward measurable learning and iterative improvement
- Strong communication skills: you can explain the “why” behind the “how” to technical and non-technical partners alike
- Demonstrated curiosity for emerging methods and a track record of staying current on deep learning advancements
Collaborative, Bold & Humble
- A team-first engineer who values listening as much as leading, and who can mentor engineers in best practices while staying open to feedback
- Comfortable challenging ideas while seeking the best solution, not credit
- Motivated by impact and aligned with our mission of building AI that helps people
- Adaptable to small-team dynamics and comfortable operating as the technical leader in a flat team structure, collaborating closely with product and engineering teams
Bonus if You Have
- Experience at AI-first companies
- Experience building products in the behavioral health or digital wellness space
- Knowledge of conversational state management, memory systems, or emotional alignment in LLMs
- Exposure to orchestrated AI frameworks or modular agentic architectures
What We Offer
- Compensation: $180,000 - $225,000 base + equity
- Benefits: medical, dental, and vision for you and your family
- Time Off: flexible PTO and mental wellness support
- Learning Stipend: annual budget for courses, conferences, or certifications
- Remote Support: one-time home office setup stipend
- Security: company-sponsored life and disability insurance + 401(k)
We are committed to fostering an inclusive and equitable workplace. Compensation decisions are based on a variety of factors, including location, skills, experience, and various market benchmarks.
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering
Apply

Applied AI, Product Engineer, UK Public Sector

Anthropic
GBP
160000
-
240000
GB.svg
United Kingdom
Full-time
Remote
false
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role
As a member of the Applied AI team at Anthropic, you will drive the adoption of frontier AI by developing bespoke LLM solutions within the UK Government space. You’ll leverage your customer-facing experience and technical skills to architect innovative solutions that address our customers’ mission needs, meet their technical requirements, and provide a high degree of reliability and safety. In collaboration with the Sales, Product, and Engineering teams, you’ll partner with UK Government stakeholders to build leading-edge AI systems. You will employ your excellent communication skills to explain and demonstrate complex solutions persuasively to technical and non-technical audiences alike. You will play a critical role in identifying opportunities to innovate and differentiate our AI systems while maintaining our best-in-class safety standards.

Responsibilities:
- Act as the primary technical advisor for prospective UK Government customers evaluating Claude
- Demonstrate how Claude can address the customer’s use cases through proofs of concept
- Provide technical guidance on integration, deployment, and adoption best practices
- Partner closely with account executives to understand customer requirements
- Develop customized pilots and prototypes, as well as evaluation suites, to make the case for customer adoption
- Drive technical decision-making by partnering on optimal setup, architecture, and integration of Claude into the customer’s existing infrastructure
- Demonstrate solutions to technical roadblocks
- Act as the voice of our customers and a key collaborator with our Product and Research teams to ensure we are delivering critical capabilities to the UK Government
- Travel to customer sites for senior leader meetings, AI implementation, technical enablement, and relationship building
- Establish a shared vision for creating solutions that enable beneficial and safe AI
- Lead the vision, strategy, and execution of innovative solutions that leverage our latest models’ capabilities

You may be a good fit if you have:
- Active DV security clearance OR experience with UK civil government
- 2+ years of experience as a Customer Engineer, Forward Deployed Engineer, Sales Engineer, Solutions Architect, or Platform Engineer within the public sector
- Designed novel and innovative solutions for technical platforms in a developing mission area
- Strong technical aptitude to partner with engineers, and strong proficiency in at least one programming language (Python preferred)
- Understanding of and experience with LLM fundamentals
- The ability to navigate and execute amid ambiguity, and to flex into different domains based on the business problem at hand, finding simple, easy-to-understand solutions
- Excitement for cross-organizational collaboration, working through trade-offs, and balancing competing priorities
- A love of teaching, mentoring, and helping others succeed
- Excellent communication and interpersonal skills, able to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders
- Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems

Deadline to apply: None. Applications will be reviewed on a rolling basis.
The expected salary range for this position is:
Annual Salary: £160,000 - £240,000 GBP

Logistics
Education requirements: We require at least a Bachelor’s degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you’re interested in this work. We think AI systems like the ones we’re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

How we’re different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact: advancing our long-term goals of steerable, trustworthy AI rather than working on smaller, more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates’ AI Usage: Learn about our policy for using AI in our application process
Machine Learning Engineer
Data Science & Analytics
Solutions Architect
Software Engineering
Apply

AI/ML Engineer

Brain Co
USD
155000
-
265500
US.svg
United States
Full-time
Remote
true
About Brain Co.
Brain Co. is at the leading edge of productionizing artificial intelligence for real-world problems, propelling advancements and shaping the future with pioneering work. Our mission is to ensure that the transformative power and automation of AI are applied to real-world problems in governments where manual work is still dominant. We are seeking visionary AI/ML Engineers with experience in GenAI and ML modeling to convert groundbreaking research into real-world applications that revolutionize industries, boost human creativity, and address complex challenges.

About The Role
As an AI/ML Engineer at Brain Co., you will play a crucial role in deploying state-of-the-art models to automate real-world problems in sectors such as healthcare, government, and energy. Part of the role will involve turning research breakthroughs into practical solutions for various nation states. This role is your opportunity to make a significant impact by making AI technology both accessible and influential.

In This Role, You Will:
- Innovate and Deploy: Design and deploy advanced LLMs to tackle real-world problems, particularly automating complex, manual processes across a range of verticals.
- Optimize and Scale: Build scalable data pipelines, optimize models for performance and accuracy, and prepare them for production. Monitor and maintain deployed models to ensure they continue delivering value across various governments worldwide.
- Make a Difference: Engage in projects including, but not limited to, optimizing the world’s most advanced energy production systems, modernizing core government workflows, and improving patient outcomes in advanced public healthcare systems. Your work will directly impact how AI benefits individuals, businesses, and society at large.
- Engage with Leaders: Interact directly with government officials in various countries and apply first-of-its-kind AI solutions while working alongside experienced ex-founders, AI researchers, and software engineers to understand complex business challenges and deliver AI-powered solutions. Join a dynamic team where ideas are exchanged freely and creativity flourishes. You will wear many hats: software building, product management, sales, and interpersonal work.
- Learn and Lead: Keep abreast of the latest developments in machine learning and AI. Participate in code reviews, share knowledge, and set an example with high-quality engineering practices.

You Might Thrive In This Role If You:
- Hold a BSc, Master’s, or PhD in Computer Science, Machine Learning, Data Science, or a related field.
- Have experience building GenAI-focused applications with the latest technologies, including but not limited to agents, reasoning models, and RAG.
- Have at least a high-level familiarity with the architecture and operation of large language models.
- Have personally implemented models in common ML frameworks such as PyTorch, JAX, or TensorFlow.
- Possess a strong foundation in data structures, algorithms, and software engineering principles.
- Exhibit excellent problem-solving and analytical skills, with a proactive approach to challenges.
- Can work collaboratively with cross-functional teams.
- Thrive in fast-paced environments where priorities or deadlines may compete.
- Are eager to own problems end-to-end and willing to acquire any necessary knowledge to get the job done.

Benefits
- Competitive salary plus equity
- Daily lunches
- Commuter benefits
- 401(k)
- Medical, Dental and Vision
- Unlimited PTO

Compensation
The salary range for this role is $155,000 to $265,500. Actual salaries will vary depending on factors including but not limited to location, experience, and performance. The range listed is just one component of Brain Co.’s total compensation package; other benefits may include stock options, an unlimited paid time off policy, and medical, vision, and dental insurance coverage.
Machine Learning Engineer
Data Science & Analytics
Apply

AI Engineer & Researcher - Search

X AI
USD
180000
-
440000
US.svg
United States
Full-time
Remote
false
About xAI
xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All engineers and researchers are expected to have strong communication skills; they should be able to concisely and accurately share knowledge with their teammates.

About the Team
The AI search team aims to make Grok the best AI system for research, information retrieval, and gathering real-time information from various sources. The team is responsible for curating high-quality data and hard problems for RL training, building evaluations to benchmark AI systems and capture product issues, and building tools to unleash the capabilities of reasoning models.

About the Role
In this role you might:
- Build evaluation benchmarks for the next generation of search/research agents
- Innovate and curate challenging RL data with scalable synthetic/human data pipelines
- Innovate on data, verification, and RL algorithms to build the best open-ended research system
- Build tools that help the model explore the internet seamlessly

Exceptional candidates may have:
- Experience with LLM and information retrieval evaluation data curation
- Experience with human/synthetic data generation for RL
- Experience with AI search or agentic search systems
- Strong engineering abilities

Location
The role is based in Palo Alto. Our team usually works from the office five days a week but allows work-from-home days when required. Candidates are expected to be located near Palo Alto or open to relocation.

Interview Process
After submitting your application, the team reviews your CV and statement of exceptional work. If your application passes this stage, you will be invited to a 15-minute interview (“phone interview”) during which a member of our team will ask some basic questions. If you clear the initial phone interview, you will enter the main process, which consists of four technical interviews:
- Coding assessment in a language of your choice
- Researcher technical sessions (2): these sessions test your ability to formulate, design, and solve concrete real-world problems with LLMs; the focus can be research or engineering, depending on your background and experience
- Meet the Team: present your past exceptional work and your vision for xAI to a small audience

Our goal is to finish the main process within one week. All interviews will be conducted via Google Meet.

Annual Salary Range
$180,000 - $440,000 USD

xAI is an equal opportunity employer. California Consumer Privacy Act (CCPA) Notice
Machine Learning Engineer
Data Science & Analytics
Research Scientist
Product & Operations
Apply

AI Engineer & Researcher - Multimodal Post-training

X AI
USD
180000
-
440000
US.svg
United States
Full-time
Remote
false
About xAI
xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All engineers and researchers are expected to have strong communication skills; they should be able to concisely and accurately share knowledge with their teammates.

About the Team
The Reasoning Efficiency team at xAI focuses on pushing the boundaries of cost-efficient intelligence.

About the Role
In this role you will:
- Build the next generation of multimodal Grok, excelling at reasoning and tool usage to solve challenging problems
- Work across the LLM stack (pre-training, SFT, RL) to deliver the strongest model for end users

Exceptional candidates may have:
- Experience or publications in (multimodal) large language models (data, training algorithms, architecture)
- Exceptional engineering skills to iterate quickly on data processing and training pipelines
- Strong understanding of large language models and data
- Deep knowledge of reinforcement learning techniques

Location
We hire engineers in Palo Alto. Our team usually works from the office five days a week but allows work-from-home days when required. Candidates are expected to be located near Palo Alto or open to relocation.

Interview Process
After submitting your application, the team reviews your CV and statement of exceptional work. If your application passes this stage, you will be invited to a 15-minute interview (“phone interview”) during which a member of our team will ask some basic questions. If you clear the initial phone interview, you will enter the main process, which consists of four technical interviews:
- Coding assessment in a language of your choice
- Post-training technical sessions (2): these sessions test your ability to formulate, design, and solve concrete problems involving training data for post-training
- Meet the Team: present your past exceptional work and your vision for xAI to a small audience

Our goal is to finish the main process within one week. All interviews will be conducted via Google Meet.

Annual Salary Range
$180,000 - $440,000 USD

xAI is an equal opportunity employer. California Consumer Privacy Act (CCPA) Notice
Machine Learning Engineer
Data Science & Analytics
Research Scientist
Product & Operations
Apply

AI Engineer & Researcher - AI Experts

X AI
USD
180000
-
440000
US.svg
United States
Full-time
Remote
false
About xAI
xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All engineers and researchers are expected to have strong communication skills; they should be able to concisely and accurately share knowledge with their teammates.

About the Team
The AI Expert team strives to make AI models excellent in scientific and professional domains such as math, engineering, biology, medicine, and finance. The team builds AI experts end-to-end, including building model evaluations, curating training data, and developing training recipes.

About the Role
In this role you might:
- Innovate next-generation RL algorithms to unleash AI abilities in open-ended scientific research and real-world professional work
- Build evaluations for AI models in expert domains
- Build RL data infrastructure and generate training data

Exceptional candidates may have:
- Strong engineering abilities
- Experience with data collection and data generation for LLMs
- Optional: experience across multiple domains

Location
We hire engineers in Palo Alto. Our team usually works from the office five days a week but allows work-from-home days when required. Candidates are expected to be located near Palo Alto or open to relocation.

Interview Process
After submitting your application, the team reviews your CV and statement of exceptional work. If your application passes this stage, you will be invited to a 15-minute interview (“phone interview”) during which a member of our team will ask some basic questions. If you clear the initial phone interview, you will enter the main process, which consists of four technical interviews:
- Coding assessment in a language of your choice
- Post-training technical sessions (2): these sessions test your ability to formulate, design, and solve concrete problems involving training data for post-training
- Meet the Team: present your past exceptional work and your vision for xAI to a small audience

Our goal is to finish the main process within one week. All interviews will be conducted via Google Meet.

Annual Salary Range
$180,000 - $440,000 USD

xAI is an equal opportunity employer. California Consumer Privacy Act (CCPA) Notice
Machine Learning Engineer
Data Science & Analytics
Research Scientist
Product & Operations
Apply