Top Machine Learning Engineer Job Openings in 2025

Looking for Machine Learning Engineer opportunities? This curated list features the latest Machine Learning Engineer job openings from AI-native companies. Whether you're an experienced professional or just entering the field, you'll find roles that match your expertise, from startups to global tech leaders. Updated every day.


AI Engineer & Researcher, Inference - Austin, USA

Speechify | United States | Remote
PLEASE APPLY THROUGH THIS LINK: https://job-boards.greenhouse.io/speechify/jobs/5287658004. DO NOT APPLY BELOW.

The mission of Speechify is to make sure that reading is never a barrier to learning. Over 50 million people use Speechify's text-to-speech products to turn whatever they're reading – PDFs, books, Google Docs, news articles, websites – into audio, so they can read faster, read more, and remember more. Speechify's text-to-speech reading products include its iOS app, Android app, Mac app, Chrome extension, and web app. Google recently named Speechify the Chrome Extension of the Year, and Apple named Speechify its App of the Day.

Today, nearly 200 people around the globe work on Speechify in a 100% distributed setting – Speechify has no office. These include frontend and backend engineers and AI research scientists from Amazon, Microsoft, and Google, graduates of leading PhD programs like Stanford's, alumni of high-growth startups like Stripe, Vercel, and Bolt, and many founders of their own companies.

This is a key role, ideal for someone who thinks strategically, enjoys fast-paced environments, is passionate about making product decisions, and has experience building great user experiences that delight users. We are a flat organization that allows anyone to become a leader by showing excellent technical skills and delivering results consistently and fast. Work ethic, solid communication skills, and an obsession with winning are paramount. Our interview process involves several technical interviews, and we aim to complete them within one week.

What You'll Do
- Work alongside machine learning researchers, engineers, and product managers to bring our AI Voices to customers for a diverse range of use cases
- Deploy and operate the core ML inference workloads for our AI Voices serving pipeline
- Introduce new techniques, tools, and architecture that improve the performance, latency, throughput, and efficiency of our deployed models
- Build tools that give us visibility into our bottlenecks and sources of instability, then design and implement solutions to address the highest-priority issues

An Ideal Candidate Should Have
- Experience shipping Python-based services
- Experience being responsible for the successful operation of a critical production service
- Experience with public cloud environments, GCP preferred
- Experience with Infrastructure as Code, Docker, and containerized deployments
- Preferred: experience deploying high-availability applications on Kubernetes
- Preferred: experience deploying ML models to production

What We Offer
- A dynamic environment where your contributions shape the company and its products
- A team that values innovation, intuition, and drive
- Autonomy, fostering focus and creativity
- The opportunity to have a significant impact in a revolutionary industry
- Competitive compensation, a welcoming atmosphere, and a commitment to an exceptional asynchronous work culture
- The privilege of working on a product that changes lives, particularly for those with learning differences like dyslexia, ADD, and more
- An active role at the intersection of artificial intelligence and audio – a rapidly evolving tech domain

Think you're a good fit for this job? Tell us more about yourself and why you're interested in the role when you apply, and don't forget to include links to your portfolio and LinkedIn. Not looking but know someone who would make a great fit? Refer them!

Speechify is committed to a diverse and inclusive workplace. Speechify does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.

Tags: Machine Learning Engineer (Data Science & Analytics), Software Engineer (Software Engineering)

AI Engineer & Researcher, Inference - San Francisco, USA

Speechify | United States | Remote
PLEASE APPLY THROUGH THIS LINK: https://job-boards.greenhouse.io/speechify/jobs/5287658004. DO NOT APPLY BELOW.

The mission of Speechify is to make sure that reading is never a barrier to learning. Over 50 million people use Speechify's text-to-speech products to turn whatever they're reading – PDFs, books, Google Docs, news articles, websites – into audio, so they can read faster, read more, and remember more. Speechify's text-to-speech reading products include its iOS app, Android app, Mac app, Chrome extension, and web app. Google recently named Speechify the Chrome Extension of the Year, and Apple named Speechify its App of the Day.

Today, nearly 200 people around the globe work on Speechify in a 100% distributed setting – Speechify has no office. These include frontend and backend engineers and AI research scientists from Amazon, Microsoft, and Google, graduates of leading PhD programs like Stanford's, alumni of high-growth startups like Stripe, Vercel, and Bolt, and many founders of their own companies.

This is a key role, ideal for someone who thinks strategically, enjoys fast-paced environments, is passionate about making product decisions, and has experience building great user experiences that delight users. We are a flat organization that allows anyone to become a leader by showing excellent technical skills and delivering results consistently and fast. Work ethic, solid communication skills, and an obsession with winning are paramount. Our interview process involves several technical interviews, and we aim to complete them within one week.

What You'll Do
- Work alongside machine learning researchers, engineers, and product managers to bring our AI Voices to customers for a diverse range of use cases
- Deploy and operate the core ML inference workloads for our AI Voices serving pipeline
- Introduce new techniques, tools, and architecture that improve the performance, latency, throughput, and efficiency of our deployed models
- Build tools that give us visibility into our bottlenecks and sources of instability, then design and implement solutions to address the highest-priority issues

An Ideal Candidate Should Have
- Experience shipping Python-based services
- Experience being responsible for the successful operation of a critical production service
- Experience with public cloud environments, GCP preferred
- Experience with Infrastructure as Code, Docker, and containerized deployments
- Preferred: experience deploying high-availability applications on Kubernetes
- Preferred: experience deploying ML models to production

What We Offer
- A dynamic environment where your contributions shape the company and its products
- A team that values innovation, intuition, and drive
- Autonomy, fostering focus and creativity
- The opportunity to have a significant impact in a revolutionary industry
- Competitive compensation, a welcoming atmosphere, and a commitment to an exceptional asynchronous work culture
- The privilege of working on a product that changes lives, particularly for those with learning differences like dyslexia, ADD, and more
- An active role at the intersection of artificial intelligence and audio – a rapidly evolving tech domain

Think you're a good fit for this job? Tell us more about yourself and why you're interested in the role when you apply, and don't forget to include links to your portfolio and LinkedIn. Not looking but know someone who would make a great fit? Refer them!

Speechify is committed to a diverse and inclusive workplace. Speechify does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.

Tags: Machine Learning Engineer (Data Science & Analytics), Software Engineer (Software Engineering), DevOps Engineer (Data Science & Analytics)

Applied AI Engineer & Researcher - Chicago, USA

Speechify | United States | Remote
PLEASE APPLY THROUGH THIS LINK: https://job-boards.greenhouse.io/speechify/jobs/4510121004. DO NOT APPLY BELOW.

The mission of Speechify is to make sure that reading is never a barrier to learning. Over 50 million people use Speechify's text-to-speech products to turn whatever they're reading – PDFs, books, Google Docs, news articles, websites – into audio, so they can read faster, read more, and remember more. Speechify's text-to-speech reading products include its iOS app, Android app, Mac app, Chrome extension, and web app. Google recently named Speechify the Chrome Extension of the Year, and Apple named Speechify its App of the Day.

Today, nearly 200 people around the globe work on Speechify in a 100% distributed setting – Speechify has no office. These include frontend and backend engineers and AI research scientists from Amazon, Microsoft, and Google, graduates of leading PhD programs like Stanford's, alumni of high-growth startups like Stripe, Vercel, and Bolt, and many founders of their own companies.

This is a key role, ideal for someone who thinks strategically, enjoys fast-paced environments, is passionate about making product decisions, and has experience building great user experiences that delight users. We are a flat organization that allows anyone to become a leader by showing excellent technical skills and delivering results consistently and fast. Work ethic, solid communication skills, and an obsession with winning are paramount. Our interview process involves several technical interviews, and we aim to complete them within one week.

What You'll Do
- Research and implement the state of the art in NLP, TTS, or CV with a focus on image generation
- Work on building the most human-sounding AI speech model in the world

An Ideal Candidate Should Have
- Experience with research and development in NLP, TTS, or CV with a focus on image generation
- Experience in ML
- Preferred: experience deploying NLP or TTS models to production at a large scale
- Experience managing engineers and growing a research & development team
- Experience programming in Python, specifically with the TensorFlow and PyTorch frameworks

What We Offer
- A fast-growing environment where you can help shape the culture
- An entrepreneurial crew that supports risk, intuition, and hustle
- A hands-off approach so you can focus and do your best work
- The opportunity to make an impact in a transformative industry
- A competitive salary, a collegiate atmosphere, and a commitment to building a great asynchronous culture

Think you're a good fit for this job? Tell us more about yourself and why you're interested in the role when you apply, and don't forget to include links to your portfolio and LinkedIn. Not looking but know someone who would make a great fit? Refer them!

Speechify is committed to a diverse and inclusive workplace. Speechify does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.

Tags: Machine Learning Engineer (Data Science & Analytics), Research Scientist (Product & Operations), NLP Engineer (Software Engineering), Computer Vision Engineer (Software Engineering)

Senior Machine Learning Engineer, Computer Vision

Metropolis | United States | Full-time | On-site | $150,000 – $200,000 USD
The Company
Metropolis is an artificial intelligence company that uses computer vision technology to enable frictionless, checkout-free experiences in the real world. Today, we are reimagining parking to enable millions of consumers to just "drive in and drive out." We envision a future where people transact in the real world with a speed, ease and convenience that is unparalleled, even online. Tomorrow, we will power checkout-free experiences anywhere you go to make the everyday experiences of living, working and playing remarkable - giving us back our most valuable asset, time.

The Role
We are seeking a Senior Machine Learning Engineer to play a key role on our growing team. As a key member of the Advanced Technologies team, you will play a critical role in designing, developing, and deploying state-of-the-art computer vision and recommendation models that power our core products and solutions. Your work will involve tackling challenging problems in object detection, tracking, OCR, video analytics, and multi-modal systems. This role involves a unique blend of technical expertise in data and machine learning, innovative thinking, and a passion for data-driven solutions.

Responsibilities
- Design, develop, and deploy advanced computer vision models for real-world applications, including object detection, tracking, OCR, image search, and scene understanding
- Build and optimize deep learning models, ensuring high accuracy, performance, and scalability for deployment in production environments
- Explore and integrate multi-modal approaches, leveraging visual, textual, and other data modalities for robust solutions
- Collaborate with cross-functional teams, including data engineers and software engineers, to deliver end-to-end solutions
- Lead the design and implementation of scalable pipelines for data processing, model training, and model deployment
- Optimize models for performance on various hardware platforms, including CPUs, GPUs, and edge devices
- Conduct thorough experimentation and A/B testing to validate model effectiveness and ensure alignment with business objectives
- Mentor junior team members, providing technical guidance and fostering professional growth
- Write clean, efficient, and maintainable code while adhering to best practices in software engineering and machine learning

Qualifications
- MS or PhD (preferred) in Computer Science, Engineering, or a related field, or equivalent work experience
- 5+ years of hands-on experience in machine learning and computer vision, with a strong track record of deploying models into production
- Proficiency in Python and ML frameworks (PyTorch, TensorFlow, ONNX, TensorRT); experience with C++ is a plus
- Strong experience with model optimization (e.g., quantization, pruning) and deployment on various platforms (cloud, edge, or mobile)
- Familiarity with cloud platforms (AWS, GCP, or Azure), containerization (Docker), and orchestration (ECS, Kubernetes)
- Proven experience in building and maintaining data pipelines (e.g., Airflow)
- Strong understanding of the agile development process and CI/CD pipelines and tools (e.g., GitHub Actions, Jenkins)
- Excellent communication skills, capable of presenting complex technical information clearly
- Experience in high-growth, innovative environments is a plus
- Publications in top-tier conferences (e.g., CVPR, ICCV, NeurIPS) are a strong plus

When you join Metropolis, you'll join a team of world-class product leaders and engineers, building an ecosystem of technologies at the intersection of parking, mobility, and real estate. Our goal is to build an inclusive culture where everyone has a voice and the best idea wins. You will play a key role in building and maintaining this culture as our organization grows.

The anticipated base salary for this position is $150,000.00 to $200,000.00 annually. The actual base salary offered is determined by a number of variables, including, as appropriate, the applicant's qualifications for the position, years of relevant experience, distinctive skills, level of education attained, certifications or other professional licenses held, and the location of residence and/or place of employment. Base salary is one component of Metropolis's total compensation package, which may also include access to or eligibility for healthcare benefits, a 401(k) plan, short-term and long-term disability coverage, basic life insurance, a lucrative stock option plan, bonus plans and more.

#LI-NM1 #LI-Onsite

Metropolis Technologies is an equal opportunity employer. We make all hiring decisions based on merit, qualifications, and business needs, without regard to race, color, religion, sex (including gender identity, sexual orientation, or pregnancy), national origin, disability, veteran status, or any other protected characteristic under federal, state, or local law.

Tags: Machine Learning Engineer (Data Science & Analytics), Computer Vision Engineer (Software Engineering)

Senior Machine Learning Engineer, Computer Vision

Metropolis | United States | Full-time | On-site | $150,000 – $200,000 USD
The Company
Metropolis is an artificial intelligence company that uses computer vision technology to enable frictionless, checkout-free experiences in the real world. Today, we are reimagining parking to enable millions of consumers to just "drive in and drive out." We envision a future where people transact in the real world with a speed, ease and convenience that is unparalleled, even online. Tomorrow, we will power checkout-free experiences anywhere you go to make the everyday experiences of living, working and playing remarkable - giving us back our most valuable asset, time.

The Role
We are seeking a Senior Computer Vision Engineer to lead the development of advanced multi-camera perception and localization systems with an integrated focus on image-based search, vector database integration, and re-ranking strategies. You will design and build algorithms that combine object tracking, scene understanding, and cross-camera reasoning with scalable retrieval systems to power high-precision localization and visual matching across large-scale deployments. This role requires strong expertise in computer vision, real-time systems, and search infrastructure, with a focus on turning visual data into actionable spatial intelligence.

Responsibilities
- Design and implement algorithms for multi-camera object detection, classification, and persistent tracking
- Build scene understanding modules to extract landmarks, spatial layout, and semantic context from image streams
- Develop cross-camera fusion and localization methods for consistent identification and positioning of objects
- Architect and deploy visual search systems using vector databases (e.g., OpenSearch, FAISS, Milvus) for image-based retrieval and matching
- Design and implement re-ranking techniques to improve retrieval precision based on context, metadata, and scene cues
- Create tools and metrics to evaluate retrieval quality, localization accuracy, and perception robustness
- Collaborate across ML, backend, and infrastructure teams to ensure scalable, real-time deployment
- Investigate system-level issues, drive debugging efforts, and improve model and system performance
- Mentor junior engineers and contribute to the long-term vision for perception, localization, and image retrieval

Qualifications
- M.S. or Ph.D. in Computer Science, Robotics, Electrical Engineering, or a related field
- 5+ years of industry experience in computer vision, image retrieval, or perception systems
- Strong background in object detection, tracking, and scene understanding using multi-camera inputs
- Deep understanding of vector-based retrieval systems and experience with OpenSearch, FAISS, or similar tools
- Proficiency in Python or C++, with hands-on experience in PyTorch, TensorFlow, and OpenCV
- Experience in building large-scale image retrieval pipelines, including feature extraction, indexing, and search optimization
- Knowledge of multi-view geometry and cross-camera identity association
- Experience evaluating and tuning re-ranking strategies using contextual and multi-modal signals
- Exposure to cloud-based deployment of search systems (e.g., OpenSearch cluster tuning, sharding, replication)
- Experience with edge deployment of perception pipelines (e.g., Jetson, Qualcomm)
- Publications or patents in the fields of visual search, localization, or multi-camera perception

When you join Metropolis, you'll join a team of world-class product leaders and engineers, building an ecosystem of technologies at the intersection of parking, mobility, and real estate. Our goal is to build an inclusive culture where everyone has a voice and the best idea wins. You will play a key role in building and maintaining this culture as our organization grows.

The anticipated base salary for this position is $150,000.00 to $200,000.00 annually. The actual base salary offered is determined by a number of variables, including, as appropriate, the applicant's qualifications for the position, years of relevant experience, distinctive skills, level of education attained, certifications or other professional licenses held, and the location of residence and/or place of employment. Base salary is one component of Metropolis's total compensation package, which may also include access to or eligibility for healthcare benefits, a 401(k) plan, short-term and long-term disability coverage, basic life insurance, a lucrative stock option plan, bonus plans and more.

#LI-AR1 #LI-Onsite

Metropolis Technologies is an equal opportunity employer. We make all hiring decisions based on merit, qualifications, and business needs, without regard to race, color, religion, sex (including gender identity, sexual orientation, or pregnancy), national origin, disability, veteran status, or any other protected characteristic under federal, state, or local law.

Tags: Machine Learning Engineer (Data Science & Analytics), Computer Vision Engineer (Software Engineering)

Applied Research Intern

Labelbox | United States, Poland | Intern | Hybrid | $35 – $45 USD
Shape the Future of AI
At Labelbox, we're building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we've been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.

About Labelbox
We're the only company offering three integrated solutions for frontier AI development:
- Enterprise Platform & Tools: advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale
- Frontier Data Labeling Service: specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models
- Expert Marketplace: connecting AI teams with highly skilled annotators and domain experts for flexible scaling

Why Join Us
- High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You'll take on expanded responsibilities quickly, with career growth directly tied to your contributions.
- Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.
- Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.
- Continuous Growth: Every role requires continuous learning and evolution. You'll be surrounded by curious minds solving complex problems at the frontier of AI.
- Clear Ownership: You'll know exactly what you're responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.

Role Overview
As an Applied Research intern at Labelbox, you will design, build, and productionize evaluation and post-training systems for frontier LLMs and multimodal models. You'll own continuous, high-quality evals and benchmarks (reasoning, code, agent/tool-use, long-context, vision-language, et al.), create and curate post-training datasets (human + synthetic), and prototype RLHF/RLAIF/RLVR/RM/DPO-style training loops to measure and improve real-world task and agent performance.

Your Impact
- Build and own evaluation and benchmark suites for reasoning, code, agents, long-context, and V/LLMs
- Create post-training datasets at scale: design preference/critique pipelines (human + synthetic), and target hard failures surfaced by evals
- Experiment and prototype RLHF/RLAIF/RLVR/RM/DPO-style training loops to improve real-world task and agent performance
- Land research in product: ship improvements into Labelbox workflows, services, and customer-facing evaluation/quality features; quantify impact with customer and internal metrics
- Engage with customer research teams: run pilots, co-design benchmarks, and share practical findings through internal research reports, blog posts, talks, and published papers

What You Bring
- A strong foundation in AI and machine learning, backed by a Ph.D. or Master's degree in Computer Science, Machine Learning, AI, or a related field (in-progress degrees are acceptable for intern positions)
- A deep understanding of frontier autoregressive and diffusion multimodal models, along with the human and synthetic data strategies needed to optimize them
- Passion for and experience with LLM evaluation and benchmarking
- Expertise in training data quality construction, measurement, and refinement
- The ability to bridge research and application by interpreting new findings and translating them into functional prototypes
- A track record of publishing in top-tier AI/ML conferences (e.g., NeurIPS, ICML, ICLR, ACL, EMNLP, NAACL) and contributing to the broader research community
- Proficiency in Python and experience with deep learning frameworks like PyTorch, JAX, or TensorFlow
- Exceptional communication and collaboration skills

Applied Research at Labelbox
At Labelbox Applied Research, we're committed to pushing the boundaries of AI and data-centric machine learning, with a particular focus on advancing human-AI interaction techniques. We believe that high-quality human data and sophisticated human feedback integration methods are key to unlocking the next generation of AI capabilities. Our research team works at the intersection of machine learning, human-computer interaction, and AI ethics to develop innovative solutions that can be practically applied in real-world scenarios.

Labelbox strives to ensure pay parity across the organization and to discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location. Annual base salary range: $35 – $45 USD.

Life at Labelbox
- Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland
- Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility
- Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making
- Growth: Career advancement opportunities directly tied to your impact
- Vision: Be part of building the foundation for humanity's most transformative technology

Our Vision
We believe data will remain crucial in achieving artificial general intelligence. As AI models become more sophisticated, the need for high-quality, specialized training data will only grow. Join us in developing new products and services that enable the next generation of AI breakthroughs. Labelbox is backed by leading investors including SoftBank, Andreessen Horowitz, B Capital, Gradient Ventures, Databricks Ventures, and Kleiner Perkins. Our customers include Fortune 500 enterprises and leading AI labs.

Your Personal Data Privacy: Any personal information you provide Labelbox as a part of your application will be processed in accordance with Labelbox's Job Applicant Privacy Notice. Any emails from Labelbox team members will originate from a @labelbox.com email address. If you encounter anything that raises suspicions during your interactions, we encourage you to exercise caution and suspend or discontinue communications.

Tags: Machine Learning Engineer (Data Science & Analytics), Research Scientist (Product & Operations)

Machine Learning Engineer

Faculty | United Kingdom | Full-time | Hybrid
About Faculty
At Faculty, we transform organisational performance through safe, impactful and human-centric AI. With more than a decade of experience, we provide over 350 global customers with software, bespoke AI consultancy, and Fellows from our award-winning Fellowship programme. Our expert team brings together leaders from across government, academia and global tech giants to solve the biggest challenges in applied AI. Should you join us, you'll have the chance to work with, and learn from, some of the brilliant minds who are bringing Frontier AI to the frontlines of the world. We operate a hybrid way of working, meaning that you'll split your time across client locations, Faculty's Old Street office and working from home, depending on the needs of the project.

About the Role
You will design, build, and deploy production-grade software, infrastructure, and MLOps systems that leverage machine learning. The work you do will help our customers solve a broad range of high-impact problems in the Government & Public Services arena. Because of the potential to work with our clients in the National Security space, you will need to be eligible for Security Clearance, details of which are outlined when you click through to apply.

What You'll Be Doing
You are engineering-focused, with a keen interest in and a working knowledge of operationalised machine learning, and a desire to take cutting-edge ML applications into the real world. You will develop new methodologies and champion best practices for managing AI systems deployed at scale, with regard to technical, ethical and practical requirements. You will support both technical and non-technical stakeholders to deploy ML to solve real-world problems. Our Machine Learning Engineers are responsible for the engineering aspects of our customer delivery projects. As a Machine Learning Engineer, you'll be essential to helping us achieve that goal by:
- Building software and infrastructure that leverages machine learning
- Creating reusable, scalable tools to enable better delivery of ML systems
- Working with our customers to help understand their needs
- Working with data scientists and engineers to develop best practices and new technologies
- Implementing and developing Faculty's view on what it means to operationalise ML software

As a rapidly growing organisation, roles are dynamic and subject to change. Your role will evolve alongside business needs, but you can expect your key responsibilities to include:
- Working in cross-functional teams of engineers, data scientists, designers and managers to deliver technically sophisticated, high-impact systems
- Working with senior engineers to scope projects and design systems
- Providing technical expertise to our customers
- Technical delivery

Who We're Looking For
You can view our company principles here. We look for individuals who share these principles and our excitement to help our customers reap the rewards of AI responsibly. To succeed in this role, you'll need the following (these are illustrative requirements, and we don't expect all applicants to have experience in everything; 70% is a rough guide):
- Understanding of, and experience with, the full machine learning lifecycle
- Working with data scientists to deploy trained machine learning models into production environments
- Working with a range of models developed using common frameworks such as scikit-learn, TensorFlow, or PyTorch
- Experience with software engineering best practices and developing applications in Python
- Technical experience of cloud architecture, security, deployment, and open-source tools, ideally with one of the three major cloud providers (AWS, GCP or Azure)
- Demonstrable experience with containers, specifically Docker and Kubernetes
- An understanding of the core concepts of probability and statistics and familiarity with common supervised and unsupervised learning techniques
- Demonstrable experience of managing/mentoring more junior members of the team
- Outstanding verbal and written communication
- Excitement about working in a dynamic role with the autonomy and freedom you need to take ownership of problems and see them through to execution

We like people who combine expertise and ambition with optimism, who are interested in changing the world for the better and have the drive and intelligence to make it happen. If you're the right candidate for us, you probably:
- Think scientifically, even if you're not a scientist: you test assumptions, seek evidence and are always looking for opportunities to improve the way we do things.
- Love finding new ways to solve old problems: when it comes to your work and professional development, you don't believe in 'good enough'. You always seek new ways to solve old challenges.
- Are pragmatic and outcome-focused: you know how to balance the big picture with the little details and know a great idea is useless if it can't be executed in the real world.

What we can offer you:
The Faculty team is diverse and distinctive, and we all come from different personal, professional and organisational backgrounds. We all have one thing in common: we are driven by a deep intellectual curiosity that powers us forward each day. Faculty is the professional challenge of a lifetime. You'll be surrounded by an impressive group of brilliant minds working to achieve our collective goals. Our consultants, product developers, business development specialists, operations professionals and more all bring something unique to Faculty, and you'll learn something new from everyone you meet.

Tags: Machine Learning Engineer (Data Science & Analytics), DevOps Engineer (Data Science & Analytics)

Forward Deployed AI Engineer (All Levels)

Distyl | United States | Full-time | Hybrid
Distyl AI develops AI-native technologies that enable humans and AI to collaboratively power operations across the Global Fortune 1000. In just 24 months, we've partnered with some of the world's most iconic enterprises, including F100 telecom, healthcare, manufacturing, insurance, and retail companies, delivering AI deployments with $100M+ in impact. Our platform, Distillery, combined with our cross-functional teams of AI Engineers, Researchers, and Strategists, is pioneering AI-native systems of work that solve high-stakes challenges at scale. Distyl is founded and led by proven leaders from companies like Palantir, Apple, and national research labs. We maintain a deep partnership with OpenAI and are backed by Lightspeed, Khosla, Coatue, Nat Friedman, and board members from over 20 Fortune 500 companies.

What We're Looking For
We're hiring Forward-Deployed AI Engineers at all levels of experience, from early-career builders to senior engineers who can lead large-scale deployments and shape our customers' technical roadmap. Regardless of seniority, every AI Engineer at Distyl is a hands-on contributor responsible for designing, implementing, and deploying production-grade AI systems using Large Language Models (LLMs). At the most senior levels, our engineers operate as the technical owners of F500 engagements, defining architecture, engaging with enterprise leaders, and leading technical teams to transform mission-critical workflows.

Responsibilities
All AI Engineers at Distyl are expected to:
- Design and Deploy LLM-Powered Systems: Build and deploy robust GenAI systems that deliver measurable business value. Includes model evaluation, prompt design, agent/agentic logic, and full-stack application development.
- Engage Deeply with Customers: Work directly with customer stakeholders to understand their most pressing business and technical needs, and turn those into tailored, high-impact AI solutions.
- Build and Improve Our Platform: Contribute to the development of Distillery, our internal LLM application platform, by developing reusable tools, infrastructure, and workflows that scale across customers.
- Deliver Production Quality: Ensure systems are observable, maintainable, and meet rigorous reliability, performance, and security standards.
- Evaluate AI Systems Scientifically: Lead evaluation efforts balancing accuracy, latency, explainability, cost, and robustness.
- Continuously Improve: Refine development workflows, platform architecture, and deployment practices to raise the engineering bar.

At higher levels of experience, you will also:
- Set technical direction and architecture for multi-million-dollar customer engagements
- Earn executive trust and guide enterprise engineers through rollout and long-term operations
- Mentor and up-level Distyl engineers via design reviews, architectural design reviews, leading by doing, and targeted coaching

Qualifications
We believe great AI Engineers come from diverse backgrounds and are unified by deep curiosity, pragmatism, and engineering excellence. We're excited to meet candidates with:
- Proficiency in Python or TypeScript (experience with a diverse set of languages is a plus), with hands-on experience using LLM tooling such as LangChain, LlamaIndex, Guardrails, MCP, and Agents SDK
- Experience building and deploying LLM-powered AI agents in production, including tool use, prompt orchestration, RAG, and long-horizon task execution
- A deep understanding of agent design patterns, with practical experience in observability, debugging, and human-in-the-loop feedback systems
- Skill at solving complex, ambiguous problems using cutting-edge AI techniques to deliver meaningful business outcomes
- Familiarity with responsible AI practices (e.g., auditability, interpretability), and experience aligning AI systems with enterprise-grade requirements
- Comfort working across cloud environments (AWS, GCP, Azure) with modern DevOps tools (Docker, CI/CD), and collaborating closely with cross-functional teams
- For senior roles: strong architectural and systems design skills, experience leading projects, navigating and deploying into enterprise hybrid cloud environments, and mentoring others

What We Offer
- Competitive salary and meaningful equity
- 100% covered medical, dental, and vision for employees and dependents
- 401(k) with additional perks (commuter benefits, in-office lunch)
- Access to state-of-the-art models, generous usage of modern AI tools, and real-world business problems
- Ownership of high-impact projects across top enterprises
- A mission-driven, fast-moving culture that prizes curiosity, pragmatism, and excellence
- Offices in San Francisco and New York with hybrid collaboration (3+ days/week in-office)

Tags: Machine Learning Engineer (Data Science & Analytics), Software Engineer (Software Engineering)

Staff Software Engineer, Managed AI

Crusoe | United States | Full-time | On-site | $204,000 – $247,000 USD
Crusoe is building the World's Favorite AI-first Cloud infrastructure company. We're pioneering vertically integrated, purpose-built AI infrastructure solutions trusted by Fortune 500 companies to power their most advanced AI applications. Crusoe is redefining AI cloud infrastructure, with a mission to align the future of computing with the future of the climate. Our AI platform is recognized as the "gold standard" for reliability and performance. Our data centers are optimized for AI workloads and are powered by clean, renewable energy. Be part of the AI revolution with sustainable technology at Crusoe. Here, you'll drive meaningful innovation, make a tangible impact, and join a team that's setting the pace for responsible, transformative cloud infrastructure.

About This Role:
As a Staff Software Engineer on the Managed AI team at Crusoe, you'll have a pivotal role in shaping the architecture and scalability of our next-generation AI inference platform. You will lead the design and implementation of core systems for our AI services, including resilient fault-tolerant queues, model catalogs, and scheduling mechanisms optimized for cost and performance. This role gives you the opportunity to build and scale infrastructure capable of handling millions of API requests per second across thousands of customers. From day one, you'll own critical subsystems for managed AI inference, helping to serve large language models (LLMs) to a global audience. As part of a dynamic, fast-growing team, you'll collaborate cross-functionally, influence the long-term vision of the platform, and contribute to cutting-edge AI technologies. This is a unique opportunity to build a high-performance AI product that will be central to Crusoe's business growth. This is an on-site role based in San Francisco, CA, or Sunnyvale, CA, requiring in-office presence.

What You'll Be Working On:

Design and Development:
- Lead the design and implementation of core AI services, including:
  - Resilient fault-tolerant queues for efficient task distribution
  - Model catalogs for managing and versioning AI models
  - Scheduling mechanisms optimized for cost and performance
  - High-performance APIs for serving AI models to customers

Scalability and Performance:
- Build and scale infrastructure to handle millions of API requests per second
- Optimize AI inference performance on GPU-based systems
- Implement robust monitoring and alerting to ensure system health and availability

Collaboration and Innovation:
- Collaborate closely with product management, business strategy, and other engineering teams
- Influence the long-term vision and architectural decisions of the AI platform
- Contribute to open-source AI frameworks and participate in the AI community
- Prototype and iterate on new features and technologies

What You'll Bring to the Team:

Strong Engineering Fundamentals:
- Advanced degree in Computer Science, Engineering, or a related field
- Demonstrable experience in distributed systems design and implementation
- Proven track record of delivering early-stage projects under tight deadlines
- Expertise in using cloud-based services such as elastic compute, object storage, virtual private networks, and managed databases

AI/ML Expertise:
- Experience in Generative AI (Large Language Models, Multimodal)
- Familiarity with AI infrastructure, including training, inference, and ETL pipelines

Software Engineering Skills:
- Experience with container runtimes (e.g., Kubernetes) and microservices architectures
- Experience using REST APIs and common communication protocols, such as gRPC
- Demonstrated experience in the software development cycle and familiarity with CI/CD tools

Preferred Qualifications:
- Proficiency in Golang or Python for large-scale, production-level services
- Contributions to open-source AI projects such as vLLM or similar frameworks
- Performance optimizations on GPU systems and inference frameworks

Personal Attributes:
- Proactive and collaborative approach with the ability to work autonomously
- Strong communication and interpersonal skills
- Passion for building cutting-edge AI products and solving challenging technical problems

Benefits:
- Industry competitive pay
- Restricted Stock Units in a fast-growing, well-funded technology company
- Health insurance package options that include HDHP and PPO, vision, and dental for you and your dependents
- Employer contributions to HSA accounts
- Paid Parental Leave
- Paid life insurance, short-term and long-term disability
- Teladoc
- 401(k) with a 100% match up to 4% of salary
- Generous paid time off and holiday schedule
- Cell phone reimbursement
- Tuition reimbursement
- Subscription to the Calm app
- MetLife Legal
- Company-paid Commuter FSA benefit of $200 per month

Compensation:
Compensation will be paid in the range of $204,000 - $247,000 a year + bonus. Restricted Stock Units are included in all offers. Compensation will be determined by the applicant's knowledge, education, and abilities, as well as internal equity and alignment with market data.

Crusoe is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, disability, genetic information, pregnancy, citizenship, marital status, sex/gender, sexual preference/orientation, gender identity, age, veteran status, national origin, or any other status protected by law or regulation.

Tags: Machine Learning Engineer (Data Science & Analytics), Software Engineer (Software Engineering)

Engineering Manager, Managed AI

Crusoe | United States | Full-time | On-site | $233,000 – $282,000 USD
Crusoe is building the World's Favorite AI-first Cloud infrastructure company. We're pioneering vertically integrated, purpose-built AI infrastructure solutions trusted by Fortune 500 companies to power their most advanced AI applications. Crusoe is redefining AI cloud infrastructure, with a mission to align the future of computing with the future of the climate. Our AI platform is recognized as the "gold standard" for reliability and performance. Our data centers are optimized for AI workloads and are powered by clean, renewable energy. Be part of the AI revolution with sustainable technology at Crusoe. Here, you'll drive meaningful innovation, make a tangible impact, and join a team that's setting the pace for responsible, transformative cloud infrastructure.

About This Role:
As an Engineering Manager on the Managed AI team at Crusoe, you will play a critical role in leading and scaling a team of engineers building our next-generation AI inference platform. You will be responsible for guiding the team through the design and implementation of high-performance, fault-tolerant infrastructure to serve large language models (LLMs) at scale. This role combines technical depth with people leadership: you'll help shape the engineering roadmap, drive execution across key projects, and mentor a team of high-caliber engineers. As part of a fast-growing, strategically important organization, you'll partner closely with product, business, and platform stakeholders to deliver a performant and reliable platform powering AI for customers worldwide. This is an on-site role based in San Francisco, CA, or Sunnyvale, CA, requiring in-office presence.

What You'll Be Working On:

Team Leadership & Strategy:
- Manage and grow a team of software engineers working on Crusoe's Managed AI platform
- Set clear goals, drive accountability, and support career development for team members
- Partner with product and engineering leadership to define and execute on the AI roadmap
- Foster a high-performance, collaborative engineering culture grounded in technical excellence

Technical Execution:
- Oversee the architecture and development of core AI services, including:
  - Fault-tolerant task queues
  - Model management systems
  - Cost-aware scheduling frameworks
  - High-performance APIs for real-time inference
- Ensure the team delivers scalable systems capable of handling millions of API requests per second
- Guide the team in implementing performance optimizations for GPU-based inference workloads

Collaboration and Influence:
- Work cross-functionally with product managers, infrastructure teams, and GTM stakeholders
- Represent engineering in strategic discussions related to AI platform growth and customer adoption
- Promote knowledge sharing, technical mentorship, and operational best practices within the team
- Contribute to the evolution of Crusoe's engineering processes and broader AI strategy

What You'll Bring to the Team:

Leadership Experience:
- 2+ years of experience managing high-performing engineering teams
- Strong track record of hiring, developing, and retaining engineering talent
- Ability to lead teams through ambiguity and drive alignment around complex technical goals

Technical Depth:
- Prior hands-on experience building distributed systems or AI infrastructure
- Deep understanding of cloud-native environments, container orchestration, and service-oriented architectures
- Familiarity with GPU performance, inference frameworks, or LLM-based systems is a strong plus

Product & Delivery Focus:
- Comfortable owning deliverables from early design through production rollout
- Strong collaboration skills, with a bias for clarity, context sharing, and customer impact
- Experience operating in fast-paced startup or growth-stage environments

Preferred Qualifications:
- Background in Computer Science, Engineering, or a related technical field
- Proficiency in Python or Golang
- Familiarity with open-source AI ecosystems (e.g., vLLM, Hugging Face, Triton)
- Experience working with Kubernetes, gRPC, and observability stacks

Personal Attributes:
- Growth-minded leader who empowers and supports others
- Excellent communicator and relationship builder
- Passionate about building world-class AI infrastructure and teams

Benefits:
- Industry competitive pay
- Restricted Stock Units in a fast-growing, well-funded technology company
- Health insurance package options that include HDHP and PPO, vision, and dental for you and your dependents
- Employer contributions to HSA accounts
- Paid Parental Leave
- Paid life insurance, short-term and long-term disability
- Teladoc
- 401(k) with a 100% match up to 4% of salary
- Generous paid time off and holiday schedule
- Cell phone reimbursement
- Tuition reimbursement
- Subscription to the Calm app
- MetLife Legal
- Company-paid Commuter FSA benefit of $200 per month

Compensation:
Compensation will be paid in the range of $233,000 - $282,000 a year + bonus. Restricted Stock Units are included in all offers. Compensation will be determined by the applicant's knowledge, education, and abilities, as well as internal equity and alignment with market data.

Crusoe is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, disability, genetic information, pregnancy, citizenship, marital status, sex/gender, sexual preference/orientation, gender identity, age, veteran status, national origin, or any other status protected by law or regulation.

Tags: Machine Learning Engineer (Data Science & Analytics), Software Engineer (Software Engineering)

ML Engineer

Wispr Flow | United States | Full-time | Remote | $130,000 – $240,000 USD
About Wispr
Wispr Flow is making it as effortless to interact with your devices as talking to a close friend. Voice is the most natural, powerful way to communicate, and we're building the interfaces to make that a reality. Today, Wispr Flow is the first voice dictation platform people use more than their keyboards, because it understands you perfectly on the first try. It's context-aware, personalized, and works anywhere you can type, on desktop or phone.

Dictation is just our first act. We're building the interaction layer for your computer: a system that's capable, understands you, and earns your trust. It will start by writing for you, then move to taking actions, and ultimately anticipate your needs before you ask.

We're a team of AI researchers, designers, growth experts, and engineers rethinking human-computer interaction from the ground up. We value high-agency teammates who communicate openly, obsess over users, and sweat the details. We thrive on spirited debate, truth-seeking, and real-world impact. This year, we've grown our revenue 50% month-over-month, and with our latest $30M Series A, this is just the beginning.

About the Role
As an ML engineer at Wispr, you'll play a crucial role in building the first capable, habit-forming voice interface that scales to a billion users. Members of our technical staff are responsible for prototyping and designing new features of our voice interface, building infrastructure to handle <500ms LLM inference for millions of requests from everywhere around the world, and scaling the personalization of our speech models and LLMs with fine-tuning and RL.

What are we looking for?
- Previous founding or startup experience
- Experience optimizing ML inference or engineering systems for research teams
- Fluency in Python and LLM development
- Attention to detail and eagerness to learn
- Aptitude and clarity of thought
- Creativity, excellence in engineering, and code velocity

We consider all qualified applicants without regard to legally protected characteristics and provide reasonable accommodations upon request.

Tags: Machine Learning Engineer (Data Science & Analytics)

ML Scientist

Wispr Flow | United States | Full-time | Remote | $140,000 – $240,000 USD
About Wispr
Wispr Flow is making it as effortless to interact with your devices as talking to a close friend. Voice is the most natural, powerful way to communicate, and we're building the interfaces to make that a reality. Today, Wispr Flow is the first voice dictation platform people use more than their keyboards, because it understands you perfectly on the first try. It's context-aware, personalized, and works anywhere you can type, on desktop or phone.

Dictation is just our first act. We're building the interaction layer for your computer: a system that's capable, understands you, and earns your trust. It will start by writing for you, then move to taking actions, and ultimately anticipate your needs before you ask.

We're a team of AI researchers, designers, growth experts, and engineers rethinking human-computer interaction from the ground up. We value high-agency teammates who communicate openly, obsess over users, and sweat the details. We thrive on spirited debate, truth-seeking, and real-world impact. This year, we've grown our revenue 50% month-over-month, and with our latest $30M Series A, this is just the beginning.

About the Role
As an ML scientist at Wispr, you'll play a crucial role in building the first capable, habit-forming voice interface that scales to a billion users. Members of our technical staff are responsible for prototyping and designing new features of our voice interface, building infrastructure to handle <500ms LLM inference for millions of requests from everywhere around the world, and scaling the personalization of our speech models and LLMs. You'll be primarily responsible for ML personalization, training speech models, and using fine-tuning and RL techniques to improve LLMs.

What are we looking for?
- PhD in machine learning or a related field (neuroscience, EE, etc.)
- Strong publication track record at top conferences (ICML, NeurIPS, ICLR, ICASSP)
- Fluency in Python and LLM development
- Attention to detail and eagerness to learn
- Aptitude and clarity of thought
- Creativity in R&D, excellence in engineering, and code velocity

We consider all qualified applicants without regard to legally protected characteristics and provide reasonable accommodations upon request.

Tags: Machine Learning Engineer (Data Science & Analytics), NLP Engineer (Software Engineering), Research Scientist (Product & Operations)

Senior Machine Learning Software Engineer

Metropolis | United States | Full-time | On-site | $170,000 – $200,000 USD
The Company
Metropolis is an artificial intelligence company that uses computer vision technology to enable frictionless, checkout-free experiences in the real world. Today, we are reimagining parking to enable millions of consumers to just "drive in and drive out." We envision a future where people transact in the real world with a speed, ease and convenience that is unparalleled, even online. Tomorrow, we will power checkout-free experiences anywhere you go to make the everyday experiences of living, working and playing remarkable - giving us back our most valuable asset, time.

The Role
Metropolis is seeking a Senior Machine Learning Software Engineer to accelerate the development of our proprietary computer vision and machine learning software that powers our mobility products. As part of the Metropolis Machine Learning Team, you will be responsible for the development, deployment, and ongoing optimization of edge-deployed software. These systems are foundational to the Metropolis platform and have correspondingly large potential impacts on Metropolis and its customers. You will find this to be a challenging opportunity filled with unique technical and operational considerations, while being able to learn from and leverage our existing Computer Vision-based development and operational ecosystem.

The right candidate will possess a strong background in C++ and OpenCV, experience with computer vision and ML on edge/embedded systems, and demonstrated experience taking complex software systems from concept to production. You can expect to be working on all stages of the software development pipeline – from problem analysis and design to prototyping and deployment. You should be able to thrive and succeed in an entrepreneurial setting, working collaboratively in a fast-paced environment with multiple stakeholders. You won't be afraid to break new technological ground at Metropolis and are more than willing to roll up your sleeves, dig in and get the job done.

Responsibilities
- Work with the Machine Learning Team to design, develop, improve, and optimize computer vision, machine learning and application software on edge devices using C++
- Participate in all phases of embedded software development, from concept and design to deployment and maintenance
- Identify top-level software requirements and establish development best practices
- Deliver high-quality C++ code in a real-time embedded environment
- Perform optimization on machine learning models targeting different hardware accelerators, e.g. CUDA cores, Qualcomm DSP, etc.
- Implement, manage, and support over-the-air software updates to edge systems
- Communicate ideas and results effectively, verbally and in writing, to a wide range of technical and non-technical audiences

Qualifications
- BS, MS, or Ph.D. in Computer Science, Engineering, or a relevant discipline
- 5+ years of experience in modern software design, development, version control, refactoring, and testing
- 5+ years of experience with C++17 onward and a strong understanding of object-oriented programming
- 3+ years of experience working with C++ OpenCV, SQLite and MQTT
- Experience in parallel computing, accelerator architecture, CUDA, Qualcomm DSP, and TensorRT libraries
- Experience with ARM Cortex series microcontrollers
- Excellent written and verbal communication skills with a proven ability to present complex technical information in a clear and concise manner to a variety of audiences
- Previous experience working inside innovative, high-growth environments
- Strong preference for candidates to be local to the Seattle area; will also consider candidates in Los Angeles and New York

When you join Metropolis, you'll join a team of world-class product leaders and engineers, building an ecosystem of technologies at the intersection of parking, mobility, and real estate. Our goal is to build an inclusive culture where everyone has a voice and the best idea wins. You will play a key role in building and maintaining this culture as our organization grows.

The anticipated base salary for this position is $170,000.00 to $200,000.00 annually. The actual base salary offered is determined by a number of variables, including, as appropriate, the applicant's qualifications for the position, years of relevant experience, distinctive skills, level of education attained, certifications or other professional licenses held, and the location of residence and/or place of employment. Base salary is one component of Metropolis's total compensation package, which may also include access to or eligibility for healthcare benefits, a 401(k) plan, short-term and long-term disability coverage, basic life insurance, a lucrative stock option plan, bonus plans and more.

#LI-AR1 #LI-Onsite

Metropolis Technologies is an equal opportunity employer. We make all hiring decisions based on merit, qualifications, and business needs, without regard to race, color, religion, sex (including gender identity, sexual orientation, or pregnancy), national origin, disability, veteran status, or any other protected characteristic under federal, state, or local law.
Machine Learning Engineer
Data Science & Analytics
Computer Vision Engineer
Software Engineering
Software Engineer
Software Engineering
Apply
krea.ai

Machine Learning Engineer

Krea
-
US.svg
United States
Full-time
Remote
false
About Krea
At Krea, we are building next-generation AI creative tools. We are dedicated to making AI intuitive and controllable for creatives. Our mission is to build tools that empower human creativity, not replace it. We believe AI is a new medium that allows us to express ourselves through various formats: text, images, video, sound, and even 3D. We're building better, smarter, and more controllable tools to harness this medium.
This job
We're looking for machine learning engineers who can work on large-scale image and video model training experiments. Some things you could do:
Train foundation diffusion models for image and video generation.
Train controllability modules such as IP-Adapters or ControlNets.
Develop novel research techniques and put them into production.
Conduct large-scale experiments on high-performance computing clusters, optimizing data pipelines for massive image datasets.
(A minimal distributed-training sketch follows this posting.)
Example experience and skills we're looking for
Proven track record of working with image or video models at scale (publications or open-source contributions a plus)
Strong background in deep learning frameworks and distributed training paradigms
Ability to iterate rapidly and propose creative research directions
A bit more about us
We've raised over $83M and are backed by world-class Silicon Valley investors, including Andreessen Horowitz, a cofounder of the Meta AI Research laboratory (formerly known as Facebook AI Research), and founding members of OpenAI.
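Training image and video models at the scale described above is usually distributed with data parallelism. The sketch below is a hypothetical, minimal PyTorch DistributedDataParallel loop; the tiny `ToyDenoiser` module and random tensors stand in for a real diffusion backbone and dataset, and none of it reflects Krea's actual training stack.

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


class ToyDenoiser(torch.nn.Module):
    """Placeholder for a real diffusion backbone (e.g. a U-Net or DiT)."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(3, 32, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE / MASTER_ADDR for each process.
    dist.init_process_group(backend="gloo")  # use "nccl" on multi-GPU nodes
    device = torch.device("cpu")             # swap for cuda:LOCAL_RANK with nccl

    model = DDP(ToyDenoiser().to(device))
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        # Stand-in batch: noisy images and their clean targets.
        noisy = torch.randn(8, 3, 64, 64, device=device)
        clean = torch.randn(8, 3, 64, 64, device=device)
        loss = torch.nn.functional.mse_loss(model(noisy), clean)
        opt.zero_grad()
        loss.backward()  # DDP all-reduces gradients across ranks here
        opt.step()
        if dist.get_rank() == 0 and step % 5 == 0:
            print(f"step {step}: loss={loss.item():.4f}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=2 train_sketch.py
```

Real pipelines layer sharded data loading, mixed precision, and checkpointing on top of this skeleton, but the per-rank structure stays the same.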
Machine Learning Engineer
Data Science & Analytics
Apply
Scale AI.jpg

Applied AI Engineering Manager, Enterprise

Scale AI
USD
0
212000
-
254400
US.svg
United States
Full-time
Remote
false
AI is becoming vitally important in every function of our society. At Scale, our mission is to accelerate the development of AI applications. For 8 years, Scale has been the leading AI data foundry, helping fuel the most exciting advancements in AI, including generative AI, defense applications, and autonomous vehicles. With our recent Series F round, we're accelerating the usage of frontier data and models by building complex agents for enterprises around the world through our Scale Generative AI Platform (SGP).
The SGP ML team works on the front lines of this AI revolution. We interface directly with clients to build cutting-edge products using the arsenal of proprietary research and resources developed at Scale. As an AAI Engineering Manager, you'll manage a team of high-calibre Applied AI Engineers and MLEs who work with clients to train ML models to satisfy their business needs. Your team's work will range from training next-generation AI cybersecurity firewall LLMs to training foundation agentic action models making predictions about business-saving outcomes. You will guide your team towards using data-driven experiments to provide key insights around model strengths and inefficiencies in an effort to improve products. If you are excited about shaping the future of the modern AI movement, we would love to hear from you!
You will:
Train state-of-the-art models, developed both internally and from the community, in production to solve problems for our enterprise customers.
Manage a team of 5+ Applied AI Engineers / ML Engineers.
Work with product and research teams to identify opportunities for ongoing and upcoming services.
Explore approaches that integrate human feedback and assisted evaluation into existing product lines.
Create state-of-the-art techniques to integrate tool-calling into production-serving LLMs (a minimal tool-dispatch sketch follows at the end of this posting).
Work closely with customers, some of the most sophisticated ML organizations in the world, to quickly prototype and build new deep learning models targeted at multi-modal content understanding problems.
Ideally you'd have:
At least 3 years of model training, deployment, and maintenance experience in a production environment
At least 1-2 years of management or tech leadership experience
Strong skills in NLP, LLMs, and deep learning
Solid background in algorithms, data structures, and object-oriented programming
Experience working with a cloud technology stack (e.g. AWS or GCP) and developing machine learning models in a cloud environment
Experience building products with LLMs, including knowing the ins and outs of evaluation, experimentation, and designing solutions to get the most out of the models
PhD or Masters in Computer Science or a related field
Nice to haves:
Experience dealing with large-scale AI problems, ideally in the generative-AI field
Demonstrated expertise in large vision-language models for diverse real-world applications, e.g. classification, detection, question answering, etc.
Published research in areas of machine learning at major conferences (NeurIPS, ICML, EMNLP, CVPR, etc.) and/or journals
Strong high-level programming skills (e.g., Python) and experience with frameworks and tools such as DeepSpeed, PyTorch Lightning, Kubeflow, TensorFlow, etc.
Strong written and verbal communication skills to operate in a cross-functional team environment
Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process and confirm whether the hired role will be eligible for an equity grant.
You'll also receive benefits including, but not limited to: comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.
Please reference the job posting's subtitle for where this position will be located. For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, and Seattle is: $212,000 - $254,400 USD.
PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants.
About Us:
At Scale, we believe that the transition from traditional software to AI is one of the most important shifts of our time. Our mission is to make that happen faster across every industry, and our team is transforming how organizations build and deploy AI. Our products power the world's most advanced LLMs, generative models, and computer vision models. We are trusted by generative AI companies such as OpenAI, Meta, and Microsoft, government agencies like the U.S. Army and U.S. Air Force, and enterprises including GM and Accenture. We are expanding our team to accelerate the development of AI applications.
We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity or Veteran status. We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information. We comply with the United States Department of Labor's Pay Transparency provision.
PLEASE NOTE: We collect, retain and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants' needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.
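Tool-calling, as referenced in the responsibilities above, generally means letting an LLM emit a structured request that the serving layer routes to a real function before the model continues. The sketch below is a hypothetical, vendor-neutral dispatch loop: the `TOOLS` registry, the `fake_llm` stub, and the message format are illustrative assumptions, not Scale's SGP API.

```python
from typing import Callable, Dict

# Hypothetical tool registry: names the model may call, mapped to Python functions.
TOOLS: Dict[str, Callable[..., str]] = {
    "get_weather": lambda city: f"Sunny in {city}",
    "add": lambda a, b: str(a + b),
}


def fake_llm(messages):
    """Stand-in for a real model call: requests a tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant",
                "tool_call": {"name": "get_weather", "arguments": {"city": "Seattle"}}}
    tool_result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"role": "assistant", "content": f"The forecast says: {tool_result}"}


def run_with_tools(user_prompt: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = fake_llm(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]       # model produced a final answer
        fn = TOOLS[call["name"]]          # route the structured request
        result = fn(**call["arguments"])  # execute the tool server-side
        messages.append(reply)
        messages.append({"role": "tool", "content": result})
    return "tool loop exceeded max_steps"


if __name__ == "__main__":
    print(run_with_tools("What's the weather in Seattle?"))
```

Production versions add schema validation, per-tool permissions, and timeouts around the dispatch step, but the loop shape is the same.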
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering
Apply
voize.jpg

ML Engineer - Speech (m/f/d)

Voize
-
GE.svg
Germany
Full-time
Remote
true
🎤 Why voize? Because we're more than just a job!
At voize, we're revolutionizing the healthcare industry with AI: nurses simply speak their documentation into their smartphones, and our AI automatically generates the correct entries. This saves each nurse an average of 39 minutes per day, improves the quality of documentation, and makes their daily work much more rewarding. voize is Y Combinator-funded, already in use at over 600 senior care homes, and has grown by 100% in the last 90 days. Our customers save over 3.5 million hours annually, time spent on people instead of paperwork. But this is just the beginning. With our self-developed voize AI, we're transforming not only the healthcare industry but also have the potential to create value in many other sectors, from healthcare to inspections.
💡 Your Mission:
If you're a Machine Learning Engineer experienced in speech recognition and excited to work at the cutting edge of product design, applied ML research, and MLOps, then go ahead and apply! With us, you'll build products with direct user feedback, train AI models with real data, and ship new features to production every day.
🤝 Your Skillset – What you bring to the table
Several years of hands-on experience in deep learning for speech recognition, including developing and optimising ASR systems (not just academic research)
Excellent foundation in STT (speech-to-text) system development with a focus on real-world applications
Experience owning the ML process end-to-end: from concept and exploration to model productionization, maintenance, monitoring, and optimization
Shipped ML models to production with Python and PyTorch
Trained new models from scratch, not just fine-tuned existing ones
(A minimal CTC decoding sketch follows this posting.)
🚀 Your Daily Business – No two days are alike
Take ownership of the design, training, evaluation, and deployment of our deep learning models in the space of speech recognition
The models you build and refine are at the heart of our applications and directly impact the end user
You'll get to engineer large self-supervised trainings as well as fast inference for mobile devices and hosted environments
🎯 Our Success Mindset – How we work at voize
Resilience is one of your strengths: you see challenges as opportunities, not obstacles
Iterative working suits you: you test, learn, and improve constantly instead of waiting for perfection
Communication & feedback come naturally to you: you openly address issues and both give and receive constructive feedback
🌱 Growing together – what you can expect at voize
Become a co-creator of our success with virtual stock options
Our office is in Berlin, and we offer remote work
We provide flexible working hours because you know best when you work most efficiently!
Access to various learning platforms (e.g., Blinkist, Audible, etc.)
We have an open culture and organize regular work weeks and team events to collaborate and bond
We are a fast-growing startup, so you'll encounter various challenges, providing the perfect foundation for rapid personal growth
Your work will make a real impact, helping alleviate the workload for healthcare professionals
Free Germany Ticket and Urban Sports Club membership
30 days of vacation, plus your birthday off
✨ Ready to talk? Apply now! 🚀
We look forward to your application and can't wait to meet you, no matter who you are or what background you have!
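ASR systems of the kind this role works on are commonly trained with a CTC objective and decoded, in the simplest case, by greedily collapsing repeated labels and blanks. The sketch below is a minimal, self-contained PyTorch example of that greedy CTC decode over random logits; the toy vocabulary, blank index, and tensor shapes are illustrative assumptions, not voize's models.

```python
import torch

# Toy vocabulary; index 0 is the CTC blank symbol (an assumption for this sketch).
VOCAB = ["<blank>", "a", "b", "c", " "]
BLANK_ID = 0


def greedy_ctc_decode(logits: torch.Tensor) -> str:
    """Collapse per-frame argmax labels: merge repeats, then drop blanks.

    logits: (time, vocab) tensor of unnormalized scores for one utterance.
    """
    frame_ids = logits.argmax(dim=-1).tolist()  # best label per frame
    collapsed = []
    prev = None
    for idx in frame_ids:
        if idx != prev:                         # merge consecutive repeats
            collapsed.append(idx)
        prev = idx
    chars = [VOCAB[i] for i in collapsed if i != BLANK_ID]  # strip blanks
    return "".join(chars)


if __name__ == "__main__":
    torch.manual_seed(0)
    fake_logits = torch.randn(20, len(VOCAB))   # stand-in acoustic model output
    print("decoded:", repr(greedy_ctc_decode(fake_logits)))
```

Production decoders usually add a beam search and a language model on top, but the collapse-repeats-then-drop-blanks rule is the core of CTC inference.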
Machine Learning Engineer
Data Science & Analytics
Computer Vision Engineer
Software Engineering
Apply
Liquid AI.jpg

Member of Technical Staff - ML Research Engineer, Foundation Model Data

Liquid AI
-
US.svg
United States
Full-time
Remote
true
Work With Us
At Liquid, we're not just building AI models; we're redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can't: on-device, at the edge, under real-time constraints. We're not iterating on old ideas; we're architecting what comes next.
We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments, your work will directly shape the frontier of intelligent systems.
This Role Is For You If:
You want to play a critical role in our foundation model development process, focusing on consolidating, gathering, and generating high-quality text data for pretraining, midtraining, SFT, and preference optimization.
Required Experience:
Experience Level: B.S. + 5 years of experience, M.S. + 3 years of experience, or Ph.D. + 1 year of experience
Dataset Engineering: Expertise in data curation, cleaning, augmentation, and synthetic data generation techniques
Machine Learning Expertise: Ability to write and debug models in popular ML frameworks, and experience working with LLMs
Software Development: Strong programming skills in Python, with an emphasis on writing clean, maintainable, and scalable code
Desired Experience:
M.S. or Ph.D. in Computer Science, Electrical Engineering, Math, or a related field
Experience fine-tuning or customizing LLMs
First-author publications in top ML conferences (e.g. NeurIPS, ICML, ICLR)
Contributions to popular open-source projects
What You'll Actually Do:
Create and maintain a data cleaning, filtering, and selection pipeline that can handle >100TB of data (a minimal filtering sketch follows this posting)
Watch for releases of public datasets on Hugging Face and other platforms
Create crawlers to gather datasets from the web where public data is lacking
Write and maintain synthetic data generation pipelines
Run ablations to assess new datasets and judging pipelines
What You'll Gain:
Hands-on experience with state-of-the-art technology at a leading AI company
A collaborative, fast-paced environment where your work directly shapes our products and the next generation of LFMs
About Liquid AI
Spun out of MIT CSAIL, we're a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale, from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We're already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we're just getting started.
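Pretraining-data pipelines like the one described above typically combine cheap heuristic filters with duplicate removal, applied in a streaming fashion so nothing has to fit in memory at once. The sketch below is a hypothetical, minimal Python version; the length thresholds and the hash-based exact dedup are illustrative choices, not Liquid's pipeline.

```python
import hashlib
from typing import Iterable, Iterator

MIN_CHARS = 200          # assumed heuristic: drop very short documents
MAX_CHARS = 1_000_000    # assumed heuristic: drop pathological giants


def clean(text: str) -> str:
    """Normalize whitespace so the dedup hash is not fooled by formatting."""
    return " ".join(text.split())


def filter_and_dedup(docs: Iterable[str]) -> Iterator[str]:
    """Stream documents, apply length filters, and drop exact duplicates."""
    seen: set = set()
    for raw in docs:
        doc = clean(raw)
        if not (MIN_CHARS <= len(doc) <= MAX_CHARS):
            continue                      # fails a heuristic quality filter
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest in seen:                # exact duplicate already emitted
            continue
        seen.add(digest)
        yield doc


if __name__ == "__main__":
    sample = ["hello   world " * 50, "hello world " * 50, "too short"]
    kept = list(filter_and_dedup(sample))
    print(f"kept {len(kept)} of {len(sample)} documents")
```

At the >100TB scale the posting mentions, the hash set would itself be sharded, and exact dedup is usually complemented by approximate methods such as MinHash; the sketch only shows the shape of the streaming filter.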
Machine Learning Engineer
Data Science & Analytics
Data Engineer
Data Science & Analytics
Apply
Mindrift.jpg

Freelance Ecology / Environment Science - AI Trainer

Mindrift
USD
0
0
-
50
US.svg
United States
Part-time
Remote
true
This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates.
About the Company
At Mindrift, innovation meets opportunity. We believe in using the power of collective intelligence to ethically shape the future of AI.
What we do
The Mindrift platform connects specialists with AI projects from major tech innovators. Our mission is to unlock the potential of Generative AI by tapping into real-world expertise from across the globe.
About the Role
GenAI models are improving very quickly, and one of our goals is to make them capable of addressing specialized questions and achieving complex reasoning skills. If you join the platform as an AI Tutor in Ecology / Environment Science, you'll have the opportunity to collaborate on these projects. Although every project is unique, you might typically:
Generate prompts that challenge AI.
Define comprehensive scoring criteria to evaluate the accuracy of the AI's answers.
Correct the model's responses based on your domain-specific knowledge.
How to get started
Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring technology benefits everyone.
Requirements
You have a Bachelor's degree plus 6 years of relevant experience in Ecology, Environmental Science, or a related field.
You hold a Master's or PhD in Ecology, Environmental Science, or a related field, along with 3 years of relevant work experience.
Your level of English is advanced (C1) or above.
You are ready to learn new methods, able to switch between tasks and topics quickly, and sometimes work with challenging, complex guidelines.
Our freelance role is fully remote, so you just need a laptop, an internet connection, available time, and enthusiasm to take on a challenge.
Benefits
Why this freelance opportunity might be a great fit for you:
Get paid for your expertise, with rates that can go up to $50/hour depending on your skills, experience, and project needs.
Take part in a part-time, remote, freelance project that fits around your primary professional or academic commitments.
Work on advanced AI projects and gain valuable experience that enhances your portfolio.
Influence how future AI models understand and communicate in your field of expertise.
Machine Learning Engineer
Data Science & Analytics
Apply
Liquid AI.jpg

Member of Technical Staff - ML Inference Engineer, Pytorch

Liquid AI
-
US.svg
United States
Full-time
Remote
true
Work With Us
At Liquid, we're not just building AI models; we're redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can't: on-device, at the edge, under real-time constraints. We're not iterating on old ideas; we're architecting what comes next.
We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments, your work will directly shape the frontier of intelligent systems.
This Role Is For You If:
You have experience building large-scale production stacks for model serving.
You have a solid understanding of ragged batching, dynamic load balancing, KV-cache management, and other multi-tenant serving techniques.
You have experience applying quantization strategies (e.g., FP8, INT4) while safeguarding model accuracy.
You have deployed models in both single-GPU and multi-GPU environments and can diagnose performance issues across the stack.
(A minimal KV-cache sketch follows this posting.)
Desired Experience:
PyTorch
Python
Model-serving frameworks (e.g. TensorRT, vLLM, SGLang)
What You'll Actually Do:
Optimize and productionize the end-to-end pipeline for GPU model inference around Liquid Foundation Models (LFMs).
Facilitate the development of next-generation Liquid Foundation Models from the lens of GPU inference.
Profile and robustify the stack for different batching and serving requirements.
Build and scale pipelines for test-time compute.
What You'll Gain:
Hands-on experience with state-of-the-art technology at a leading AI company.
Deeper expertise in machine learning systems and efficient large model inference.
Opportunity to scale pipelines that directly influence user latency and experience with Liquid's models.
A collaborative, fast-paced environment where your work directly shapes our products and the next generation of LFMs.
About Liquid AI
Spun out of MIT CSAIL, we're a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale, from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We're already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we're just getting started.
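The KV-cache management this role mentions exists because autoregressive decoding can reuse the attention keys and values of earlier tokens instead of recomputing them at every step. The sketch below is a minimal, self-contained PyTorch illustration of that reuse for a single toy attention layer; the tiny dimensions and random weights are assumptions for clarity, not an LFM implementation.

```python
import torch

torch.manual_seed(0)
D_MODEL = 16  # toy hidden size

# Random projection weights standing in for a trained attention layer.
W_Q = torch.randn(D_MODEL, D_MODEL)
W_K = torch.randn(D_MODEL, D_MODEL)
W_V = torch.randn(D_MODEL, D_MODEL)


def attend_step(x_t: torch.Tensor, k_cache: torch.Tensor, v_cache: torch.Tensor):
    """One decode step: project the new token, append to the cache, attend.

    x_t: (1, d_model) embedding of the newest token.
    k_cache, v_cache: (t, d_model) keys/values of all previous tokens.
    """
    q = x_t @ W_Q                                        # (1, d)
    k_cache = torch.cat([k_cache, x_t @ W_K], dim=0)     # append new key only
    v_cache = torch.cat([v_cache, x_t @ W_V], dim=0)     # append new value only
    scores = (q @ k_cache.T) / D_MODEL ** 0.5            # (1, t+1)
    out = torch.softmax(scores, dim=-1) @ v_cache        # (1, d)
    return out, k_cache, v_cache


if __name__ == "__main__":
    k_cache = torch.empty(0, D_MODEL)
    v_cache = torch.empty(0, D_MODEL)
    for step in range(5):                       # generate 5 toy tokens
        x_t = torch.randn(1, D_MODEL)           # stand-in for the token embedding
        out, k_cache, v_cache = attend_step(x_t, k_cache, v_cache)
    print("cache length after 5 steps:", k_cache.shape[0])
```

In a multi-tenant server, each sequence keeps its own cache and the memory is usually allocated in pages so that ragged batches of different lengths can share the GPU, which is where the serving techniques named in the posting come in.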
Machine Learning Engineer
Data Science & Analytics
DevOps Engineer
Data Science & Analytics
Apply