
AI Software Engineer Jobs

The latest AI Software Engineer roles, reviewed by real humans for quality and clarity.

All Jobs

Showing 79 jobs

Manager, Delivery Excellence

Glean
USD $130,000 - $200,000
United States · Full-time · Remote
About Glean
Glean is the Work AI platform that helps everyone work smarter with AI. What began as the industry's most advanced enterprise search has evolved into a full-scale Work AI ecosystem, powering intelligent Search, an AI Assistant, and scalable AI agents on one secure, open platform. With over 100 enterprise SaaS connectors, flexible LLM choice, and robust APIs, Glean gives organizations the infrastructure to govern, scale, and customize AI across their entire business, without vendor lock-in or costly implementation cycles.

At its core, Glean is redefining how enterprises find, use, and act on knowledge. Its Enterprise Graph and Personal Knowledge Graph map the relationships between people, content, and activity, delivering deeply personalized, context-aware responses for every employee. This foundation powers Glean's agentic capabilities: AI agents that automate real work across teams by accessing the industry's broadest range of data - enterprise and world, structured and unstructured, historical and real-time. The result is measurable business impact through faster onboarding, hours of productivity gained each week, and smarter, safer decisions at every level.

Recognized by Fast Company as one of the World's Most Innovative Companies (Top 10, 2025), by CNBC's Disruptor 50, Bloomberg's AI Startups to Watch (2026), Forbes AI 50, and Gartner's Tech Innovators in Agentic AI, Glean continues to accelerate its global impact. With customers across 50+ industries and 1,000+ employees in more than 25 countries, we're helping the world's largest organizations make every employee AI-fluent and turning the superintelligent enterprise from concept into reality.

If you're excited to shape how the world works, you'll help build systems used daily across Microsoft Teams, Zoom, ServiceNow, Zendesk, GitHub, and many more - deeply embedded where people get things done. You'll ship agentic capabilities on an open, extensible stack, with the craft and care required for enterprise trust, as we bring Work AI to every employee, in every company.

About the Role
Glean is seeking a talented AI Outcomes Manager to join our rapidly expanding team. The AI Outcomes Manager will play a crucial role in transforming how every department works with the power of Glean. They will work closely with executives and end users, combining business acumen, product sense, and prompting skills to help customers become an AI-native enterprise.

You will:
- Partner with executive sponsors and end users to identify high-impact use cases and turn them into measurable business outcomes on Glean.
- Lead strategic reviews and advise customers on their AI roadmap, ensuring they get the most value from Glean's platform.
- Translate business needs into clear problem statements, success metrics, and practical AI solutions; collaborate with Product and R&D to shape priorities.
- Conduct discovery workshops, scope pilots, and guide rollouts, driving breadth and depth of adoption of the Glean platform.
- Design and build AI agents with and for customers, including rethinking and redesigning underlying business processes to maximize impact and usability.
- Proactively identify expansion opportunities and drive engagement across teams and functions.

About you:
- 5+ years of professional experience in roles that blend business and technology (e.g., product, analytics, data, engineering, solutions), with a consultative, customer-facing approach.
- Strong problem-solving and communication skills; comfortable working with stakeholders from ICs to executives and tailoring messages to different audiences.
- Demonstrated ability to craft effective prompts and guide AI agents for real customer or business workflows; you've shipped outcomes, not just demos.
- Understanding of what current LLMs can and cannot do; able to set expectations and steer towards reliable, safe, cost-effective solutions.
- Product sense and user empathy: you can spot high-value applications of AI across diverse job functions and design clear, guided experiences.
- Hands-on experience with modern AI platforms and tools (e.g., OpenAI, Claude, Mistral, Cohere, or similar), with enough technical depth to work with engineers without needing to be on the critical path for writing production code.

Good to have:
- Prior experience in customer-facing, consultative roles (solutions, support, product management) and comfort presenting to senior leaders.
- Exposure to evaluating AI outcomes (e.g., defining success criteria, using sample tasks, reviewing results) and iterating for quality, latency, and cost.
- Ability to analyze usage signals and customer feedback to inform the roadmap and drive adoption.

Location: This role is remote in the U.S.

Compensation & Benefits: The standard OTE range for this position is $130,000-$200,000 annually. Compensation offered will be determined by factors such as location, level, job-related knowledge, skills, and experience. Certain roles may be eligible for variable compensation, equity, and benefits.

We are a diverse bunch of people and we want to continue to attract and retain a diverse range of people into our organization. We're committed to an inclusive and diverse company. We do not discriminate based on gender, ethnicity, sexual orientation, religion, civil or family status, age, disability, or race.

#LI-Remote

Manufacturing Engineer

Figure AI
USD $150,000 - $350,000
United States · Full-time · Remote
About Figure
Figure is an AI robotics company developing autonomous general-purpose humanoid robots. The goal of the company is to ship humanoid robots with human-level intelligence. Its robots are engineered to perform a variety of tasks in the home and commercial markets. Figure is headquartered in San Jose, CA, and its vision is to deploy autonomous humanoids at a global scale.

Our Helix team is looking for an experienced Training Infrastructure Engineer to take our infrastructure to the next level. This role is focused on managing the training cluster and implementing distributed training algorithms, data loaders, and developer tools for AI researchers. The ideal candidate has experience building tools and infrastructure for a large-scale deep learning system.

Responsibilities
- Design, deploy, and maintain Figure's training clusters
- Architect and maintain scalable deep learning frameworks for training on massive robot datasets
- Work together with AI researchers to implement training of new model architectures at a large scale
- Implement distributed training and parallelization strategies to reduce model development cycles
- Implement tooling for data processing, model experimentation, and continuous integration

Requirements
- Strong software engineering fundamentals
- Bachelor's or Master's degree in Computer Science, Robotics, Engineering, or a related field
- Experience with Python and PyTorch
- Experience managing HPC clusters for deep neural network training
- Minimum of 4 years of professional, full-time experience building reliable backend systems

Bonus Qualifications
- Experience managing cloud infrastructure (AWS, Azure, GCP)
- Experience with job scheduling / orchestration tools (SLURM, Kubernetes, LSF, etc.)
- Experience with configuration management tools (Ansible, Terraform, Puppet, Chef, etc.)

The US base salary range for this full-time position is $150,000 - $350,000 annually. The pay offered for this position may vary based on several individual factors, including job-related knowledge, skills, and experience. The total compensation package may also include additional components/benefits depending on the specific role. This information will be shared if an employment offer is extended.

Product Manager — Summer Intern

Snorkel AI
United States · Intern · Remote
About Snorkel
At Snorkel, we believe meaningful AI doesn't start with the model; it starts with the data. We're on a mission to help enterprises transform expert knowledge into specialized AI at scale. The AI landscape has gone through incredible changes from 2015, when Snorkel started as a research project in the Stanford AI Lab, to the generative AI breakthroughs of today. But one thing has remained constant: the data you use to build AI is the key to achieving differentiation, high performance, and production-ready systems. We work with some of the world's largest organizations to empower scientists, engineers, financial experts, product creators, journalists, and more to build custom AI with their data faster than ever before. Excited to help us redefine how AI is built? Apply to be the newest Snorkeler!

About the Internship
As a Research Intern at Snorkel AI, you'll contribute to internal research and academic collaborations, helping explore and validate new ideas that may shape future publications, open-source artifacts, and long-term product directions. This is a research-first role, designed for interns who want to do deep technical work with real-world relevance. You'll work closely with Snorkel researchers on open-ended projects and produce clear research outputs (experiments, prototypes, internal writeups, and potentially publications depending on project fit and timing).

What You'll Do
- Develop and evaluate new methods of data development for foundation models and enterprise AI systems (e.g., dataset construction, augmentation, synthetic data, and evaluation).
- Research supervision and evaluation techniques such as rubrics and verifiable rewards.
- Design experiments and run rigorous empirical studies (ablations, benchmarks, error analysis).
- Build lightweight research prototypes and tooling in Python to support internal studies.
- Collaborate with academic partners and internal research teams: reading papers, proposing hypotheses, and iterating quickly.

Example Project Areas
Projects vary by mentor and collaboration needs, but may include:
- Synthetic data generation and filtering for specialized tasks
- Evaluation datasets and benchmarks for LLM / RAG / agent behavior
- Data-centric methods for improving reliability, calibration, and failure-mode coverage
- Evaluating HITL data annotation processes, gaps, and improvements

What We're Looking For
- Current PhD student in ML/AI/CS.
- Strong ML fundamentals and demonstrated research ability (papers, preprints, or substantial research artifacts).
- Excellent Python skills and experience with modern ML tooling (PyTorch, NumPy, etc.).
- Ability to operate in ambiguous, research-driven problem spaces with strong experimentation habits.
- Clear communication: can write concise technical notes and present findings.

Nice to Have
- Prior work on evaluation, data curation, synthetic data, weak supervision, NLP, or multimodal ML.
- Experience collaborating with academic labs or participating in research programs.

Internship Details
- Duration: Summer (flexible start/end)
- Location: Hybrid (Redwood City/SF) or Remote (US)
- Compensation: Competitive, commensurate with experience

Why Snorkel Research
- Work with a research org advancing data-centric AI and foundation-model development, in close partnership with labs and enterprises.
- Join a company with deep research roots and a substantial body of peer-reviewed work in this space.
- Get mentorship and ownership on projects that can influence long-term research direction.

Be Your Best at Snorkel
Joining Snorkel AI means becoming part of a company that has market-proven solutions, robust funding, and is scaling rapidly, offering a unique combination of stability and the excitement of high growth. As a member of our team, you'll have meaningful opportunities to shape priorities and initiatives, influence key strategic decisions, and directly impact our ongoing success. Whether you're looking to deepen your technical expertise, explore leadership opportunities, or learn new skills across multiple functions, you're fully supported in building your career in an environment designed for growth, learning, and shared success.

Snorkel AI is proud to be an Equal Employment Opportunity employer and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. Snorkel AI embraces diversity and provides equal employment opportunities to all employees and applicants for employment. Snorkel AI prohibits discrimination and harassment of any type on the basis of race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local law. All employment is decided on the basis of qualifications, performance, merit, and business need. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Software Engineer — Summer Intern

Snorkel AI
United States · Intern · Remote

AI Researcher — Summer Intern

Snorkel AI
United States · Intern · Remote

Enterprise Account Executive

Scale AI
USD $190,000 - $230,000
United States · Full-time · Remote
About the Role
We're hiring an AI Architect to sit at the intersection of frontier AI research, product, and go-to-market. You'll partner closely with ML teams in high-stakes meetings, scope and pitch solutions to top AI labs, and translate research needs (post-training, evals, alignment) into clear product roadmaps and measurable outcomes. You'll drive end-to-end delivery, partnering with AI research teams and core customers to scope, pilot, and iterate on frontier model improvements, while coordinating with engineering, ops, and finance to translate cutting-edge research into deployable, high-impact solutions.

What You'll Do
- Translate research into product: work with client-side researchers on post-training, evals, and safety/alignment, and build the primitives, data, and tooling they need.
- Partner deeply with core customers and frontier labs: work hands-on with leading AI teams and frontier research labs to tackle hard, open-ended technical problems related to frontier model improvement, performance, and deployment.
- Shape and propose model improvement work: translate customer and research objectives into clear, technically rigorous proposals, scoping post-training, evaluation, and safety work into well-defined statements of work and execution plans.
- Translate research into production impact: collaborate with customer-side researchers on post-training, evaluations, and alignment, and help design the data, primitives, and tooling required to improve frontier models in practice.
- Own the end-to-end lifecycle: lead discovery, write crisp PRDs and technical specs, prioritize trade-offs, run experiments, ship initial solutions, and scale successful pilots into durable, repeatable offerings.
- Lead complex, high-stakes engagements: independently run technical working sessions with senior customer stakeholders, define success metrics, surface risks early, and drive programs to measurable outcomes.
- Partner across Scale: collaborate closely with research (agents, browser/SWE agents), platform, operations, security, and finance to deliver reliable, production-grade results for demanding customers.
- Build evaluation rigor at the frontier: design and stand up robust evaluation frameworks (e.g., RLVR, benchmarks), close the loop with data quality and feedback, and share learnings that elevate technical execution across accounts.

You Have
- Deep technical background in applied AI/ML: 5-10+ years in research, engineering, solutions engineering, or technical product roles working on LLMs or multimodal systems, ideally in high-stakes, customer-facing environments.
- Hands-on experience with model improvement workflows: demonstrated experience with post-training techniques, evaluation design, benchmarking, and model quality iteration.
- Ability to work on hard, ambiguous technical problems: proven track record of partnering directly with advanced customers or research teams to scope, reason through, and execute on deep technical challenges involving frontier models.
- Strong technical fluency: you can read papers, interrogate metrics, write or review complex Python/SQL for analysis, and reason about model-data trade-offs.
- Executive presence with world-class researchers and enterprise leaders; excellent writing and storytelling.
- Bias to action: you ship, learn, and iterate.

How You'll Work
- Customer-obsessed: start from real research needs; prototype quickly; validate with data.
- Cross-functional by default: align research, engineering, ops, and GTM on a single plan; communicate clearly up and down.
- Field-forward: expect regular time with customers and research leads; light travel as needed.

What Success Looks Like
- Clear wins with top labs: pilots that convert to scaled programs with strong eval signals.
- Reusable alignment and eval building blocks that shorten time-to-value across accounts.
- Crisp internal docs (PRDs, experiment readouts, exec updates) that drive decisions quickly.

Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant. You'll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.

Please reference the job posting's subtitle for where this position will be located. For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, and Seattle is $190,000 - $230,000 USD.

PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants.

About Us
At Scale, our mission is to develop reliable AI systems for the world's most important decisions. Our products provide the high-quality data and full-stack technologies that power the world's leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact. We work closely with industry leaders like Meta, Cisco, DLA Piper, Mayo Clinic, Time Inc., the Government of Qatar, and U.S. government agencies including the Army and Air Force. We are expanding our team to accelerate the development of AI applications.

We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity, or Veteran status. We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information. We comply with the United States Department of Labor's Pay Transparency provision.

PLEASE NOTE: We collect, retain, and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants' needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.

Applied AI Engineer - Dubai

Snorkel AI
United States · Intern · Remote
About Snorkel At Snorkel, we believe meaningful AI doesn’t start with the model, it starts with the data. We’re on a mission to help enterprises transform expert knowledge into specialized AI at scale. The AI landscape has gone through incredible changes between 2015, when Snorkel started as a research project in the Stanford AI Lab, to the generative AI breakthroughs of today. But one thing has remained constant: the data you use to build AI is the key to achieving differentiation, high performance, and production-ready systems. We work with some of the world’s largest organizations to empower scientists, engineers, financial experts, product creators, journalists, and more to build custom AI with their data faster than ever before. Excited to help us redefine how AI is built? Apply to be the newest Snorkeler!About the Internship As a Research Intern at Snorkel AI, you’ll contribute to internal research and academic collaborations—helping explore and validate new ideas that may shape future publications, open-source artifacts, and long-term product directions. This is a research-first role, designed for interns who want to do deep technical work with real-world relevance. You’ll work closely with Snorkel researchers on open-ended projects and produce clear research outputs (experiments, prototypes, internal writeups, and potentially publications depending on project fit and timing). What You’ll Do Develop and evaluate new methods for data development for foundation models and enterprise AI systems (e.g., dataset construction, augmentation, synthetic data, and evaluation). Research supervision and evaluation techniques such as rubrics and verifiable rewards. Design experiments and run rigorous empirical studies (ablations, benchmarks, error analysis). Build lightweight research prototypes and tooling in Python to support internal studies. Collaborate with academic partners and internal research teams—reading papers, proposing hypotheses, and iterating quickly. 
Example Project Areas
Projects vary by mentor and collaboration needs, but may include:
Synthetic data generation + filtering for specialized tasks
Evaluation datasets and benchmarks for LLM / RAG / agent behavior
Data-centric methods for improving reliability, calibration, and failure-mode coverage
Evaluating HITL data annotation processes, gaps, and improvements

What We’re Looking For
Current PhD student in ML/AI/CS.
Strong ML fundamentals and demonstrated research ability (papers, preprints, or substantial research artifacts).
Excellent Python skills and experience with modern ML tooling (PyTorch, NumPy, etc.).
Ability to operate in ambiguous, research-driven problem spaces with strong experimentation habits.
Clear communication: can write concise technical notes and present findings.

Nice to Have
Prior work on evaluation, data curation, synthetic data, weak supervision, NLP, or multimodal ML.
Experience collaborating with academic labs or participating in research programs.

Internship Details
Duration: Summer (flexible start/end)
Location: Hybrid (Redwood City/SF) or Remote (US)
Compensation: Competitive, commensurate with experience

Why Snorkel Research
Work with a research org advancing data-centric AI and foundation-model development, in close partnership with labs and enterprises.
Join a company with deep research roots and a substantial body of peer-reviewed work in this space.
Get mentorship and ownership on projects that can influence long-term research direction.

Be Your Best at Snorkel
Joining Snorkel AI means becoming part of a company that has market-proven solutions, robust funding, and is scaling rapidly, offering a unique combination of stability and the excitement of high growth. As a member of our team, you’ll have meaningful opportunities to shape priorities and initiatives, influence key strategic decisions, and directly impact our ongoing success.
Whether you’re looking to deepen your technical expertise, explore leadership opportunities, or learn new skills across multiple functions, you’re fully supported in building your career in an environment designed for growth, learning, and shared success. Snorkel AI is proud to be an Equal Employment Opportunity employer and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. Snorkel AI embraces diversity and provides equal employment opportunities to all employees and applicants for employment. Snorkel AI prohibits discrimination and harassment of any type on the basis of race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local law. All employment is decided on the basis of qualifications, performance, merit, and business need. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Enterprise Account Executive - Italy
Glean Work
USD 130,000 - 200,000
Full-time
Remote
You’ll ship agentic capabilities on an open, extensible stack, with the craft and care required for enterprise trust, as we bring Work AI to every employee, in every company.

About the Role:
Glean is seeking a talented AI Outcomes Manager to join our rapidly expanding team. The AI Outcomes Manager will play a crucial role in transforming how every department works with the power of Glean. They will work closely with executives and end users, combining business acumen, product sense, and prompting skills to help them transform into an AI-native enterprise.

You will:
Partner with executive sponsors and end users to identify high-impact use cases and turn them into measurable business outcomes on Glean.
Lead strategic reviews and advise customers on their AI roadmap, ensuring they get the most value from Glean’s platform.
Translate business needs into clear problem statements, success metrics, and practical AI solutions; collaborate with Product and R&D to shape priorities.
Conduct discovery workshops, scope pilots, and guide rollouts, driving breadth and depth of adoption of the Glean platform.
Design and build AI agents with and for customers, including rethinking and redesigning underlying business processes to maximize impact and usability.
Proactively identify expansion opportunities and drive engagement across teams and functions.

About you:
5+ years of professional experience in roles that blend business and technology (e.g., product, analytics, data, engineering, solutions), with a consultative, customer-facing approach.
Strong problem-solving and communication skills; comfortable working with stakeholders from ICs to executives and tailoring messages to different audiences.
Demonstrated ability to craft effective prompts and guide AI agents for real customer or business workflows; you’ve shipped outcomes, not just demos.
Understanding of what current LLMs can and cannot do; able to set expectations and steer towards reliable, safe, cost-effective solutions.
Product sense and user empathy: you can spot high-value applications of AI across diverse job functions and design clear, guided experiences.
Hands-on experience with modern AI platforms and tools (e.g., OpenAI, Claude, Mistral, Cohere, or similar), with enough technical depth to work with engineers without needing to be on the critical path for writing production code.

Good to have:
Prior experience in customer-facing, consultative roles (solutions, support, product management) and comfort presenting to senior leaders.
Exposure to evaluating AI outcomes (e.g., defining success criteria, using sample tasks, reviewing results) and iterating for quality, latency, and cost.
Ability to analyze usage signals and customer feedback to inform roadmap and drive adoption.

Location: This role is remote in the U.S.

Compensation & Benefits: The standard OTE range for this position is $130,000-$200,000 annually. Compensation offered will be determined by factors such as location, level, job-related knowledge, skills, and experience. Certain roles may be eligible for variable compensation, equity, and benefits. We are a diverse bunch of people and we want to continue to attract and retain a diverse range of people into our organization. We’re committed to being an inclusive and diverse company. We do not discriminate based on gender, ethnicity, sexual orientation, religion, civil or family status, age, disability, or race. #LI-Remote
Manufacturing Engineer - Production
Helsing
Full-time
Remote
Who we are
Helsing is a defense AI company. Our mission is to protect our democracies. We aim to achieve technological leadership, so that open societies can continue to make sovereign decisions and control their ethical standards. As democracies, we believe we have a special responsibility to be thoughtful about the development and deployment of powerful technologies like AI. We take this responsibility seriously. We are an ambitious and committed team of engineers, AI specialists and customer-facing program managers. We are looking for mission-driven people to join our European teams, and to apply their skills to solve the most complex and impactful problems. We embrace an open and transparent culture that welcomes healthy debates on the use of technology in defense, its benefits, and its ethical implications.

The role
At Helsing we deliver AI-based capabilities and the enabling infrastructure that allow semi-autonomous platforms to localize, navigate, and perceive the world in real time. You will have the unique opportunity to shape the future of AI in one of the most challenging sectors, where performance needs to be paired with high generalization capabilities and strong robustness against adversarial attacks.

The day-to-day
You will develop ML/AI systems that leverage and extend the latest state-of-the-art methods and architectures, as well as design experiments and conduct benchmarks to evaluate and improve their performance in real-world scenarios. You will be a part of impactful projects and will collaborate with people across several teams and backgrounds to integrate cutting-edge ML/AI into our production systems.

You should apply if you
Hold an MSc in artificial intelligence, machine learning, computer science or a related field, with experience in conceptualizing, implementing, and thoroughly evaluating advanced AI-based systems.
Have experience with the complete machine learning pipeline, from collecting and annotating data, to evaluating model architectures, and iteratively training and tuning models to meet performance requirements.
Have excellent communication skills and the ability to report and present research findings clearly and efficiently, both internally and externally.
Possess solid software engineering skills, writing clean and well-structured code in Python and/or languages like Rust, Java, or modern C++, and experience deploying AI software to production, including testing, QA, and monitoring.
Have developed or implemented AI applications in defense and broadly understand defense-specific use cases.

Note: We encourage you to apply even if you don’t meet all the listed qualifications; ability and impact cannot be summarized in a few bullet points.

Nice to Have
Designed, developed, and evaluated state-of-the-art AI methods on edge devices with limited compute resources, and led their end-to-end product delivery
Experience with simulators, emulators, or synthetic data generators
A US security clearance at the Secret level or above

Join Helsing and work with world-leading experts in their fields
Helsing’s work is important. You’ll be directly contributing to the protection of democratic countries while balancing both ethical and geopolitical concerns.
The work is unique. We operate in a domain that has highly unusual technical requirements and constraints, and where robustness, safety, and ethical considerations are vital. You will face unique engineering and AI challenges that make a meaningful impact in the world.
Our work frequently takes us right up to the state of the art in technical innovation, be it reinforcement learning, distributed systems, generative AI, or deployment infrastructure. The defense industry is entering the most exciting phase of the technological development curve.
Advances in our field of work are not incremental: Helsing is part of, and often leading, historic leaps forward. In our domain, success is a matter of order-of-magnitude improvements and novel capabilities. This means we take bets, aim high, and focus on big opportunities. Despite being a relatively young company, Helsing has already been selected for multiple significant government contracts.
We actively encourage healthy, proactive, and diverse debate internally about what we do and how we choose to do it. Teams and individual engineers are trusted (and encouraged) to practice responsible autonomy and critical thinking, and to focus on outcomes, not conformity. At Helsing you will have a say in how we (and you!) work, the opportunity to engage on what does and doesn’t work, and to take ownership of aspects of our culture that you care deeply about.

What we offer
A focus on outcomes, not time-tracking
A generous compensation and benefits package (in addition to base salary) that includes, but may not be limited to, insurance coverage (medical and travel), flexible paid time off, paid holidays, and remote and/or hybrid work available depending on position. All compensation and benefits are subject to the terms and conditions of the underlying plans or programs, as applicable and as may be amended, terminated or superseded from time to time.

#LI-DNI

Helsing is an Equal Opportunity Employer. We will consider all qualified applicants without regard to race, color, sex, sexual orientation, gender identity, national origin, age, disability, protected veteran status, genetics, or any other characteristic protected by applicable federal, state, or local law.

Helsing's Candidate Privacy and Confidentiality Regime can be found here.
AI Engineer - San Mateo, CA
TrustLab
United States
Full-time
Remote
Who we are:
TrustLab deploys cutting-edge solutions to evaluate AI agents, models and apps for enterprise customers. With a 5-year track record working with large and small clients, including social media companies and digital marketplaces, and guided by founders who previously worked in senior leadership positions at Google, YouTube, TikTok, and Reddit, we are creating industry-leading LLM-based solutions for agentic system evaluation and labeling. Our approach includes human-in-the-loop and LLM-as-a-judge technologies, with a focus on rapid innovation and production-level scaling. You’ll join a small, mission-driven team where your contributions have a direct impact on real-world issues.

What you’ll do:
At TrustLab, your work won’t live in theory - it will power live systems used at large scale. You’ll work as part of a team that develops, tunes, and optimizes LLM-driven solutions that interpret and reason about complex digital content, while experimenting rapidly from design to deployment and seeing immediate feedback from real-world use cases. Partnering closely with other engineers, researchers, and product leaders, you’ll develop approaches to model training and evaluation, participating from early R&D through to production launches, and ensuring your work directly shapes how millions of people experience AI-powered content.

Key Responsibilities:
Train, evaluate, and monitor new and improved LLMs and other algorithmic models
Test and deploy content moderation models in production, and iterate based on real-world performance metrics and feedback loops.
Ensure results are delivered to customers, pushing for a change in approach where needed, and be proactive in cross-functional execution.

What we’re looking for:
Bachelor's or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
Proficiency in Python.
Experience with AWS and CI/CD processes & tools is a strong plus.
Experience with prompt-engineering techniques and familiarity with multiple LLM providers.
Industry experience in NLP / computer vision, or making LLMs work in production for non-trivial use cases.
Hands-on experience with debugging issues in production environments, especially on AWS.
Track record of delivering results under time and resource pressure.

Why Join Us?
Work with a group of renowned industry leaders in AI and Online Safety to shape the future of the industry.
Ample opportunity and support for growth.
Apply AI technology to real-world business use cases at a significant scale, with blue-chip customers.
Work as part of a team where you can know everyone, but don’t have to do everyone’s job.
Competitive compensation, comprehensive benefits, and a hybrid in-office policy.
Senior AI Engineer - San Mateo, CA
TrustLab
Remote
Who we are:
TrustLab deploys cutting-edge solutions to evaluate AI agents, models and apps for enterprise customers. With a 5-year track record working with large and small clients, including social media companies and digital marketplaces, and guided by founders who previously worked in senior leadership positions at Google, YouTube, TikTok, and Reddit, we are creating industry-leading LLM-based solutions for agentic system evaluation and labeling. Our approach includes human-in-the-loop and LLM-as-a-judge technologies, with a focus on rapid innovation and production-level scaling. You’ll join a small, mission-driven team where your contributions have a direct impact on real-world issues.

What you’ll do:
At TrustLab, your work won’t live in theory - it will power live systems used at large scale. You’ll develop, tune, and optimize LLM-driven solutions that interpret and reason about complex digital content, while experimenting rapidly from design to deployment and seeing immediate feedback from real-world use cases. Partnering closely with other engineers, researchers, and product leaders, you’ll pioneer new approaches to model training and evaluation, taking ownership from early R&D through to production launches, and ensuring your work directly shapes how millions of people experience AI-powered content.

Key Responsibilities:
Train, evaluate, and monitor new and improved LLMs and other algorithmic models
Test and deploy content moderation models in production, and iterate based on real-world performance metrics and feedback loops.
Develop a medium- to long-term vision for content understanding-related R&D, working with management, product, policy & operations, and engineering teams.
Take ownership of results delivered to customers, pushing for a change in approach where needed and taking the lead on cross-functional execution.

What we’re looking for:
Bachelor's or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Ph.D. is a plus.
Proficiency in Python.
Experience with AWS and CI/CD processes & tools is a strong plus.
Experience with prompt-engineering techniques and familiarity with multiple LLM providers.
Several years of industry experience in NLP / computer vision, or making LLMs work in production for non-trivial use cases, including familiarity with evaluation metrics for classification tasks and best practices for handling imbalanced datasets.
Hands-on experience with debugging issues in production environments, especially on AWS.
Strong track record of delivering results under time and resource pressure.

Why Join Us?
Work with a group of renowned industry leaders in AI and Online Safety to shape the future of the industry.
Ample opportunity and support for growth, as a technical individual contributor or manager.
Apply AI technology to real-world business use cases at a significant scale, with blue-chip customers.
Work as part of a team where you can know everyone, but don’t have to do everyone’s job.
Competitive compensation, comprehensive benefits, and a hybrid in-office policy.
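On the evaluation-metrics point: for imbalanced moderation data, accuracy is misleading, so per-class precision, recall, and F1 are the usual yardsticks. A minimal self-contained computation (standard formulas, not TrustLab-specific code):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Per-class metrics for a binary classifier; positive = minority class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# With a 1-in-10 positive rate, predicting "all negative" scores 90%
# accuracy but zero recall, which is why accuracy alone is not reported.
y_true = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [0] * 10
print(precision_recall_f1(y_true, y_pred))  # (0.0, 0.0, 0.0)
```

Libraries like scikit-learn provide the same metrics (`sklearn.metrics.precision_recall_fscore_support`); the point is that the positive class, not overall accuracy, drives the evaluation.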
Software Engineer, Full Stack (Knowledge Innovation)
OpenAI
USD 255,000 - 405,000
United States
Full-time
Remote
About the Team
The Knowledge Innovation team is scaling OpenAI with OpenAI. We are building an AI-powered knowledge system that evolves and learns as our products, systems and customers evolve. We leverage our state-of-the-art models, technologies, and products (some external, some still in the lab) to assist or completely automate robust operations supporting both internal and external customers. We support OpenAI customers and internal partners globally, powering systems from customer support to integrity to product insights. We are a self-contained multi-disciplinary team who enjoy a lightning-fast feedback loop with customers at scale, some of whom sit just a few pods away. We iterate fast, and engineer for reliable long-term impact. We're constantly looking for the similarities and patterns in different types of work, and focus on building simple primitives, to apply world-class knowledge to many domains. The work of this team exemplifies use of OpenAI technologies. We build systems so everyone can see the leverage that is possible with well-designed AI-based implementations. We do this by working through internal use cases focused on Customers (specifically knowledge systems, automation systems, and automated agent systems) to prove impact, then we scale.

About the Role
We’re looking for Full Stack Engineers who are passionate about blending production-ready platform architecture with new tech and new paradigms. You’ll push the boundaries of OpenAI’s newest technologies to enable interactions and automations that are not only functional, but delightful. We value proactive, customer-centric engineers who can get the foundational details right (data models, architecture, security) in service of enabling great products.
In this role, you will:
Own the end-to-end development lifecycle for new platform capabilities and integrations with other systems
Collaborate closely with engineers, data scientists, information systems architects, and internal customers to understand their problems and implement effective solutions
Work with product and research teams to share relevant feedback and iterate on applying their latest models

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Machine Learning Engineer: ML Infra and Model Optimization
Genies
USD 40 - 50 per hour
United States
Intern
Remote
Genies is an avatar technology company powering the next era of interactive digital identity through AI companions. With the Avatar Framework and intuitive creation tools, Genies enables developers, talent, and creators to generate and deploy game-ready AI companions. The company’s technology stack supports full customization, AI-generated fashion and props, and seamless integration of user-generated content (UGC). Backed by investors including Bob Iger, Silver Lake, BOND, and NEA, Genies’ mission is to become the visual and interactive layer for the LLM-powered internet.

About the opportunity
We are looking for a Backend Software Engineer Intern (LLM) to join our AI Engineering Team based in San Francisco, CA or Los Angeles, CA. The team is responsible for developing the backend infrastructure powering the Genies Avatar AI framework. You will contribute to the next generation of AI 3D avatar entertainment experiences, and be involved with designing, coding, and testing software according to the requirements and system plans. You will be expected to collaborate with senior engineers and other team members to develop software solutions, troubleshoot issues, and maintain the quality of our software. You will also be responsible for documenting your work for future reference and improvement. Our internship program has a minimum duration of 12 weeks.

Key Responsibilities
Develop and deploy LLM agent systems within our AI-powered avatar framework.
Design and implement scalable and efficient backend systems to support AI applications
Collaborate with AI and NLP experts to integrate LLMs and LLM-based systems and algorithms into our avatar ecosystem.
Work with Docker, Kubernetes, and AWS for AI model deployment and scalability
Contribute to code reviews, debugging, and testing to ensure high-quality deliverables.
Minimum Qualifications
Currently pursuing, or a recent graduate of, a Master's or Bachelor's degree in Computer Science, Engineering, Machine Learning, or a related field.
Course or internship experience related to the following areas: Operating Systems, Data Structures & Algorithms, Machine Learning
Strong programming skills in Python, Java, or C++
Excellent written and verbal communication skills
Basic understanding of AI/LLM concepts and enthusiasm for learning advanced techniques.

Preferred Qualifications
Experience in building ML/LLM-powered software systems.
Previous Computer Science/Software Engineering internship experience
Solid understanding of LLM agents, retrieval-augmented generation (RAG), and prompt engineering.
Experience with AWS, Docker and Kubernetes
Experience with CI/CD pipelines
Experience with API design, schema design

Here's why you'll love working at Genies:
Salary $40-$50 per hour.
You'll work with a team that you’ll be able to learn from and grow with, including support for your own professional development
You'll be at the helm of your own career, shaping it with your own innovative contributions to a nascent team and product, with flexible hours and a hybrid (office + home) policy
You'll enjoy the culture and perks of a startup, with the stability of being well funded
Flexible paid time off, sick time, and paid company holidays, in addition to paid parental leave, bereavement leave, and jury duty leave for full-time employees
Health & wellness support through programs such as monthly wellness reimbursement
Choice of MacBook or Windows laptop

Genies is an equal opportunity employer committed to promoting an inclusive work environment free of discrimination and harassment. We value diversity, inclusion, and aim to provide a sense of belonging for everyone.
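For context on the RAG item in the preferred qualifications: retrieval-augmented generation retrieves the documents most relevant to a query and prepends them to the model prompt, so answers are grounded in that context. A toy sketch using word-overlap scoring (production systems use embedding similarity; the documents and function names here are illustrative):

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by word overlap with the query and keep the top k."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context so the model answers from it."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Avatars can equip AI-generated fashion items.",
    "The framework exposes a REST backend.",
    "Kubernetes handles model deployment.",
]
print(retrieve("how are avatars given fashion items", docs, k=1))
```

The resulting prompt string is what would be sent to the LLM; swapping the overlap score for vector similarity turns this into a standard RAG retriever.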
Machine Learning Engineer, Data
Cartesia
USD 180,000 - 250,000
United States
Full-time
Remote
About Cartesia
Our mission is to build the next generation of AI: ubiquitous, interactive intelligence that runs wherever you are. Today, not even the best models can continuously process and reason over a year-long stream of audio, video and text (1B text tokens, 10B audio tokens and 1T video tokens), let alone do this on-device.

We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training efficient, large-scale foundation models. Our team combines deep expertise in model innovation and systems engineering, paired with a design-minded product engineering team, to build and ship cutting-edge models and experiences.

We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks and others. We're fortunate to have the support of many amazing advisors, and 90+ angels across many industries, including the world's foremost experts in AI.

About The Role
To build truly global AI, our models must be trained on data that reflects the world's diversity of languages and cultures. We are searching for a Machine Learning Engineer to own the quality and coverage of the data behind our models. You will be our in-house expert on global data, ensuring our models perform exceptionally well across dozens of languages.
You have a keen eye for linguistic nuance, and a passion for building inclusive and representative datasets at scale.

Your Impact
Design and build large-scale datasets for model training.
Build evaluations of speech models, both via manual annotation and at scale with automated metrics.
Implement techniques for steering data generation to improve model intelligence through data and mitigate bias.
Build automated quality control systems to validate and filter generated data.
Partner with product teams to ensure support for key languages and markets.

What You Bring
Experience building or working with large multilingual datasets.
Experience with generative models (speech, text, or multimodal).
Ability to help guide human annotation and evaluation across multiple languages.
Strong applied ML background with a focus on data-centric approaches.
Excitement for building scalable systems that bridge research and production.

What We Offer
🍽 Lunch, dinner and snacks at the office.
🏥 Fully covered medical, dental, and vision insurance for employees.
🏦 401(k).
✈️ Relocation and immigration support.
🦖 Your own personal Yoshi.

Our Culture
🏢 We’re an in-person team based out of San Francisco. We love being in the office, hanging out together, and learning from each other every day.
🚢 We ship fast. All of our work is novel and cutting edge, and execution speed is paramount. We have a high bar, and we don’t sacrifice quality or design along the way.
🤝 We support each other. We have an open & inclusive culture that’s focused on giving everyone the resources they need to succeed.
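The automated quality-control step described in this role can start as simple per-sample validation before any model sees the data. A hypothetical sketch (the field names, duration thresholds, and language whitelist are all invented for illustration, not Cartesia's pipeline):

```python
SUPPORTED_LANGS = {"en", "es", "ja", "hi"}  # assumed market list

def passes_qc(sample: dict) -> bool:
    """Keep a clip only if its duration is sane, a transcript exists,
    and the declared language is one we actually support."""
    return (
        0.5 <= sample["duration_s"] <= 30.0
        and bool(sample["transcript"].strip())
        and sample["lang"] in SUPPORTED_LANGS
    )

samples = [
    {"duration_s": 3.2, "transcript": "hola mundo", "lang": "es"},
    {"duration_s": 0.1, "transcript": "too short", "lang": "en"},
    {"duration_s": 5.0, "transcript": "  ", "lang": "ja"},
]
clean = [s for s in samples if passes_qc(s)]
print(len(clean))  # 1
```

Real pipelines layer on model-based checks (language ID, transcript-audio alignment, toxicity filters), but a deterministic gate like this catches the cheap failures first.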
AI Engineer (Partnerships)
Firecrawl · United States · Full-time · Remote: No · USD 130,000-180,000
Salary Range: $130,000-$180,000/year (Range shown is for U.S.-based employees in San Francisco, CA. Compensation outside the U.S. is adjusted fairly based on your country's cost of living.)
Equity Range: Up to 0.10%
Location: San Francisco, CA (Hybrid) OR Remote
Job Type: Full-Time (SF) OR Contract (Remote)
Experience: 2+ years

About Firecrawl
Firecrawl is the easiest way to extract data from the web. Developers use us to reliably convert URLs into LLM-ready markdown or structured data with a single API call. In just a year, we've hit millions in ARR and 70k+ GitHub stars by building the fastest way for developers to get LLM-ready data. We're a small, fast-moving, technical team building essential infrastructure for the AI era. We value autonomy, clarity, and shipping fast.

About the Role
We're looking for an AI Engineer to own the technical side of our partnerships motion. Your mission: make Firecrawl the default web data API that AI agents and tools reach for. You'll work directly with emerging AI-native companies, writing prompts, building evals, and ensuring Firecrawl integrations just work.

What You'll Do
- Craft and iterate on prompts that help AI agents reliably choose and use Firecrawl for web data tasks
- Build evaluation frameworks to test prompts across different models, use cases, and edge cases, then iterate relentlessly based on results
- Be the technical partner contact in Slack channels, helping partners implement Firecrawl into their products and troubleshoot issues in real-time
- Test obsessively: new models drop, agent architectures evolve, and you're on top of how Firecrawl performs across all of them
- Create integration guides and templates that make it dead simple for partners to ship Firecrawl-powered features
- Identify new partnership opportunities by understanding how AI tools are using web data and where Firecrawl fits
- Collaborate with Product and Engineering to surface partner feedback and shape the roadmap

Who You Are
- 2+ years working with LLMs: you've written production prompts, understand model quirks, and know what makes agents tick
- You ship code. Python, TypeScript, whatever: you can build evals, write scripts, and prototype integrations quickly
- You're a clear communicator who can help non-technical partners implement technical solutions
- You thrive in ambiguity: partnerships are messy, timelines shift, and you figure it out
- You're responsive and reliable: when a partner pings in Slack, you're on it
- Bonus: You've worked at an AI-native company or have experience with agent frameworks (LangChain, CrewAI, OpenAI Agents SDK, etc.)
- Bonus: You've done developer relations, solutions engineering, or technical partnerships before

Benefits & Perks
Available to all employees:
- Salary that makes sense: $130,000-180,000/year OTE (U.S.-based), based on impact, not tenure
- Own a piece: up to 0.10% equity in what you're helping build
- Generous PTO: 15 days mandatory; anything after 24 days, just ask (holidays excluded). Take the time you need to recharge
- Parental leave: 12 weeks fully paid, for moms and dads
- Wellness stipend: $100/month for the gym, therapy, massages, or whatever keeps you human
- Learning & development: expense up to $150/year toward anything that helps you grow professionally
- Team offsites: a change of scenery, minus the trust falls
- Sabbatical: 3 paid months off after 4 years; do something fun and new

Available to US-based full-time employees:
- Full coverage, no red tape: medical, dental, and vision (100% for employees, 50% for spouse/kids); no weird loopholes, just care that works
- Life & disability insurance: employer-paid short-term disability, long-term disability, and life insurance; coverage for life's curveballs
- Supplemental options: optional accident, critical illness, hospital indemnity, and voluntary life insurance for extra peace of mind
- Doctegrity telehealth: talk to a doctor from your couch
- 401(k) plan: retirement might be a ways off, but future-you will thank you
- Pre-tax benefits: access to FSAs and commuter benefits (US-only) to help your wallet out a bit
- Pet insurance: because fur babies are family too

Available to SF-based employees:
- SF HQ perks: snacks, drinks, team lunches, intense ping pong, and peak startup energy
- E-bike transportation: a loaner electric bike to get you around the city, on us

Interview Process
1. Application review
2. Intro chat (~25 min)
3. Technical deep dive (~45 min)
4. Paid work trial (1-2 weeks)
5. Decision

If you're an AI engineer who lives in Slack, obsesses over prompt quality, and wants to make Firecrawl the infrastructure layer for AI agents everywhere, let's talk.
Forward deployed engineer
Writer · United States · Full-time · Remote: No · USD 195,900-272,000
Location: NYC. This role requires being onsite at the client's office 4-5 days per week.

🚀 About WRITER
WRITER is where the world's leading enterprises orchestrate AI-powered work. Our vision is to expand human capacity through superintelligence. And we're proving it's possible: through powerful, trustworthy AI that unites IT and business teams to unlock enterprise-wide transformation. With WRITER's end-to-end platform, hundreds of companies like Mars, Marriott, Uber, and Vanguard are building and deploying AI agents that are grounded in their company's data and fueled by WRITER's enterprise-grade LLMs. Valued at $1.9B and backed by industry-leading investors including Premji Invest, Radical Ventures, and ICONIQ Growth, WRITER is rapidly cementing its position as the leader in enterprise generative AI.

Founded in 2020 with office hubs in San Francisco, New York City, Austin, Chicago, and London, our team thinks big and moves fast, and we're looking for smart, hardworking builders and scalers to join us on our journey to create a better future of work with AI.

📐 About the role
WRITER is looking for a pioneering forward deployed engineer with strong software engineering and AI expertise to be embedded onsite with a strategic enterprise client as their dedicated technical partner. At WRITER, we believe in using the power of AI to unlock the potential of the enterprise. With the help of our forward deployed engineers, we ensure our most complex enterprise clients successfully deploy and maintain the WRITER AI platform as-is, with the standard release configurations, and obtain any internal client-specific security and compliance approvals without compromising standard features.

As a forward deployed engineer at WRITER, you'll be embedded onsite at the client location in NYC, working shoulder-to-shoulder with their engineering, AI Center of Excellence (COE), and business teams on a daily basis. This is a full-time role that requires being present at the client's office 4-5 days per week. You'll serve as the technical bridge between WRITER and the client, owning the client-specific release workflow from sandbox deployment through production. Your daily collaboration with client teams will ensure seamless integration, rapid issue resolution, and successful adoption of WRITER's AI capabilities across the organization. You will report to the head of customer solutions engineering.

This is a unique opportunity to be the face of WRITER's technical excellence, building deep relationships with client stakeholders while solving complex AI deployment challenges in a fast-paced enterprise environment. If you're passionate about enterprise AI deployment, thrive in client-facing roles, and love being embedded in complex technical environments, then we want to hear from you.

🦸🏻‍♀️ What you'll do
- Serve as WRITER's embedded technical AI specialist onsite, building strong daily relationships with client engineering, AI COE, and business teams, and acting as the primary point of contact for all WRITER-related technical matters
- Own the end-to-end release workflow from WRITER platform updates through production deployment across all client environments, including coordinating dual-track updates, making critical go/no-go decisions, and providing hands-on troubleshooting
- Design, implement, and maintain the client's custom front-end setup, production retrieval, and agentic systems, while rigorously testing, validating, and troubleshooting AI-specific features like LLM applications, RAG performance, and prompt engineering
- Navigate client security, risk, and compliance approval processes, working with Information Security teams to obtain necessary approvals, documenting compliance artifacts, and streamlining workflows to meet regulatory requirements
- Reduce operational pain points, contribute to long-term migration strategies for clients toward standard WRITER-managed environments, and gather real-world deployment feedback to strategically influence WRITER's product direction

⭐️ What you'll need
- 7+ years of professional experience in software engineering, with at least 5 years specifically in AI/ML development and 2+ years in highly regulated, customer-facing roles at enterprise scale
- Expert-level proficiency in Python and modern JavaScript (ES6+), production software development, systems design, and modern front-end frameworks (React, Vue.js, Angular)
- Extensive hands-on experience deploying and optimizing LLM-based applications and agentic systems in production environments, with deep knowledge of RAG implementation, CI/CD practices for AI, and evaluating AI model performance
- Proven track record of managing complex multi-environment release pipelines, coordinating synchronization between distributed systems, and expertly navigating enterprise approval workflows and governance in Fortune 500 or large financial services clients
- Demonstrated ability to Connect effectively with diverse client stakeholders through onsite embedding, Challenge existing technical paradigms to drive innovation, and Own problem-solving from discovery to production impact, exhibiting strong analytical, communication, and presentation skills suitable for executive engagement

🍩 Benefits & perks (US full-time employees)
- Generous PTO, plus company holidays
- Medical, dental, and vision coverage for you and your family
- Paid parental leave for all parents (12 weeks)
- Fertility and family planning support
- Early-detection cancer testing through Galleri
- Flexible spending account and dependent FSA options
- Health savings account for eligible plans with company contribution
- Annual work-life stipends for:
  - Wellness (gym, massage/chiropractor, personal training, etc.)
  - Learning and development
- Company-wide off-sites and team off-sites
- Competitive compensation, company stock options and 401k

WRITER is an equal-opportunity employer and is committed to diversity. We don't make hiring or employment decisions based on race, color, religion, creed, gender, national origin, age, disability, veteran status, marital status, pregnancy, sex, gender expression or identity, sexual orientation, citizenship, or any other basis protected by applicable local, state or federal law. Under the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

By submitting your application on the application page, you acknowledge and agree to WRITER's Global Candidate Privacy Notice.
Senior Financial Analyst, GTM
Grammarly · Full-time · Remote: No
Superhuman offers a dynamic hybrid working model for this role. This flexible approach gives team members the best of both worlds: plenty of focus time along with in-person collaboration that helps foster trust, innovation, and a strong team culture.

About Superhuman
Grammarly is now part of Superhuman, the AI productivity platform on a mission to unlock the superhuman potential in everyone. The Superhuman suite of apps and agents brings AI wherever people work, integrating with over 1 million applications and websites. The company's products include Grammarly's writing assistance, Coda's collaborative workspaces, Mail's inbox management, and Go, the proactive AI assistant that understands context and delivers help automatically. Founded in 2009, Superhuman empowers over 40 million people, 50,000 organizations, and 3,000 educational institutions worldwide to eliminate busywork and focus on what matters. Learn more at superhuman.com and about our values here.

The Opportunity
To achieve our ambitious goals, we're looking for an Applied Research Scientist with a keen interest and background in natural language processing (NLP), machine learning (ML), and deep learning (DL) to join our Agents team. The team builds user-facing features that work in all Superhuman applications. We help our users write better, faster, and more effectively while working in big groups. The person in this role will bring their unique perspective to our product development process by contributing to ideation and defining the solution space. Based on their deep grasp of ML/DL/NLP concepts, the Applied Research Scientist will define the approach to solving complex problems and build the intelligent functionality of our product offerings. Superhuman's engineers and researchers have the freedom to innovate and uncover breakthroughs—and, in turn, influence our product roadmap. The complexity of our technical challenges is growing rapidly as we scale our interfaces, algorithms, and infrastructure. Read more about our stack or hear from our team on our technical blog.

Your Impact
As an Applied Research Scientist, you will have the opportunity to harness your passion for building exciting new product offerings that will impact millions of lives. To do this, you'll need to stay up-to-date on the quickly evolving field of NLP while also focusing on building production systems. In this role, you will:
- Develop state-of-the-art tools for correcting, improving, and enhancing written English using various NLP, ML, and DL technologies.
- Productize and ship these features into Superhuman's product offerings, which millions of users use daily.
- Stay up-to-date with the latest research trends that could improve our product.
- Contribute to the research strategy and technical culture of the company.
- Attract professionals in the industry to build a best-in-class research team that creates a state-of-the-art writing and communication assistant.

Qualifications
- Strong theoretical foundation in ML/DL algorithms and methodologies.
- Minimum 6–7 years of industry experience researching and developing ML/DL technologies for real-world products.
- Additional 4–5 years of research experience in an academic environment (PhD or equivalent research background preferred).
- Deep experience in Natural Language Processing (NLP), Generative AI, and Large Language Models (LLMs).
- Proven track record of applying ML/DL algorithms effectively to NLP tasks.
- Strong research mentality with a math-oriented mindset and the ability to innovate in cutting-edge areas.
- Proficiency in at least one modern programming language (preferably Python).
- Demonstrated ability to translate research into scalable, production-ready solutions.
- Excellent command of spoken and written English; knowledge of additional languages is a plus.
- Demonstrated ability to work independently with minimal guidance, proactively manage tasks and priorities across multiple projects, analyze and execute work efficiently, collaborate effectively with cross-functional teams, and thrive in fast-paced, results-driven environments.

Support for you, professionally and personally
- Professional growth: We believe that autonomy and trust are key to empowering our team members to do their best, most innovative work in a way that aligns with their interests, talents, and well-being. We also support professional development and advancement with training, coaching, and regular feedback.
- A connected team: Superhuman builds a product that helps people connect, and we apply this mindset to our own team. Our remote-first hybrid model enables a highly collaborative culture. We work to foster belonging among team members in a variety of ways. This includes our employee resource groups, Superhuman Circles, which promote connection among those with shared identities including BIPOC and LGBTQIA+ team members, women, and parents. We also celebrate our colleagues and accomplishments with global, local, and team-specific programs.
- Comprehensive benefits for candidates based in Germany: Superhuman offers all team members competitive pay along with a benefits package encompassing life care (including mental health care and risk benefits) and ample and defined time off. We also offer support to set up a home office, wellness and pet care stipends, learning and development opportunities, and more.
- Relocation support: Superhuman provides comprehensive relocation support to make your move to Berlin seamless. Our package includes visa assistance, destination services to help you and your family settle in comfortably, and a relocation bonus to cover additional expenses, such as temporary housing.

We encourage you to apply
At Superhuman, we value our differences, and we encourage all to apply. Superhuman is an equal-opportunity company. We do not discriminate on the basis of race or ethnic origin, religion or belief, gender, disability, sexual identity, or age. For more details about the personal data Superhuman collects during the recruitment process, for what purposes, and how you can address your rights, please see the Superhuman Data Privacy Notice for Candidates here.

#LI-Hybrid
Machine Learning Engineering Manager, Recommendations
Suno · United States · Full-time · Remote: No · USD 280,000-350,000
About Suno
Suno is a music company built to amplify imagination. Powered by the world's most advanced AI music model, Suno offers an unparalleled creative platform that includes Suno Studio, a breakthrough generative audio workstation. From shower-singers to aspiring songwriters to seasoned artists, Suno empowers a global community to create, share, and discover music, unlocking the joy of musical expression for all.

About the Role
We're looking for someone to lead recommendations at Suno. You'll be instrumental in building Suno's music discovery and recommendation systems. You'll help define how millions of users discover, create, and engage with music on our platform by shaping both the systems and the team that makes it happen.

This role is for someone who has deep experience with recommendation systems at scale and is energized about building a new and better one. You're excited to apply and adapt your expertise in a new context and grow an excellent team to get there. Check out the Suno version of this role here!

What You'll Do
- Shape Suno's recommendation vision, strategy, and technical direction
- Partner with leaders across product, engineering, and research to decide how recommendations evolve with our platform
- Lead the building of a full recommendations system from the ground up, from prototyping and evaluating approaches to experimentation to deploying at scale
- Build and grow a recommendations team

What You'll Need
- 5+ years building recommendation systems at scale, with at least 2+ years leading teams and owning the development of recommendation systems in production
- Deep technical expertise in what's cutting edge and the ability to get there through practical, iterative steps
- Strong collaboration skills and the ability to work with leaders across the company to influence direction
- Passion for what Suno is building and excitement about defining the future of music discovery

Additional Notes
- Applicants must be eligible to work in the US.
- This is an onsite role in our SF office.

Perks & Benefits for Full-Time Employees
- Generous company equity package
- 401(k) with 3% employer match & Roth 401(k)
- Unlimited PTO & sick time
- Medical, dental, & vision insurance (PPO with HSA & FSA options)
- Continued / creative education stipend
- Generous commuter allowance
- In-office lunch (5 days per week)

Suno is proud to be an Equal Opportunity Employer. We consider qualified applicants without regard to race, color, ancestry, religion, sex, national origin, sexual orientation, gender identity, age, marital or family status, disability, genetic information, veteran status, or any other legally protected basis under provincial, federal, state, and local laws, regulations, or ordinances. We will also consider qualified applicants with criminal histories in a manner consistent with the requirements of state and local laws, including the Massachusetts Fair Chance in Employment Act, NYC Fair Chance Act, LA City Fair Chance Ordinance, and San Francisco Fair Chance Ordinance.
Member of Technical Staff, Senior/Staff MLE
Cohere · United States · Full-time · Remote: No
Who are we?
Our mission is to scale intelligence to serve humanity. We're training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.

We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what's best for our customers.

Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.

Join us on our mission and shape the future!

Why This Role Is Different
This is not a typical "Applied Scientist" or "ML Engineer" role. As a Member of Technical Staff, Applied ML, you will:
- Work directly with enterprise customers on problems that push LLMs to their limits. You'll rapidly understand customer domains, design custom LLM solutions, and deliver production-ready models that solve high-value, real-world problems.
- Train and customize frontier models — not just use APIs. You'll leverage Cohere's full stack: CPT, post-training, retrieval + agent integrations, model evaluations, and SOTA modeling techniques.
- Influence the capabilities of Cohere's foundation models. Techniques, datasets, evaluations, and insights you develop for customers will directly shape the next generation of Cohere's frontier models.
- Operate with an early-startup level of ownership inside a frontier-model company. This role combines the breadth of an early-stage CTO with the infrastructure and scale of a deep-learning lab.
- Wear multiple hats, set a high technical bar, and define what Applied ML at Cohere becomes.

Few roles in the industry combine application, research, customer-facing engineering, and core-model influence as directly as this one.

What You'll Do
Technical Leadership & Solution Design
- Lead the design and delivery of custom LLM solutions for enterprise customers.
- Translate ambiguous business problems into well-framed ML problems with clear success criteria and evaluation methodologies.

Modeling, Customization & Foundations Contribution
- Build custom models using Cohere's foundation model stack, CPT recipes, post-training pipelines (including RLVR), and data assets.
- Develop SOTA modeling techniques that directly enhance model performance for customer use-cases.
- Contribute improvements back to the foundation-model stack — including new capabilities, tuning strategies, and evaluation frameworks.

Customer-Facing Technical Impact
- Work closely with enterprise customers to identify high-value opportunities where LLMs can unlock transformative impact.
- Provide technical leadership across discovery, scoping, modeling, deployment, agent workflows, and post-deployment iteration.
- Establish evaluation frameworks and success metrics for custom modeling engagements.

Team Mentorship & Organizational Impact
- Mentor engineers across distributed teams.
- Drive clarity in ambiguous situations, build alignment, and raise engineering and modeling quality across the organization.

You May Be a Good Fit If You Have:
Technical Foundations
- Strong ML fundamentals and the ability to frame complex, ambiguous problems as ML solutions.
- Fluency with Python and core ML/LLM frameworks.
- Experience working with large-scale datasets and distributed training or inference pipelines.
- Understanding of LLM architectures, tuning techniques (CPT, post-training), and evaluation methodologies.
- Demonstrated ability to meaningfully shape LLM performance.

Experience & Leadership
- Experience engaging directly with customers or stakeholders to design and deliver ML-powered solutions.
- A track record of technical leadership at a team level.
- A broad view of the ML research landscape and a desire to push the state of the art.

Mindset
- Bias toward action, high ownership, and comfort with ambiguity.
- Humility and strong collaboration instincts.
- A deep conviction that AI should meaningfully empower people and organizations.

Join Us
This is a pivotal moment in Cohere's history. As an MTS in Applied ML, you will define not only what we build — but how the world experiences AI. If you're excited about building custom models, solving generational problems for global organizations, and shaping frontier-model capabilities, we'd love to meet you.

If some of the above doesn't line up perfectly with your experience, we still encourage you to apply! We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.

Full-Time Employees at Cohere enjoy these Perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work closely with a team on the cutting edge of AI research
🍽 Weekly lunch stipend, in-office lunches & snacks
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% Parental Leave top-up for up to 6 months
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, offices in Toronto, New York, San Francisco, London and Paris, as well as a co-working stipend
✈️ 6 weeks of vacation (30 working days!)
Member of Technical Staff, MLE
Cohere · United States · Full-time · Remote: No
Who are we?
Our mission is to scale intelligence to serve humanity. We're training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.

We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what's best for our customers.

Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.

Join us on our mission and shape the future!

Why This Role Is Different
This is not a typical "Applied Scientist" or "ML Engineer" role. As a Member of Technical Staff, Applied ML, you will:
- Work directly with enterprise customers on problems that push LLMs to their limits. You'll rapidly understand customer domains, design custom LLM solutions, and deliver production-ready models that solve high-value, real-world problems.
- Train and customize frontier models — not just use APIs. You'll leverage Cohere's full stack: CPT, post-training, retrieval + agent integrations, model evaluations, and SOTA modeling techniques.
- Influence the capabilities of Cohere's foundation models. Techniques, datasets, evaluations, and insights you develop for customers will directly shape the next generation of Cohere's frontier models.
- Operate with an early-startup level of ownership inside a frontier-model company. This role combines the breadth of an early-stage CTO with the infrastructure and scale of a deep-learning lab.
- Wear multiple hats, set a high technical bar, and define what Applied ML at Cohere becomes.

Few roles in the industry combine application, research, customer-facing engineering, and core-model influence as directly as this one.

What You'll Do
Technical Leadership & Solution Design
- Contribute to the design and delivery of custom LLM solutions for enterprise customers.
- Translate ambiguous business problems into well-framed ML problems with clear success criteria and evaluation methodologies.

Modeling, Customization & Foundations Contribution
- Build custom models using Cohere's foundation model stack, CPT recipes, post-training pipelines (including RLVR), and data assets.
- Develop SOTA modeling techniques that directly enhance model performance for customer use-cases.
- Contribute improvements back to the foundation-model stack — including new capabilities, tuning strategies, and evaluation frameworks.

Customer-Facing Technical Impact
- Work as part of Cohere's customer-facing MLE team to identify high-value opportunities where LLMs can unlock transformative impact for our enterprise customers.

You May Be a Good Fit If You Have:
Technical Foundations
- Strong ML fundamentals and the ability to frame complex, ambiguous problems as ML solutions.
- Fluency with Python and core ML/LLM frameworks.
- Experience working with (or the ability to learn) large-scale datasets and distributed training or inference pipelines.
- Understanding of LLM architectures, tuning techniques (CPT, post-training), and evaluation methodologies.
- Demonstrated ability to meaningfully shape LLM performance.

Experience & Leadership
- A broad view of the ML research landscape and a desire to push the state of the art.

Mindset
- Bias toward action, high ownership, and comfort with ambiguity.
- Humility and strong collaboration instincts.
- A deep conviction that AI should meaningfully empower people and organizations.

Join Us
This is a pivotal moment in Cohere's history. As an MTS in Applied ML, you will define not only what we build — but how the world experiences AI. If you're excited about building custom models, solving generational problems for global organizations, and shaping frontier-model capabilities, we'd love to meet you.

If some of the above doesn't line up perfectly with your experience, we still encourage you to apply! We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.

Full-Time Employees at Cohere enjoy these Perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work closely with a team on the cutting edge of AI research
🍽 Weekly lunch stipend, in-office lunches & snacks
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% Parental Leave top-up for up to 6 months
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, offices in Toronto, New York, San Francisco, London and Paris, as well as a co-working stipend
✈️ 6 weeks of vacation (30 working days!)