AI Data Science & Analytics Jobs
Latest roles in AI Data Science & Analytics, reviewed by real humans for quality and clarity.
Showing 61 – 79 of 79 jobs
Software Engineer, macOS Core Product - Sofia, Bulgaria
Speechify
101–200 employees
USD 140,000 – 200,000 per year
Full-time
Remote
The mission of Speechify is to make sure that reading is never a barrier to learning.
Over 50 million people use Speechify’s text-to-speech products to turn whatever they’re reading – PDFs, books, Google Docs, news articles, websites – into audio, so they can read faster, read more, and remember more. Speechify’s text-to-speech reading products include its iOS app, Android App, Mac App, Chrome Extension, and Web App. Google recently named Speechify the Chrome Extension of the Year and Apple named Speechify its App of the Day.
Today, nearly 200 people around the globe work on Speechify in a 100% distributed setting – Speechify has no office. The team includes frontend and backend engineers and AI research scientists who have come from Amazon, Microsoft, and Google, from leading PhD programs such as Stanford’s, and from high-growth startups like Stripe, Vercel, and Bolt, as well as many who have founded companies of their own.
This is a key role, ideal for someone who thinks strategically, enjoys fast-paced environments, is passionate about making product decisions, and has experience building great user experiences that delight users.
We are a flat organization that allows anyone to become a leader by showing excellent technical skills and delivering results consistently and quickly. Work ethic, solid communication skills, and an obsession with winning are paramount.
Our interview process involves several technical interviews and we aim to complete them within 1 week.
What You’ll Do
Work alongside machine learning researchers, engineers, and product managers to bring our AI Voices to customers for a diverse range of use cases
Deploy and operate the core ML inference workloads for our AI Voices serving pipeline
Introduce new techniques, tools, and architecture that improve the performance, latency, throughput, and efficiency of our deployed models
Build tools to give us visibility into our bottlenecks and sources of instability and then design and implement solutions to address the highest priority issues
An Ideal Candidate Should Have
Experience shipping Python-based services
Experience being responsible for the successful operation of a critical production service
Experience with public cloud environments, GCP preferred
Experience with Infrastructure as Code, Docker, and containerized deployments
Preferred: Experience deploying high-availability applications on Kubernetes.
Preferred: Experience deploying ML models to production
What We Offer
A dynamic environment where your contributions shape the company and its products
A team that values innovation, intuition, and drive
Autonomy, fostering focus and creativity
The opportunity to have a significant impact in a revolutionary industry
Competitive compensation, a welcoming atmosphere, and a commitment to an exceptional asynchronous work culture
The privilege of working on a product that changes lives, particularly for those with learning differences like dyslexia, ADD, and more
An active role at the intersection of artificial intelligence and audio – a rapidly evolving tech domain
The United States-based salary range for this role is $140,000–$200,000 per year, plus bonus and stock, depending on experience.
Think you’re a good fit for this job?
Tell us more about yourself and why you're interested in the role when you apply.
And don’t forget to include links to your portfolio and LinkedIn.
Not looking but know someone who would make a great fit?
Refer them!
Speechify is committed to a diverse and inclusive workplace.
Speechify does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
Apply
January 9, 2026
Software Engineer, macOS Core Product - Zurich, Switzerland
Speechify
101–200 employees
USD 140,000 – 200,000 per year
Switzerland
Full-time
Remote
Apply
January 9, 2026
Software Engineer, macOS Core Product - Helsinki, Finland
Speechify
101–200 employees
USD 140,000 – 200,000 per year
Finland
Full-time
Remote
Apply
January 9, 2026
Software Engineer, macOS Core Product - Belgrade, Serbia
Speechify
101–200 employees
USD 140,000 – 200,000 per year
Serbia
Full-time
Remote
Apply
January 9, 2026
Software Engineer, macOS Core Product - Palma de Mallorca, Spain
Speechify
101–200 employees
USD 140,000 – 200,000 per year
Spain
Full-time
Remote
Apply
January 9, 2026
Software Engineer, macOS Core Product - Nuremberg, Germany
Speechify
101–200 employees
USD 140,000 – 200,000 per year
Germany
Full-time
Remote
Apply
January 9, 2026
Software Engineer, macOS Core Product - London, United Kingdom
Speechify
101–200 employees
USD 140,000 – 200,000 per year
United Kingdom
Full-time
Remote
Apply
January 9, 2026
Software Engineer, macOS Core Product - Cambridge, United Kingdom
Speechify
101–200 employees
USD 140,000 – 200,000 per year
United Kingdom
Full-time
Remote
Apply
January 9, 2026
Software Engineer, macOS Core Product - Manchester, United Kingdom
Speechify
101–200 employees
USD 140,000 – 200,000 per year
United Kingdom
Full-time
Remote
Apply
January 9, 2026
Software Engineer, macOS Core Product - Bristol, United Kingdom
Speechify
101–200 employees
USD 140,000 – 200,000 per year
United Kingdom
Full-time
Remote
Apply
January 9, 2026
Software Engineer, macOS Core Product - Birmingham, United Kingdom
Speechify
101–200 employees
USD 140,000 – 200,000 per year
United Kingdom
Full-time
Remote
Apply
January 9, 2026
VP of Customer Strategy
Cresta
501–1,000 employees
USD 30 – 50 per hour
United States
Intern
Remote
Cresta is on a mission to turn every customer conversation into a competitive advantage by unlocking the true potential of the contact center. Our platform combines the best of AI and human intelligence to help contact centers discover customer insights and behavioral best practices, automate conversations and inefficient processes, and empower every team member to work smarter and faster. Cresta was born from the prestigious Stanford AI Lab; its co-founder and chairman is Sebastian Thrun, the genius behind Google X, Waymo, Udacity, and more. Our leadership also includes CEO Ping Wu, co-founder of Google Contact Center AI and the Vertex AI platform, and co-founder Tim Shi, an early member of OpenAI.
Join us on this thrilling journey to revolutionize the workforce with AI. The future of work is here, and it's at Cresta.
About the role:
We are seeking curious, detail-oriented students to join our AI Delivery team as AI Quality Assurance Interns.
This is a hands-on role where you will evaluate the behavior, accuracy, and reliability of Cresta’s AI Agents and Agent Assist systems.
Unlike traditional QA, this work sits at the intersection of:
AI behavior analysis
Intent recognition evaluation
Conversation quality assessment
Model output auditing
Light prompt testing and refinement
Responsibilities:
You will help ensure our AI systems act as intended, follow correct business logic, and provide accurate and safe outputs to end users.
As an AI Quality Assurance Intern, you will:
Execute structured test plans for AI Agent implementations
Validate that intents, entities, and workflows trigger correctly
Identify behavioral inconsistencies, misclassifications, or hallucinations
Review and validate AI-generated suggestions, summaries, and classifications
Provide actionable feedback to improve intent models and redaction/PII handling
Run bias, redaction, and transcription accuracy checks
Spot patterns or emerging issues across test sets
Contribute ideas for improving internal QA processes and tools
Qualifications We Value:
This is an excellent fit for students interested in:
AI/ML, NLP, or chatbots
Product quality and user experience
Linguistics, conversation analysis, or human–AI interaction
Exploring a career in AI evaluation, AI ops, or product QA
No advanced ML experience is required — just strong analytical thinking and curiosity about how intelligent systems behave.
Perks & Benefits:
$30-$50 per hour subject to taxes
Lunch can be expensed (up to $25) while working in the office.
PTO: 4 days
This posting will be used to fill a newly-created role.
We have noticed a rise in recruiting impersonations across the industry, where scammers attempt to access candidates' personal and financial information through fake interviews and offers. All Cresta recruiting email communications will always come from the @cresta.ai domain. Any outreach claiming to be from Cresta via other sources should be ignored. If you are uncertain whether you have been contacted by an official Cresta employee, reach out to recruiting@cresta.com.
Apply
January 9, 2026
Forward Deployed Engineer (FDE), Life Sciences
OpenAI
5,000+ employees
USD 220,000 – 280,000
United States
Full-time
Remote
About the team
OpenAI’s Forward Deployed Engineering team partners with customers to turn research breakthroughs into production systems. We operate at the intersection of customer delivery and core platform development.
About the role
We are hiring a Forward Deployed Engineer (FDE) to lead end-to-end deployments of our models inside life sciences organizations and research institutions, from early R&D through clinical and operational workflows. You will own discovery, technical scoping, system design, build, and production rollout, partnering directly with customer engineering and domain teams.
You will measure success through production adoption, measurable workflow impact, and eval-driven feedback that changes product and model roadmaps. You’ll work closely with our Product, Research, Partnerships, GRC, Security, and GTM teams.
This role is based in San Francisco. We use a hybrid work model of 3 days in the office per week. We offer relocation assistance. Travel up to 50% is required.
In this role you will
Design and ship production systems around models, owning integrations, data flows, reliability, and on-call readiness
Lead discovery and scoping from pre-sales through post-sales, including problem framing, constraints, trade-offs, and a delivery plan
Define launch criteria and outcome metrics for regulated contexts, and drive adoption until you prove production impact
Build in sensitive data environments where auditability, validation, and access controls drive architecture decisions
Run evaluation loops that measure model and system quality in life science workflows to drive model and product improvements
Distill production learnings into hardened primitives, reference architectures, and templated workflows that scale across regulated life sciences environments
You might thrive in this role if you
Bring 5+ years of software/ML engineering or technical deployment experience with customer-facing ownership in biotech, pharma, clinical research, or scientific software; an advanced degree or equivalent applied experience in Biomedical Engineering, Computational Biology, Bioinformatics, or a related field is preferred
Have owned customer GenAI deployments end-to-end from scoping through production adoption and improved them through evals, error analysis, and iteration
Have delivered AI systems in trial design, regulatory writing, or scientific environments where validation, auditability, and compliance constraints shaped the system
Communicate clearly across scientific, model research, technical, and executive audiences, translating technical concepts for non-technical stakeholders with credibility
Apply systems thinking with high execution standards, consistently turning failures or escalations in regulated environments into new operating standards
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Apply
January 9, 2026
Sr. Director, Sales Development
Cresta
501–1,000 employees
USD 30 – 50 per hour
Intern
Remote
Apply
January 9, 2026
Partner Sales Manager, Five9
Cresta
501–1,000 employees
USD 30 – 50 per hour
United States
Intern
Remote
false
Cresta is on a mission to turn every customer conversation into a competitive advantage by unlocking the true potential of the contact center. Our platform combines the best of AI and human intelligence to help contact centers discover customer insights and behavioral best practices, automate conversations and inefficient processes, and empower every team member to work smarter and faster. Born from the prestigious Stanford AI lab, Cresta's co-founder and chairman is Sebastian Thrun, the genius behind Google X, Waymo, Udacity, and more. Our leadership also includes CEO, Ping Wu, the co-founder of Google Contact Center AI and Vertex AI platform, and co-founder, Tim Shi, an early member of Open AI.
Join us on this thrilling journey to revolutionize the workforce with AI. The future of work is here, and it's at Cresta.
About the role:
We are seeking curious, detail-oriented students to join our AI Delivery team as AI Quality Assurance Interns.
This is a hands-on role where you will evaluate the behavior, accuracy, and reliability of Cresta’s AI Agents and Agent Assist systems.
Unlike traditional QA, this work sits at the intersection of:
AI behavior analysis
Intent recognition evaluation
Conversation quality assessment
Model output auditing
Light prompt testing and refinement
Responsibilities:
You will help ensure our AI systems act as intended, follow correct business logic, and provide accurate and safe outputs to end users.
As an AI Quality Assurance Intern, you will:
Execute structured test plans for AI Agent implementations
Validate that intents, entities, and workflows trigger correctly
Identify behavioral inconsistencies, misclassifications, or hallucinations
Review and validate AI-generated suggestions, summaries, and classifications
Provide actionable feedback to improve intent models and redaction/PII handling
Run bias, redaction, and transcription accuracy checks
Spot patterns or emerging issues across test sets
Contribute ideas for improving internal QA processes and tools
Qualifications We Value:
This is an excellent fit for students interested in:
AI/ML, NLP, or chatbots
Product quality and user experience
Linguistics, conversation analysis, or human–AI interaction
Exploring a career in AI evaluation, AI ops, or product QA
No advanced ML experience is required — just strong analytical thinking and curiosity about how intelligent systems behave.
Perks & Benefits:
$30-$50 per hour subject to taxes
Lunch can be expensed (up to $25) while working in the office.
PTO: 4 days
This posting will be used to fill a newly-created role.
We have noticed a rise in recruiting impersonations across the industry, where scammers attempt to access candidates' personal and financial information through fake interviews and offers. All Cresta recruiting email communications will always come from the @cresta.ai domain. Any outreach claiming to be from Cresta via other sources should be ignored. If you are uncertain whether you have been contacted by an official Cresta employee, reach out to recruiting@cresta.com.
No items found.
Apply
January 9, 2026
Engineering Technical Lead Manager (TLM) - Enterprise
X AI
5000+
USD
100
45
-
100
United States
Full-time
Remote
false
About xAI
xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All engineers are expected to have strong communication skills; they should be able to concisely and accurately share knowledge with their teammates.
About the Role
As an AI Tutor - Economics, you will be instrumental in enhancing the capabilities of our cutting-edge technologies by providing high-quality input and labels using specialized software. Your role involves collaborating closely with our technical team to support the training of new AI tasks, ensuring the implementation of innovative initiatives. You'll contribute to refining annotation tools and selecting complex problems from advanced economics domains, with a focus on macroeconomic forecasting, microeconomic incentives, and behavioral experiments. This position demands a dynamic approach to learning and adapting in a fast-paced environment, where your ability to interpret and execute tasks based on evolving instructions is crucial.
AI Tutor’s Role in Advancing xAI’s Mission
As an AI Tutor, you will play an essential role in advancing xAI's mission by supporting the training and refinement of xAI’s AI models. AI Tutors teach our AI models how people interact and react, as well as how people approach issues and discussions in economics. To accomplish this, AI Tutors will actively participate in gathering or providing data, such as text, voice, and video data, sometimes providing annotations, recording audio, or participating in video sessions. We seek individuals who are comfortable and eager to engage in these activities as a fundamental part of the role, ensuring strong alignment with xAI’s goals and its drive to innovate.
Scope
An AI Tutor will provide services that include labeling and annotating data in text, voice, and video formats to support AI model training. At times, this may involve recording audio or video sessions, and tutors are expected to be comfortable with these tasks as they are fundamental to the role. Providing such data is a requirement of the role and advances xAI’s mission; AI Tutors acknowledge that all work is done for hire and owned by xAI.
Responsibilities
Use proprietary software applications to provide input/labels on defined projects.
Support and ensure the delivery of high-quality curated data.
Play a pivotal role in supporting and contributing to the training of new tasks, working closely with the technical staff to ensure the successful development and implementation of cutting-edge initiatives/technologies.
Interact with the technical staff to help improve the design of efficient annotation tools.
Choose problems from economics fields that align with your expertise, focusing on areas like macroeconomics, microeconomics, and behavioral economics.
Regularly interpret, analyze, and execute tasks based on given instructions.
Key Qualifications
Must possess a PhD in Economics or a related field
Proficiency in reading and writing both informal and professional English.
Outstanding communication, interpersonal, analytical, and organizational capabilities.
Solid reading comprehension skills combined with the capacity to exercise autonomous judgment even when presented with limited data/material.
Strong passion for and commitment to technological advancements and innovation in economics.
Preferred Qualifications
Experience with at least one publication in a reputable economics journal or outlet.
Teaching experience as a professor.
Location & Other Expectations
This position is based in Palo Alto, CA, or fully remote.
The Palo Alto option is an in-office role requiring 5 days per week; remote positions require strong self-motivation.
If you are based in the US, please note we are unable to hire in the states of Wyoming and Illinois at this time.
We are unable to provide visa sponsorship.
Team members are expected to work from 9:00am - 5:30pm PST for the first two weeks of training and 9:00am - 5:30pm in their own timezone thereafter.
For those who will be working from a personal device, please note your computer must be a Chromebook, Mac with MacOS 11.0 or later, or Windows 10 or later.
Compensation
$45/hour - $100/hour
The posted pay range is intended for U.S.-based candidates and depends on factors including relevant experience, skills, education, geographic location, and qualifications. For international candidates, our recruiting team can provide an estimated pay range for your location.
Benefits:
Hourly pay is just one part of our total rewards package at xAI. Specific benefits vary by country; depending on your country of residence, you may have access to medical benefits. We do not offer benefits for part-time roles.
xAI is an equal opportunity employer. For details on data processing, view our Recruitment Privacy Notice.
No items found.
Apply
January 9, 2026
Full Stack Software Engineer - Enterprise Agents
OpenAI
5000+
USD
405000
255000
-
405000
United States
Full-time
Remote
false
About the Team
The Enterprise Agents team is scaling OpenAI with OpenAI. Part of the B2B Applications organization, we apply our latest models to real-world problems in order to assist with or automate work across the company, then share what we learn back to the broader product and research teams. We’ve built an ecosystem of automation products that’s applied everywhere from customer operations to workplace to engineering.
We love building products for folks sitting right next to us, and we take the time to add the little big touches that delight. Our goal is to prototype fast, then build for reliable long-term impact. We're constantly looking for the similarities and patterns in different types of work, and focus on building simple, generic patterns that we can apply across many domains.
About the Role
We’re looking for an engineer who’s passionate about blending production-ready platform architecture with new tech and new paradigms. You’ll push the boundaries of OpenAI’s newest technologies to enable interactions and automations that are not only functional, but delightful. We value proactive, product-minded engineers who can see the big picture while staying on top of the little details that define great products.
In this role, you will:
Own the full product development lifecycle for new platform capabilities and product experiences, end to end
Collaborate closely with internal customers to understand their problems and implement effective solutions
Work with the research team to share relevant feedback and iterate on applying their latest models
You might thrive in this role if you have:
5+ years of professional engineering experience (excluding internships) in relevant roles at tech and product-driven companies
Experience as a former founder, or as an early engineer at a startup who built a product from scratch, is a plus
Proficiency with JavaScript, React, and other web technologies
Proficiency with a backend language (we use Python)
Some experience with relational databases like Postgres/MySQL
Interest in AI/ML (direct experience not required)
Proven ability to thrive in fast-growing, product-driven companies by effectively navigating loosely defined tasks and managing competing priorities or deadlines
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristics. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
No items found.
Apply
January 9, 2026
Forward Deployed Engineer
Cartesia
51-100
USD
250000
180000
-
250000
United States
Full-time
Remote
false
About Cartesia
Our mission is to build the next generation of AI: ubiquitous, interactive intelligence that runs wherever you are. Today, not even the best models can continuously process and reason over a year-long stream of audio, video and text (1B text tokens, 10B audio tokens and 1T video tokens), let alone do this on-device.
We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training efficient, large-scale foundation models. Our team combines deep expertise in model innovation and systems engineering, paired with a design-minded product engineering team, to build and ship cutting-edge models and experiences.
We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks and others. We're fortunate to have the support of many amazing advisors, and 90+ angels across many industries, including the world's foremost experts in AI.
About the Role
We’re hiring a Forward Deployed Engineer to advance our mission of building real-time multimodal intelligence by delivering agentic voice AI solutions directly into production for our customers.
Your Impact:
Be the driving force behind customer deployments, taking AI solutions from early concept and pilot to production launch with enterprise-grade reliability
Translate cutting-edge AI capabilities into practical, high-performance systems tailored to real-world customer needs
Design and implement agentic voice AI solutions that integrate seamlessly into customer workflows and infrastructure
Prototype, iterate, and deploy AI-driven systems in close collaboration with enterprise customers
Work closely with our customers to define success criteria and ensure they achieve meaningful outcomes on Cartesia’s platform
You’ll have significant autonomy to shape customer solutions and directly impact how cutting-edge AI is deployed at scale across global organizations
What You Bring
Technical leadership with the ability to execute and deliver zero-to-one solutions in ambiguous, customer-driven environments
You have an eye for identifying customer problems and opportunities and can translate them into effective AI-powered solutions
Strong engineering skills enable you to rapidly prototype solutions end to end and evolve them into scalable, production-ready systems
You’re comfortable diving into new technologies and can quickly adapt your skills to our tech stack (Python on the backend, Go and TypeScript preferred)
You communicate complex technical concepts clearly and effectively, and you’re comfortable working directly with customers
You’re good at collaborating cross-functionally and translating customer feedback into actionable product and platform improvements
Our Culture
🏢 We’re an in-person team based out of San Francisco. We love being in the office, hanging out together, and learning from each other every day.
🚢 We ship fast. All of our work is novel and cutting edge, and execution speed is paramount. We have a high bar, and we don’t sacrifice quality or design along the way.
🤝 We support each other. We have an open & inclusive culture that’s focused on giving everyone the resources they need to succeed.
No items found.
Apply
January 9, 2026
Member of Technical Staff - Data Quality Engineer (Pre-training)
Reflection
1-10
0
0
-
0
United States
Full-time
Remote
false
Our Mission
Reflection’s mission is to build open superintelligence and make it accessible to all. We’re developing open weight models for individuals, agents, enterprises, and even nation states. Our team of AI researchers and company builders come from DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic and beyond.
About the Role
Data is playing an increasingly crucial role at the frontier of AI innovation. Many of the most meaningful advances in recent years have come not from new architectures, but from better data.
As a member of the Data Team, your mission is to ensure that the data used to train our models meets a high bar for quality, reliability, and downstream impact. You will directly shape how our models perform on critical capabilities.
Working with world-class researchers on our pre-training teams, you’ll help turn fuzzy notions of “good data” into concrete, measurable standards that scale across large data campaigns. We’re looking for engineers who combine strong engineering fundamentals with a deep curiosity about data quality and its impact on model performance.
Working closely with our pre-training teams, you will:
Own upstream data quality for LLM pre-training, as a specialist or generalist across languages and modalities
Partner closely with research and pre-training teams to translate requirements into measurable quality signals, and provide actionable feedback to external data vendors
Design, validate, and scale automated QA methods, in addition to human-in-the-loop processes, to reliably measure data quality across large campaigns (a minimal rule-based sketch appears at the end of this listing)
Build reusable QA pipelines that reliably deliver high-quality data to pre-training teams for model training
Monitor and report on data quality over time, driving continuous iteration on quality standards, processes, and acceptance criteria
About You
Strong engineering fundamentals with experience building data pipelines, QA systems, or evaluation workflows for pre-training data
Detail-oriented with an analytical mindset, able to identify failure modes, inconsistencies, and subtle issues that affect data quality
Solid understanding of how data quality impacts pre-training, with the ability to translate quality concerns into concrete signals, decisions, and feedback
Experience designing and validating automated quality checks, including rule-based systems, statistical methods, or model-assisted approaches such as LLM-as-a-Judge
Comfortable working autonomously, owning problems end-to-end, and collaborating effectively with researchers, engineers, and operations partners
Skills and Qualifications
Proficiency in Python and building ML / LLM workflows; must be comfortable debugging and writing scalable code
Experience working with large datasets and automated evaluation or quality-checking systems
Familiarity with how LLMs work, with the ability to describe how models are trained and evaluated
Excellent communication skills with the ability to clearly articulate complex technical concepts across teams
What We Offer
We believe that to build superintelligence that is truly open, you need to start at the foundation. Joining Reflection means building from the ground up as part of a small talent-dense team. You will help define our future as a company, and help define the frontier of open foundational models. We want you to do the most impactful work of your career with the confidence that you and the people you care about most are supported.
Top-tier compensation: Salary and equity structured to recognize and retain the best talent globally.
Health & wellness: Comprehensive medical, dental, vision, life, and disability insurance.
Life & family: Fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.
Benefits & balance: Paid time off when you need it, relocation support, and more perks that optimize your time. Opportunities to connect with teammates: lunch and dinner are provided daily. We have regular off-sites and team celebrations.
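As an illustration of the rule-based checks this listing describes, here is a minimal sketch of computing per-document quality flags and aggregating them across a batch. The thresholds, flag names, and sample data are assumptions made for the example, not Reflection's actual standards or pipeline.

```python
# Illustrative rule-based quality screen for pre-training text documents.
# Thresholds and flag names are assumptions for this sketch only.

import re
from collections import Counter

def quality_flags(doc: str) -> list[str]:
    """Return rule-based quality flags for one document."""
    flags = []
    words = doc.split()
    if len(words) < 50:
        flags.append("too_short")
    if words:
        # Heavy repetition of a single token often indicates boilerplate or spam.
        most_common_frac = Counter(words).most_common(1)[0][1] / len(words)
        if most_common_frac > 0.2:
            flags.append("repetitive")
    # A very low alphabetic ratio suggests markup, tables, or encoding debris.
    alpha_ratio = sum(c.isalpha() for c in doc) / max(len(doc), 1)
    if alpha_ratio < 0.6:
        flags.append("low_alpha_ratio")
    if re.search(r"lorem ipsum|click here to subscribe", doc, re.IGNORECASE):
        flags.append("boilerplate_phrase")
    return flags

def summarize(batch: list[str]) -> dict:
    """Aggregate flag rates across a batch so quality can be tracked over time."""
    counts = Counter(flag for doc in batch for flag in quality_flags(doc))
    return {flag: count / len(batch) for flag, count in counts.items()}

if __name__ == "__main__":
    sample = [
        "The quick brown fox jumps over the lazy dog. " * 20,
        "buy buy buy buy buy buy buy buy buy buy",
    ]
    print(summarize(sample))
```

A real pipeline would run checks like these at scale and track flag rates per vendor and per campaign over time, feeding the acceptance criteria the role mentions.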
No items found.
Apply
January 8, 2026
Member of Technical Staff - Data Quality Engineer (Post-training)
Reflection
1-10
0
0
-
0
United States
Full-time
Remote
false
Our Mission
Reflection’s mission is to build open superintelligence and make it accessible to all. We’re developing open weight models for individuals, agents, enterprises, and even nation states. Our team of AI researchers and company builders come from DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic and beyond.
About the Role
Data is playing an increasingly crucial role at the frontier of AI innovation. Many of the most meaningful advances in recent years have come not from new architectures, but from better data.
As a member of the Data Team, your mission is to ensure that the data used to train and evaluate our models meets a high bar for quality, reliability, and downstream impact. You will directly shape how our models perform on critical capabilities: agentic tool use, long-horizon reasoning, and robust safety alignment.
Working with world-class researchers on our post-training teams, you’ll help turn fuzzy notions of “good data” into concrete, measurable standards that scale across large data campaigns. We’re looking for engineers who combine strong engineering fundamentals with a deep curiosity about data quality and its impact on model behavior.
Working closely with our post-training teams, you will:
Own upstream data quality for LLM post-training and evaluation by analyzing expert-developed datasets and operationalizing quality standards for reasoning, alignment, and agentic use cases
Partner closely with research and post-training teams to translate requirements into measurable quality signals, and provide actionable feedback to external data vendors
Design, validate, and scale automated QA methods, including LLM-as-a-Judge frameworks, to reliably measure data quality across large campaigns (a minimal judge-style sketch appears at the end of this listing)
Build reusable QA pipelines that reliably deliver high-quality data to post-training teams for model training and evaluation
Monitor and report on data quality over time, driving continuous iteration on quality standards, processes, and acceptance criteria
About You
Strong engineering fundamentals with experience building data pipelines, QA systems, or evaluation workflows for post-training data and agentic environments
Detail-oriented with an analytical mindset, able to identify failure modes, inconsistencies, and subtle issues that affect data quality
Solid understanding of how data quality impacts training (SFT and RL) and evaluation, with the ability to translate quality concerns into concrete signals, decisions, and feedback
Experience designing and validating automated quality checks, including rule-based systems, statistical methods, or model-assisted approaches such as LLM-as-a-Judge
Comfortable working autonomously, owning problems end-to-end, and collaborating effectively with researchers, engineers, and operations partners
Skills and Qualifications
Proficiency in Python and building ML / LLM workflows; must be comfortable debugging and writing scalable code
Experience working with large datasets and automated evaluation or quality-checking systems
Familiarity with how LLMs work, with the ability to describe how models are trained and evaluated
Excellent communication skills with the ability to clearly articulate complex technical concepts across teams
What We Offer
We believe that to build superintelligence that is truly open, you need to start at the foundation. Joining Reflection means building from the ground up as part of a small talent-dense team. You will help define our future as a company, and help define the frontier of open foundational models. We want you to do the most impactful work of your career with the confidence that you and the people you care about most are supported.
Top-tier compensation: Salary and equity structured to recognize and retain the best talent globally.
Health & wellness: Comprehensive medical, dental, vision, life, and disability insurance.
Life & family: Fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.
Benefits & balance: Paid time off when you need it, relocation support, and more perks that optimize your time. Opportunities to connect with teammates: lunch and dinner are provided daily. We have regular off-sites and team celebrations.
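As an illustration of the LLM-as-a-Judge approach mentioned in this listing, here is a minimal sketch of scoring one post-training example against a rubric. The rubric, score scale, and `call_judge_model` stub are illustrative assumptions; a real setup would call an actual judge model, calibrate it against human ratings, and audit its agreement over time.

```python
# Minimal sketch of an LLM-as-a-Judge check for a post-training example.
# `call_judge_model` is an abstract stand-in for a real model endpoint;
# the rubric and threshold are assumptions for this sketch only.

import json

JUDGE_PROMPT = """You are reviewing a training example for an AI assistant.
Rate the assistant response from 1 (unusable) to 5 (excellent) for:
- instruction_following
- factual_soundness
- safety
Return JSON like {{"instruction_following": 4, "factual_soundness": 5, "safety": 5}}.

User prompt:
{prompt}

Assistant response:
{response}
"""

def call_judge_model(prompt_text: str) -> str:
    """Stand-in for a real judge-model call; returns a fixed JSON string so the sketch runs."""
    return json.dumps({"instruction_following": 4, "factual_soundness": 5, "safety": 5})

def judge_example(example: dict, min_score: int = 3) -> dict:
    """Score one (prompt, response) pair and decide whether it passes QA."""
    raw = call_judge_model(JUDGE_PROMPT.format(**example))
    scores = json.loads(raw)
    return {
        "scores": scores,
        "passes": all(value >= min_score for value in scores.values()),
    }

if __name__ == "__main__":
    example = {
        "prompt": "Summarize the attached meeting notes in three bullet points.",
        "response": "- Budget approved\n- Launch moved to Q3\n- Hiring two engineers",
    }
    print(judge_example(example))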
No items found.
Apply
January 8, 2026