Find AI Work That Works for You
Latest roles in AI and machine learning, reviewed by real humans for quality and clarity.
New AI Opportunities
Showing 61 – 79 of 79 jobs
RVP, Strategic Accounts
Observe
201-500
United States
Full-time
Remote: Yes
About Us
Observe.AI enables enterprises to transform how they connect with customers through AI agents and copilots that engage, assist, and act across every channel. From automating conversations to guiding human agents in real time to uncovering insights that shape strategy, Observe.AI turns every interaction into a driver of loyalty and growth. Trusted by global leaders, we're creating a future where every customer experience is smarter, faster, and more impactful.

Why Join Us
This is a rare chance to build and lead the Strategic Accounts business for one of the fastest-growing innovators in the AI-powered customer experience space. We are seeking a dynamic and proven sales leader to architect and scale our Strategic Accounts business from the ground up. The RVP, Strategic Accounts will be at the forefront of driving new logo growth in greenfield markets, engaging C-level executives at some of the world's largest and most innovative enterprises. You will design and lead a high-performing enterprise sales organization, build repeatable sales processes, and personally lead high-value, complex sales cycles. This is both a strategic leadership and hands-on execution role, ideal for a player-coach who thrives in high-growth environments, can translate GenAI innovation into measurable customer outcomes, and is passionate about shaping the future of customer experience. This role reports directly to the VP, Strategic Accounts.

What you'll be doing

Hunter Mentality & New Logo Growth
- Identify and open doors into greenfield strategic accounts, from Global 5000 companies to digitally transforming enterprises with 500+ agents.
- Lead with value, tailoring solutions to strategic customer pain points and use cases across the contact center and customer experience landscape.
- Build trusted advisor relationships with C-level executives, influencing enterprise innovation strategies through AI/GenAI-powered capabilities.

Build from Scratch
- Architect the global Strategic Accounts org from the ground up: define structure, headcount, roles, territories, KPIs, and compensation.
- Recruit, onboard, and mentor a high-performing team of Strategic Account Executives, supported by Solutions Engineers and SDRs.
- Stand up sales processes and infrastructure to enable rapid, repeatable scaling (CRM workflows, enablement, playbooks, forecasting, etc.).

Enterprise Sales Execution
- Own the full-cycle sales process for strategic accounts, from prospecting to C-level engagement, solution selling, and complex negotiations.
- Partner with Product, Marketing, and Customer Success to influence the roadmap and ensure alignment with enterprise customer needs.
- Consistently meet or exceed new ARR targets, pipeline coverage goals, and win rates.

Player-Coach Execution
- Personally lead high-value, complex enterprise sales cycles, owning prospecting, stakeholder engagement, solution design, and negotiation.
- Operate as a true player-coach: set the pace, model best-in-class sales behaviors, and actively close business while building the team.
- Drive predictable pipeline generation and pipeline hygiene through consistent coaching and inspection.

GenAI Fluency & Value Selling
- Be a credible executive voice in AI/GenAI, particularly in contact center and CX innovation.
- Translate complex AI capabilities into business outcomes and customer value stories.
- Guide strategic clients through their AI transformation journeys and innovation roadmaps.

What you bring to the role
- 12+ years of enterprise SaaS sales experience, with 5+ years in a senior leadership role focused on Strategic or Global Accounts.
- Proven hunter with a track record of landing new logos, especially in whitespace markets or new geographies.
- Experience building and scaling a new function or region from scratch: team, process, and GTM.
- Strong understanding of contact center, CX, or AI-powered technologies; able to lead technical discussions with CxOs and IT leaders.
- Familiarity with enterprise sales motions such as MEDDIC, Challenger, or Customer Centric Selling.
- Comfort with ambiguity, high growth, and iterative experimentation; startup/scale-up experience is a plus.
- Strategic, data-driven, and resilient, with a relentless drive to win.

Preferred Background
- Deep familiarity with GenAI, NLP, LLMs, or ML applications in enterprise use cases.
- Global or multi-regional sales leadership experience (North America, EMEA, or APAC).
- Background in building high-performing sales teams at companies in post-Series B to pre-IPO stages.
- Bachelor's or Master's degree in Business, Engineering, or a related technical discipline.

Perks and Benefits
- Competitive compensation, including equity
- Excellent medical, dental, and vision insurance options
- Flexible time off, 10 company holidays, and Winter Break
- Up to 16 weeks of parental leave
- 401(k) plan
- Quarterly lifestyle spend
- Monthly mobile and internet stipend
- Pre-tax commuter benefits

Salary Range
The on-target earnings (OTE) targeted for this full-time position are $420,000 per annum. Compensation may vary outside of this range depending on a number of factors, including a candidate's qualifications, skills, competencies, and experience. Base pay is one part of the total package provided to compensate and recognize employees for their work, and this role may be eligible for additional discretionary bonuses/incentives and equity (in the form of options). This salary range is an estimate, and the actual salary may vary based on the Company's compensation practices.

Our Commitment to Inclusion and Belonging
Observe.AI is an Equal Employment Opportunity employer that proudly pursues and hires a diverse workforce. Observe.AI does not make hiring or employment decisions on the basis of race, color, religion or religious belief, ethnic or national origin, nationality, sex, gender, gender identity, sexual orientation, disability, age, military or veteran status, or any other basis protected by applicable local, state, or federal laws or prohibited by Company policy. Observe.AI also strives for a healthy and safe workplace and strictly prohibits harassment of any kind. We welcome all people. We celebrate diversity of all kinds and are committed to creating an inclusive culture built on a foundation of respect for all individuals. We seek to hire, develop, and retain talented people from all backgrounds. Individuals from non-traditional backgrounds and historically marginalized or underrepresented groups are strongly encouraged to apply. If you are ambitious, make an impact wherever you go, and you're ready to shape the future of Observe.AI, we encourage you to apply. For more information, visit www.observe.ai.
Enterprise Sales
Marketing & Sales
2025-09-13 12:26
Applied AI Researcher
Yupp
11-50
United States
Full-time
Remote: No
About Yupp
We are a well-funded, rapidly growing, early-stage AI startup headquartered in Silicon Valley that is building a two-sided product: one side for global consumers and the other for AI builders and researchers. We work on the cutting edge of AI across the stack. Check out our recently launched product and how it solves the foundational challenge of robust and trustworthy AI model evaluations.

Why Join Yupp?
Are you ready to have the ride of a lifetime together with some of the smartest and most seasoned colleagues? You'll work on challenging, large-scale problems at the cutting edge of AI to build novel products that touch millions of users globally, in a massive and growing market opportunity. Yupp's founding team is highly experienced and comes from companies like Twitter, Google, Coinbase, Microsoft, and PayPal. This team is some of the smartest, most fun, most talented people you will ever work with. Our work culture provides a high degree of autonomy, ownership, and impact. It's intense and isn't for everyone. But if you want to build the future of AI alongside others who are at the top of their game and expect the same from you, there's no better AI startup to be at.

At Yupp, you will experience both the excitement of building for a large-scale global user base and of building for the deeply technical audience of AI model builders and researchers. You'll get immersed in the latest AI models and agents, and you'll interact with AI builders and researchers from other AI labs around the world. We are a mostly in-person startup, but we are also flexible: you can usually work from home when you need to and come in and leave when you want to. Many employees work from home on average one day a week.

Responsibilities
- Research and track emerging trends in GenAI and LLM advancements to identify potential applications within the company.
- Design, build, and maintain LLM-based applications and solutions.
- Optionally manage the full ML lifecycle, including data analysis, preprocessing, model architecture, training, evaluation, and MLOps.
- Collaborate with product engineers, designers, and data scientists to define and deliver cutting-edge AI solutions.
- Convey complex technical information clearly to audiences with varying levels of AI expertise.
- Troubleshoot and debug AI applications to resolve performance and accuracy issues.
- Write clean, maintainable, and well-documented research code and, optionally, production code.

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field (Ph.D. or equivalent experience is a plus).
- Minimum of 3 years of experience in machine learning, with a track record of deploying AI models in real-world applications.
- Strong background in modern LLM architectures and applications, and experience using GenAI approaches in an applied, production environment.
- Strong programming skills in Python and familiarity with libraries such as PyTorch, TensorFlow, NumPy, and JAX.
- Deep understanding of machine learning algorithms, data structures, and model evaluation methods.
- Excellent communication and presentation skills, with the ability to explain design decisions to both experts and non-experts.
- Strong analytical skills and the ability to work independently and collaboratively in fast-paced environments.

Preferred Qualifications
- Authored or co-authored research papers at reputable AI/ML conferences, or impactful technical blog posts.
- Active participation in open-source AI/ML projects, Kaggle competitions, or similar initiatives.
- Experience working in startup or small, fast-paced environments.
Machine Learning Engineer
Data Science & Analytics
Research Scientist
Product & Operations
2025-09-13 12:26
Speech Analyst - II
Observe
201-500
India
Remote: No
Why Join Us
At Observe.AI, speech analysis isn't just about understanding words; it's about uncovering insights that shape customer experiences and advance AI voice technologies. As a Speech Analyst, you'll work with spoken language data to identify patterns, improve speech-based applications, and drive impactful business outcomes. You'll partner closely with customers and cross-functional teams to configure and automate quality assurance, build speech-based queries, and transform evaluation forms into actionable insights. By applying speech analytics to uncover trends, resolve accuracy issues, and improve business processes, you'll help organizations achieve measurable ROI and scale their QA programs far beyond traditional limits. If you're looking for an opportunity where your expertise turns raw conversations into business impact, your ideas shape the future of customer experience, and your growth is supported by solving meaningful challenges with a talented team, this is the place for you.

What you'll be doing
- Configure and automate QA by transforming customer evaluation forms into trackable insights within the Observe.AI dashboard.
- Automate scoring of evaluation forms using speech analytics tools.
- Audit and resolve accuracy issues in voice analytics data to reduce false readings.
- Provide thought leadership by refreshing and optimizing speech analytics configurations.
- Analyze customer business processes and KPIs to identify areas for improvement.
- Leverage interaction analytics to improve business outcomes and enhance operational efficiency.
- Collaborate with customer leaders to plan and execute strategic analytical initiatives.
- Build speech-based queries to track key business metrics and drive measurable ROI.
- Deliver automated QA processes at scale, enabling customers to analyze up to 100x more evaluations.
- Create playbooks and impactful stories showcasing business transformation through voice analytics.

What you bring to the role
- 4-8 years of experience working with speech analytics tools to identify key points of interest within contact center interactions
- Understanding of call center KPIs such as AHT, CSAT, FCR, Upsell, Call Driver, NPS, etc.
- Ability to analyze voice and chat interactions to identify trends, pain points, and opportunities across customer touchpoints
- Excellent verbal and written communication skills, effectively engaging with customers across various scenarios, including inquiries, complaints, and complex support situations
- Ability to work independently and as a team player in a technology-driven environment
- Bachelor's or Master's degree in Linguistics, Data Science, Computer Science, Cognitive Science, or a related field

Compensation, Benefits and Perks
- Excellent medical insurance options and free online doctor consultations
- Yearly privilege and sick leaves as per the Karnataka S&E Act
- Generous holidays (national and festive), recognition, and parental leave policies
- Learning & Development fund to support your continuous learning journey and professional development
- Fun events to build culture across the organization
- Flexible benefit plans for tax exemptions (e.g., meal card, PF)
Data Analyst
Data Science & Analytics
2025-09-13 12:26
Staff+ AI Engineer
Yupp
11-50
United States
Full-time
Remote: No
Responsibilities
- Stay up to date on emerging trends in GenAI and LLM advancements, identifying opportunities for application within the company.
- Design, build, and maintain LLM applications that meet high performance and reliability standards.
- Own the full ML lifecycle, including data analysis, preprocessing, model architecture, training, evaluation, and MLOps.
- Collaborate with product engineers, designers, and data scientists to develop cutting-edge AI solutions.
- Communicate complex technical concepts clearly to both AI experts and non-technical audiences.
- Troubleshoot, debug, and optimize AI models for scalability and efficiency.
- Write clean, maintainable, and well-documented production code.

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field (Ph.D. or equivalent experience is a plus).
- Minimum of 10 years of experience in machine learning, with proven success deploying AI models in real-world applications.
- Strong programming skills in Python and familiarity with libraries such as PyTorch, TensorFlow, NumPy, and JAX.
- Deep understanding of machine learning algorithms, data structures, and model evaluation methodologies.
- Strong background in modern LLM architectures and applications, and experience using GenAI approaches in an applied, production environment.
- Excellent communication and presentation skills, with the ability to clearly explain design decisions.
- Strong analytical and problem-solving skills, able to work independently and collaboratively in a fast-paced environment.

Preferred Qualifications
- Authored or co-authored research papers at reputable AI/ML conferences, or impactful technical blog posts.
- Active participation in open-source AI/ML repositories, Kaggle competitions, or similar projects.
- Experience working in startup or small, fast-paced environments.
Machine Learning Engineer
Data Science & Analytics
2025-09-13 12:12
Software Development Engineer III - Backend (Python)
Observe
201-500
India
Full-time
Remote: No
Why Join Us
We are seeking a Software Development Engineer III (SDE 3) at Observe.AI: a talented and driven senior software engineer to build a scalable, secure, multi-tenant cloud platform for processing millions of call recordings every day. You will use state-of-the-art cloud computing technologies and work closely with leaders and engineers to define a roadmap and develop solutions.

What you'll be doing
- Work with the team to define the technical stack and own it
- Work with the product team to understand the product roadmap and define the technical roadmap
- Participate in the entire application lifecycle, focusing on coding and debugging
- Collaborate with front-end developers to integrate user-facing elements with server-side logic
- Build reusable code and libraries for future use
- Optimize the application for maximum speed and scalability
- Implement security and data protection
- Design and implement data storage solutions
- Build and scale data pipelines

What you'll bring to the role
- Bachelor's degree in Computer Science with 6.5-9 years of experience building large-scale products
- Expertise in Python
- Knowledge of container management tools (Docker Swarm, Kubernetes) is a plus
- Ability to perform well in a fast-paced environment
- Good knowledge of at least one SQL or NoSQL database: Postgres, MongoDB, Cassandra, Redis
- Good knowledge of queues (RabbitMQ, Kafka, etc.) and caches (Ehcache, Memcached)
- Strong knowledge of design patterns

Perks and Benefits
- Excellent medical insurance options and free online doctor consultations
- Yearly privilege and sick leaves as per the Karnataka S&E Act
- Generous holidays (national and festive), recognition, and parental leave policies
- Learning & Development fund to support your continuous learning journey and professional development
- Fun events to build culture across the organization
- Flexible benefit plans for tax exemptions (e.g., meal card, PF)
Software Engineer
Software Engineering
2025-09-13 12:12
Senior Implementation Manager
Observe
201-500
United States
Full-time
Remote: Yes
Why Join Us
The future of customer engagement is AI-driven, with conversational automation representing a $300B market opportunity. While traditional NLP/NLU-powered solutions have fallen short, Observe.AI is leading the revolution with state-of-the-art LLM-powered Conversational AI technology. Backed by 7 years of experience and insights from 300+ customers, we're perfectly positioned to disrupt the space and unlock massive value for enterprises worldwide.

We are seeking an Implementation Manager who will be responsible for project management, facilitating stakeholder design workshops, building out customer use cases discovered during onboarding, and delivering virtual and onsite software training to trainer and end-user audiences. The Implementation Manager will work with new customers to stand up their Observe.AI programs and with existing customers to expand their capabilities with the Observe.AI product suite. This role requires quickly translating the customer's business objectives into a successful onboarding process, from sales handoff to CSM transition. The ideal candidate applies their SaaS experience to driving implementation projects to completion, engaging key stakeholders to build a strong program foundation, and communicating technical and functional concepts in a clear, concise manner.

As a key member of our core team, you'll play a pivotal role in implementing and launching cutting-edge Conversational AI products. You'll work at the forefront of AI innovation, helping to build solutions that will transform how global enterprises adopt and scale AI, delivering real-time customer assistance, smarter automation, and seamless interactions. This is your chance to join an industry leader as we drive the future of Conversational AI and empower enterprises to reach new levels of efficiency and customer satisfaction.

What you'll be doing
- Lead a customer program team through a multi-phase implementation involving business discovery, technical setup, and user training
- Guide and enable customers on their contact center AI journey with our software
- Deliver tailored customer training sessions across a variety of formats (e.g., live, recorded, virtual, in-person, train-the-trainer, end-user)
- Manage client expectations, project timelines, documentation deliverables, and team resources
- Partner with CSMs to create value-based adoption goals that set the customer up for a successful launch
- Communicate project status, issues, and risks, escalating effectively as appropriate
- Manage multiple customer projects simultaneously
- Collaborate cross-functionally with Sales, Product, Marketing, and Customer Success to improve services delivery and customer experience

What you'll bring to the role
- 5+ years of work experience implementing SaaS software
- Comfort with user-centered communications and facilitating technical problem solving
- Basic understanding of speech analytics and quality management software and processes
- Experience in a customer-facing role with clear deliverables and stakeholder management
- Proficiency in coaching and facilitation skills
- Training experience (highly preferred)
- Demonstrable analytical, problem-solving, and time management skills
- Experience in the software and platform industry
- Experience managing complex customer implementations

Perks & Benefits
- Competitive compensation, including equity
- Excellent medical, dental, and vision insurance options
- Flexible time off, 10 company holidays, and Winter Break
- Up to 16 weeks of parental leave
- 401(k) plan
- Quarterly lifestyle spend
- Monthly mobile and internet stipend
- Pre-tax commuter benefits

Salary Range
The base salary compensation range targeted for this full-time position is $100,000 - $135,000 per annum. Compensation may vary outside of this range depending on a number of factors, including a candidate's qualifications, skills, competencies, and experience. Base pay is one part of the total package provided to compensate and recognize employees for their work, and this role may be eligible for additional discretionary bonuses/incentives and equity (in the form of options). This salary range is an estimate, and the actual salary may vary based on the Company's compensation practices.
Implementation Lead
Software Engineering
Project Manager
Product & Operations
2025-09-13 12:12
Detection and Response Engineer
Cerebras Systems
501-1000
India
Remote: No
About Cerebras
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About The Role
We are seeking an exceptional Detection and Response Engineer to serve on the front lines, where you will build systems to detect threats, investigate incidents, and lead coordinated response across teams. The right candidate brings hands-on experience creating reliable detections, automating repetitive tasks, and turning investigation findings into durable improvements to our security program, with an interest in exploring AI-driven automation.

Responsibilities
- Create and optimize detections, playbooks, and workflows to quickly identify and respond to potential incidents.
- Investigate security events and participate in incident response, including on-call responsibilities.
- Automate investigation and response workflows to reduce time to detect and remediate incidents.
- Build and maintain detection and response capabilities as code, applying modern software engineering rigor.
- Explore and apply emerging approaches, potentially leveraging AI, to strengthen our security posture.
- Document investigation and response procedures as clear runbooks for triage, escalation, and containment.

Skills and Qualifications
- 3-5 years of experience in detection engineering, incident response, or security engineering.
- Strong proficiency in Python and query languages such as SQL, with the ability to write clean, maintainable, and testable code.
- Practical knowledge of detection and response across cloud, identity, and endpoint environments.
- Familiarity with attacker behaviors and the ability to translate them into durable detection logic.
- Strong fundamentals in operating systems, networking, and log analysis.
- Excellent written communication skills, with the ability to create clear documentation.

Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open-source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- A simple, non-corporate work culture that respects individual beliefs.
Read our blog: Five Reasons to Join Cerebras in 2025. Apply today and become part of the forefront of groundbreaking advancements in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth, and support of those around them.
For more details, click here to review our CCPA disclosure notice.
DevOps Engineer
Data Science & Analytics
2025-09-13 12:12
Research Staff, LLMs
Deepgram
201-500
United States
Full-time
Remote
true
Company Overview
Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS) and full speech-to-speech (STS) offerings. 200,000+ developers build with Deepgram’s voice-native foundational models – accessed through APIs or as self-managed software – due to our unmatched accuracy, latency and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth across the past 4 years, over 50,000 years of audio processed and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.
The Opportunity
Voice is the most natural modality for human interaction with machines. However, current sequence modeling paradigms based on jointly scaling model and data cannot deliver voice AI capable of universal human interaction. The challenges are rooted in fundamental data problems posed by audio: real-world audio data is scarce and enormously diverse, spanning a vast space of voices, speaking styles, and acoustic conditions. Even if billions of hours of audio were accessible, its inherent high dimensionality creates computational and storage costs that make training and deployment prohibitively expensive at world scale. We believe that entirely new paradigms for audio AI are needed to overcome these challenges and make voice interaction accessible to everyone.
The Role
Deepgram is currently looking for an experienced researcher who has worked extensively with Large Language Models (LLMs) and has a deep understanding of transformer architecture to join our Research Staff. As a Member of the Research Staff, this individual should have extensive experience working on the hard technical aspects of LLMs, such as data curation, distributed large-scale training, optimization of transformer architecture, and Reinforcement Learning (RL) training.
The Challenge
We are seeking researchers who:
See "unsolved" problems as opportunities to pioneer entirely new approaches
Can identify the one critical experiment that will validate or kill an idea in days, not months
Have the vision to scale successful proofs-of-concept 100x
Are obsessed with using AI to automate and amplify their own impact
If you find yourself energized rather than daunted by these expectations, if you're already thinking about five ideas to try while reading this, you might be the researcher we need. This role demands obsession with the problems, creativity in approach, and relentless drive toward elegant, scalable solutions.
The technical challenges are immense, but the potential impact is transformative.
What You'll Do
Brainstorming and collaborating with other members of the Research Staff to define new LLM research initiatives
Broad surveying of literature, evaluating, classifying, and distilling current methods
Designing and carrying out experimental programs for LLMs
Driving transformer (LLM) training jobs successfully on distributed compute infrastructure and deploying new models into production
Documenting and presenting results and complex technical concepts clearly for a target audience
Staying up to date with the latest advances in deep learning and LLMs, with a particular eye towards their implications and applications within our products
You'll Love This Role if You
Are passionate about AI and excited about working on state-of-the-art LLM research
Have an interest in producing and applying new science to help us develop and deploy large language models
Enjoy building from the ground up and love to create new systems
Have strong communication skills and are able to translate complex concepts clearly
Are highly analytical and enjoy delving into detailed analyses when necessary
It's Important to Us That You Have
3+ years of experience in applied deep learning research, with a solid understanding of the applications and implications of different neural network types, architectures, and loss mechanisms
Proven experience working with large language models (LLMs), including data curation, distributed large-scale training, optimization of transformer architecture, and RL training
Strong experience coding in Python and working with PyTorch
Experience with various transformer architectures (auto-regressive, sequence-to-sequence, etc.)
Experience with distributed computing and large-scale data processing
Prior experience in conducting experimental programs and using results to optimize models
It Would Be Great if You Had
Deep understanding of transformers, causal LMs, and their underlying architecture
Understanding of distributed training and distributed inference schemes for LLMs
Familiarity with RLHF labeling and training pipelines
Up-to-date knowledge of recent LLM techniques and developments
Published papers in Deep Learning Research, particularly related to LLMs and deep neural networks
Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $85 million in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!
Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly.
We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate. We are happy to provide accommodations for applicants who need them.
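The posting's emphasis on causal LMs comes down to one mechanism: each token may attend only to itself and earlier positions. A minimal illustrative sketch in plain Python (no framework; all names here are our own, not part of any role requirement) of building a causal mask and applying it before the softmax over attention scores:

```python
import math

def causal_mask(n):
    # mask[i][j] is True where position i may attend to position j (j <= i)
    return [[j <= i for j in range(n)] for i in range(n)]

def masked_softmax(scores, mask):
    # disallowed positions contribute zero weight (equivalent to scoring -inf)
    out = []
    for row, mrow in zip(scores, mask):
        exps = [math.exp(s) if m else 0.0 for s, m in zip(row, mrow)]
        total = sum(exps)
        out.append([e / total for e in exps])
    return out

scores = [[0.0, 9.0, 9.0],
          [1.0, 1.0, 9.0],
          [1.0, 1.0, 1.0]]
weights = masked_softmax(scores, causal_mask(3))
print(weights[0])  # [1.0, 0.0, 0.0] -- the first token can only attend to itself
```

Note that the large scores at future positions are ignored entirely, which is what lets an auto-regressive model be trained on whole sequences at once without leaking future tokens.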
Research Scientist
Product & Operations
Machine Learning Engineer
Data Science & Analytics
2025-09-13 12:12
Detection and Response Engineer
Cerebras Systems
501-1000
Canada
Remote
false
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. About The Role We are seeking an exceptional Detection and Response Engineer to serve on the front lines, where you will build systems to detect threats, investigate incidents, and lead coordinated response across teams. The right candidate brings hands-on experience creating reliable detections, automating repetitive tasks, and turning investigation findings into durable improvements to our security program, with an interest in exploring AI-driven automation. Responsibilities Create and optimize detections, playbooks, and workflows to quickly identify and respond to potential incidents. Investigate security events and participate in incident response, including on-call responsibilities. Automate investigation and response workflows to reduce time to detect and remediate incidents. Build and maintain detection and response capabilities as code, applying modern software engineering rigor. Explore and apply emerging approaches, potentially leveraging AI, to strengthen our security posture. 
Document investigation and response procedures as clear runbooks for triage, escalation, and containment. Skills And Qualifications 3–5 years of experience in detection engineering, incident response, or security engineering. Strong proficiency in Python and query languages such as SQL, with the ability to write clean, maintainable, and testable code. Practical knowledge of detection and response across cloud, identity, and endpoint environments. Familiarity with attacker behaviors and the ability to translate them into durable detection logic. Strong fundamentals in operating systems, networking, and log analysis. Excellent written communication skills, with the ability to create clear documentation. Why Join Cerebras People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras: Build a breakthrough AI platform beyond the constraints of the GPU. Publish and open source their cutting-edge AI research. Work on one of the fastest AI supercomputers in the world. Enjoy job stability with startup vitality. Our simple, non-corporate work culture that respects individual beliefs. Read our blog: Five Reasons to Join Cerebras in 2025. Apply today and become part of the forefront of groundbreaking advancements in AI! Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them. This website or its third-party tools process personal data. 
For more details, click here to review our CCPA disclosure notice.
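The posting above calls for maintaining "detection and response capabilities as code." As a hedged illustration only (a toy rule with an invented event schema, not Cerebras' actual tooling), a detection expressed as a plain, testable function might look like:

```python
from collections import defaultdict

def detect_bruteforce(events, threshold=5, window=60):
    # Toy detection-as-code rule: flag any user who accumulates `threshold`
    # failed logins within a `window`-second sliding window. The event dicts
    # ({"user", "action", "ts"}) are an illustrative schema, not a real one.
    by_user = defaultdict(list)
    flagged = set()
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["action"] != "login_failed":
            continue
        times = by_user[e["user"]]
        times.append(e["ts"])
        # drop timestamps that have fallen out of the window
        while times and e["ts"] - times[0] > window:
            times.pop(0)
        if len(times) >= threshold:
            flagged.add(e["user"])
    return sorted(flagged)

events = [{"user": "alice", "action": "login_failed", "ts": t} for t in range(5)]
events.append({"user": "bob", "action": "login_failed", "ts": 0})
print(detect_bruteforce(events))  # ['alice']
```

Keeping rules in this form is what makes the "modern software engineering rigor" in the posting possible: detections get unit tests, code review, and version history like any other code.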
DevOps Engineer
Data Science & Analytics
2025-09-13 12:11
Performance Engineer
Cerebras Systems
501-1000
Canada
Full-time
Remote
false
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. About The Role Join Cerebras as a Performance Engineer within our innovative Runtime Team. Our groundbreaking CS-3 system, hosted by a distributed set of modern and powerful x86 machines, has set new benchmarks in high-performance ML training and inference solutions. It leverages a dinner-plate-sized chip with 44GB of on-chip memory to surpass traditional hardware capabilities. This role will challenge and expand your expertise in optimizing AI applications and managing computational workloads primarily on the x86 machines that run our Runtime driver. Responsibilities Focus on CPU and memory subsystem optimizations for our Runtime software driver, enabling faster key cloud and ML training/inference workloads across modern x86 machines that form the backbone of our AI accelerator. Develop and enhance algorithms for efficient data movement, local data processing, job submission, and synchronization between various software and hardware components. 
Optimize our workloads using advanced CPU features like AVX instructions, prefetch mechanisms, and cache optimization techniques. Perform performance profiling and characterization using tools such as AMD uProf, and reduce OS level overheads. Influence the design of Cerebras' next-generation AI architectures and software stack by analyzing the integration of advanced CPU features and their impact on system performance and computational efficiency. Engage directly with the AI and ML developer community to understand their needs and solve contemporary challenges with innovative solutions. Collaborate with multiple teams within Cerebras, including architecture, research, and product management, to elevate our computational platform and influence future designs. Skills & Qualifications BS, MS, or PhD in Computer Science, Computer Engineering, or a related field. 5+ years of relevant experience in performance engineering, particularly in optimizing algorithms and software design. Strong proficiency in C/C++ and familiarity with Python or other scripting languages. Demonstrated experience with memory subsystem optimizations and system-level performance tuning. Experience with distributed systems is highly desirable, as it is crucial to optimizing the performance of our Runtime software across multiple x86 hosts. Familiarity with compiler technologies (e.g., LLVM, MLIR) and with PyTorch and other ML frameworks. Why Join Cerebras People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras: Build a breakthrough AI platform beyond the constraints of the GPU. Publish and open source their cutting-edge AI research. Work on one of the fastest AI supercomputers in the world. 
Enjoy job stability with startup vitality. Our simple, non-corporate work culture that respects individual beliefs. Read our blog: Five Reasons to Join Cerebras in 2025. Apply today and become part of the forefront of groundbreaking advancements in AI! Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them. This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
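The "cache optimization techniques" mentioned in the responsibilities above commonly take the form of loop tiling (blocking), which restructures a computation so a small working set stays resident in cache. A toy sketch of the access pattern in plain Python (real Runtime work would be in C/C++ with vector intrinsics; the block size here is arbitrary and the correctness test, not the speedup, is what a scripting language can show):

```python
def matmul_blocked(A, B, bs=2):
    # Tiled matrix multiply: iterate over bs x bs blocks so each tile of
    # A, B, and C is reused while it is still "hot" (cache-resident on
    # real hardware). Same arithmetic as the naive triple loop.
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, m, bs):
            for jj in range(0, p, bs):
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, m)):
                        a = A[i][k]
                        for j in range(jj, min(jj + bs, p)):
                            C[i][j] += a * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_blocked(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```

The payoff on real CPUs comes from choosing `bs` so that three tiles fit in L1 or L2, which is exactly the kind of tuning the profiling tools named in the posting are used to validate.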
Software Engineer
Software Engineering
Machine Learning Engineer
Data Science & Analytics
DevOps Engineer
Data Science & Analytics
2025-09-13 12:11
AI Engineer - FDE (Forward Deployed Engineer)
Databricks
5000+
France
Full-time
Remote
true
AI Engineer - FDE (Forward Deployed Engineer) (ALL LEVELS) Req ID: CSQ326R220 Recruiter: Dina Hussain Mission The AI Forward Deployed Engineering (AI FDE) team is a highly specialised customer-facing AI team at Databricks. We deliver professional services engagements to help our customers build and productionize first-of-its-kind AI applications. We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams. We view our team as an ensemble: we look for individuals with strong, unique specialisations to improve the overall strength of the team. This team is the right fit for you if you love working with customers and teammates, and fueling your curiosity for the latest trends in GenAI, LLMOps, and ML more broadly. We welcome remote applicants located near our offices. The preferred locations (in priority order) are London (UK), Madrid (Spain), Paris (France), and Amsterdam (NL). 
Reporting to: Senior Manager - AI FDE, EMEA The impact you will have: Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI research to solve customer problems Own production rollouts of consumer and internally facing GenAI applications Serve as a trusted technical advisor to customers across a variety of domains Present at conferences such as Data + AI Summit, recognized as a thought leader internally and externally Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap What we look for: Experience building GenAI applications, including RAG, multi-agent systems, Text2SQL, fine-tuning, etc., with tools such as HuggingFace, LangChain, and DSPy Expertise in deploying production-grade GenAI applications, including evaluation and optimizations Extensive hands-on industry data science experience, leveraging common machine learning and data science tools, e.g. pandas, scikit-learn, PyTorch, etc. Experience building production-grade machine learning deployments on AWS, Azure, or GCP Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) or equivalent practical experience Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike Passion for collaboration, life-long learning, and driving business value through AI [Preferred] Experience using the Databricks Intelligence Platform and Apache Spark™ to process large-scale distributed datasets We require fluency in English and welcome candidates who also speak French, Spanish, Dutch, or German. About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. 
Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.
Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
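The posting above lists RAG experience as a core qualification. At its heart, RAG is a retrieval step that selects context before the LLM is prompted. A deliberately minimal sketch in plain Python, with a toy bag-of-words "embedding" standing in for the learned dense embeddings a real system (and tools like LangChain) would use; none of this is Databricks' implementation:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words term counts. Real RAG pipelines use
    # learned dense vectors; this only stands in to show the retrieval step.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query and return the top k,
    # which would then be inserted into the LLM prompt as grounding context.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = ["spark handles distributed datasets",
        "the cafeteria menu changes weekly"]
print(retrieve("how does spark process distributed data", docs))
# ['spark handles distributed datasets']
```

The "evaluation and optimizations" the posting also asks about largely happen around this step: measuring whether the retrieved passages actually contain the answer before blaming the model.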
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering
DevOps Engineer
Data Science & Analytics
2025-09-13 12:11
AI Engineer - FDE (Forward Deployed Engineer)
Databricks
5000+
United States
Full-time
Remote
true
CSQ426R189 The AI Forward Deployed Engineering (AI FDE) team is a highly specialized customer-facing AI team at Databricks. We deliver professional services engagements to help our customers build and productionize first-of-its-kind AI applications. We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams. We view our team as an ensemble: we look for individuals with strong, unique specializations to improve the overall strength of the team. This team is the right fit for you if you love working with customers and teammates, and fueling your curiosity for the latest trends in GenAI, LLMOps, and ML more broadly. This role can be remote. The impact you will have: Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI Research to solve customer problems Own production rollouts of consumer and internally facing GenAI applications Serve as a trusted technical advisor to customers across a variety of domains Present at conferences such as Data + AI Summit, recognized as a thought leader internally and externally Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap What we look for: Experience building GenAI applications, including RAG, multi-agent systems, Text2SQL, fine-tuning, etc., with tools such as HuggingFace, LangChain, and DSPy Expertise in deploying production-grade GenAI applications, including evaluation and optimizations Extensive hands-on industry data science experience, leveraging common machine learning and data science tools (e.g., pandas, scikit-learn, PyTorch, etc.) Experience building production-grade machine learning deployments on AWS, Azure, or GCP Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) 
or equivalent practical experience Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike Passion for collaboration, life-long learning, and driving business value through AI [Preferred] Experience using the Databricks Intelligence Platform and Apache Spark™ to process large-scale distributed datasets Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.
Zone 1 Pay Range: $161,280–$225,792 USD
Zone 2 Pay Range: $161,280–$225,792 USD
Zone 3 Pay Range: $161,280–$225,792 USD
Zone 4 Pay Range: $161,280–$225,792 USD
About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.
Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Machine Learning Engineer
Data Science & Analytics
2025-09-13 12:11
Solutions Architect (Applied Engineering) - APAC
Deepgram
201-500
Anywhere
Full-time
Remote
true
Company Overview
Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS) and full speech-to-speech (STS) offerings. 200,000+ developers build with Deepgram’s voice-native foundational models – accessed through APIs or as self-managed software – due to our unmatched accuracy, latency and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth across the past 4 years, over 50,000 years of audio processed and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.
Note: this role is based out of the APAC territory.
The Opportunity
Deepgram is expanding across APAC and operating a true follow-the-sun model. We are hiring a Solutions Architect in APAC to own complex post-sales engagements, guide customers to production, and strengthen technical support coverage for the region. You will work across the full customer lifecycle with a primary focus on architecture, implementation, and Tier-3 technical problem solving. You will also contribute to pre-sales validation when deep technical credibility is needed, partner with Product and Engineering on feedback and roadmap, and help mature our global operations.
About Applied Engineering at Deepgram
Applied Engineering combines what many companies split across Solutions Engineering, Solutions Architecture, Consulting, and Senior Technical Support. We are the technical interface from first discovery through successful production and beyond. 
The team partners closely with Sales, Customer Success, Product, and DevRel to deliver an enterprise-grade experience that is fast, clear, and outcome oriented.
What You Will Do
Lead architecture and implementation for enterprise deployments that use Deepgram speech-to-text (STT), text-to-speech (TTS), and voice agent (VA) capabilities.
Own post-sales technical success for assigned accounts. Unblock integrations, optimize accuracy and latency, and guide customers to stable production.
Serve as the Tier-3 escalation point for complex issues. Drive root cause analysis, propose mitigations, and write durable fixes or automation when possible.
Design reference architectures and implementation patterns for common APAC customer use cases including contact centers, voice analytics, QA, and AI agents.
Build high-quality samples, internal tools, or scripts that make repeatable tasks easier for customers and for our team.
Contribute to pre-sales discovery and POCs when deep technical direction is needed. Translate business goals to a concrete plan with clear success criteria.
Partner with Product and Engineering to prioritize fixes and features. Bring crisp customer signals that improve our roadmap.
Contribute to knowledge systems. Capture runbooks, playbooks, and docs that scale your expertise across regions and languages.
Participate in an on-call or incident rotation appropriate for the region and customer support tier. Help us meet or outperform regional SLAs.
Regional Expectations
Based in an APAC time zone with reliable overlap to customer business hours across East Asia, Southeast Asia, and ANZ.
Excellent written and spoken English. 
Additional Asian languages are a strong plus, especially Japanese, Korean, Mandarin, or Hindi.
Familiarity with data privacy and enterprise security reviews common across APAC, for example Singapore PDPA, Japan APPI, Korea PIPA, Australia’s Privacy Act, and India’s DPDP Act.
Ability to travel within the region for critical customer milestones, estimated 10 to 20 percent.
Your First 90 Days
First 30 days
Ramp on Deepgram’s platform, APIs, SDKs, self-hosted deployments, and support practices. Shadow calls across pre-sales, post-sales, and support. Begin owning low-risk technical tasks for an active account and contribute improvements to runbooks or sample code relevant to APAC environments.
By day 60
Own 2-3 customer workstreams end to end. Lead discovery for one new implementation or migration. Publish one reusable asset such as a deployment guide, Helm example, or troubleshooting playbook with APAC nuances like regional cloud endpoints and data residency.
By day 90
Act as primary technical owner for several APAC accounts in production. Close at least one complex investigation with measurable customer impact. Present a short readout on a repeated regional pattern and the automation or content you created to solve it.
You Will Thrive Here If You
Enjoy solving hard customer problems at the code, container, and cloud layers.
Like translating real-world constraints into clear architectures and plans.
Are comfortable writing and reading production-grade code, not just sample snippets.
Value crisp documentation, reproducible runbooks, and operational excellence.
Care about delivering outcomes, not just closing tickets.
Minimum Qualifications
5 or more years in Solutions Architecture, Solutions Engineering, or similar customer-facing technical roles.
Professional software engineering experience in at least one language such as JavaScript/TypeScript or Python. You can build POCs and tools that real users run.
Hands-on experience with modern cloud platforms. 
Comfortable with containerization and orchestration, for example Docker and Kubernetes, and with secure network design for API-driven systems.
Proven ownership of complex post-sales work such as production deployments, migrations, performance tuning, and incident response.
Clear written and verbal communication across both technical and executive audiences.
Preferred Qualifications
Experience with speech recognition, TTS, or building voice agents that orchestrate LLMs.
Familiarity with WER analysis, model selection and tuning, and accuracy-versus-latency tradeoffs.
Experience operating self-hosted systems, including Helm-based deployments and observability for production services.
Strong understanding of authentication, authorization, data residency, and compliance topics that appear in APAC enterprise reviews.
Fluency in one additional Asian language such as Japanese, Korean, Mandarin, or Hindi.
How We Work
Remote across APAC with periodic regional meetups.
Collaboration with colleagues in EMEA and the Americas to support continuous coverage.
Shared playbooks, defined escalation paths, and measurable SLAs.
Interview Process
Recruiter screen focused on experience, motivations, and logistics.
Hiring manager conversation focused on technical depth and customer impact.
Technical assessment that mirrors real work. Expect practical coding, architecture discussion, and debugging.
Panel interviews with cross-functional partners.
Short presentation on a customer-facing technical win you led. Explain the problem, your approach, and measurable outcomes.
If this sounds like you, we would love to meet you!
Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $85 million in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!
Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. 
We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.We are happy to provide accommodations for applicants who need them.
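The preferred qualifications above mention WER analysis. For reference, word error rate is the minimum number of word substitutions, deletions, and insertions needed to turn the reference transcript into the hypothesis, divided by the reference length. A minimal sketch of the metric itself (not Deepgram's tooling), computed via word-level edit distance:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length,
    computed with a word-level Levenshtein distance."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # match or substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion out of six words
```

Production WER analysis would add text normalization (casing, punctuation, numerals) before scoring, which this sketch omits.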
Solutions Architect
Software Engineering
2025-09-13 12:03
AI Engineer - FDE (Forward Deployed Engineer)
Databricks
5000+
United Kingdom
Spain
France
Full-time
Remote
true
AI Engineer - FDE (Forward Deployed Engineer) (ALL LEVELS) Req ID: CSQ326R220 Recruiter: Dina Hussain Mission The AI Forward Deployed Engineering (AI FDE) team is a highly specialised customer-facing AI team at Databricks. We deliver professional services engagements to help our customers build and productionize first-of-its-kind AI applications. We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams. We view our team as an ensemble: we look for individuals with strong, unique specialisations to improve the overall strength of the team. This team is the right fit for you if you love working with customers, teammates, and fueling your curiosity for the latest trends in GenAI, LLMOps, and ML more broadly. We welcome remote applicants located near our offices. The preferred locations (in priority order) are London (UK), Madrid (Spain), Paris (France), and Amsterdam (NL). 
Reporting to: Senior Manager - AI FDE, EMEA

The impact you will have:
Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI research to solve customer problems. Own production rollouts of consumer- and internally-facing GenAI applications. Serve as a trusted technical advisor to customers across a variety of domains. Present at conferences such as Data + AI Summit, recognized as a thought leader internally and externally. Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap.

What we look for:
Experience building GenAI applications, including RAG, multi-agent systems, Text2SQL, fine-tuning, etc., with tools such as HuggingFace, LangChain, and DSPy. Expertise in deploying production-grade GenAI applications, including evaluation and optimization. Extensive hands-on industry data science experience, leveraging common machine learning and data science tools, e.g. pandas, scikit-learn, and PyTorch. Experience building production-grade machine learning deployments on AWS, Azure, or GCP. Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) or equivalent practical experience. Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike. Passion for collaboration, life-long learning, and driving business value through AI. [Preferred] Experience using the Databricks Intelligence Platform and Apache Spark™ to process large-scale distributed datasets. We require fluency in English and welcome candidates who also speak French, Spanish, Dutch, or German.

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.
Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
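The role above centers on building RAG applications. Stripped of frameworks like LangChain, the retrieval step is nearest-neighbor search over embeddings; a toy sketch using bag-of-words vectors and cosine similarity (a real system would use a learned embedding model and a vector store — the documents and query here are purely illustrative):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a learned embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query - the 'R' in RAG.
    The retrieved text would then be added to the LLM prompt as context."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Spark runs distributed dataframe jobs",
    "Delta Lake adds ACID transactions to data lakes",
    "MLflow tracks machine learning experiments",
]
print(retrieve("how do I track my machine learning runs", docs, k=1))
```

Evaluating such a pipeline, as the posting asks, typically means scoring both retrieval quality (did the right document rank first?) and the generation step separately.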
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.
Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Machine Learning Engineer
Data Science & Analytics
Software Engineer
Software Engineering
2025-09-13 12:03
Product Manager - AI Cluster Management Software
Cerebras Systems
501-1000
United States
Remote
false
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About The Role
As a Product Manager on the Cerebras AI Cluster Management software team, you will lead the roadmap and execution of an AI-centric cluster management solution with special emphasis on security and observability. The Cerebras AI cluster management software is a strategic software product initiative intended to deliver Cerebras's high-performance AI benefits to on-premises customers and sovereign/neo clouds. Cerebras cluster management is intended to simplify deployment and maintenance for platform operators, making it easier to manage complex AI infrastructure at scale. In your role, you will define the strategy, roadmap, direction and requirements for Cerebras's AI cluster software security, governance, compliance, observability and troubleshooting flows. The management software will address the demanding needs of the most sophisticated datacenter operators and seamlessly integrate with the customer's operational, security, compliance and governance flows.
This role offers the opportunity to shape a critical product that combines hardware, software, and systems to power AI deployments efficiently and reliably in diverse environments. If you are passionate about solving complex infrastructure challenges and building solutions that scale, this is the role for you. Responsibilities Define and deliver a world-class cluster management experience with a focus on observability, management, monitoring and security. Collaborate with engineering to design reliable, scalable solutions and APIs tailored to cluster operator workflows. Develop a deep understanding of cluster operator needs through user and market research. Communicate product updates and roadmap progress clearly to internal and external stakeholders. Skills & Qualifications 5+ years of product management experience, preferably in infrastructure observability and security domains. Expert knowledge of security tools such as IAM, IDP, SIEM, Key management systems. Expert knowledge of observability solutions such as Prometheus, Grafana, log management systems, observability management systems. Familiarity with cluster orchestration tools and concepts (e.g., Kubernetes). Strong ability to think at the API and platform layers, designing solutions for operator workflows. Excellent communication and collaboration skills, with the ability to work effectively across diverse teams. Technical background (e.g., computer science, engineering) or the ability to engage deeply with engineering teams. Proven ability to excel in a fast-paced, dynamic environment. Preferred Skills & Qualifications Experience in enterprise software, cloud infrastructure, or AI/ML platforms. Understanding of security and authentication principles in software systems. Familiarity with monitoring, telemetry, and fault tolerance in distributed systems. Why Join Cerebras People who are serious about software make their own hardware. 
At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras: Build a breakthrough AI platform beyond the constraints of the GPU. Publish and open source their cutting-edge AI research. Work on one of the fastest AI supercomputers in the world. Enjoy job stability with startup vitality. Our simple, non-corporate work culture that respects individual beliefs. Read our blog: Five Reasons to Join Cerebras in 2025. Apply today and become part of the forefront of groundbreaking advancements in AI! Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them. This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
Product Manager
Product & Operations
2025-09-13 12:03
Member of Technical Staff - GPU Infrastructure
Prime Intellect
11-50
United States
Full-time
Remote
false
Building the Future of Decentralized AI Development
At Prime Intellect, we're enabling the next generation of AI breakthroughs by helping our customers deploy and optimize massive GPU clusters. As our Solutions Architect for GPU Infrastructure, you'll be the technical expert who transforms customer requirements into production-ready systems capable of training the world's most advanced AI models.
We recently raised $15mm in funding (total of $20mm raised) led by Founders Fund, with participation from Menlo Ventures and prominent angels including Andrej Karpathy (Eureka AI, Tesla, OpenAI), Tri Dao (Chief Scientific Officer of Together AI), Dylan Patel (SemiAnalysis), Clem Delangue (Huggingface), Emad Mostaque (Stability AI) and many others.

Core Technical Responsibilities
This customer-facing role combines deep technical expertise with hands-on implementation. You'll be instrumental in:

Customer Architecture & Design
Partner with clients to understand workload requirements and design optimal GPU cluster architectures. Create technical proposals and capacity planning for clusters ranging from 100 to 10,000+ GPUs. Develop deployment strategies for LLM training, inference, and HPC workloads. Present architectural recommendations to technical and executive stakeholders.

Infrastructure Deployment & Optimization
Deploy and configure orchestration systems including SLURM and Kubernetes for distributed workloads. Implement high-performance networking with InfiniBand, RoCE, and NVLink interconnects. Optimize GPU utilization, memory management, and inter-node communication. Configure parallel filesystems (Lustre, BeeGFS, GPFS) for optimal I/O performance. Tune system performance from kernel parameters to CUDA configurations.

Production Operations & Support
Serve as primary technical escalation point for customer infrastructure issues. Diagnose and resolve complex problems across the full stack: hardware, drivers, networking, and software. Implement monitoring, alerting, and automated remediation systems. Provide 24/7 on-call support for critical customer deployments. Create runbooks and documentation for customer operations teams.

Technical Requirements
Required Experience
3+ years hands-on experience with GPU clusters and HPC environments. Deep expertise with SLURM and Kubernetes in production GPU settings. Proven experience with InfiniBand configuration and troubleshooting. Strong understanding of NVIDIA GPU architecture, CUDA ecosystem, and driver stack. Experience with infrastructure automation tools (Ansible, Terraform). Proficiency in Python, Bash, and systems programming. Track record of customer-facing technical leadership.

Infrastructure Skills
NVIDIA driver installation and troubleshooting (CUDA, Fabric Manager, DCGM). Container runtime configuration for GPUs (Docker, Containerd, Enroot). Linux kernel tuning and performance optimization. Network topology design for AI workloads. Power and cooling requirements for high-density GPU deployments.

Nice to Have
Experience with 1000+ GPU deployments. NVIDIA DGX, HGX, or SuperPOD certification. Distributed training frameworks (PyTorch FSDP, DeepSpeed, Megatron-LM). ML framework optimization and profiling. Experience with AMD MI300 or Intel Gaudi accelerators. Contributions to open-source HPC/AI infrastructure projects.

Growth Opportunity
You'll work directly with customers pushing the boundaries of AI, from startups training foundation models to enterprises deploying massive inference infrastructure. You'll collaborate with our world-class engineering team while having direct impact on systems powering the next generation of AI breakthroughs.
We value expertise and customer obsession. If you're passionate about building reliable, high-performance GPU infrastructure and have a track record of successful large-scale deployments, we want to talk to you.
Apply now and join us in our mission to democratize access to planetary-scale computing.
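Much of the diagnostic work described above starts from tool output such as `nvidia-smi` in CSV mode. As a sketch of the kind of health-check helper this role might write, the snippet below parses per-GPU readings and flags devices that are overheating or reporting uncorrected ECC errors. The exact query fields and the 85 °C threshold are illustrative assumptions, not Prime Intellect tooling; check `nvidia-smi --help-query-gpu` for the real field names on your driver version:

```python
from dataclasses import dataclass

@dataclass
class GpuStatus:
    index: int
    temp_c: int
    ecc_errors: int
    util_pct: int

def parse_gpu_csv(output: str) -> list[GpuStatus]:
    """Parse lines like '0, 63, 0, 97' as produced (approximately) by
    nvidia-smi --query-gpu=index,temperature.gpu,... --format=csv,noheader,nounits."""
    gpus = []
    for line in output.strip().splitlines():
        idx, temp, ecc, util = (int(f.strip()) for f in line.split(","))
        gpus.append(GpuStatus(idx, temp, ecc, util))
    return gpus

def unhealthy(gpus: list[GpuStatus], max_temp: int = 85) -> list[int]:
    # Flag GPUs that are overheating or reporting uncorrected ECC errors;
    # a real remediation pipeline would drain these nodes from the scheduler.
    return [g.index for g in gpus if g.temp_c > max_temp or g.ecc_errors > 0]

sample = "0, 63, 0, 97\n1, 91, 0, 99\n2, 70, 3, 12\n"
print(unhealthy(parse_gpu_csv(sample)))  # GPU 1 is hot, GPU 2 has ECC errors
```

In production this check would typically feed DCGM metrics into the monitoring stack rather than scrape CLI output, but the triage logic is the same.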
DevOps Engineer
Data Science & Analytics
Solutions Architect
Software Engineering
2025-09-13 12:03
Platform Engineer – AI/ML Infrastructure
Deepgram
201-500
Full-time
Remote
true
Company Overview
Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS) and full speech-to-speech (STS) offerings. 200,000+ developers build with Deepgram’s voice-native foundational models – accessed through APIs or as self-managed software – due to our unmatched accuracy, latency and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth across the past 4 years, over 50,000 years of audio processed and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.

Opportunity
We're looking for an expert (Senior/Staff-level) Platform Engineer to build and operate the hybrid infrastructure foundation for our advanced AI/ML research and product development. You'll architect, build, and run the platform spanning AWS and our bare metal data centers, empowering our teams to train and deploy complex models at scale.
This role is focused on creating a robust, self-service environment using Kubernetes, AWS, and Infrastructure-as-Code (Terraform), and orchestrating high-demand GPU workloads using schedulers like Slurm.

What You’ll Do
Architect and maintain our core computing platform using Kubernetes on AWS and on-premise, providing a stable, scalable environment for all applications and services. Develop and manage our entire infrastructure using Infrastructure-as-Code (IaC) principles with Terraform, ensuring our environments are reproducible, versioned, and automated. Design, build, and optimize our AI/ML job scheduling and orchestration systems, integrating Slurm with our Kubernetes clusters to efficiently manage GPU resources. Provision, manage, and maintain our on-premise bare metal server infrastructure for high-performance GPU computing. Implement and manage the platform's networking (CNI, service mesh) and storage (CSI, S3) solutions to support high-throughput, low-latency workloads across hybrid environments. Develop a comprehensive observability stack (monitoring, logging, tracing) to ensure platform health, and create automation for operational tasks, incident response, and performance tuning. Collaborate with AI researchers and ML engineers to understand their infrastructure needs and build the tools and workflows that accelerate their development cycle. Automate the life cycle of single-tenant, managed deployments.

You’ll Love This Role If You
Are passionate about building platforms that empower developers and researchers. Enjoy creating elegant, automated solutions for complex infrastructure challenges in both cloud and data center environments. Thrive on optimizing hybrid infrastructure for performance, cost, and reliability. Are excited to work at the intersection of modern platform engineering and cutting-edge AI. Love to treat infrastructure as a product, continuously improving the developer experience.

It’s Important To Us That You Have
5+ years of experience in Platform Engineering, DevOps, or Site Reliability Engineering (SRE). Proven, hands-on experience building and managing production infrastructure with Terraform. Expert-level knowledge of Kubernetes architecture and operations in a large-scale environment. Experience with high-performance compute (HPC) job schedulers, specifically Slurm, for managing GPU-intensive AI workloads. Experience managing bare metal infrastructure, including server provisioning (e.g., PXE boot, MAAS), configuration, and lifecycle management. Strong scripting and automation skills (e.g., Python, Go, Bash).

It Would Be Great if You Had
Experience with CI/CD systems (e.g., GitLab CI, Jenkins, ArgoCD) and building developer tooling. Familiarity with FinOps principles and cloud cost optimization strategies. Knowledge of Kubernetes networking (e.g., Calico, Cilium) and storage (e.g., Ceph, Rook) solutions. Experience in a multi-region or hybrid cloud environment.

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $85 million in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!
Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate. We are happy to provide accommodations for applicants who need them.
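The Slurm-plus-Kubernetes work described above ultimately comes down to placing GPU jobs onto nodes with enough free devices. A toy first-fit placement loop shows the core bookkeeping a scheduler performs; real Slurm or Kubernetes scheduling also weighs topology, priority, and preemption, and the node/job names here are made up:

```python
def place_jobs(nodes: dict[str, int], jobs: list[tuple[str, int]]) -> dict[str, str]:
    """First-fit: assign each job (name, gpus_needed) to the first node with
    enough free GPUs. nodes maps node name -> free GPU count."""
    free = dict(nodes)          # don't mutate the caller's view of the cluster
    placement = {}
    for name, need in jobs:
        for node, avail in free.items():
            if avail >= need:
                free[node] = avail - need   # reserve the GPUs on this node
                placement[name] = node
                break
        else:
            placement[name] = "pending"     # no capacity anywhere: queue the job
    return placement

cluster = {"node-a": 8, "node-b": 4}
jobs = [("train-llm", 8), ("eval", 2), ("finetune", 4)]
print(place_jobs(cluster, jobs))
```

First-fit is deliberately naive: it can strand capacity (here `finetune` waits even though 2 GPUs remain across the cluster), which is exactly the fragmentation problem that bin-packing and gang-scheduling strategies in real schedulers exist to reduce.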
DevOps Engineer
Data Science & Analytics
Software Engineer
Software Engineering
2025-09-13 12:02
Big Data Architect
Databricks
5000+
Germany
Remote
false
CSQ426R218 We have 5 open positions based in our Germany offices. As a Big Data Solutions Architect (Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks Data Intelligence Platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data. RSAs are billable and know how to complete projects according to specification with excellent customer service. You will report to the regional Manager/Lead.

The impact you will have:
You will work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to's, and productionalizing customer use cases. Work with engagement managers to scope a variety of professional services work with input from the customer. Guide strategic customers as they implement transformational big data projects, including 3rd party migrations and end-to-end design, build and deployment of industry-leading big data and AI applications. Consult on architecture and design; bootstrap or implement customer projects, leading to the customer's successful understanding, evaluation and adoption of Databricks. Provide an escalated level of support for customer operational issues. You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer's needs. Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.
What we look for: Proficient in data engineering, data platforms, and analytics with a strong track record of successful projects and in-depth knowledge of industry best practices Comfortable writing code in either Python or Scala Enterprise Data Warehousing experience (Teradata / Synapse/ Snowflake or SAP) Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals Familiarity with CI/CD for production deployments Working knowledge of MLOps Design and deployment of performant end-to-end data architectures Experience with technical project delivery - managing scope and timelines. Documentation and white-boarding skills. Experience working with clients and managing conflicts. Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects. Travel is required up to 10%, more at peak times. Databricks Certification About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.
Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Data Engineer
Data Science & Analytics
Solutions Architect
Software Engineering
2025-09-13 12:02
QA Tech Lead, Web Consoles - Inference Service
Cerebras Systems
501-1000
Canada
Remote
false
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About The Role
We are seeking an experienced QA Tech Lead – Web Consoles to join the Cerebras AI Inference Service organization. The individual will technically lead a team, bring new methodologies and proven practices into the role, be capable of hands-on work whenever needed, and have excellent teamwork and problem-solving abilities. The ideal candidate is a leader who can drive growth, quality, and ownership, and who thrives in a rapidly changing environment.

Responsibilities
Lead functional testing of all customer‑facing components, coordinating closely with development and automation teams. Participate in architectural and test‑strategy discussions to identify and close quality gaps. Perform UI/UX and web‑application testing across browsers, devices, and environments. Develop and maintain automation frameworks (Python/pytest) and front‑end/back‑end test suites using Playwright, Cypress, etc. Create comprehensive test strategies aligned with product goals, technical architecture, and customer experience.
Establish standards for test planning, execution, and documentation; mentor QA engineers and champion best practices. Build tooling and CI/CD integrations (GitHub Actions, Jenkins, CircleCI) to enable faster feedback cycles and scalable test coverage. Lead cross‑functional collaboration with engineers, product managers, UX, and design to embed quality from ideation through release. Define and drive key quality metrics; triage defect patterns and deliver actionable insights through dashboards and reports. Proactively identify future testing needs across cloud, edge, and device interfaces and shape the long‑term QA roadmap. Collaborate with SWEs and QAEs across US, Canada, and India to establish clear and testable requirements and acceptance criteria. Skills And Qualifications Proven track record of leading from the front, defining quality metrics, driving dashboards/reports, and influencing product quality decisions. Passion for user experience, accessibility, and long‑term product health. Excellent communication skills emphasizing collaboration, documentation, and continuous improvement. 5+ years of experience building QA systems for cloud-based SaaS and/or API services, preferably of AI systems. Ability to build and maintain test tooling and infrastructure that supports continuous integration and scalable coverage. Experience mentoring junior QA engineers and establishing best‑practice test processes across teams. Proficiency with Python automation frameworks (pytest) and modern web‑test tools (Playwright, Cypress). Hands‑on experience with cloud‑based infrastructure and CI/CD pipelines (GitHub Actions, Jenkins, CircleCI). Location This role follows a hybrid schedule, requiring in-office presence 3 days per week. Please note, fully remote is not an option. Office locations: Toronto Why Join Cerebras People who are serious about software make their own hardware. 
At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras: Build a breakthrough AI platform beyond the constraints of the GPU. Publish and open source their cutting-edge AI research. Work on one of the fastest AI supercomputers in the world. Enjoy job stability with startup vitality. Our simple, non-corporate work culture that respects individual beliefs. Read our blog: Five Reasons to Join Cerebras in 2025. Apply today and become part of the forefront of groundbreaking advancements in AI! Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them. This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
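The QA role above asks for defining and driving quality metrics and triaging defect patterns. One common derived metric is per-test pass rate across CI runs, used to spot flaky tests; a minimal sketch, where the flakiness band (between 5% and 95% pass rate) and the test names are illustrative assumptions:

```python
from collections import defaultdict

def pass_rates(runs: list[dict[str, bool]]) -> dict[str, float]:
    """runs: one dict per CI run mapping test name -> passed?
    Returns each test's pass rate across the runs it appeared in."""
    passed = defaultdict(int)
    seen = defaultdict(int)
    for run in runs:
        for test, ok in run.items():
            seen[test] += 1
            passed[test] += ok
    return {t: passed[t] / seen[t] for t in seen}

def flaky(rates: dict[str, float], lo: float = 0.05, hi: float = 0.95) -> list[str]:
    # Flaky = neither consistently failing nor consistently passing.
    return sorted(t for t, r in rates.items() if lo < r < hi)

runs = [
    {"login": True,  "checkout": True,  "search": True},
    {"login": True,  "checkout": False, "search": True},
    {"login": True,  "checkout": True,  "search": False},
    {"login": True,  "checkout": False, "search": True},
]
print(flaky(pass_rates(runs)))  # checkout (50%) and search (75%) are unstable
```

Feeding numbers like these into a dashboard is what turns raw CI history into the "actionable insights" the posting describes: flaky tests get quarantined, consistent failures get triaged as real defects.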
Software Engineer
Software Engineering
2025-09-13 12:02
Backline Manager (Apache Spark™)
Databricks
5000+
Netherlands
Remote
false
P-1455 Job Description At Databricks, we are passionate about enabling data teams to solve the world's toughest problems — from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI infrastructure platform so our customers can use deep data insights to improve their business. Founded by engineers — and customer-obsessed — we leap at every opportunity to tackle technical challenges, from designing next-gen UI/UX for interfacing with data to scaling our services and infrastructure across millions of virtual machines. And we're only getting started. About the Team The Backline Engineering Team serves as the critical bridge between Engineering and Frontline Support. We handle complex technical issues and escalations across the Apache Spark™ ecosystem and the Databricks Platform stack. With a strong focus on customer success, we are committed to delivering exceptional customer satisfaction by providing deep technical expertise, proactive issue resolution, and continuous improvements to the platform. We emphasize automation and tooling to enhance troubleshooting efficiency, reduce manual efforts, and improve the overall supportability of the platform. By developing smart solutions and streamlining workflows, we drive operational excellence and ensure a seamless experience for both customers and internal teams. The impact you will have Hire and develop top talent to build an outstanding team. Mentor engineers, provide clear feedback, and develop future leaders in the team. Establish and maintain high standards in troubleshooting, automation, and tooling to improve efficiency. Work closely with Engineering to enhance observability, debugging tools, and automation, reducing escalations. Collaborate with Frontline Support, Engineering, and Product teams to improve customer escalations and support processes. 
Define a long-term roadmap for Backline, focusing on automation, tool development, bug fixing and proactive issue resolution. Take ownership of high-impact customer escalations by leading critical incident response during Databricks runtime outages and major incidents. Participate in weekday and weekend on-call rotations, ensuring fast and effective resolution of urgent issues. Balance real-time escalations with day-to-day planning, multitasking efficiently to drive operational excellence and provide top-tier support for mission-critical customer environments.

What We Look For:
10-12 years of experience in the Big Data/Data warehousing ecosystem with expertise in Apache Spark™, with at least 4+ years in a managerial role. Proven ability to manage and mentor a team of Backline Engineers, guiding career development. Strong technical expertise in Apache Spark™, Databricks Runtime, Delta Lake, Hadoop, and cloud platforms (AWS, Azure, GCP) to troubleshoot complex customer issues. Ability to oversee and drive customer escalations, ensuring seamless coordination between Frontline Support and Backline Engineering. Experience in designing and developing best practices, runbooks/playbooks, and enablement programs to improve troubleshooting efficiency. Strong automation mindset, identifying tooling and process gaps and leading efforts to build scripts and automated tools to enhance support operations. Skilled in collaborating with Engineering and Product Management teams, contributing to support readiness programs and shaping product supportability improvements. Experience in building monitoring and alerting mechanisms, proactively identifying long-running cases and driving early intervention. Ability to handle critical technical escalations, providing deep expertise in architecture, best practices, product functionality, performance tuning, and cloud operations.
Strong interviewing and hiring capabilities, identifying and recruiting top Backline talent with expertise in big data and cloud ecosystems. About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.
Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
DevOps Engineer
Data Science & Analytics
Data Engineer
Data Science & Analytics
2025-09-13 12:02