Researcher, Frontier Cybersecurity Risks
As a Researcher for cybersecurity risks, you will design and implement mitigation components for model-enabled cybersecurity misuse that span prevention, monitoring, detection, and enforcement, under the guidance of senior technical and risk leadership. You will integrate safeguards across product surfaces in partnership with product and engineering teams to ensure protections are consistent, low-latency, and scalable with usage and new model capabilities. Additionally, you will evaluate technical trade-offs within the cybersecurity risk domain, propose pragmatic and testable solutions, and collaborate with risk and threat modeling partners to align mitigation design with anticipated attacker behaviors and misuse scenarios. You are expected to execute rigorous testing and red-teaming workflows to stress-test the mitigation stack against evolving threats across different product surfaces and iterate based on the findings.
Lead Software Engineer
As a Lead Engineer at Eloquent AI, you will lead the development of AI-powered full-stack applications while overseeing and mentoring other engineers. You will remain hands-on across the stack and take ownership of technical direction, code quality, and delivery standards. Responsibilities include designing and building full-stack applications that power AI-driven workflows for enterprise users; overseeing and reviewing the work of other engineers to ensure high-quality, production-ready code; providing technical guidance, architectural direction, and hands-on support where needed; developing high-performance front-end interfaces for AI agent control, monitoring, and visualization; building scalable backend services that support real-time AI interactions, knowledge retrieval, and automation; working closely with AI researchers and ML engineers to integrate LLMs, RAG, and automation into production-ready systems; establishing engineering best practices across testing, deployment, and performance optimization; and continuously iterating on and refining AI-driven products, balancing speed with robustness.
Computational Protein Design
Leverage proprietary generative AI models to design proteins for experimental validation by analyzing protein design problems based on functional requirements, biochemistry, structural biology, and sequence homology; generate and optimize designs for experimental validation; coordinate with lab-based protein engineers to plan and optimize the design process and validation strategy. Analyze and leverage experimental results to improve designs and increase success rates over validation rounds; collaborate with machine learning scientists to fine-tune and prompt models. Act as an effective interface between machine learning model development and experimental validation; capture bioengineering learnings and feed them back to the machine learning unit, and vice versa; foster a collaborative and innovative environment by proactively finding opportunities to innovate and by creating clarity and alignment between different units. Contribute to computational tools by helping improve the use, service, and integration of AI models through feedback to software engineers and the foundational machine learning unit; assist in improving data management systems and workflows. Maintain the highest scientific standards with publication-grade work; stay current on developments in synthetic biology; continue building understanding of generative AI and expanded areas of protein and cell biology; participate in knowledge sharing by organizing and presenting at internal reading groups; attend and present at conferences when relevant.
ML Scientist
Contribute to the development of ML models across multiple modalities. Work across the ML stack including model architectures, data curation, model evaluation, training and inference infrastructure, research, and experimentation. Select promising approaches from the literature to pursue and create new approaches where necessary to achieve unique goals.
Staff Engineer, Inference Efficiency
Advance inference efficiency end-to-end by designing and prototyping algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference. Implement and maintain changes in high-performance inference engines including kernel backends, speculative decoding, and quantization. Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost. Design and operate reinforcement learning (RL) and post-training pipelines to jointly optimize algorithms and systems where most of the cost is inference. Make RL and post-training workloads more efficient with inference-aware training loops such as asynchronous RL rollouts and speculative decoding. Use these pipelines to train, evaluate, and iterate on frontier models on top of the inference stack. Co-design algorithms and infrastructure to tightly couple objectives, rollout collection, and evaluation with efficient inference, identifying bottlenecks across the training engine, inference engine, data pipeline, and user-facing layers. Run ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, and feed insights back into model, RL, and system design. Own critical systems at production scale by profiling, debugging, and optimizing inference and post-training services under real production workloads. Drive roadmap items requiring engine modification including changing kernels, memory layouts, scheduling logic, and APIs. Establish metrics, benchmarks, and experimentation frameworks for rigorous validation of improvements. Provide technical leadership by setting technical direction for cross-team efforts, and mentor engineers and researchers on full-stack ML systems work and performance engineering.
Data Scientist, Integrity Measurement
The data scientist will own measurement and quantitative analysis for a group of severe, actor- and network-based usage harm verticals. They will develop and implement AI-first methods for prevalence measurement and other productionized safety metrics, build metrics suitable for goaling or A/B tests when prevalence or other top-line metrics are not appropriate, and own dashboards and metrics reporting for harm verticals. They will conduct analyses and generate insights to inform improvements to review, detection, or enforcement, and influence safety roadmaps. The role involves optimizing LLM prompts for measurement purposes, collaborating with other safety teams to understand key safety concerns and create relevant policies, providing metrics for leadership and external reporting, and developing automation to scale their work using agentic products. The position may involve resolving urgent escalations outside normal work hours and may require working with sensitive content, including sexual, violent, or otherwise disturbing material.
Principal Engineer, AI Model LifeCycle
The Principal Software Engineer for the Model LifeCycle team is responsible for managing fine-tuning systems for large foundation models, including multi-node orchestration, checkpointing, failure recovery, and cost-efficient scaling. They implement and maintain end-to-end training pipelines for Large Language Models, distillation and reinforcement learning pipelines, and agent execution infrastructure. Additionally, they oversee dataset, model, and experiment management, including versioning, lineage, evaluation, and reproducible fine-tuning at scale. The role involves close collaboration with product, business, and platform teams to shape core abstractions and APIs, influence long-term architectural decisions around training runtimes, scheduling, storage, and model lifecycle management, and contribute to the open-source LLM ecosystem. This role offers significant ownership in designing and building core systems from first principles.
Automotive and Robotics SoC Architect
As an Automotive and Robotics SoC Architect, you will define scalable, top-down system architectures that unify CPU and AI technologies for next-generation automotive applications. This role involves shaping the architectural direction of the automotive and robotics portfolio to ensure products meet the industry's high standards for performance, safety, reliability, and security. The position requires strong technical leadership, systems thinking, and cross-functional collaboration to deliver world-class automotive solutions.
Machine Learning Engineer, TTS Systems
As an ML Engineer focused on Text To Speech (TTS), you will own the deployment, optimization, and maintenance of production TTS systems. Responsibilities include deploying and optimizing large-scale TTS models into production environments for reliable, low-latency inference; implementing and refining post-training and modern inference techniques to maximize throughput and audio quality; collaborating with cross-functional teams to ensure seamless rollout, A/B testing, and iterative improvement of production models; maintaining high availability and scalable infrastructure for multi-speaker, expressive, and controllable TTS use cases; and designing and documenting best practices for efficient TTS inference and system reliability.
Machine Learning Researcher, Audio
As a Machine Learning Researcher at Bland, your responsibilities include building and scaling next-generation text-to-speech (TTS) systems by designing and training large-scale models capable of expressive, controllable, and human-sounding output, developing neural audio codec-based TTS architectures for efficient and high-fidelity generation, improving prosody modeling, question inflection, emotional expression, and multi-speaker robustness, and optimizing for real-time, low-latency inference in production. You will advance speech-to-text modeling by building and fine-tuning large-scale ASR systems robust to accents, noise, telephony artifacts, and code-switching, leveraging self-supervised pretraining and large-scale weak supervision, and improving transcription accuracy for real-world enterprise scenarios, including structured extraction and conversational nuance. You will pioneer neural audio codecs by researching and implementing codecs that achieve extreme compression with minimal perceptual loss, exploring discrete and continuous latent representations for scalable speech modeling, and designing codec architectures that enable downstream generative modeling and controllable synthesis. Additionally, you will develop scalable training pipelines by curating and processing massive audio datasets across languages, speakers, and environments, designing staged training curricula and data filtering strategies, and scaling training across distributed GPU clusters with a focus on cost, throughput, and reliability. You will run rigorous experiments by designing ablation studies to isolate the impact of architectural changes, measuring improvements using both objective metrics and perceptual evaluations, and validating ideas quickly through focused experiments that confirm or eliminate hypotheses.