Speech Software Engineer
Lead the design and implementation of a scalable, high-availability voice infrastructure that replaces legacy systems. Build and refine multi-threaded server frameworks capable of handling thousands of concurrent, real-time audio streams with minimal jitter and latency. Deploy robust ASR → LLM → TTS pipelines that process thousands of calls concurrently. Develop resilient logic for handling media streams, ensuring seamless audio data flow between clients and machine learning models. Build advanced monitoring and load-testing tools specifically designed to simulate high-concurrency voice traffic. Partner with Speech Scientists and Research Engineers to integrate state-of-the-art models into a production-ready environment.
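This listing centers on streaming ASR → LLM → TTS pipelines handling many concurrent calls. Below is a minimal, illustrative asyncio sketch of that flow; the three stages are stubs standing in for real model clients, and the chunk counts and sleep times are invented for the demo.

```python
# Hedged sketch: a per-call ASR -> LLM -> TTS pipeline built from asyncio
# queues, with stub stages in place of real model clients.
import asyncio
import time

async def asr_stage(audio_chunks, out_q):
    for chunk in audio_chunks:
        await asyncio.sleep(0.01)           # stand-in for streaming recognition
        await out_q.put(f"transcript-of-{chunk}")
    await out_q.put(None)                   # end-of-stream marker

async def llm_stage(in_q, out_q):
    while (text := await in_q.get()) is not None:
        await asyncio.sleep(0.02)           # stand-in for model inference
        await out_q.put(f"reply-to-{text}")
    await out_q.put(None)

async def tts_stage(in_q, results):
    while (reply := await in_q.get()) is not None:
        await asyncio.sleep(0.01)           # stand-in for speech synthesis
        results.append(f"audio-for-{reply}")

async def handle_call(call_id, audio_chunks):
    start = time.perf_counter()
    q1, q2, results = asyncio.Queue(), asyncio.Queue(), []
    await asyncio.gather(
        asr_stage(audio_chunks, q1),
        llm_stage(q1, q2),
        tts_stage(q2, results),
    )
    print(f"call {call_id}: {len(results)} audio chunks in {time.perf_counter() - start:.3f}s")

async def main():
    # Thousands of concurrent calls in production; three calls of five chunks here.
    await asyncio.gather(*(handle_call(i, [f"chunk{j}" for j in range(5)]) for i in range(3)))

asyncio.run(main())
```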
Senior Staff Systems Engineer
Drive the architectural vision for the GenerativeAgent product by designing and building a highly scalable, multi-agent platform for real-time voice and text customer service experiences across various industries. Act as a technical authority and advisor for multiple engineering teams, develop system design and technical roadmaps, and define communication, state management, and orchestration patterns for multi-agent systems. Design and implement scalable, multi-tenant deployment architectures, and own system-level SLOs/SLIs focused on latency, cost-efficiency, and fault tolerance. Identify systemic risks and drive proactive mitigation strategies, partner with Security and Compliance teams to meet regulatory and security requirements, and lead post-incident analysis and improvements. Collaborate cross-functionally with Product, Customer Engineering, Site Reliability Engineering, TPMs, and Research to translate business requirements into system designs and to productionize ML research. Mentor senior engineers and communicate complex technical concepts to both technical and non-technical stakeholders.
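Since this role owns system-level SLOs/SLIs for latency, cost-efficiency, and fault tolerance, here is a hedged sketch of one way to express SLOs as data and check observed SLIs against them. The metric names and thresholds are invented for the example, not taken from the listing.

```python
# Illustrative only: SLOs as simple dataclasses, evaluated against observed SLIs.
from dataclasses import dataclass

@dataclass
class SLO:
    name: str
    threshold: float
    comparison: str  # "lte" for latency-style targets, "gte" for availability-style

    def met(self, observed: float) -> bool:
        return observed <= self.threshold if self.comparison == "lte" else observed >= self.threshold

slos = [
    SLO("p95_voice_latency_ms", 800.0, "lte"),
    SLO("availability_pct", 99.9, "gte"),
]

observed = {"p95_voice_latency_ms": 640.0, "availability_pct": 99.95}

for slo in slos:
    status = "OK" if slo.met(observed[slo.name]) else "BREACH"
    print(f"{slo.name}: observed={observed[slo.name]} target={slo.threshold} -> {status}")
```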
Machine Learning Engineer
As a Machine Learning Engineer at Noetica, you will build ML models and pipelines with scalability and reproducibility as foundational principles, develop NLP systems that can accurately process and understand complex legal language and terminology, and design and implement LLM-based solutions that are well-documented and empower legal professionals to extract valuable insights. You will extend and create reliable model evaluation frameworks to ensure accuracy and reduce model drift or bias, and simplify complex ML systems into more manageable solutions. You will also optimize model performance through smart feature engineering and efficient algorithm selection based on actual use cases, and work with security engineers to implement responsible AI practices that protect sensitive data while delivering valuable insights.
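The evaluation-framework responsibility above could look, in miniature, like the sketch below: score a model's predictions against gold labels and flag accuracy drift relative to a stored baseline. The toy clause classifier, labels, and drift tolerance are invented placeholders for real legal-NLP components.

```python
# Minimal evaluation harness sketch: accuracy plus a simple drift flag.
def accuracy(preds, golds):
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def evaluate(predict_fn, examples, baseline_accuracy, drift_tolerance=0.02):
    preds = [predict_fn(text) for text, _ in examples]
    acc = accuracy(preds, [label for _, label in examples])
    drifted = acc < baseline_accuracy - drift_tolerance
    return {"accuracy": acc, "baseline": baseline_accuracy, "drifted": drifted}

# Toy clause classifier and labeled examples standing in for real legal data.
examples = [("change of control", "covenant"), ("governing law: NY", "boilerplate")]
report = evaluate(lambda text: "covenant" if "control" in text else "boilerplate",
                  examples, baseline_accuracy=0.95)
print(report)
```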
Freelance Software Developer (Kotlin) - AI Trainer
As an AI Tutor in Coding specializing in Kotlin development, you will design high-quality technical content, examples, and explanations demonstrating best practices in Kotlin development; collaborate with engineers to ensure accuracy and consistency across code samples, tutorials, and developer guides; explore modern Kotlin frameworks and tools to create practical, real-world examples for learning and testing; and continuously refine content based on feedback, emerging patterns, and advances in the Kotlin ecosystem. The role also involves contributing to projects aligned with your skills by creating training prompts and refining model responses to help shape the future of AI while ensuring technology benefits everyone.
AI Engineer (New Graduate)
As an AI Engineer (New Graduate) at Distyl, you will design, implement, and deploy GenAI applications under the guidance of senior engineers, contributing to prompt design, agent logic, retrieval-augmented generation (RAG), and model evaluation to build full-stack AI applications that deliver measurable business value. You will gain exposure to customer-facing work by shadowing technical conversations and learning how business needs are translated into system design, with opportunities to take on more responsibility in technical decisions and implementation. You will partner with senior engineers to understand customer problems and translate requirements into technical solutions, and participate in customer discussions, solution design sessions, and iterative delivery. Additionally, you will help improve Distillery, Distyl’s internal LLM application platform, by building reusable components, tools, and workflows, and you will learn best practices for scalable, maintainable AI infrastructure. You will write clean, well-tested, observable, production-quality code that meets reliability, performance, and security standards and learn how production AI systems are monitored, debugged, and improved over time. You will assist with evaluating AI systems across accuracy, latency, cost, and robustness, applying user feedback and metrics to improve system performance. Finally, you will continuously develop your skills in LLMs, software engineering, and AI through mentorship, code reviews, and hands-on project work, learning modern development workflows and deployment practices used in enterprise AI.
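As a rough illustration of the retrieval-augmented generation (RAG) work mentioned above, the sketch below retrieves the most relevant documents with a toy word-overlap score and assembles a grounded prompt. The corpus, scoring, and prompt format are assumptions; a production system would use a vector store and a real LLM call.

```python
# Hedged RAG sketch: retrieve top-k documents, then build a grounded prompt.
corpus = {
    "returns": "Customers may return items within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(query: str, k: int = 1):
    def score(doc: str) -> int:
        # Toy relevance score: count of shared lowercase words.
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(corpus.values(), key=score, reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How many days do I have to return an item?"))
```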
AI / ML Solutions Engineer
The AI / ML Solutions Engineer at Anyscale is responsible for working directly with customers to design, implement, and scale machine learning and AI workloads using Ray and Anyscale. This includes implementing production AI / ML workloads such as distributed model training, scalable inference and serving, and data preprocessing and feature pipelines. The role involves working hands-on with customer codebases to refactor or adapt existing workloads to Ray. The engineer advises customers on ML system architecture, including application design for distributed execution, resource management and scaling strategies, and reliability, fault tolerance, and performance tuning. They guide customers through the architectural and operational changes needed to adopt Ray and Anyscale effectively. Additionally, the engineer partners with customer MLE and MLOps teams to integrate Ray into existing platforms and workflows, supports CI/CD, monitoring, retraining, and operational best practices, and helps customers transition from experimentation to production-grade ML systems. They also enable customer teams through working sessions, design reviews, training delivery, and hands-on guidance; contribute feedback to product, engineering, and education teams; and help develop reference architectures, examples, and best practices based on real customer use cases.
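The refactoring work described above (adapting existing workloads to Ray) might look, at its simplest, like the sketch below: a serial preprocessing loop moved onto Ray tasks. `ray.init()`, `@ray.remote`, `.remote()`, and `ray.get()` are standard Ray APIs; the preprocessing function and records are placeholders.

```python
# Hedged sketch: parallelizing a preprocessing loop with Ray tasks.
import ray

ray.init()  # connects to a local Ray runtime; on Anyscale this would be a managed cluster

@ray.remote
def preprocess(record: dict) -> dict:
    # Stand-in for real feature extraction / tokenization work.
    return {"id": record["id"], "length": len(record["text"])}

records = [{"id": i, "text": "example " * i} for i in range(1, 9)]

# Launch one task per record; Ray schedules them across available CPUs.
futures = [preprocess.remote(r) for r in records]
features = ray.get(futures)
print(features)
```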
Machine Learning Engineer, Applied AI
The Machine Learning Engineer is responsible for leading applied AI initiatives by bridging research and product to turn generative models into production features across the first-party app and API. Responsibilities include experimenting rapidly, building rigorous evaluations and datasets, partnering with research, engineering, infrastructure, and product teams to ship reliable and scalable ML systems. They will fine-tune and deploy models for creative use cases such as text-to-image, image-to-text, image enhancement and editing, and multimodal applications. The engineer sets clear success metrics including quality, latency, and cost, and contributes to the safety, monitoring, and reliability of the systems. They lead projects from 0 to 1 that shape Applied AI practices at Ideogram while delivering features that bring value and delight to users.
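Given the emphasis above on success metrics spanning quality, latency, and cost, here is an illustrative sketch of per-request metric tracking. The stubbed generation call, cost figure, and quality score are assumptions, not Ideogram's actual numbers.

```python
# Hedged sketch: record latency, cost, and a quality score for each generation.
import statistics
import time

def generate_image(prompt: str) -> dict:
    start = time.perf_counter()
    time.sleep(0.05)                       # stand-in for a real model call
    latency_s = time.perf_counter() - start
    return {"prompt": prompt, "latency_s": latency_s,
            "cost_usd": 0.002,             # assumed per-image cost for the demo
            "quality": 0.9}                # would come from an evaluator or human rater

runs = [generate_image(p) for p in ["a red bicycle", "a watercolor skyline"]]
print("p50 latency (s):", statistics.median(r["latency_s"] for r in runs))
print("total cost (USD):", sum(r["cost_usd"] for r in runs))
print("mean quality:", statistics.mean(r["quality"] for r in runs))
```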
Senior Software Engineer, Applied AI
As a Software Engineer working on AI systems, you will play a foundational role in the research, experimentation, and rapid improvement of AI systems to build a capable, reliable AI automation platform used worldwide in mission-critical production environments. Tasks involve designing experiments and testing ideas to optimize key internal AI benchmarks; designing and improving evaluation frameworks to accelerate experimentation speed and direction; training, fine-tuning, and optimizing machine learning models; performing rigorous evaluation and testing for model accuracy, generalization, and performance; collaborating on and contributing to core product development to enhance platform capabilities; and setting up observability and monitoring systems to safety-check model behavior in critical settings.
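The observability and safety-checking responsibility above could be sketched as below: wrap model calls so every output is logged and passed through simple checks before use. The stub model and the specific checks are invented for illustration.

```python
# Hedged sketch: log every model call and block outputs that fail basic checks.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-monitor")

def stub_model(task: str) -> str:
    # Placeholder for a real model call.
    return f"planned action for: {task}"

# Each check returns True when the output is acceptable.
SAFETY_CHECKS = [
    ("non_empty", lambda out: bool(out.strip())),
    ("no_destructive_keyword", lambda out: "delete" not in out.lower()),
]

def monitored_call(task: str) -> str | None:
    output = stub_model(task)
    failures = [name for name, check in SAFETY_CHECKS if not check(output)]
    log.info("task=%r checks_run=%d failures=%s", task, len(SAFETY_CHECKS), failures)
    return None if failures else output   # block unsafe outputs in critical settings

print(monitored_call("restart the build agent"))
```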
Software Engineer, macOS Core Product - Intl, Non-USA
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for a diverse range of use cases. Deploy and operate the core ML inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture that improve the performance, latency, throughput, and efficiency of deployed models. Build tools to provide visibility into bottlenecks and sources of instability and design and implement solutions to address the highest priority issues.
Software Engineer, macOS Core Product - New York, USA
Work alongside machine learning researchers, engineers, and product managers to bring AI Voices to customers for a diverse range of use cases. Deploy and operate the core machine learning inference workloads for the AI Voices serving pipeline. Introduce new techniques, tools, and architecture to improve the performance, latency, throughput, and efficiency of deployed models. Build tools to provide visibility into bottlenecks and sources of instability and design and implement solutions to address the highest priority issues.
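Both AI Voices listings above call for visibility into serving-pipeline bottlenecks and sources of instability. Below is a hedged sketch of one approach: time each stage of an inference request so the slow stages stand out. The stage names and sleep durations are invented.

```python
# Hedged sketch: per-stage timing for an inference request to surface bottlenecks.
import time
from contextlib import contextmanager

timings: dict[str, float] = {}

@contextmanager
def timed(stage: str):
    start = time.perf_counter()
    yield
    timings[stage] = timings.get(stage, 0.0) + (time.perf_counter() - start)

def serve_request(text: str) -> str:
    with timed("preprocess"):
        time.sleep(0.005)                  # stand-in for text normalization
    with timed("inference"):
        time.sleep(0.030)                  # stand-in for the model forward pass
    with timed("postprocess"):
        time.sleep(0.002)                  # stand-in for audio encoding
    return f"audio for: {text}"

for _ in range(10):
    serve_request("hello")

for stage, total in sorted(timings.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{stage}: {total * 1000:.1f} ms total")
```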