Senior Manager, Precision Navigation and Sensing (R4256)
Lead a team of software developers and sensor experts to develop and field-optimize algorithms and sensors for accurate, reliable state estimates enabling autonomous operation of VBAT and XBAT aircraft. Develop and implement advanced sensor algorithms for processing data from IMUs, radar, cameras, GPS, and other sensors. Enhance state estimation algorithms by integrating multi-sensor data to improve accuracy and robustness. Select, characterize, and field precision navigation sensors such as cameras, radar, IMUs, and GPS. Design and implement real-time sensor data processing pipelines. Collaborate with cross-functional teams, including software engineers, autonomy researchers, and hardware engineers, to ensure seamless integration of state estimation algorithms. Conduct experiments and field tests to validate algorithm performance in real-world conditions. Stay current with advances in sensor technology and state estimation, and apply new techniques to the team's systems.
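To illustrate the kind of multi-sensor state estimation this role covers, here is a minimal sketch of a linear Kalman filter fusing noisy GPS position fixes into a position/velocity estimate; the constant-velocity model, noise levels, and update rate are illustrative assumptions, not details from the posting.

```python
# Minimal sketch (not from the posting): a constant-velocity Kalman filter that
# fuses noisy GPS position fixes into a 2D position/velocity state estimate.
# All model parameters (dt, process/measurement noise) are illustrative assumptions.
import numpy as np

dt = 0.1                                   # update period [s], assumed
F = np.array([[1, 0, dt, 0],               # state transition: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # GPS measures position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                       # process noise (tuning assumption)
R = 4.0 * np.eye(2)                        # GPS noise, ~2 m std (assumption)

x = np.zeros(4)                            # initial state estimate
P = 10.0 * np.eye(4)                       # initial covariance

def step(x, P, z):
    """One predict/update cycle with a GPS position measurement z = [x, y]."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Example: track a target moving at 1 m/s in x with noisy GPS fixes.
rng = np.random.default_rng(0)
for k in range(50):
    truth = np.array([k * dt * 1.0, 0.0])
    z = truth + rng.normal(0, 2.0, size=2)
    x, P = step(x, P, z)
print("estimated position:", x[:2], "estimated velocity:", x[2:])
```

A fielded estimator for this class of aircraft would fuse IMU, radar, and vision measurements with far richer dynamics and error models; the sketch only shows the predict/update structure being described.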
Signal Processing Intern
As an intern on the DSP team at Zoox, you will work on the design and implementation of signal processing and machine learning algorithms for radars, depth cameras, lidars, and audio subsystems. You will collaborate with a team of engineers from diverse backgrounds on code, algorithms, and research to create and refine key systems enabling autonomous mobility. The work involves understanding and applying concepts in digital signal processing and algorithm design for radar and lidar data.
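For a flavor of the radar-side DSP involved, the sketch below recovers a target's range from a simulated FMCW beat signal with a windowed FFT; the chirp parameters and noise level are assumptions for illustration, not Zoox specifics.

```python
# Minimal sketch (illustrative, not Zoox code): estimating target range from a
# simulated FMCW radar beat signal with an FFT. Chirp parameters are assumptions.
import numpy as np

c = 3e8                      # speed of light [m/s]
B = 150e6                    # chirp bandwidth [Hz] (assumption)
T = 1e-3                     # chirp duration [s] (assumption)
S = B / T                    # chirp slope [Hz/s]
fs = 2e6                     # ADC sample rate [Hz] (assumption)
R_true = 40.0                # true target range [m]

# For a stationary target, the dechirped (beat) signal is a tone at f_b = 2*R*S/c.
f_beat = 2 * R_true * S / c
t = np.arange(int(fs * T)) / fs
rng = np.random.default_rng(0)
beat = np.cos(2 * np.pi * f_beat * t) + 0.1 * rng.standard_normal(t.size)

# Range profile: windowed FFT of the beat signal; the peak bin gives the range.
spectrum = np.abs(np.fft.rfft(beat * np.hanning(beat.size)))
freqs = np.fft.rfftfreq(beat.size, d=1 / fs)
R_est = freqs[np.argmax(spectrum)] * c / (2 * S)
print(f"estimated range: {R_est:.1f} m (true {R_true} m)")
```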
Senior / Staff Software Engineer - Perception 3D Tracking
The role involves defining the on-vehicle architecture for producing core tracking results from the Perception stack; working with both the model and optimization teams to develop a highly performant, efficient system that can run on the vehicle; handling Perception data at both the input and output of machine-learned models; and integrating tracking output into the larger behavioral system in the Autonomy stack.
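One small piece of a tracking pipeline like this is track life-cycle management. The sketch below shows an M-of-N style confirm/drop rule; the thresholds and the Track fields are assumptions for illustration, not the actual on-vehicle design.

```python
# Minimal sketch (illustrative assumption, not the actual stack): an M-of-N style
# track life-cycle manager that promotes tentative tracks to confirmed after
# repeated detections and drops them after consecutive misses.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Track:
    track_id: int
    hits: int = 1                 # number of associated detections so far
    misses: int = 0               # consecutive frames without a detection
    confirmed: bool = False
    history: list = field(default_factory=list)   # associated detection indices

CONFIRM_HITS = 3                  # promote after 3 hits (assumption)
DROP_MISSES = 5                   # drop after 5 consecutive misses (assumption)

def update_track(track: Track, detection_index: Optional[int]) -> bool:
    """Update one track with this frame's association result.

    Returns False when the track should be deleted.
    """
    if detection_index is not None:
        track.hits += 1
        track.misses = 0
        track.history.append(detection_index)
        if track.hits >= CONFIRM_HITS:
            track.confirmed = True
    else:
        track.misses += 1
    return track.misses < DROP_MISSES

# Example: a track that is hit twice more, then missed until it is dropped.
t = Track(track_id=1)
for det in [0, 2, None, None, None, None, None]:
    alive = update_track(t, det)
    print(f"hits={t.hits} misses={t.misses} confirmed={t.confirmed} alive={alive}")
```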
Senior Software Engineer, Planning & Orchestration
The Senior AI Research Scientist for Vision-guided robotics is responsible for leading the research and development of novel deep learning algorithms to enable robots to perform complex, contact-rich manipulation tasks. The role involves exploring the intersection of computer vision and robotic control by designing systems that integrate visual data to guide physical manipulation beyond simple grasping to sophisticated handling of diverse items. Responsibilities include collaborating with a multidisciplinary team to translate concepts into deployable capabilities on physical industrial robotic hardware, researching and developing deep learning architectures for visual perception and sensorimotor control, designing algorithms for precise manipulation of complex or deformable objects, optimizing and deploying research prototypes onto robotic hardware, evaluating model performance in both simulation and real-world settings to ensure robustness, identifying applications of advanced computer vision and robot learning to industrial problems, mentoring junior researchers, and contributing to the technical direction of the manipulation research roadmap.
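As a rough illustration of the visual-perception-to-control mapping described above, the sketch below defines a small convolutional policy trained with a behavior-cloning loss on dummy data; the architecture, 7-DoF action space, and training setup are assumptions, not the team's actual models.

```python
# Minimal sketch (illustrative only): a visuomotor policy that maps an RGB
# observation to a continuous action, the kind of perception-to-control mapping
# the role describes. Architecture sizes and the 7-DoF action are assumptions.
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    def __init__(self, action_dim: int = 7):
        super().__init__()
        self.encoder = nn.Sequential(          # small CNN image encoder
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(             # MLP policy head
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),  # actions normalized to [-1, 1]
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(image))

# Behavior-cloning style training step on a dummy batch of (image, action) pairs.
policy = VisuomotorPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
images = torch.rand(8, 3, 96, 96)              # batch of RGB observations
expert_actions = torch.rand(8, 7) * 2 - 1      # placeholder demonstrations
loss = nn.functional.mse_loss(policy(images), expert_actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("behavior-cloning loss:", loss.item())
```

Contact-rich manipulation research typically goes well beyond this single image-to-action mapping (force/torque feedback, deformable-object models, sim-to-real evaluation); the sketch only shows the basic structure of a learned sensorimotor policy.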
Senior Software Engineer, ML Ops & Infrastructure
Deep Learning Engineer, Perception
Senior Robotics Software Engineer, Intelligent Factory
Staff Reinforcement Learning Engineer, Industrial Assembly
Software Engineer, Cloud Infrastructure
Multi‑Target Tracking & Sensor Fusion Engineer (R4172)
Design, research, and implement state-of-the-art multi-target tracking and data association algorithms. Develop production-quality C++ software for deployed military aviation platforms, ensuring deterministic, real-time performance. Build and maintain comprehensive unit, integration, and system-level tests to validate algorithm correctness and robustness. Enhance and calibrate sensor models in advanced simulation and hardware-in-the-loop (HWIL) environments. Collaborate on feature planning, decomposition, and milestone execution within an agile development framework. Contribute to flight-test planning, performance analysis, benchmarking, and regression evaluation. For principal-level applicants, provide technical leadership, design reviews, algorithmic mentorship, and subject-matter expertise across the autonomy organization.
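For a concrete sense of the data-association portion of this work, here is a minimal global-nearest-neighbor sketch that gates candidate pairs and solves the assignment with the Hungarian algorithm; the posting calls for production C++, so this Python version and its gate threshold are purely illustrative.

```python
# Minimal sketch of global-nearest-neighbor data association (gating + Hungarian
# assignment); the posting targets production C++, so this Python version is
# purely illustrative, and the gate threshold is an assumption.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_positions, detections, gate=5.0):
    """Assign detections to tracks by minimum Euclidean distance within a gate.

    Returns (matches, unmatched_tracks, unmatched_detections).
    """
    if len(track_positions) == 0 or len(detections) == 0:
        return [], list(range(len(track_positions))), list(range(len(detections)))

    # Cost matrix: pairwise distances between predicted track positions and detections.
    cost = np.linalg.norm(track_positions[:, None, :] - detections[None, :, :], axis=2)

    # Optimal assignment, then reject pairs whose distance exceeds the gate.
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_tracks = [t for t in range(len(track_positions)) if t not in matched_t]
    unmatched_dets = [d for d in range(len(detections)) if d not in matched_d]
    return matches, unmatched_tracks, unmatched_dets

# Example: two tracks, three detections (one is clutter far from any track).
tracks = np.array([[0.0, 0.0], [10.0, 10.0]])
dets = np.array([[0.5, -0.2], [9.6, 10.3], [50.0, 50.0]])
print(associate(tracks, dets))
```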