AI Perception Engineer Jobs

Discover the latest remote and onsite AI Perception Engineer roles at top AI companies that are actively hiring. Updated hourly.

Check out 13 new AI Perception Engineer opportunities posted on The Homebase

Senior Manager, Precision Navigation and Sensing (R4256)

New
Top rated
Shield AI
Full-time

Lead a team of software developers and sensor experts to develop and field optimized algorithms and sensors that deliver accurate, reliable state estimates enabling autonomous operation of VBAT and XBAT aircraft. Develop and implement advanced sensor algorithms for processing data from IMUs, radar, cameras, GPS, and other sensors. Enhance state estimation algorithms by integrating multi-sensor data to improve accuracy and robustness. Select, characterize, and field precision navigation sensors such as cameras, radar, IMUs, and GPS. Design and implement real-time sensor data processing pipelines. Collaborate with cross-functional teams, including software engineers, autonomy researchers, and hardware engineers, to ensure seamless integration of state estimation algorithms. Conduct experiments and field tests to validate algorithm performance in real-world conditions. Stay current with advances in sensor technology and state estimation, applying new techniques to these systems.
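For candidates gauging the technical flavor of this kind of role: the multi-sensor state estimation described above is commonly built around a Kalman-style filter that predicts with high-rate IMU data and corrects with absolute fixes such as GPS. The sketch below is a generic 1-D constant-velocity illustration with made-up noise parameters, not Shield AI's navigation stack.

```python
# Illustrative only: a minimal 1-D constant-velocity Kalman filter that predicts
# with IMU-derived acceleration and corrects with GPS position fixes.
# A textbook sketch of the general idea, not any company's production code.
import numpy as np

class SimpleFusionFilter:
    def __init__(self, dt: float = 0.01):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [pos, vel]
        self.B = np.array([[0.5 * dt**2], [dt]])     # control model for acceleration
        self.H = np.array([[1.0, 0.0]])              # GPS measures position only
        self.Q = np.eye(2) * 1e-3                    # process noise (tuning assumption)
        self.R = np.array([[4.0]])                   # GPS variance (assumption)
        self.x = np.zeros((2, 1))                    # state estimate [pos; vel]
        self.P = np.eye(2)                           # state covariance

    def predict(self, accel: float) -> None:
        """Propagate the state with one IMU acceleration sample."""
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update_gps(self, pos: float) -> None:
        """Correct the prediction with a GPS position measurement."""
        y = np.array([[pos]]) - self.H @ self.x          # innovation
        S = self.H @ self.P @ self.H.T + self.R          # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
```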

$228,000 – $342,000 per year (USD)

Dallas, United States
Maybe global
Onsite

Signal Processing Intern

New
Top rated
Zoox
Intern
Full-time

As an intern in the DSP team at Zoox, you will be working on the design and implementation of signal processing and machine learning algorithms related to radars, depth cameras, lidars, and audio subsystems. You will collaborate with a team of engineers from diverse backgrounds, working on code, algorithms, and research to create and refine key systems enabling autonomous mobility. The work involves understanding and applying concepts in digital signal processing and algorithm design for radar and lidar processing.
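As a rough illustration of the radar-side DSP this internship touches on, range estimation for an FMCW radar usually starts with a windowed FFT over each chirp's beat signal. The sketch below is a generic textbook example with invented parameters, not Zoox's pipeline.

```python
# Illustrative only: range profile from one FMCW radar chirp via a windowed FFT.
# All parameters are invented for the example; real systems use calibrated values.
import numpy as np

c = 3e8                  # speed of light (m/s)
bandwidth = 150e6        # chirp sweep bandwidth (Hz) -- assumption
chirp_time = 50e-6       # chirp duration (s) -- assumption
num_samples = 512        # ADC samples per chirp -- assumption
fs = num_samples / chirp_time

def range_profile(beat_signal: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (range_bins_m, magnitude) for one chirp's beat signal."""
    windowed = beat_signal * np.hanning(len(beat_signal))    # reduce spectral leakage
    spectrum = np.fft.rfft(windowed)
    beat_freqs = np.fft.rfftfreq(len(beat_signal), d=1.0 / fs)
    slope = bandwidth / chirp_time                           # chirp slope (Hz/s)
    ranges = beat_freqs * c / (2.0 * slope)                  # beat frequency -> range
    return ranges, np.abs(spectrum)
```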

$6,500 – $9,500 per month (USD)

Foster City, United States
Maybe global
Onsite

Senior / Staff Software Engineer - Perception 3D Tracking

New
Top rated
Zoox
Full-time

The role involves defining the on-vehicle architecture for producing core tracking results from the Perception stack; working with both the model and optimization teams to develop a highly performant, efficient system that can run on the vehicle; handling Perception data on both the input and output sides of machine-learned models; and integrating the tracking output into the larger behavioral system in the Autonomy stack.
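To give a flavor of the tracking problem described here, the sketch below shows a toy constant-velocity 3D track that consumes detections from an upstream learned model and exposes fused state for downstream planning. It is a generic illustration with an assumed blending scheme, not Zoox's on-vehicle architecture.

```python
# Illustrative only: a toy constant-velocity 3D track that fuses detections
# (e.g. 3D box centers from a learned model) into a smoothed state estimate.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Track:
    track_id: int
    position: np.ndarray                   # (3,) last fused position (m)
    velocity: np.ndarray = field(default_factory=lambda: np.zeros(3))
    alpha: float = 0.6                     # blend weight for new detections -- assumption

    def predict(self, dt: float) -> np.ndarray:
        """Extrapolate the track forward to the detection timestamp."""
        return self.position + self.velocity * dt

    def update(self, detection: np.ndarray, dt: float) -> None:
        """Fuse an associated detection into the track state."""
        predicted = self.predict(dt)
        fused = self.alpha * detection + (1.0 - self.alpha) * predicted
        if dt > 0:
            self.velocity = (fused - self.position) / dt
        self.position = fused
```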

$242,000 – $290,000 per year (USD)

Foster City, United States
Maybe global
Onsite

Senior Software Engineer, Planning & Orchestration

New
Top rated
Intrinsic
Full-time

The Senior AI Research Scientist for Vision-guided robotics is responsible for leading the research and development of novel deep learning algorithms to enable robots to perform complex, contact-rich manipulation tasks. The role involves exploring the intersection of computer vision and robotic control by designing systems that integrate visual data to guide physical manipulation beyond simple grasping to sophisticated handling of diverse items. Responsibilities include collaborating with a multidisciplinary team to translate concepts into deployable capabilities on physical industrial robotic hardware, researching and developing deep learning architectures for visual perception and sensorimotor control, designing algorithms for precise manipulation of complex or deformable objects, optimizing and deploying research prototypes onto robotic hardware, evaluating model performance in both simulation and real-world settings to ensure robustness, identifying applications of advanced computer vision and robot learning to industrial problems, mentoring junior researchers, and contributing to the technical direction of the manipulation research roadmap.
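For a rough picture of the "visual data guiding physical manipulation" theme running through these Intrinsic roles, vision-guided policies are often small networks that map camera images to low-dimensional action commands. The PyTorch sketch below is a generic illustration of that pattern, not Intrinsic's architecture.

```python
# Illustrative only: a minimal image-to-action policy of the kind used in
# vision-guided manipulation research. A generic sketch, not Intrinsic's models.
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    """Map an RGB image to a 6-DoF end-effector velocity command."""

    def __init__(self, action_dim: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(            # small CNN image encoder
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(               # MLP action head
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, H, W) normalized RGB
        return self.head(self.encoder(image))

# Example: one forward pass on a dummy 128x128 image -> action of shape (1, 6).
policy = VisuomotorPolicy()
action = policy(torch.zeros(1, 3, 128, 128))
```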

Undisclosed

Mountain View, United States
Maybe global
Onsite

Senior Software Engineer, ML Ops & Infrastructure

New
Top rated
Intrinsic
Full-time

As a Senior AI Research Scientist for Vision-guided robotics, you will lead the research and development of novel deep learning algorithms enabling robots to perform complex, contact-rich manipulation tasks. You will explore the intersection of computer vision and robotic control, designing systems for robots to perceive and interact with objects in dynamic environments, creating models that integrate visual data to guide physical manipulation beyond simple grasping. Collaborating with a multidisciplinary team, you'll translate cutting-edge concepts into robust capabilities deployable on physical hardware for industrial applications. You will research and develop deep learning architectures for visual perception and sensorimotor control, design algorithms for manipulation of complex or deformable objects with high precision, collaborate with software engineers to optimize and deploy prototypes onto robotic hardware, evaluate model performance in simulations and real-world environments to ensure robustness, identify opportunities to apply advancements in computer vision and robot learning to industrial problems, and mentor junior researchers contributing to the technical direction of the manipulation research roadmap.

Undisclosed

Munich, Germany
Maybe global
Onsite

Deep Learning Engineer, Perception

New
Top rated
Intrinsic
Full-time

As a Senior AI Research Scientist for Vision-guided robotics, the role involves leading the research and development of novel deep learning algorithms enabling robots to perform complex, contact-rich manipulation tasks. Responsibilities include researching and developing deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios, designing algorithms for precise manipulation of complex or deformable objects, collaborating with software engineers to optimize and deploy research prototypes on physical robotic hardware, evaluating model performance in both simulation and real-world environments for robustness and reliability, identifying opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems, mentoring junior researchers, and contributing to the technical direction of the manipulation research roadmap.

Undisclosed

Singapore
Maybe global
Onsite

Senior Robotics Software Engineer, Intelligent Factory

New
Top rated
Intrinsic
Full-time

Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. Collaborate with a multidisciplinary team to translate cutting-edge concepts into robust capabilities deployable on physical hardware for industrial applications. Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms enabling robots to manipulate complex or deformable objects with high precision. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.

Undisclosed

Mountain View, United States
Maybe global
Onsite

Staff Reinforcement Learning Engineer, Industrial Assembly

New
Top rated
Intrinsic
Full-time

Lead the research and development of novel deep learning algorithms enabling robots to perform complex, contact-rich manipulation tasks, exploring the intersection of computer vision and robotic control. Design systems allowing robots to perceive and interact with objects in dynamic environments, integrating visual data to guide physical manipulation beyond simple grasping to sophisticated handling. Collaborate with a multidisciplinary team to translate concepts into capabilities deployable on physical hardware for industrial applications. Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms for robots to manipulate complex or deformable objects with high precision. Collaborate with software engineers to optimize and deploy research prototypes on physical robotic hardware. Evaluate model performance in simulated and real-world environments to ensure robustness. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to industrial problems. Mentor junior researchers and contribute to technical direction for the manipulation research roadmap.

Undisclosed

Munich, Germany
Maybe global
Onsite

Software Engineer, Cloud Infrastructure

New
Top rated
Intrinsic
Full-time

As a Senior AI Research Scientist for Vision-guided robotics, you will lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. You will explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Your work will involve creating models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. You will collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities that can be deployed on physical hardware for industrial applications. Responsibilities also include researching and developing deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios, designing algorithms for precise manipulation of complex or deformable objects, collaborating with software engineers to optimize and deploy research prototypes onto physical robotic hardware, evaluating model performance in simulation and real-world environments, identifying opportunities to apply advancements in computer vision and robot learning to industrial problems, mentoring junior researchers, and contributing to the technical direction of the manipulation research roadmap.

Undisclosed

Munich, Germany
Maybe global
Onsite

Multi‑Target Tracking & Sensor Fusion Engineer (R4172)

New
Top rated
Shield AI
Full-time

Design, research, and implement state-of-the-art multi-target tracking and data association algorithms. Develop production-quality C++ software for deployed military aviation platforms, ensuring deterministic, real-time performance. Build and maintain comprehensive unit, integration, and system-level tests to validate algorithm correctness and robustness. Enhance and calibrate sensor models in advanced simulation and hardware-in-the-loop (HWIL) environments. Collaborate on feature planning, decomposition, and milestone execution within an agile development framework. Contribute to flight-test planning, performance analysis, benchmarking, and regression evaluation. For principal-level applicants, provide technical leadership, design reviews, algorithmic mentorship, and subject-matter expertise across the autonomy organization.
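For context on the data-association part of this role, a common baseline is gated global-nearest-neighbor assignment between predicted tracks and new detections, solved with the Hungarian algorithm. The sketch below is a generic example with an assumed gate distance, not Shield AI's production tracker.

```python
# Illustrative only: gated global-nearest-neighbor data association between
# predicted track positions and new detections via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

GATE = 5.0  # maximum association distance (m) -- tuning assumption

def associate(track_positions: np.ndarray, detections: np.ndarray):
    """Return (track_idx, det_idx) pairs whose distance passes the gate.

    track_positions: (T, 3) predicted track positions
    detections:      (D, 3) measured detection positions
    """
    # Pairwise Euclidean distances form the assignment cost matrix.
    cost = np.linalg.norm(
        track_positions[:, None, :] - detections[None, :, :], axis=-1
    )
    rows, cols = linear_sum_assignment(cost)
    # Discard assignments outside the gate; unmatched detections spawn new tracks.
    return [(t, d) for t, d in zip(rows, cols) if cost[t, d] < GATE]
```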

$270,000 – $400,000 per year (USD)

San Diego, United States
Maybe global
Onsite

Want to see more AI Perception Engineer jobs?

View all jobs

Access all 4,256 remote & onsite AI jobs.

Join our private AI community to unlock full job access, and connect with founders, hiring managers, and top AI professionals.
(Yes, it’s still free—your best contributions are the price of admission.)

Frequently Asked Questions

Have questions about roles, locations, or requirements for AI Perception Engineer jobs?


[{"question":"What does a AI Perception Engineer do?","answer":"AI Perception Engineers research, design and develop algorithms that help machines understand their environment through sensors like cameras, LiDAR and radar. They work on object detection, tracking, classification, scene understanding, and sensor fusion algorithms. Their responsibilities include prototyping systems, developing data pipelines, optimizing models for deployment, and conducting performance analysis. They typically work in autonomous vehicles, robotics, or computer vision applications while staying current with research advancements."},{"question":"What skills are required for AI Perception Engineer?","answer":"Successful AI Perception Engineers need strong programming skills in Python and C++, experience with computer vision libraries like OpenCV, and proficiency in deep learning frameworks like PyTorch. They should understand sensor technologies (cameras, LiDAR, radar), multi-object tracking algorithms, and sensor fusion techniques. Problem-solving abilities, data analysis expertise, and experience with simulation environments are highly valuable. Additionally, knowledge of deployment tools such as Docker and AWS enhances their effectiveness."},{"question":"What qualifications are needed for AI Perception Engineer role?","answer":"Most AI Perception Engineer positions require a Master's or PhD in Computer Science, Electrical Engineering, Computer Vision, or related fields. Employers typically look for 2-4 years of relevant experience, though senior roles may require 4+ years. A Bachelor's degree with at least 3 years of industry experience may suffice in some cases. Demonstrated expertise in machine learning, computer vision, and sensor calibration is essential, along with a portfolio showing experience with perception algorithms."},{"question":"What is the salary range for AI Perception Engineer job?","answer":"The research provided doesn't contain specific salary information for AI Perception Engineers. Compensation typically varies based on education level (Master's vs. PhD), years of experience (entry-level to senior), geographic location, company size, and specific industry (autonomous vehicles, robotics, etc.). Specialized knowledge in areas like sensor fusion, multimodal perception, and deployment experience can command premium compensation in this highly specialized field."},{"question":"How long does it take to get hired as a AI Perception Engineer?","answer":"The research doesn't specify typical hiring timelines for AI Perception Engineers. The hiring process likely includes technical assessments of algorithm development skills, computer vision knowledge, and possibly coding challenges related to perception problems. With entry-level positions typically requiring at least 18 months of working experience plus a Master's degree, or a Bachelor's with 3+ years of relevant experience, candidates should expect a competitive and thorough evaluation process."},{"question":"Are AI Perception Engineer job in demand?","answer":"AI Perception Engineer jobs appear to be in demand based on the diverse requirements across autonomous vehicles, robotics, and computer vision applications. Companies are actively seeking candidates with specialized skills in algorithm development, sensor fusion, and perception systems. The field's technical complexity, requiring both theoretical knowledge and practical implementation skills, creates ongoing demand for qualified engineers. 
As perception systems become critical in more industries, this specialized ML role continues to grow in importance."}]