AI Perception Engineer Jobs

Discover the latest remote and onsite AI Perception Engineer roles at top AI companies. Updated hourly.

Check out 13 new AI Perception Engineer opportunities posted on The Homebase

Service Technician Associate I - Pittsburgh, PA (Contract)

Latitude AI
Full-time

Develop tools for validation and regression testing of image sensors, image processing pipelines, and hardware and software integration. Perform lab and real-world camera data collection and analysis. Participate in tuning sensor parameters and image processing pipelines to optimize image quality. Troubleshoot camera and image quality issues observed on autonomous vehicles. Design new hardware, and the necessary software, for sensor range testing. Work with the perception software team to assess end-to-end camera performance.

$163,611 – $199,920 per year (USD)

Pittsburgh, Palo Alto, or Dearborn, United States
Hybrid

Senior Manager, Perception

Zoox
Full-time

Lead high-impact Perception teams, managing technical roadmap and milestone goals. Collaborate with AI and software leaders, simulation, systems design, and mission assurance teams to deliver a dynamic objects perception system across various sensor modalities and perception pipelines. Build and lead a group of managers and engineers responsible for roadmap, productivity, execution, and impact. Set vision for and grow a team of software engineers involved in planning, execution, and success of complex technical projects, providing technical leadership. Collaborate across teams to brainstorm and accelerate perception capability development. Provide summaries, progress updates, and recommendations to executive leadership. Establish best practices and statistical rigor around data-driven decision-making. Stay updated on industry and academic trends in AI and perception.

$277,000 – $389,000 per year (USD)

Foster City, United States
Onsite

Intern: Software Engineer

Intrinsic
Intern

Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. Collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities deployable on physical hardware for industrial applications. Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms enabling robots to manipulate complex or deformable objects with high precision. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.

Undisclosed

Mountain View, United States
Onsite

Senior Software Engineer, Pilots

Hayden AI
Full-time

As a Senior Software Engineer on the Pilots team, you will deliver robust, thoroughly tested, and maintainable C++ code for edge and robotics platforms; design, implement, and own prototype perception systems that may transition into production-grade solutions; build and refine real-time perception pipelines spanning detection, tracking, and sensor fusion; adapt and integrate ML and CV models for Hayden-specific applications; drive technical decision-making that balances prototyping speed with production readiness; collaborate with the Product team and cross-functional Engineering departments; and contribute to shared infrastructure, tooling, and architectural patterns as pilots mature into foundational products.

$200,454 – $260,590 per year (USD)

San Francisco, United States
Remote

Machine Learning Engineer - Perception Mapping

Zoox
Full-time

As a software engineer on the perception mapping team at Zoox, you will curate, validate, and label datasets for model training and validation. You will research, implement, and train machine learning models to perform semantic map element detection and closely collaborate with validation teams to formulate and execute model validation pipelines. You will integrate models into the greater onboard autonomy system within compute budgets. Additionally, you will serve as a technical leader on the team, maintaining coding and ML development best practices and contributing to architectural decisions.

$189,000 – $227,000 per year (USD)

Foster City, United States
Onsite

Senior SLAM Engineer

42dot
Full-time

Design and implement state-of-the-art SLAM algorithms for real-time localization and mapping using multi-modal sensor inputs such as cameras, IMUs, GPS, and wheel encoders. Develop robust online and offline state estimation methods for complex urban and highway environments. Focus on 3D geometric vision problems including VSLAM, VIO, SfM, and scene reconstruction. Implement robust motion estimation, feature matching, loop closure, and map optimization pipelines. Apply non-linear optimization and filtering techniques like bundle adjustment, graph SLAM, and EKF to maximize system accuracy and robustness. Collaborate with sensor calibration and perception teams to improve system performance and consistency. Evaluate and benchmark system performance using large-scale datasets and real-world driving scenarios. Contribute to system integration, continuous validation, and deployment of SLAM modules on autonomous vehicle platforms. Mentor junior engineers and contribute to technical leadership within the team.

$120,000 – $305,000 per year (USD)

Sunnyvale or San Francisco, United States
Onsite

Robotics Engineer

Radical AI
Full-time

As a Software Engineer in the Robotics and Automation group, you will design and deploy systems to automate material science research and discovery laboratories, specializing in robotics, automation, and perception software development. You will architect and develop software systems that control and orchestrate robotic workcells for autonomous materials experimentation, design scalable control frameworks for flexible automation involving robots, motion systems, sensors, and lab instruments. Your role involves collaborating with hardware, mechatronics, and science teams to translate experimental workflows into reliable automated processes; building and maintaining APIs and services for scheduling, execution, monitoring, and data capture; developing simulation, testing, and validation tools to accelerate development and ensure system reliability; integrating 2D and 3D vision systems with robotic manipulation, motion planning, and execution; optimizing system performance, robustness, and throughput under rapid iteration cycles; contributing to technical direction, architecture decisions, and best practices; mentoring junior engineers and helping establish engineering standards; and fostering collaboration and open-mindedness to empower the team to deliver world-class technology at an unprecedented speed.

Undisclosed

New York, United States
Onsite

Perception Engineer

Radical AI
Full-time

Design, implement, and deploy 2D and 3D vision systems for robotic manipulation, inspection, state verification, and sensor fusion; develop vision-guided automation solutions integrating cameras, lighting, optics, and robots in laboratory and industrial environments; implement perception pipelines for object detection, segmentation, pose estimation, and feature extraction; own camera calibration and system-level accuracy validation; develop novel algorithms for state estimation of fluids and particle flows; integrate vision outputs with robot motion planning, grasping, and task execution; tune and harden vision systems for robustness against variability in materials, reflections, and environmental conditions; collaborate with software, mechatronics, and mechanical teams to translate experimental and operational needs into automated solutions; contribute to technical direction, architecture decisions, and best practices across the robotics, perception, and automation software stack; and bring an attitude of collaboration and open-mindedness to facilitate fearless and creative problem solving that empowers the team to ship world-class technology at an unprecedented speed.

Undisclosed

New York, United States
Onsite

Senior Manager, Precision Navigation and Sensing (R4256)

Shield AI
Full-time

Lead a team of software developers and sensor experts to develop and field optimized algorithms and sensors that deliver the accurate, reliable state estimates needed for autonomous operation of V-BAT and X-BAT aircraft. Develop and implement advanced algorithms for processing data from IMUs, radar, cameras, GPS, and other sensors. Enhance state estimation algorithms by integrating multi-sensor data to improve accuracy and robustness. Select, characterize, and field precision navigation sensors such as cameras, radar, IMUs, and GPS. Design and implement real-time sensor data processing pipelines. Collaborate with cross-functional teams, including software engineers, autonomy researchers, and hardware engineers, to ensure seamless integration of state estimation algorithms. Conduct experiments and field tests to validate algorithm performance in real-world conditions. Stay current with advancements in sensor technology and state estimation, applying new techniques to the team's systems.

$228,000 – $342,000 per year (USD)

Dallas, United States
Onsite

Want to see more AI Perception Engineer jobs?

View all jobs

Access all 4,256 remote & onsite AI jobs.

Join our private AI community to unlock full job access and connect with founders, hiring managers, and top AI professionals.
(Yes, it's still free; your best contributions are the price of admission.)

Frequently Asked Questions

Have questions about roles, locations, or requirements for AI Perception Engineer jobs?


What does an AI Perception Engineer do?

AI Perception Engineers research, design, and develop algorithms that help machines understand their environment through sensors such as cameras, LiDAR, and radar. They work on object detection, tracking, classification, scene understanding, and sensor fusion algorithms. Their responsibilities include prototyping systems, developing data pipelines, optimizing models for deployment, and conducting performance analysis. They typically work on autonomous vehicles, robotics, or computer vision applications while staying current with research advancements.

What skills are required for an AI Perception Engineer?

Successful AI Perception Engineers need strong programming skills in Python and C++, experience with computer vision libraries such as OpenCV, and proficiency in deep learning frameworks such as PyTorch. They should understand sensor technologies (cameras, LiDAR, radar), multi-object tracking algorithms, and sensor fusion techniques. Problem-solving abilities, data analysis expertise, and experience with simulation environments are highly valuable. Knowledge of deployment tools such as Docker and AWS is a further advantage.

What qualifications are needed for an AI Perception Engineer role?

Most AI Perception Engineer positions require a Master's or PhD in Computer Science, Electrical Engineering, Computer Vision, or a related field. Employers typically look for 2–4 years of relevant experience, though senior roles may require 4+ years; a Bachelor's degree with at least 3 years of industry experience may suffice in some cases. Demonstrated expertise in machine learning, computer vision, and sensor calibration is essential, along with a portfolio showing experience with perception algorithms.

What is the salary range for an AI Perception Engineer job?

Compensation varies with education level (Master's vs. PhD), years of experience (entry-level to senior), geographic location, company size, and industry (autonomous vehicles, robotics, etc.). Specialized knowledge in areas such as sensor fusion and multimodal perception, along with deployment experience, can command premium compensation in this highly specialized field.

How long does it take to get hired as an AI Perception Engineer?

Hiring timelines vary, but the process typically includes technical assessments of algorithm development skills and computer vision knowledge, and often coding challenges related to perception problems. With entry-level positions typically requiring at least 18 months of working experience plus a Master's degree, or a Bachelor's with 3+ years of relevant experience, candidates should expect a competitive and thorough evaluation process.

Are AI Perception Engineer jobs in demand?

Yes. Companies across autonomous vehicles, robotics, and computer vision applications are actively seeking candidates with specialized skills in algorithm development, sensor fusion, and perception systems. The field's technical complexity, which demands both theoretical knowledge and practical implementation skills, creates ongoing demand for qualified engineers. As perception systems become critical in more industries, this specialized ML role continues to grow in importance.