Computer Vision Engineer (VIO)
Develop the front-end of the visual-inertial odometry (VIO) algorithmic stack, including matching between frames and stereo pairs, calibration of camera intrinsic and extrinsic parameters, and obstruction detection. Implement and optimize the algorithmic stack for embedded platforms. Conduct testing, validation, and monitoring of algorithms in simulation and real-world environments, and develop inspection and monitoring tools. Collaborate closely with system engineers, optical engineers, and software engineers, and communicate findings effectively to stakeholders.
Computer Vision Engineer
Conduct research on state-of-the-art computer vision methodologies and participate in the creation and curation of training and validation datasets. Perform statistical analyses and develop visualization tools to ensure data quality. Build and refine training pipelines and metrics to enhance model performance. Develop and optimize computer vision algorithms for multiple robotics and aerospace projects. Implement ML/CV models in production-ready environments, ensure seamless integration with Harmattan AI’s systems, and conduct rigorous code reviews. Test algorithms in real-world environments, develop monitoring tools, track model performance, and continuously improve deployed solutions. Work closely with software and simulation teams to align development with system requirements, and communicate findings effectively to stakeholders.
Senior SLAM Engineer
Design and implement state-of-the-art SLAM algorithms for real-time localization and mapping using multi-modal sensor inputs such as cameras, IMUs, GPS, and wheel encoders. Develop robust online and offline state estimation methods for complex urban and highway environments. Focus on 3D geometric vision problems including VSLAM, VIO, SfM, and scene reconstruction. Implement robust motion estimation, feature matching, loop closure, and map optimization pipelines. Apply non-linear optimization and filtering techniques such as bundle adjustment, graph SLAM, and the EKF to maximize system accuracy and robustness. Collaborate with the sensor calibration and perception teams to improve system performance and consistency. Evaluate and benchmark system performance on large-scale datasets and real-world driving scenarios. Contribute to system integration, continuous validation, and deployment of SLAM modules on autonomous vehicle platforms. Mentor junior engineers and contribute to technical leadership within the team.
Robotics Engineer
As a Software Engineer in the Robotics and Automation group, you will design and deploy systems that automate materials science research and discovery laboratories, specializing in robotics, automation, and perception software development. Responsibilities include:
- Architecting and developing software systems that control and orchestrate robotic workcells for autonomous materials experimentation
- Designing scalable control frameworks for flexible automation involving robots, motion systems, sensors, and lab instruments
- Collaborating with hardware, mechatronics, and science teams to translate experimental workflows into reliable automated processes
- Building and maintaining APIs and services for scheduling, execution, monitoring, and data capture
- Developing simulation, testing, and validation tools to accelerate development and ensure system reliability
- Integrating 2D and 3D vision systems with robotic manipulation, motion planning, and execution
- Optimizing system performance, robustness, and throughput under rapid iteration cycles
- Contributing to technical direction, architecture decisions, and best practices
- Mentoring junior engineers and helping establish engineering standards
- Fostering collaboration and open-mindedness to empower the team to deliver world-class technology at an unprecedented speed
Perception Engineer
Design, implement, and deploy 2D and 3D vision systems for robotic manipulation, inspection, state verification, and sensor fusion. Responsibilities include:
- Developing vision-guided automation solutions that integrate cameras, lighting, optics, and robots in laboratory and industrial environments
- Implementing perception pipelines for object detection, segmentation, pose estimation, and feature extraction
- Owning camera calibration and system-level accuracy validation
- Developing novel algorithms for state estimation of fluids and particle flows
- Integrating vision outputs with robot motion planning, grasping, and task execution
- Tuning and hardening vision systems for robustness against variability in materials, reflections, and environmental conditions
- Collaborating with software, mechatronics, and mechanical teams to translate experimental and operational needs into automated solutions
- Contributing to technical direction, architecture decisions, and best practices across the robotics, perception, and automation software stack
- Bringing an attitude of collaboration and open-mindedness that facilitates fearless, creative problem solving and empowers the team to ship world-class technology at an unprecedented speed
AI Engineer
Design and build scalable, low-latency AI inference microservices for high-volume video processing. Collaborate with the team to build production pipelines for video understanding and LLMs, optimizing model throughput and cost-efficiency and ensuring clean integration into the core backend. Ensure all code, whether hand-written or AI-generated, is modular, type-safe, thoroughly tested, and maintainable. Profile and optimize Python/C++ code and model inference layers using techniques such as quantization, batching, and caching to reduce GPU costs and user wait time. Conduct research on cutting-edge LLMs and multimodal models, and rapidly refactor experimental code into stable, production-ready features.
Chief Engineer, Autonomy (R4405)
The Chief Engineer at Shield AI is responsible for solving complex technical challenges in deploying advanced autonomy solutions on Unmanned Aircraft Systems (UAS). They serve as the chief authority on system architecture, design, development, risk mitigation, and product quality to ensure successful integration of Hivemind Autonomy across various aircraft. This role involves leading a team of engineers to deliver autonomous capabilities for business-to-business and defense contracts. Responsibilities include serving as the Chief Engineer on projects focused on autonomy solutions for unmanned aircraft, leading a team to advance Hivemind Autonomy and define DoD autonomy architectures, assigning technical objectives, making key engineering decisions, ensuring quality and completeness of technical deliverables, providing technical leadership on both IRAD initiatives and DoD contracts, and contributing to government contract proposal writing.
Software Engineer (Ray Core)
As a member of the Ray Core team, responsibilities include developing and maintaining the Ray C++ backend components such as the distributed scheduler, language runtime integration, and the I/O and memory subsystems. The role involves ensuring the reliability, scalability, and performance of Ray in support of its higher-level libraries and use cases. Tasks include optimizing the performance of large-scale workloads on Ray, working on stability and stress-testing infrastructure, improving fault tolerance, developing high-quality open-source software to simplify distributed programming, identifying and implementing architectural improvements to Ray Core, enhancing the testing process for smoother releases, and communicating work through talks, tutorials, and blog posts.
Research-Hardware Codesign Engineer
The Research-Hardware Codesign Engineer is responsible for working at the intersection of model research and silicon/system architecture to shape the numerics, architecture, and technology decisions for future OpenAI silicon. Responsibilities include building on the roofline simulator to track workloads and analyze the impact of system architecture decisions, debugging discrepancies between performance simulations and real measurements with clear communication of root causes, writing emulation kernels for low-precision numerics and lossy compression schemes, prototyping numeric modules through RTL synthesis, and occasionally owning an RTL module end-to-end. The engineer will proactively bring in new machine learning workloads to prototype and evaluate opportunities or risks, understand the full scope from ML science to hardware optimization, break down objectives into near-term deliverables, facilitate cross-team collaborations, and clearly communicate design tradeoffs with supporting evidence.