Senior Perception Engineer
Quest Global
Job Summary
Quest Global is seeking a Senior Perception Engineer to develop, validate, and deploy robust perception solutions for autonomous vehicles. The role involves implementing sensor fusion using LiDAR, Camera, Radar, and GNSS, developing calibration workflows, optimizing algorithms for embedded systems, and testing modules for edge cases. Candidates will work with C++ and Python, integrate solutions with planning and control subsystems, and contribute to making a positive difference in the world.
Must Have
- Implement, validate, integrate, and deploy robust perception solutions with sensor fusion for autonomous vehicles.
- Experience with LiDAR, Camera, Radar, and GNSS sensors.
- Develop and maintain intrinsic and extrinsic sensor calibration workflows.
- Optimize perception algorithms for real-world deployment on embedded systems.
- Test, validate, and fine-tune perception modules independently.
- Develop robust, production-ready C++ and Python code.
- Proven experience developing perception algorithms (object detection, semantic segmentation, tracking, localization).
- Hands-on knowledge of sensor fusion techniques (Kalman Filters, EKF, UKF, Particle Filters).
- Solid understanding of LiDAR, camera, radar, and GNSS sensor data and calibration.
- Knowledgeable in the ROS2 software stack and Docker Compose.
- Experience with numerical and geometric computation using libraries such as Eigen, Boost, and GeographicLib.
- Familiarity with object detection models (YOLO, Faster R-CNN, SSD) and PCL.
Good to Have
- Familiarity with SLAM, visual-inertial localization, or GNSS-INS integration.
- Knowledge of parallel computing (CUDA, OpenCL, OpenMP).
- Prior experience with automotive-grade software development practices.
Job Description
At Quest Global, it’s not just what we do but how and why we do it that makes us different. With over 25 years as an engineering services provider, we believe in the power of doing things differently to make the impossible possible. Our people are driven by the desire to make the world a better place—to make a positive difference that contributes to a brighter future. We bring together technologies and industries, alongside the contributions of diverse individuals who are empowered by an intentional workplace culture, to solve problems better and faster.
Key Responsibilities:
- Implement, validate, integrate, and deploy robust perception solutions with sensor fusion for autonomous vehicles using LiDAR, Camera, Radar, and GNSS sensors.
- Elicit system requirements to derive ODD and KPI goals, and define instructions for capturing datasets for training, validation, and testing.
- Develop and maintain intrinsic and extrinsic sensor calibration workflows.
- Build pre-processing and post-processing filters to clean and refine raw sensor data for downstream perception modules.
- Optimize perception algorithms for real-world deployment, balancing accuracy, latency, and computational performance on embedded systems.
- Test, validate, and fine-tune perception modules independently to handle edge cases, unusual driving conditions, and rare corner scenarios.
- Independently design experiments, debug challenging issues, and verify solutions in simulation, on test benches, and in on-vehicle tests.
- Develop robust, production-ready C++ and Python code and integrate it into production pipelines.
- Collaborate with cross-functional teams to ensure smooth integration of perception, calibration, and localization functions with planning and control sub-systems.
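The pre-processing filters mentioned above (cleaning and refining raw sensor data before it reaches downstream perception modules) commonly include voxel-grid downsampling of LiDAR point clouds. A minimal sketch in Python with NumPy; the function name and parameters are illustrative, not taken from this posting:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Downsample an (N, 3) point cloud by averaging the points in each voxel.

    A common pre-processing filter for raw LiDAR scans: it reduces point
    density and smooths sensor noise before downstream modules run.
    """
    # Map each point to the integer index of the voxel it falls into.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel; `inverse` maps each point to its voxel group.
    _, inverse, counts = np.unique(
        voxel_idx, axis=0, return_inverse=True, return_counts=True
    )
    sums = np.zeros((counts.size, 3), dtype=float)
    np.add.at(sums, inverse.ravel(), points)  # sum the points in each voxel
    return sums / counts[:, None]             # centroid per voxel
```

For example, `voxel_downsample(cloud, voxel_size=0.2)` keeps one centroid per 20 cm cube; production pipelines typically reach for PCL's equivalent `VoxelGrid` filter in C++.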
Work Experience
- Proven experience developing perception algorithms for autonomous vehicles or robotics, covering object detection, semantic segmentation, tracking, and localization using classic methods (e.g., RANSAC, Euclidean clustering, occupancy grid mapping) and deep learning models (e.g., PointNet++, PointPillars) for LiDAR point clouds and radar scans.
- Hands-on knowledge of sensor fusion techniques (Kalman Filters, Extended Kalman Filters, Unscented Kalman Filters, Particle Filters) for combining data from LiDAR, camera, radar, and GNSS.
- Strong hands-on programming skills in C++ and Python.
- Solid understanding of LiDAR, camera, radar, and GNSS sensor data, their characteristics, and calibration mechanisms (intrinsic and extrinsic).
- Knowledgeable in the ROS2 software stack and Docker Compose for integration and deployment of perception algorithms.
- Experience implementing numerical and geometric computations effectively using libraries such as Eigen, Boost, and GeographicLib.
- Familiarity with object detection models (e.g., YOLO, Faster R-CNN, SSD) and point cloud libraries (PCL).
- Good understanding of real-time system constraints and optimization trade-offs for product-level deployment.
- Ability to think independently, handle unusual edge conditions, and fine-tune algorithms for real-world robustness.
- Excellent debugging and problem-solving skills.
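The sensor fusion techniques listed above (Kalman Filters and their variants) all share the same predict/update loop. A minimal one-dimensional sketch in plain Python, assuming a constant-position motion model and scalar noise parameters chosen purely for illustration:

```python
def kalman_1d(measurements, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter over a constant-position model.

    q  -- process noise variance (how much the state may drift per step)
    r  -- measurement noise variance
    x0 -- initial state estimate; p0 -- initial estimate variance
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: the state stays put, uncertainty grows by the process noise.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

Fed a stream of noisy GNSS position fixes, the estimate converges toward the true position while the variance shrinks; the EKF and UKF extend the same loop to nonlinear, multi-sensor state models.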
Nice to Have:
- Familiarity with SLAM, visual-inertial localization, or GNSS-INS integration.
- Knowledge of parallel computing (CUDA, OpenCL, OpenMP) for real-time acceleration.
- Prior experience with automotive-grade software development practices.