A technology company that is evolving and democratizing ADAS
Led by engineers from Tesla’s core Computer Vision team
An opportunity to get in early as the company goes through its Series C funding round
What you’ll do:
Integrate and implement real-time mapping and localization algorithms for outdoor applications.
Estimate the 3D location and trajectory of traffic objects in a monocular camera setup.
Simulate and test the algorithm in a ROS (Robot Operating System) environment.
Deploy a computationally optimized system and achieve real-time performance on embedded target platforms.
Explore new approaches to visual SLAM that leverage deep learning.
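As a flavor of the monocular 3D-localization work described above, here is a minimal sketch of one common technique: recovering the ground-plane position of a detected object from a single camera. The function name, parameters, and the flat-road / zero-tilt assumptions are illustrative, not the company's actual method.

```python
import numpy as np

def ground_plane_distance(u, v, fx, fy, cx, cy, cam_height):
    """Estimate the road-plane position of a point seen at pixel (u, v).

    Illustrative assumptions: a pinhole camera with intrinsics
    (fx, fy, cx, cy), mounted cam_height meters above a flat road
    with zero tilt, and a point that lies on the road surface.
    """
    # Normalized ray direction through the pixel (camera frame)
    x = (u - cx) / fx
    y = (v - cy) / fy
    if y <= 0:
        raise ValueError("pixel must be below the horizon (y > 0)")
    # Scale the ray so it intersects the ground plane at Y = cam_height
    depth = cam_height / y   # forward distance (Z, meters)
    lateral = x * depth      # lateral offset (X, meters)
    return lateral, depth
```

A pixel 150 rows below the principal point of a 1000 px focal-length camera mounted 1.5 m high, for example, maps to a forward distance of 10 m.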
What you’ll need to succeed:
MS or Ph.D. degree in Computer Science, Electrical Engineering, or equivalent field.
5+ years of work experience in relevant fields such as computer vision, machine learning, and robotics.
Strong programming skills in C++ and experience with Python.
Solid understanding of the entire real-time visual SLAM pipeline, including camera models, feature tracking, epipolar geometry, triangulation, and bundle adjustment.
Broad experience with OpenCV and open-source SLAM libraries.
Experience with bundle adjustment optimization libraries (e.g., g2o or Ceres).
Experience fusing multiple sensors such as cameras, odometry, IMUs, radar, and lidar.
Experience with ROS or similar middleware.
Experience using deep learning-based object detection and segmentation.
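To make the triangulation step of the SLAM pipeline concrete, here is a minimal two-view linear (DLT) triangulation sketch, the kind of routine that bundle adjustment later refines. The function name and test geometry are illustrative; it uses only NumPy's SVD.

```python
import numpy as np

def triangulate_dlt(P1, P2, pt1, pt2):
    """Triangulate one 3D point from two views via the linear DLT method.

    P1, P2: 3x4 camera projection matrices (intrinsics x extrinsics).
    pt1, pt2: (u, v) pixel observations of the same point in each view.
    Returns the 3D point in the world frame (Euclidean coordinates).
    """
    u1, v1 = pt1
    u2, v2 = pt2
    # Each observation contributes two rows to the homogeneous system A X = 0
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector
    # associated with the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With noise-free observations this recovers the point exactly; in practice the result seeds a nonlinear refinement (e.g., reprojection-error minimization in g2o or Ceres).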