Onkar Dabeer1, Radhika Gowaiker2, Slawomir Grzechnik2, Mythreya Lakshman2, Gerhard Reitmayr2, Kiran Somasundaram2, Ravi Teja Sukhavasi3, Xinzhou Wu2, Wei Ding3, Arunandan Sharma3, Sean Lee4
10:30 - 12:00 | Mon 25 Sep | Room 220 | MoAT16
Autonomous vehicles rely on precise high-definition 3D maps for navigation. This paper presents the mapping component of an end-to-end system for crowdsourcing precise 3D maps with semantically meaningful landmarks such as traffic signs (6 DOF pose, shape, and size) and traffic lanes (3D splines). The system uses consumer-grade parts and, in particular, relies on a single front-facing camera and a consumer-grade GPS. Using real-time sign and lane triangulation on-device in the vehicle, combined with offline sign/lane clustering and offline bundle adjustment across multiple journeys in the backend, we construct maps from 25 journeys with a mean absolute accuracy at sign corners of less than 20 cm.
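The on-device triangulation step the abstract describes can be illustrated with a standard linear (DLT) two-view triangulation of a single landmark point, such as a sign corner, from two camera poses. This is a minimal sketch, not the paper's actual implementation: the projection matrices, poses, and the example landmark below are all hypothetical.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : 2D observations of the same landmark (e.g. a sign corner)
    Returns the estimated 3D point in world coordinates.
    """
    # Each observation contributes two rows to the homogeneous system A X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Hypothetical setup: identity intrinsics, two cameras one metre apart along x.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# A hypothetical sign corner 10 m ahead, projected into both views.
X_true = np.array([2.0, 1.0, 10.0])
p1 = P1 @ np.append(X_true, 1.0)
p2 = P2 @ np.append(X_true, 1.0)
x1, x2 = p1[:2] / p1[2], p2[:2] / p2[2]

X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true, atol=1e-6))  # → True
```

In the full pipeline, many such per-journey triangulations would then be clustered across journeys and jointly refined by bundle adjustment; with noisy real observations the linear estimate typically serves only as an initialization for that nonlinear refinement.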