10:00 - 10:30 | Mon 25 Sep | Ballroom Foyer | MoAmPo
Unlike traditional industrial robots, indoor service robots are usually required to exhibit high intelligence, which in turn usually relies on expensive computation. One important solution is to offload this expensive computation to the cloud. In this paper we present a framework and approach for cloud-based visual SLAM for indoor service robots. The integrated system is distributed across a three-level cloud architecture with lightweight tracking, high-precision mapping, and dense map sharing. Building on recent state-of-the-art algorithms, our system runs real-time sparse tracking on clients, and real-time dense mapping and loop closing on cloud servers. Only keyframes are sent to the computing servers, reducing network load. Dense geometric pose estimation, beyond feature-based methods, makes the system robust in featureless indoor scenarios. The camera poses associated with keyframes are optimized on the computing servers and sent back to the client to correct trajectory drift. We evaluate our system on the TUM datasets and on real data captured by our mobile robot, in terms of client-side visual odometry and the dense maps generated on servers. Qualitative and quantitative experiments show that our cloud SLAM system tolerates the network delay of a Local Area Network (LAN), and that it is an effective solution for indoor service robots.
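The client-server split described above (keyframe selection on the client, pose optimization on the server, corrections applied back to the local trajectory) can be sketched minimally as follows. This is an illustrative assumption of how such a loop might look, not the authors' actual implementation; all names (`Keyframe`, `is_keyframe`, `apply_correction`) and the translation-only threshold are hypothetical.

```python
# Hedged sketch of the client side of a cloud-offloaded SLAM loop.
# Assumes 2D poses [x, y, theta] for brevity; the paper's system uses
# full camera poses and a feature/dense hybrid pipeline.

from dataclasses import dataclass


@dataclass
class Keyframe:
    frame_id: int
    pose: list  # client-side pose estimate [x, y, theta]


def is_keyframe(last_kf_pose, pose, trans_thresh=0.3):
    """Select a new keyframe when translation since the last keyframe
    exceeds a threshold, so only a sparse subset of frames is uploaded."""
    dx = pose[0] - last_kf_pose[0]
    dy = pose[1] - last_kf_pose[1]
    return (dx * dx + dy * dy) ** 0.5 > trans_thresh


def apply_correction(local_kf_pose, server_kf_pose, current_pose):
    """Shift the current trajectory by the server's correction of an
    earlier keyframe pose (translation-only drift correction here)."""
    offset = [server_kf_pose[i] - local_kf_pose[i] for i in range(3)]
    return [current_pose[i] + offset[i] for i in range(3)]
```

For example, a frame 0.4 m from the last keyframe would be selected and sent to the server, while intermediate frames would only be tracked locally; when the server later returns an optimized pose for that keyframe, the offset between the local and optimized estimates is propagated onto the current pose to remove accumulated drift.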