Fusion-SLAM by combining RGB-D SLAM and Rat SLAM : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Mechatronics at Massey University, Albany, New Zealand

Date
2016
Publisher
Massey University
Rights
The Author
Abstract
Robotic Simultaneous Localization and Mapping (SLAM) is the problem of creating a map of the environment while localizing the robot within the map being created. This presents a causality dilemma: the map must exist in order to localize the robot, yet the robot must be localized in order to create the map. Past research has produced many solutions to this problem, ranging from Extended Kalman Filter (EKF) approaches to Graph SLAM systems. There has also been extensive research into bio-inspired methods such as RatSLAM, implemented on aerial and land-based robots. These research setups use active Time-of-Flight (ToF) sensors, such as laser scanners, and passive devices, such as cameras. In recent years Microsoft has developed a combined sensor, the Kinect, which integrates active and passive sensing elements and aligns their data in a way that allows efficient implementation in robotic systems. This has led to the Kinect being adopted in many new studies, mostly centred on RGB-D SLAM. However, these methods generally require a continuous stream of images and become inaccurate in ambiguous environments. This thesis presents the design and implementation of a fusion algorithm to solve the robotic SLAM problem. The study starts by analysing existing methods to establish what research has been done. It then introduces the components used in this study and the Fusion Algorithm. The algorithm incorporates the colour and depth data extraction and manipulation methods used in the RGB-D SLAM system while also implementing a mapping step similar to the grid cell and firing field functions found in RatSLAM. This method addresses RGB-D SLAM's weaknesses of requiring a continuous image stream and failing on ambiguous images. An experiment is then conducted on the developed system to determine the extent to which it solves the SLAM problem. The success rate of finding a node in a cell and matching its pose is also investigated. In conclusion, this research presents a novel algorithm for successfully solving the robotic SLAM problem. The proposed algorithm also improves the system's navigation efficiency, odometry error correction, and robustness to scan-matching vulnerabilities in feature-sparse views.
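The following is a minimal, illustrative sketch of the fusion idea described in the abstract: per-frame view descriptors (standing in for the RGB-D colour and depth features) are associated with cells in a RatSLAM-style pose cell grid, and loop closure is attempted by matching the current view against the node stored in the active cell. All names here (PoseCellGrid, try_close_loop, the toy odometry and descriptors) are hypothetical and are not taken from the thesis; they only indicate the general structure of such an approach.

import numpy as np

class PoseCellGrid:
    """RatSLAM-style grid of pose cells; activity peaks at the believed pose."""
    def __init__(self, size=50, cell_m=0.2):
        self.size = size          # cells per side (x, y)
        self.cell_m = cell_m      # metres represented by one cell
        self.activity = np.zeros((size, size))
        self.activity[size // 2, size // 2] = 1.0   # start in the centre
        self.nodes = {}           # cell index -> stored view descriptor

    def inject(self, dx, dy):
        """Shift activity by odometry (path integration), wrapping at edges."""
        sx, sy = int(round(dx / self.cell_m)), int(round(dy / self.cell_m))
        self.activity = np.roll(self.activity, (sx, sy), axis=(0, 1))

    def best_cell(self):
        """Return the index of the most active pose cell."""
        return np.unravel_index(np.argmax(self.activity), self.activity.shape)

    def try_close_loop(self, descriptor, threshold=0.9):
        """Match the current view against the node stored in the active cell."""
        cell = self.best_cell()
        stored = self.nodes.get(cell)
        if stored is not None:
            similarity = float(descriptor @ stored)   # unit vectors -> cosine
            if similarity > threshold:
                return cell, similarity               # loop closure: node matched
        self.nodes[cell] = descriptor                 # otherwise store a new node
        return cell, None

def view_descriptor(rng):
    """Stand-in for a colour/depth feature descriptor of the current frame."""
    v = rng.standard_normal(32)
    return v / np.linalg.norm(v)

# Toy run: integrate odometry, then store or match view descriptors per pose cell.
rng = np.random.default_rng(0)
grid = PoseCellGrid()
for step in range(5):
    grid.inject(dx=0.4, dy=0.0)                       # pretend visual odometry
    cell, sim = grid.try_close_loop(view_descriptor(rng))
    print(f"step {step}: active cell {cell}, loop closure similarity {sim}")

In a real system the odometry injected into the grid would come from RGB-D feature matching between consecutive frames, and a matched node would be used to correct accumulated odometry error, which is the role the thesis attributes to its RatSLAM-style mapping step.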
Keywords
Mobile robots, Geographical positions, Robotic Simultaneous Localization and Mapping