Massey Documents by Type
Permanent URI for this community: https://mro.massey.ac.nz/handle/10179/294
Search Results
2 results
Item: Real time Visual SLAM for mobile robot navigation and object detection : a thesis presented in partial fulfilment of the requirement for the degree of Doctor of Philosophy in Engineering at Massey University, Albany, New Zealand (Massey University, 2017). Jing, Changjuan.

This study developed a real-time Visual Simultaneous Localization and Mapping (SLAM) method for mobile robot navigation and object detection (SLAM-O), in order to establish the position of a mobile robot and of objects of interest in an unknown indoor environment. The VEX Robotics Competition (VRC) is one of the largest and fastest-growing educational programmes in the world, and it is designed to increase student interest and involvement in Science, Technology, Engineering, and Mathematics (STEM). This study aims to enable an autonomous robot to compete with human participants in a VRC match, where robots are normally driven by a user's remote control. To win the competition, the robot needs to balance detection of goal objects, navigation, and scoring. This thesis presents a Visual SLAM technique for robot localisation and mapping of field objects, and aims to provide an innovative and practical approach to controlling robot navigation and maximising its score.

For visual observation, this study evaluates and compares widely used RGB-D cameras. The thesis also describes the integration of an iterated video frames module with the Extended Kalman Filter (EKF) for accurate SLAM estimation, in which a new frame selection method is employed. A novel SLAM-O method is developed for detecting objects during the robot's navigation. The SLAM-O method uses a new K-Means-based colour identification method for semi-transparent object detection and a new concave-based object separation method for multi-connected objects, both of which outperform traditional methods.

Through an investigation into the repeatability and accuracy of RGB-D cameras, the colour images and depth point clouds produced by different cameras are evaluated and compared. The comparison results provide a reference for choosing a camera for robot localisation. Depth errors and covariance are obtained from the investigation; these provide important parameters for RGB-D camera-related computations, such as the SLAM problem.

An EKF-based Visual SLAM method and an iterated video frames module integrated into the EKF are developed. The Visual SLAM method handles feature detection and the corresponding depth measurements efficiently, and the iterated video frames module maximises the robot's self-localisation accuracy. Experimental results demonstrate the accuracy of the state estimates. Together, these two methods enable a mobile robot to navigate accurately in an indoor environment at low computational cost.

In addition, this study presents the SLAM-O method, which integrates object detection into Visual SLAM. SLAM-O enables robots to locate objects of interest, which is useful in robotic applications such as navigation and object grasping. To locate objects that are semi-transparent or closely connected, a K-Means method for clustering pixels on a semi-transparent object's surface and a concave-based method for object separation are developed. Experimental results demonstrate the methods' efficiency. These two methods are efficient and useful tools for object detection in the SLAM-O framework.
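As an illustration of the estimation machinery this abstract refers to, below is a minimal, textbook-style EKF-SLAM predict/update sketch in Python with NumPy. It is not the thesis's implementation: the visual front end, the RGB-D depth handling, and the iterated video frames module are omitted, and a simple range-bearing landmark observation stands in for the visual measurements. All function and variable names are illustrative assumptions.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def ekf_predict(mu, sigma, v, w, dt, motion_noise):
    """Predict step for a planar robot pose [x, y, theta] followed by
    landmark coordinates [lx1, ly1, ...] already present in the state."""
    x, y, theta = mu[0], mu[1], mu[2]
    mu = mu.copy()
    mu[0] = x + v * dt * np.cos(theta)
    mu[1] = y + v * dt * np.sin(theta)
    mu[2] = wrap_angle(theta + w * dt)

    # Motion Jacobian: identity except for the pose's dependence on heading.
    G = np.eye(len(mu))
    G[0, 2] = -v * dt * np.sin(theta)
    G[1, 2] = v * dt * np.cos(theta)

    R = np.zeros_like(sigma)
    R[:3, :3] = motion_noise          # additive noise on the pose block only
    return mu, G @ sigma @ G.T + R

def ekf_update(mu, sigma, z, landmark_idx, meas_noise):
    """Range-bearing update against landmark `landmark_idx`;
    z = [range, bearing] measured in the robot frame."""
    j = 3 + 2 * landmark_idx
    dx, dy = mu[j] - mu[0], mu[j + 1] - mu[1]
    q = dx * dx + dy * dy
    sq = np.sqrt(q)
    z_hat = np.array([sq, wrap_angle(np.arctan2(dy, dx) - mu[2])])

    # Measurement Jacobian: non-zero only w.r.t. the pose and this landmark.
    H = np.zeros((2, len(mu)))
    H[:, 0:3] = np.array([[-sq * dx, -sq * dy, 0.0],
                          [dy,       -dx,      -q ]]) / q
    H[:, j:j + 2] = np.array([[sq * dx, sq * dy],
                              [-dy,     dx     ]]) / q

    S = H @ sigma @ H.T + meas_noise
    K = sigma @ H.T @ np.linalg.inv(S)
    innovation = z - z_hat
    innovation[1] = wrap_angle(innovation[1])
    mu = mu + K @ innovation
    mu[2] = wrap_angle(mu[2])
    sigma = (np.eye(len(mu)) - K @ H) @ sigma
    return mu, sigma
```

In a full Visual SLAM pipeline, z would instead come from matched image features with depth from the RGB-D camera, and the depth error covariance obtained in the thesis's camera investigation would feed into meas_noise.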
Item: Specific object recognition using iso-luminal contours : a thesis presented in fulfilment of the requirements for the degree of Master of Engineering in Mechatronics at Massey University, Palmerston North, New Zealand (Massey University, 2013). Howarth, John William.

Object recognition is a broad topic in the study of computer vision. In this case, the task of distinguishing between specific instances of various objects is addressed. The ability to perform this task would allow robots to operate in unstructured environments, allowing greater and more efficient automation of many tasks. Techniques currently proposed tend to have low accuracy rates, high processing times, or both. This research seeks to establish a method that can quickly and accurately find instances of objects within a scene.

Iso-luminal contours were used to gather the initial data, from which higher-level features were extracted. Basic geometric features were used as the intermediate data, consisting of lines, arcs, and lobes (a custom type suited to describing corners). The high-level data was a custom type called blocks; each block contains a few features and describes the spatial relationships between them. The features and blocks are designed to be spatially invariant, so the blocks are directly compared to determine which objects are in a scene.

The objectives of this research were not met. The results show the geometric features were not robust to changes in image sets, although they did work well with the image set they were developed with. Unfortunately, this means the performance of the subsequent 'block'-related steps, on which most of the work was focussed, cannot be established. Future work would entail increasing the robustness of the feature-extraction part of the algorithm, and then gauging whether the block-based approach is of practical use. It is thought that the research results were poor because feature extraction was poor. It is further thought that the high-level analysis has merit.
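For readers unfamiliar with the term, an iso-luminal contour is a curve of constant image luminance. The sketch below is a rough stand-in rather than the thesis's algorithm: it extracts such contours at a few fixed grey levels with OpenCV and approximates each one with straight segments. The thesis's actual feature types (lines, arcs, and lobes) and its block construction are not reproduced, and the threshold levels and function names are assumptions.

```python
import cv2

def iso_luminal_contours(image_bgr, levels=(64, 128, 192)):
    """Extract closed curves of approximately constant luminance.

    Thresholding the grey image at a given level and tracing the boundary
    of the resulting binary region approximates the iso-luminance curve at
    that level. The levels chosen here are arbitrary and illustrative.
    """
    grey = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    contours_by_level = {}
    for level in levels:
        _, mask = cv2.threshold(grey, level, 255, cv2.THRESH_BINARY)
        # OpenCV 4.x returns (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
        contours_by_level[level] = contours
    return contours_by_level

def polygonal_approximation(contour, epsilon_ratio=0.01):
    """Approximate one contour with straight segments (Douglas-Peucker).

    Only a crude analogue of the line/arc/lobe decomposition described in
    the abstract; arcs and lobes would need dedicated fitting steps.
    """
    epsilon = epsilon_ratio * cv2.arcLength(contour, True)
    return cv2.approxPolyDP(contour, epsilon, True)
```

In the thesis's pipeline, the extracted features would then be classified into lines, arcs, and lobes, grouped into spatially invariant blocks, and matched against stored object models; none of that is shown here.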
