Copyright is owned by the Author of the thesis. Permission is given for a copy to be downloaded by an individual for the purpose of research and private study only. The thesis may not be reproduced elsewhere without the permission of the Author.

Robot Assisted Floor Surface Mapping and Modelling for Prediction of Grind Finish

by Scott D. Wilson

Submitted to the School of Engineering and Advanced Technology on April 14, 2018, in partial fulfillment of the requirements for the degree of Master of Mechatronics Engineering at Massey University, Auckland.

Supervised by Dr Khalid Arif and Professor Johan Potgieter.

Abstract

Mapping and localisation are fundamental aspects of mobile robotics, as a robot must know its surroundings and location in order to navigate and manipulate successfully. For some navigation tasks, prior knowledge of the 3D environment, in particular the 3D surface profile, can greatly improve tasks such as surface contact, sensor-based inspection, terrain traversability assessment, and surface modification. This thesis presents an investigation into the capability of cheap, accessible sensors to capture floor surface information, and assesses whether the resulting 3D representation of the floor can be used as prior knowledge for a predictive model.
A differential drive robotic platform was developed to perform testing and conduct the research. 2D localisation methods were extrapolated into 3D for the floor capturing process. The robotic system successfully captured the floor surface profile of a number of different floor types, such as carpet, asphalt, and a coated floor. Two types of sensor, a 2D laser scanner and an RGB-D camera, were used to compare floor capture ability. A basic model was developed to estimate the resulting floor surface. The captured surface was used as prior knowledge for the model, and testing was performed to validate the devised model. The model performed well in some areas of the floor but requires further development to improve its performance. Further validation testing of the system is required, and the system can be improved through better 3D localisation, minimisation of sensor errors, and further testing of the application.

Thesis Supervisors: Dr Khalid Arif, Professor Johan Potgieter

Acknowledgements

I would like to thank a number of people who have helped me succeed in this project and helped open doors to opportunities along the way. To my mother and father, for their endless support, assistance, encouragement, and willingness to help me in any way; their love is greatly appreciated. To my girlfriend Caitlin, for her support through late nights, long days, and stressful times, which has helped me persevere and pushed me to succeed, as well as for encouraging me to enjoy the moment and take fun breaks. To my brother and sister: their pleasant distractions and laughs are much appreciated. I would like to offer a massive thanks to my supervisors, Dr Khalid Arif and Prof Johan Potgieter. To Khalid: his insights and discussions have pushed me towards excellence, and I have learnt a lot from him. To Johan: his guidance and vision have taught me a lot, and I know I will continue to learn from him in the years to come.
In addition, I would like to recognise all of the staff and students at Massey University's School of Engineering and Advanced Technology for helping create a supportive and fun work/study environment. A special thanks to Jason Torbet from Polished Concrete for his invaluable expertise and vision for this project, and for his assistance in funding it. Thanks to Massey University for the Massey University Masterate Scholarship, as well as the Massey College of Sciences Scholar Award, which have helped fund my studies over the past year. Thanks to my friends, for keeping me sane and providing distractions from my studies throughout the project.

Contents

1 Introduction
  1.1 Motivation
  1.2 Objectives
  1.3 Project Scope
  1.4 Thesis Outline

2 Literature Review
  2.1 Chapter Overview
  2.2 Robot Platform
    2.2.1 Mechanical
    2.2.2 Software System
  2.3 3D Scanning Technologies
    2.3.1 Terrestrial Laser Scanner
    2.3.2 2D Laser Scanner
    2.3.3 RGB-D Camera
    2.3.4 Optical Interferometry
    2.3.5 Photogrammetry
    2.3.6 mmWave Radar
    2.3.7 Contact-based Measurement
    2.3.8 3D Sensor Summary
  2.4 3D Scanning Methods
    2.4.1 Stationary Scanning
    2.4.2 Stop and Go
    2.4.3 Continuous Sensing
    2.4.4 Sensor Fusion
  2.5 Data Processing
    2.5.1 Surface Representation
    2.5.2 Processing for Surface Analysis
    2.5.3 Straight Edge Method
    2.5.4 Waviness Index
    2.5.5 Wavelet Transform
  2.6 Modelling Methods
    2.6.1 Grinding Model
    2.6.2 Physics Model
    2.6.3 Surface Dynamics
    2.6.4 Model Validation
  2.7 Opportunities for Improvement
  2.8 Chapter Summary

3 Floor Surface Capture Platform
  3.1 Chapter Overview
  3.2 Research Platform Development
    3.2.1 System Requirements
    3.2.2 Mechanical System
    3.2.3 Electrical System
    3.2.4 Software System
    3.2.5 Floor Profile Creation
    3.2.6 Path Planning Program
  3.3 Hardware Selection
    3.3.1 Localisation Sensor
    3.3.2 Floor Sensor
  3.4 Initial Testing
    3.4.1 Experiment Methodology
    3.4.2 Measurement Methods
    3.4.3 Initial Floor Capture Results
    3.4.4 Carpeted Floor
    3.4.5 Workshop Floor
    3.4.6 Asphalt
  3.5 Initial Challenges
    3.5.1 Localisation
    3.5.2 2D Limitations
  3.6 Further Development
    3.6.1 Sensor Accuracy
    3.6.2 Map Creation
    3.6.3 2D to 3D Extrapolation
    3.6.4 RealSense Camera Testing
    3.6.5 Floor Profile Creation with RGB-D Camera
  3.7 Improved Experiment Methodology
    3.7.1 Sensor Calibration
    3.7.2 Measurement Methods

4 Grinding Model from Floor Profile
  4.1 Chapter Overview
  4.2 Important Aspects of Grinding
  4.3 Model Development
    4.3.1 Concrete Grinding Process
    4.3.2 Initial Process
    4.3.3 Initial Model Testing and Results
    4.3.4 Contact Modelling
    4.3.5 Grinding Head Position
    4.3.6 Critical Analysis of Initial Methodology
    4.3.7 Initial Process Conclusion
  4.4 Simplified Global Flatness Model
    4.4.1 Assumptions
    4.4.2 Simplified Model Development
    4.4.3 Consideration of End Condition
    4.4.4 Adjustable Grinding Head Profile
    4.4.5 Grinding Path Overlap
  4.5 Overcoming Issues from Assumptions
    4.5.1 Grinding Area
    4.5.2 Floating Head Assumption
    4.5.3 Controlled Head Assumption
  4.6 Testing Process
    4.6.1 Measurement Methods
  4.7 Initial Model Results
  4.8 Initial Model Discussion

5 Experiments and Results
  5.1 Chapter Overview
  5.2 Testing of Improved Floor Surface Capture System
    5.2.1 Experiment Methodology
    5.2.2 Improved Floor Capture Results
  5.3 Validation of Grinding Model
    5.3.1 Platform Implementation
    5.3.2 Methodology
    5.3.3 Measurement Methods
  5.4 Results for Model Validation
    5.4.1 Floor Profile Capture
    5.4.2 Grinding Model Simulation
    5.4.3 Model Comparison

6 Discussion
  6.1 Chapter Overview
  6.2 Floor Profile Creation
    6.2.1 Sensor Comparison
    6.2.2 Floor Capture Capability
    6.2.3 Surface Thickness
    6.2.4 Sources of Error
    6.2.5 Sensor Selection and Limitations
    6.2.6 Justification for Improvements
  6.3 Grinding Model
    6.3.1 Model Assumptions and Justification
  6.4 Model Validation
    6.4.1 Continuous Wavelet Transform Analysis
    6.4.2 Floor Capture Performance
    6.4.3 Model Performance
    6.4.4 Sources of Error
    6.4.5 Model Tuning
    6.4.6 Model Limitations
  6.5 System Improvements
    6.5.1 Point Cloud Registration
    6.5.2 Global z Height
    6.5.3 Grinding Model Parameters

7 Conclusions and Recommendations
  7.1 Conclusions
  7.2 Recommendations

A Floor Surface Mapping using Mobile Robot and 2D Laser Scanner

List of Figures

2-1 Common robot research platforms
2-2 Example sensor technologies
2-3 Surfel surface representation of a 3D room [68]
2-4 Grind grain shape [80]
2-5 Active grits based on depth of protrusion [80]
3-1 Floor sensor adjustable mount
3-2 Diagram of the mobile robot platform highlighting the laser scanner position and orientation
3-3 Electrical power and data connection block diagram
3-4 ROS system architecture
3-5 Created coverage path with vertical machine direction and small grind overlap
3-6 Created coverage path with horizontal machine direction and large grind overlap
3-7 NextEngine scan of brick [31]
3-8 Test surfaces used for mapping
3-9 Coverage path for scan area
3-10 Results for carpeted surface
3-11 Results for coated asphalt surface
3-12 Results for asphalt surface
3-13 Diagram of robot floor measurement on a slope
3-14 3D extrapolation verification test
3-15 RealSense lighting calibration
3-16 Corrected laser scan (above) shown with raw laser scan (below)
4-1 Diagram of grinding head
4-2 Simulink model of tool positions with respect to time
4-3 Scratch pattern results
4-4 Simulink simulation with surface manipulation
4-5 2D diagram of grinding head surface contact
4-6 Example of created surface to be ground
4-7 Test surfaces used for mapping
4-8 Diagram of diamond tool cutting depth
4-9 Test 1 example output
4-10 Test 1 example contour output
5-1 Laser scan results for carpet floor
5-2 RGB-D capture results for carpet floor
5-3 Laser scan results for workshop floor
5-4 RGB-D capture results for workshop floor
5-5 Robot platform for floor profile capture, with RealSense field of view highlighted
5-6 Grinding model validation methodology
5-7 Pre-grind floor surface capture results
5-8 Post-grind floor surface capture results
5-9 Simulated floating head floor surface results
5-10 Floor surface pre-grind
5-11 Simulated floor surface post-grind; floating head
5-12 Experimental post-grind output for comparison
5-13 Normalised difference between pre-grind surface and post-grind surface
5-14 Normalised difference between pre-grind surface and simulated output surface
6-1 Captured high point in carpet floor
6-2 Investigation of corresponding high point in carpet floor using straight edge
6-3 Workshop floor with significant bump highlighted
6-4 Capture of workshop floor high point
6-5 Investigation of high point in workshop floor using straight edge
6-6 Deviations from flatness, pre-grind floor
6-7 Deviations from flatness, post-grind floor
6-8 Deviations from flatness, simulated grind floor

List of Tables

3.1 Sensor Specifications
4.1 Time to Complete Simulation
4.2 Deviation across Simulated Floors using Floating Head Assumption
5.1 Deviation across Validation Floors for Pre-Grind, Post-Grind, and Simulation

Chapter 1

Introduction

Automation in construction is a growing field, particularly around reducing the level of manual operations and increasing productivity on site. Concrete floor polishing is typically a heavily manual process requiring workers to push heavy grinding machines along the floor. Recent innovations in grinding machine design provide remote control features that increase productivity [64, 55]; however, opportunities for improvement remain. The work is hazardous: the concrete dust produced is potentially damaging to the lungs, increasing the risk of silicosis, lung cancer, chronic obstructive pulmonary disease, and kidney disease [78, 4]. In addition, the strenuous labour involved can result in injury. Due to the repetitive nature of the grinding process, this task has the potential to be automated, eliminating hazardous conditions and the need for hard labour, and potentially providing a significant increase in productivity.
However, before the grinding process can be automated, a number of research areas require consideration: in particular, capturing the profile of the floor to highlight areas to be ground (for accurate flatness control), machine localisation and control, and a control algorithm that can drive the grinder as accurately as an expert operator would.

1.1 Motivation

The motivation behind this research is to identify research areas that can be applied to the automated grinding process and to gain further understanding of these areas for this particular application. The grinding process is heavily manual and hazardous, so automating the task can help reduce exposure, harm, and injury to operators. In addition, there is the opportunity to increase productivity whilst decreasing the physical load on operators. For this project, two main areas require consideration: how to capture floor surface profile information, and how to use this information to predict, and eventually control, the grinding machine to achieve accurate flatness. Current methods for scanning large areas are often expensive and take a considerable amount of time [81]. Whilst 3D laser scanners can provide large point clouds of an area, prices start at approximately $50,000 NZD, making their use in some applications unfeasible. There is therefore an advantage in using cheaper, more accessible sensors to capture the floor profile in a fast and reliable manner. Our approach is to use two cheaper sensors: a 2D laser scanner for accurate positioning and Simultaneous Localisation and Mapping (SLAM), and a second sensor for creating a dense point cloud of the floor. The second sensor was initially a 2D laser scanner and later an RGB-D camera. A basic grinding model is required for predicting material removal along an uneven surface.
The literature contains many models that have successfully captured the grinding process; however, in many of these models the grind wheel is fixed in position, so the cut depth can be estimated using probabilistic approaches and the known positions of the tool (wheel) and workpiece. For floor grinding, the tool (the grinding machine) is not fixed; instead, it moves with the surface of the floor. This results in a non-uniform grind along the floor that is dependent on the floor surface itself. There is an opportunity to devise a basic grinding model that takes the floor profile as an input and estimates material removal across that profile. This could provide information to help control an automated machine so that it can accurately grind a floor. The aims of this project are to carry out fundamental research into how the floor surface profile can be captured, and how this information can be used for concrete grinding applications.

1.2 Objectives

To achieve the stated aim of this research, the following objectives have been identified:

∙ Study of existing methods for capturing 3D environment information.
∙ Development of a research platform to apply existing 3D capture methods using a fast and relatively cheap approach.
∙ Conducting experiments to assess the capability of the research platform in capturing the floor surface profile.
∙ Study of grinding process modelling methods and investigation of how they could be applied to the complex concrete floor grinding process.
∙ Development of an appropriate basic grinding model.
∙ Conducting experiments to validate the basic grinding model and the robotic system developed.

1.3 Project Scope

The scope of this project is to develop testing methods and identify areas of further research required for the development of an automated concrete grinder or similar machine.
The scope is restricted to developing a research platform for capturing floor profiles; an automated grinding machine is considered out of scope for this project. The floor profile capture process is limited to the use of cheap and accessible sensors for capturing the 3D profile; 3D Terrestrial Laser Scanners (TLS) will not be used due to the substantial cost involved. The grinding model developed will be used to estimate material removal based on constant speed and captured floor profiles. Further development of the grinding model into a more complex system to be used for control is out of scope for this project. In addition, this research focuses on grinding a concrete floor, and so the model will be limited to concrete material.

1.4 Thesis Outline

Chapter 1 of this thesis outlines the background information relating to this research, along with the motivations, objectives, and scope. Chapter 2 describes relevant areas of research and reviews the current literature on these topics. An extensive review of the current state of scanning processes, modelling methods, and robotic research platforms is conducted. Opportunities for improvement are identified, in addition to how proven methodologies can be applied to the process under consideration. Chapter 3 describes the development of a replica robotic research platform and how this platform can be used to scan and model a floor surface profile. Testing methods and initial results are also discussed. From the initial results, the chapter discusses a number of areas for improvement and how these were implemented on the robotic platform. Chapter 4 discusses the development of a concrete grinding model that can aid further research into accurate control of the grinding process. Ultimately, the automated grinding process should be able to identify high and low areas of a floor and grind all parts of the floor to the required floor flatness standard.
Improvements to the model and justification for the assumptions made are also discussed. Chapter 5 discusses the experiments undertaken with the improved floor capture system and the validation of the concrete grinding model; in addition, the experimental results are given. Chapter 6 is a discussion and review of the methodologies used and the results found. Justification of decisions made throughout the project is included, and future improvements to the system are discussed. Chapter 7 concludes this thesis by discussing the successful aspects of this project and offering areas of required future work.

Chapter 2

Literature Review

2.1 Chapter Overview

The development of a mobile robot for accurate concrete grinding involves the intersection of many areas of research. For the mobile robot to act autonomously, research from autonomous mobile robotics, such as Simultaneous Localisation and Mapping (SLAM), must be utilised. SLAM is important because the robot must know where it is in the global frame in order to perform accurate grinding. Prior knowledge of the floor surface profile can assist with control of the grinding action. Capturing this information requires research into 3D scanning techniques, methodologies, and 3D point cloud processing. Finally, a model of the grinding action is required both to calculate the optimum grinding path and for control, so that the mobile robot may successfully achieve its task. This requires research into modelling methods, with particular interest in current grinding models and how they relate to the niche area of concrete grinding.

2.2 Robot Platform

A robotic platform is required to conduct experiments and further the research. The platform should be able to act autonomously as well as accommodate the required sensors and systems. There are many types of robotic platform, each with its own advantages and disadvantages for various applications.
The robot platform requirements can be divided into the mechanical structure and the software system.

2.2.1 Mechanical

A number of existing robotic platforms are often used for research applications. The three robots discussed here are the Turtlebot, the Pioneer P3-DX, and the Pioneer P3-AT. The Turtlebot (Figure 2-1a) is a differential drive robot designed as a research platform by Melonee Wise and Tully Foote at Willow Garage in November 2010 [32]. The robot is an Open Hardware Design, and the information required to ensure compatibility with the system is provided under the Robot Operating System (ROS) REP 119 specification [32]. The Turtlebot platform comes with a Microsoft Kinect RGB-D camera and, with its out-of-the-box ROS integration, is ready to perform spatial operations such as 3D mapping and navigation. The Turtlebot is used in various areas of research due to its modularity and ease of use, for example 3D mapping [21]. The Pioneer P3-DX (Figure 2-1b) is a research platform designed for a modular setup, with the ability to add, switch, or customise sensors and effectors. The platform has a differential drive setup with embedded controllers and ultrasonic sensors. Pioneer robots also come with an extensive C++ library called ARIA; ROS and MATLAB interfaces to the ARIA library are available, making the robot simple to set up and easy to modify. A laser scanner can be added to implement the full 2D navigation stack, and the Pioneer robot has been used in many mobile robot applications [10]. Similar to the P3-DX, the Pioneer P3-AT (Figure 2-1c) is a modular research platform designed to greatly accelerate the development process in new research. A large variety of sensors, such as vision, GPS, grippers, and obstacle avoidance sensors, can easily be added to the platform. The P3-AT differs in that it is four-wheel drive and thus uses skid-steer for control.
Skid-steering differs from differential drive in that it relies on wheel slip for turning; thus, the kinematics of a skid-steer platform are more difficult to estimate [53]. Regardless, this platform has been a popular choice for research due to its versatility and ease of use.

Figure 2-1: Common robot research platforms: (a) Turtlebot [32]; (b) Pioneer P3-DX robot [2]; (c) Pioneer P3-AT robot [1]

2.2.2 Software System

A fundamental component of robot operation is the software used. The software architecture is an important aspect that must be considered when designing robots, as it can either accelerate or greatly decelerate development time. The architecture and software used often depend on the application and the hardware. Industrial robots tend to have their own manufacturer's software and operating procedures, while the software for less-developed mobile robots is largely up to the developer. For some applications, embedded systems and microcontrollers are sufficient control systems for the robot; however, these are limited to basic tasks and little computation. A Central Processing Unit (CPU) is often required for increased computational capability and complex tasks such as localisation. A common software system used in robotics is the Robot Operating System (ROS) [58]. Developed by Willow Garage in 2007, it breaks down into a number of key components: each task is attributed to a node; nodes communicate via topics and services; and these topics and services are easily remapped to create a highly modular environment. This modularity is particularly useful for research applications, where the methodology, hardware, and algorithms used must be flexible. Because of it, ROS has grown to have a number of community contributors adding to, and supporting, the network, providing a large spectrum of applications and software for new users and greatly accelerating development time.
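The node/topic pattern described above can be illustrated with a minimal publish/subscribe sketch. This is plain Python rather than ROS itself (no rospy is used), and the topic name, broker, and callback are purely illustrative:

```python
# Minimal illustration of the publish/subscribe pattern that ROS builds on.
# Plain Python, not ROS: the Broker class and "scan" topic are illustrative only.
from collections import defaultdict

class Broker:
    """Routes messages from publishers to subscriber callbacks by topic name."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        for cb in self._subs[topic]:
            cb(message)

broker = Broker()
received = []

# A "node" handling laser data subscribes to the "scan" topic...
broker.subscribe("scan", lambda ranges: received.append(min(ranges)))

# ...and a sensor "node" publishes readings to it.
broker.publish("scan", [1.9, 0.4, 2.7])
print(received)  # [0.4]
```

Because publishers and subscribers only share a topic name, either side can be swapped out without changing the other, which is the remapping flexibility the text describes.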
ROS provides a complete system, in that robot control can be managed entirely from within ROS with relative ease. A number of other software systems, such as MATLAB and LabVIEW, provide the computational and analytical capability required for robot control, such as kinematics and path planning; however, these are not complete, standalone robot systems in the way ROS is.

2.3 3D Scanning Technologies

In order to accurately control the grinding process, a 3D representation of the floor surface profile is required as prior knowledge. This requires determination of the 3D geometry of the floor as well as its global position relative to the robot coordinate system. There are a number of different technologies available for creating a 3D map of an environment. These technologies fall into two main categories: 3D sensors that capture the map at once, and 2D sensors that are moved to capture the map successively. Both of these methods can create detailed 3D depictions of the environment; however, each has its advantages and disadvantages. The various technologies available are discussed in detail in the subsequent sections, followed by some conclusions about their suitability for the current application.

Figure 2-2: Example sensor technologies — (a) Faro 3D laser scanner [44]; (b) Sick LMS291 2D laser scanner [3]; (c) RealSense D435 RGB-D camera [45]

2.3.1 Terrestrial Laser Scanner

A Terrestrial Laser Scanner (TLS) (Figure 2-2a) uses either time-of-flight or phase-based laser scans to create a 3D point cloud of the environment. A time-of-flight laser works by sending a laser beam to the object and measuring the time for the beam to return to the sensor [65]. Phase-based scanning emits a constant beam of laser energy and measures the phase shift of the returning energy, from which the distance is calculated [65]. From the distance measurements, the x, y, z position of a measured point can be calculated by recording the yaw and pitch of the TLS system.
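The two calculations just described — time-of-flight range and the conversion of a range plus recorded yaw and pitch into an x, y, z point — can be sketched as follows. The angle conventions (yaw about the vertical axis, pitch upward from the horizontal plane) are an assumption for illustration; real scanners document their own conventions:

```python
import math


def tof_distance(round_trip_time_s: float) -> float:
    """Time-of-flight range: the beam travels out and back, so the
    one-way distance is half the round-trip distance."""
    c = 299_792_458.0  # speed of light in m/s
    return c * round_trip_time_s / 2.0


def to_xyz(r: float, yaw: float, pitch: float) -> tuple:
    """Convert a spherical measurement (range, yaw, pitch) into a
    Cartesian point, with pitch measured up from the horizontal plane
    and yaw about the vertical axis (assumed convention)."""
    x = r * math.cos(pitch) * math.cos(yaw)
    y = r * math.cos(pitch) * math.sin(yaw)
    z = r * math.sin(pitch)
    return (x, y, z)


# A 10 m return straight ahead at zero pitch lies on the x axis.
point = to_xyz(10.0, 0.0, 0.0)
```

Sweeping yaw and pitch while recording ranges and applying `to_xyz` to each measurement is, in essence, how a TLS builds its point cloud.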
TLS systems can provide many data points quickly, with some reaching 700,000 points per second [47]. In addition, some laser scanners can achieve sub-millimetre accuracy. TLS systems have successfully been used to map large areas [15, 72, 7, 47]. The TLS output results in dense point clouds of large areas that can then be processed for the application, such as terrain traversability [47] and flatness analysis [7]. However, due to their high specifications, these sensors are often very costly and thus unsuitable for many mobile robotic applications. In addition, the large number of points required for high resolution demands a corresponding increase in CPU computational capacity. Some TLS scanners cannot scan continuously and so require a Stop and Go approach, which is a limiting factor for some applications. This is a particular issue when a mobile robot is moving at speeds greater than 0.5 m/s, where the points begin to suffer from Doppler shifting due to the robot velocity combined with the scan time. Krusi et al. [47] overcame this limitation through software that corrected individual points based on the known robot motion. Whilst TLS provides a robust method for 3D mapping and has been proven through a number of studies, the high cost of these sensors introduces commercial challenges that render them difficult to implement in a number of applications.

2.3.2 2D Laser Scanner

A 2D laser scanner (Figure 2-2b) uses similar technology to a 3D TLS but provides only a single scan line. Because of this, 2D laser scanners can have similar specifications to 3D TLS systems but are often substantially cheaper, at around US $1,000-6,000 [3] compared with around US $50,000 for an accurate 3D laser scanner.
A 3D map can be created from 2D scans through two methods: rotating the laser scanner and recording the rotation and scan information to be stitched together, or moving the 2D laser scanner through the environment and recording position, orientation, and scan information. Due to their affordability, these laser scanners have been a popular choice of sensor in mobile robotics over the past 10-20 years, particularly for 2D SLAM [26, 40, 29, 8]. 2D laser scanners are a proven means of creating maps through algorithms such as OpenSLAM's GMapping [36, 35].

2.3.3 RGB-D Camera

An RGB-D camera (for example, Figure 2-2c) is a sensor that provides both RGB colour images and per-pixel depth information. RGB-D cameras can be relatively cheap 3D sensor options when compared with laser scanners. There are two main types of these cameras: structured light and time of flight [24]. In the structured-light type, the sensor projects an infrared speckle pattern onto the target (for example, PrimeSense [79]). The pattern is then captured by an infrared camera in the sensor and compared with reference patterns stored in the device, from which the system estimates the per-pixel depth. This type of sensor is often combined with stereo sensing. The second type uses time-of-flight sensors to measure the per-pixel depth and associates each depth estimate with an RGB pixel. Henry et al. [41] used a PrimeSense [79] RGB-D camera to create dense 3D maps of an indoor environment. They found that accurate integration of the depth and colour images can help to provide robust frame matching and loop closure, which is particularly useful for SLAM. Endres et al. [30] utilised only an RGB-D sensor to capture the 3D environment, without relying on odometry or external localisation at all.
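The per-pixel depth these cameras report can be turned into camera-frame 3D points via the standard pinhole model. The sketch below uses hypothetical intrinsic parameters (fx, fy, cx, cy) purely for illustration; real devices expose their calibrated values:

```python
def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) with a measured depth
    into a camera-frame 3D point (x, y, z)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)


def depth_to_cloud(depth, fx, fy, cx, cy):
    """Convert every valid pixel of a depth image (a list of rows)
    into a list of 3D points; zero depth marks an invalid reading."""
    cloud = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d > 0:
                cloud.append(backproject(u, v, d, fx, fy, cx, cy))
    return cloud


# A tiny 2x2 depth image (metres) with one invalid pixel, and
# hypothetical intrinsics chosen only for the example.
cloud = depth_to_cloud([[1.0, 0.0], [2.0, 1.5]],
                       fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

The same loop, applied to a full-resolution depth image and paired with the aligned RGB values, yields the coloured point clouds discussed in the following paragraph.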
Depth data can be correlated with the RGB camera, yielding an RGB image with a depth associated with each pixel. This is often represented as a depth cloud and converted into a point cloud. The alignment of the RGB image with depth means that these sensors can also be used for object recognition [10]. However, while they provide fast and cheap 3D point clouds of the environment, they often lack the accuracy, resolution, and range of other technologies. In addition, the use of infrared light renders the cameras susceptible to noise and erroneous readings due to changes in lighting. Recent advancements in sensor technology allow the exposure to be controlled automatically, minimising measurement errors due to changes in lighting [45].

2.3.4 Optical Interferometry

Optical interferometry uses the projection of structured laser light to study the surface of a target. It is often used for the analysis of small parts or objects and offers extremely high accuracy and resolution. However, optical interferometry can be highly sensitive to vibration and temperature, which renders it difficult to implement outside of a controlled lab environment. Gao et al. [33] investigated two methods for minimising the errors due to vibration when using interferometry to measure surfaces in process. The sensor system was able to provide a highly detailed surface analysis of the target object, but had too small a field of view to be useful for large-area applications.

2.3.5 Photogrammetry

Photogrammetry uses a series of 2D images to form 3D objects using mathematical relationships and triangulation between the images. A number of methods can give insight into the 3D shape of objects, such as shape from shading, shape from texture, shape from specularity, shape from contour, and shape from 2D edge gradients [62]. This method is often low in cost and the instruments are quite portable.
However, photogrammetry can fail to capture flat, repetitive surfaces such as textureless walls and ceilings. The photo-matching methods rely on features within the images to create the 3D models, and some practical applications may lack the required level of features. This could be the case with concrete grinding, where the walls and floor can appear relatively uniform and thus may lack sufficient features to build from.

2.3.6 mmWave Radar

Radar is becoming more popular in autonomous driving applications due to its robustness and cost-efficiency [67, 61]. mmWave radar is robust to changes in lighting, temperature, dust, and humidity, making it a reliable sensor in difficult environments. In addition, radar can penetrate several optically opaque materials and conditions, such as thin walls, fog, and snow [67]. Laser scanning, by contrast, is sensitive to dust and can thus give inaccurate results; similarly, photogrammetry and RGB-D cameras are sensitive to lighting. Belaidi et al. [11] utilised a mmWave radar for these robust properties and created a 3D scan of the terrain for optimal path planning and traversability analysis.

2.3.7 Contact-based Measurement

In contrast to the non-contact measurement methods discussed above, stylus-type profilometers can provide 2D and 3D surface profiles. Diamond-tip styluses are commonly used in industry for measuring the quality and conformity of parts. The main advantage is the robustness of the process: the tip requires contact with the part for measurement, eliminating the distortions or occlusions that occur with optical measurement methods. However, there remains a systematic error between the real contact point and the measured contact point, related to the radius of the stylus tip. Whilst these methods can measure with high accuracy, they require the stylus to be moved over the whole surface; for large surfaces, this could result in long scan times or require arrays of styluses to measure a larger area at each pass.
2.3.8 3D Sensor Summary

Accurate 3D surface profiles can be achieved using a number of different technologies. Contact-based approaches can achieve high-accuracy, high-resolution 3D profiles in a controlled environment, but are challenging to apply to large dynamic environments. Optical interferometry can also achieve very high accuracy and resolution; however, it can be sensitive to vibration and temperature. 3D TLS systems do not suffer from such sensitivity issues and can be deployed in a variety of environments; however, these systems are often very costly and thus unfeasible for some applications. Cheaper options such as 2D laser scanners and RGB-D cameras require additional work, such as moving the sensor through the environment, to create an accurate 3D representation.

2.4 3D Scanning Methods

Prior knowledge of the 3D geometry of a floor surface does not by itself provide sufficient information for improved navigation: the floor surface must be related to a global map that the mobile robot can access. This can be achieved either by having a 3D map of the environment, or by registering the points of the 3D surface to a 2D map of the environment. 2D SLAM is a mature area of research in mobile robotics and makes use of a number of well-understood techniques [26, 40, 29, 8]. It can provide a robot with an accurate 2D map for localisation within the environment. 3D SLAM can be achieved through similar approaches, although it is more complicated and requires far greater computation [23]. 3D SLAM would be required to create a full 3D map for use by a mobile robot; methods that utilise cheaper sensors and less computation would therefore be favourable. A thorough assessment of the development of SLAM algorithms over the past two decades can be found in the work of Cadena et al. [19]. In the literature, a number of approaches have been used to create a 3D map of the environment.
These fall into three main areas: stationary scanning, Stop and Go, and continuous mapping. In stationary scanning, the measurement device is used whilst stationary, often from a single position. In Stop and Go scanning, the scanner is moved through the environment and stops at predefined points to perform a scan. Finally, in continuous scanning, the sensor moves through the environment continuously capturing data and does not have to be stopped to perform a measurement. These different methodologies are discussed in detail below.

2.4.1 Stationary Scanning

A stationary sensor can capture a large amount of information and, because it is stationary, avoids a number of sources of error that affect other methods, such as vibration and localisation error. However, stationary sensors are limited in what they can see, often by line of sight, and in complex environments they can fail to capture all of the available information. Grzelka et al. [37] investigated the use of a stationary 3D scanner for analysing the surface roughness of a concrete slab. The system was able to provide enough information to estimate the surface roughness of the measured areas. Olofsson et al. [56] utilised a 3D TLS to create 3D scans of tree stems and record their height using the RANSAC method. Mobile Laser Scanning (MLS) presents an alternative to traditional static terrestrial laser scanning that can increase efficiency and provide robust 3D representations of the environment. MLS does have some limitations: the mobile system must be accurately localised in the environment, bumps whilst in motion can cause errors, and the 3D resolution is often dictated by the velocity of the mobile robot and the sampling rate. Zlot et al. [81] suggest that a stationary 3D scanning system, whilst providing a large number of accurate points, relies too heavily on accurately surveyed sensor positions and expensive, time-consuming systems.
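The RANSAC method mentioned above can be sketched with a simple 2D line fit: repeatedly sample a minimal set of points, score the resulting model by its inlier count, and keep the best. This is a generic illustration of the algorithm, not the cited stem-fitting implementation:

```python
import random


def ransac_line(points, n_iter=200, thresh=0.1, seed=0):
    """Fit y = m*x + b to noisy 2D points, ignoring outliers, via the
    classic RANSAC loop: sample 2 points, score by inlier count."""
    rng = random.Random(seed)
    best_model, best_inliers = None, 0
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair; skip this hypothesis
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        # Count points within the distance threshold of this line.
        inliers = sum(1 for x, y in points if abs(m * x + b - y) < thresh)
        if inliers > best_inliers:
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers


# Points on y = 2x plus two gross outliers that least squares
# would be dragged towards, but RANSAC simply votes down.
pts = [(x, 2.0 * x) for x in range(10)] + [(3, 40.0), (7, -5.0)]
(m, b), n_inliers = ransac_line(pts)
```

The same sample-score-keep loop generalises to planes or cylinders (as used for tree stems) by changing the minimal sample size and the model fitted to it.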
2.4.2 Stop and Go

A common method for moving a sensor through an area to capture the whole environment is Stop and Go, which refers to moving the sensor through the environment and stopping to perform each measurement. The method retains many of the advantages of stationary scanning whilst providing a more efficient solution. Stop and Go provides accurately referenced measurements without the vibration effects, velocity-shifted measurements, or localisation update errors that limit the use of some sensor technologies. The sensors used are often full 3D sensors and can take some time to perform the measurement. Lin et al. [51] utilised a Stop and Go method to improve the efficiency of a traditional static 3D TLS. They concluded that the Stop and Go approach can provide a more flexible and faster mapping mode compared with static TLS, while also providing the stability and high sampling density seen in TLS. Putz et al. [57] utilised a 3D laser scanner to scan uneven terrain for improved path planning using a Stop and Go approach; this was done to minimise measurement errors and ensure that accurate point clouds were created to be stitched together later. This was effective and provided detailed point clouds; however, as found by Lin et al. [51], the method increased scan time, which could make it impractical for some applications. Chow et al. [22] investigated fusing an Inertial Measurement Unit (IMU) with an RGB-D camera to assist the localisation of a Stop and Go scanning solution.

2.4.3 Continuous Sensing

A faster method for thoroughly scanning an environment involves moving the sensor through it continuously. The sensor captures data whilst the platform moves, and the data are stitched together to form a 3D representation of the environment. In this methodology, the sensor can be either 2D or 3D as discussed previously. Wang et al.
[73] investigated replacing expensive 3D laser scanners with two low-cost 2D laser scanners to create 3D representations of plant structures in indoor environments. The platform supplied accurate referencing and calibration, which enabled each 2D profile to be stitched into a full 3D point cloud. The method proved to be a successful approach to reducing costs whilst providing sufficiently accurate information for the application. Zlot et al. [81] utilised a similar approach to avoid using an expensive 3D TLS. They utilised three 2D laser scanners to create a 3D map of a tunnel. One of the laser scanners was placed at 25 degrees to the vertical on a rotating platform with an industrial-grade MEMS IMU; this scanner created a 3D map of the environment and was used for SLAM and localisation within the tunnel. The two other laser scanners were mounted vertically so that a 360-degree view of the tunnel was created. The sensor information was collected and stitched together to form an accurate 3D representation of the tunnel. This arrangement maintained an even distribution of points along the scan line, which would not be the case with a single rotating 2D laser scanner. A key issue with continuous mapping is accurate localisation: each data collection frame must be referenced to a global frame so that the frames can be accurately stitched together. This was achieved by Zlot et al. [81] through 3D SLAM from a rotating 2D laser scanner. In contrast, Wen et al. [76] utilised 2D SLAM for positioning using a 2D laser scanner, significantly reducing computational complexity. Banica et al. [9] used two sets of laser-based imaging systems, spatially correlated through the use of proximity sensors, odometry, and geolocation. Both methods were successful in localising the robot, but varied greatly in the computation required; this could be a significant limitation for some applications.
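The stitching step that these continuous approaches rely on reduces, in the simplest planar case, to one rigid transform per scan: each range/bearing measurement is converted to a point in the sensor frame and then moved into the global frame using the robot pose recorded at scan time. The sketch below assumes a planar pose (x, y, theta) and a fixed sensor height, which is a deliberate simplification of the full 6-DoF problem:

```python
import math


def scan_to_world(ranges, angles, pose, sensor_height=0.0):
    """Transform one 2D laser scan into global-frame 3D points.

    ranges, angles: laser measurements in the sensor frame.
    pose:           (x, y, theta) of the robot in the global frame
                    at the moment the scan was taken.
    sensor_height:  fixed z offset of the scan plane (simplified model).
    """
    px, py, th = pose
    pts = []
    for r, a in zip(ranges, angles):
        # Point in the sensor frame, then rotate by theta and translate.
        sx, sy = r * math.cos(a), r * math.sin(a)
        gx = px + sx * math.cos(th) - sy * math.sin(th)
        gy = py + sx * math.sin(th) + sy * math.cos(th)
        pts.append((gx, gy, sensor_height))
    return pts


# One beam straight ahead (bearing 0), robot at (1, 0) facing +y:
# the 2 m return should land near global (1, 2, 0).
pts = scan_to_world([2.0], [0.0], (1.0, 0.0, math.pi / 2))
```

Accumulating the output of `scan_to_world` over many scans is the stitching described above, and it makes plain why accurate localisation matters: any error in `pose` is transferred directly into every point of that scan.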
A common problem that must be considered in continuous sensing is Doppler-shifted measurements due to the motion of the robot. Droeschel et al. [27] utilised a 3D range finder to approach the problem of simultaneous localisation and mapping. Their modelling approach used surface elements to provide efficient and accurate registration of points. A Hokuyo laser scanner was used on a rotating platform with an IMU that compensated for motion during scan acquisition; the mapping was performed continuously. Krusi et al. [47] also recognised the importance of removing distortion from laser data captured during movement, particularly when using a sensor with a long measurement time. Their approach involved correcting measurements from a 3D TLS based on the velocity and position of the moving robot platform. This helped create accurate 3D point clouds of the environment that could then be used for terrain traversability and path planning.

2.4.4 Sensor Fusion

As discussed above, each sensor has both advantages and disadvantages. Some of these disadvantages can be overcome, or their errors minimised, through sensor fusion, in which two or more sensors gather similar information that is then fused together to minimise errors. Wen et al. [76] fused a 2D laser scanner with an RGB-D camera to produce a 3D indoor map. Sensor fusion was used to address a previously identified difficulty of insufficient overlapping frames, with a fusion-iterative closest point method used to align frames consecutively. The 2D laser scanner was used for localisation alongside odometry data; from the resulting localisation, the RGB-D camera data could be aligned and stitched together to create a 3D map. Chen et al. [21] fused an IMU with Visual SLAM information using an Extended Kalman Filter (EKF) to estimate the pose of a robot in indoor environments. From this known pose, 2D laser scans were stitched together to create a 3D map in real time.
This solution is limited in that it relies heavily on visual odometry, which can fail if the environment lacks features. Fusing the IMU helps overcome this limitation, as can fusing additional sensors such as wheel odometry. Chow et al. [22] similarly investigated fusing an IMU with an RGB-D camera to assist the localisation of a Stop and Go scanning solution.

2.5 Data Processing

Once the surface information has been gathered, it must be processed. While the processing requirements are largely dependent on the particular application, there are a few general methods that can be applied. The data processing in this study is restricted to determining high points, low points, general slope, and flatness. An aspect of data processing that is widely discussed by researchers, and that is particularly application-specific, is how the data are represented and stored. Storing all of the raw data can result in huge datasets that are unmanageable and cannot be processed efficiently. Various data representation techniques used in the field of point clouds, surfaces, and 3D environments are reviewed and analysed below. Once the data are ready for processing, a number of different methods can be applied; the driving factors are the available computational capacity and the desired end result. Data processing can be used to aid decision making, such as decisions relating to terrain traversability and surface grinding.

2.5.1 Surface Representation

Processing large point clouds of a surface is possible, but is computationally expensive due to the large number of data points. A number of methods have been implemented to reduce the computation required for processing large point clouds, relating in particular to terrain traversability and better representation of surfaces; these aspects are often used for planning in uneven and rough terrain. Endres et al. [30] overcame the limitations of pure point cloud representation by using a 3D occupancy grid approach.
The grid utilised the Octree-based mapping framework OctoMap [42]. This tree structure provides an efficient way to represent the voxels of a 3D environment, and inherently allows the map of that environment to be stored at multiple resolutions. Common 2.5D approaches are memory-efficient; however, Endres et al. [30] found that they were insufficient for storing a complete environment map and can miss some data. The Octree structure retains the benefits of graph-based data representation, where data can be accessed and stored efficiently. Putz et al. [57] built upon previous research into graph-based surface representation to create a triangular mesh of an environment. A triangular mesh representation is suggested to provide a flexible solution that can be used for multiple purposes, such as environment representation [38], visualisation of human-robot interaction, and ground-based path planning. Krusi et al. [47] used raw point cloud measurements for surface representation and utilised a graph-based submap approach to greatly reduce computation, particularly for navigation of large 3D environments. The number of points in a submap was restricted before being merged into the graph structure, reducing computation whilst maintaining a high level of resolution and accuracy. The graph-based approach provides a rapid method for searching a whole environment for an efficient path to a goal, and additionally provides a computationally efficient way of updating the map, simply by updating the current submap; this reduces the need to compare the point cloud with the entire environment representation. The graph approach for data representation is said to be highly scalable [16]. The submap approach also reduces the effect of localisation drift on the global map.
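The voxel idea underlying OctoMap-style representations can be illustrated with a flat hash-grid: points are bucketed into fixed-size voxels, and each voxel stores a hit count. This toy version deliberately omits the octree hierarchy, ray-casting, and probabilistic occupancy updates of the real framework; it shows only the core reduction from many points to few cells:

```python
from collections import defaultdict


def voxelise(points, resolution):
    """Bucket 3D points into cubic voxels of the given edge length
    (metres) and count hits per voxel. A flat stand-in for the leaf
    level of an octree occupancy map."""
    grid = defaultdict(int)
    for x, y, z in points:
        key = (int(x // resolution),
               int(y // resolution),
               int(z // resolution))
        grid[key] += 1
    return grid


# Three points: the first two fall in the same 5 cm voxel,
# so 3 raw points compress to 2 occupied cells.
pts = [(0.01, 0.02, 0.0), (0.04, 0.01, 0.0), (0.11, 0.0, 0.0)]
grid = voxelise(pts, resolution=0.05)
```

Choosing a coarser `resolution` trades detail for memory, which is the multi-resolution storage property attributed to the octree structure above.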
A common surface representation in the recent literature involves multi-level resolution surfaces: the surface is represented by a number of surfaces of different resolution, in particular high resolution near the robot and decreasing resolution with distance from the robot. This is beneficial for a number of applications. In 3D localisation utilising feature matching, low-resolution maps can be used to quickly narrow down the possible locations of the robot, and the high-resolution maps can then be used to refine the robot's location [27]. This reduces computation because the large number of points is only used in a small area; using the full point density across the entire point cloud would require expensive computation to match the features correctly. Droeschel et al. [27] used a surface description approach. First, a 3D laser scan is converted into points in a map. These points are then stored in a multi-resolution grid structure. Cell size increases with distance from the robot, reducing the large number of cells traditionally required. Each grid cell is then represented by a surfel that summarises the 3D points in the cell; the surfel provides an indication of the position and orientation of the points at each grid cell position. Stoyanov et al. [68] used the Normal Distributions Transform to create similar surface descriptions. This provided an easy method for distinguishing between floor, walls, and obstacles in the 3D map of the environment, which could then be used for navigation and path planning (Figure 2-3).

Figure 2-3: Surfel surface representation of a 3D room [68]

2.5.2 Processing for Surface Analysis

In addition to storing and representing the captured 3D information, it must be processed for the specific application. For concrete grinding, the information must be processed to evaluate the floor, particularly its overall flatness.
Useful information for concrete grinding includes the high and low areas of the floor and any rolling undulations in it. These undulations can have a number of different frequencies, and can be seen as a slight change in floor height over a short distance (e.g. 1 m) or over a long distance (e.g. 10 m). The approaches discussed below are the Straight Edge method, the Waviness Index, and the Wavelet Transform.

2.5.3 Straight Edge Method

The Straight Edge method is the simplest and oldest method of analysing surface flatness [18]. It involves placing a 3 m long straight edge on the floor at random locations. The deviation of the floor from the straight edge is measured, and if it is less than a tolerance value the floor is determined to be within specification. This method is simple and cost-effective, but is prone to errors and only measures the surface waviness at a 3 m wavelength.

2.5.4 Waviness Index

The Waviness Index (WI) is a quantitative analysis for determining floor flatness against building specifications [46]. Measurements are conducted at 1 ft (30 cm) intervals along floor survey lines, and deviations are calculated from the midpoints of imaginary chords defined by pairs of survey points [6]. The WI is measured at five chord lengths (60, 120, 180, 240, and 300 cm) and so covers more undulation frequencies than the Straight Edge approach. An advantage of the WI is that it expresses the deviation from flatness in a familiar measurement unit (inches or centimetres) and so is simple to comprehend; however, measurements are only performed along the 1D survey lines. The WI can therefore miss undulations of relatively short period (less than 60 cm) [14]. In addition, the WI is difficult to automate, which limits its use in robotic applications.

2.5.5 Wavelet Transform

One method of surface analysis involves applying a Wavelet Transform to the data.
A Wavelet Transform is a signal analysis method based on convolution of an input signal with a wavelet function at different locations and at multiple scales. This means that the wavelet pattern can be detected at any scale and any location in the signal. For a floor, this enables the detection of undulations of different shapes and frequencies, ultimately providing a characterisation of the surface waviness. Valero et al. [72] used a 2D Continuous Wavelet Transform (CWT) to process 3D TLS data for analysing the surface flatness of a concrete floor. The 2D CWT correlated well with results from the current state-of-the-art waviness characterisation method, the Waviness Index (WI). They found that the 2D CWT was able to automatically identify several areas of deviation from flatness across a range of wavelengths, meaning that the method could identify short peaks in the floor as well as gentle undulations across it. Previous manual methods, such as the WI, struggle to achieve this range of analysis. Bosche et al. [14] found similar results when applying a 1D CWT to two different floors scanned by a terrestrial laser scanner. Combining all five undulation periods tested, they found a strong correlation between the methods, with an R-squared of 0.84, although this was not as strong as Valero et al.'s [72] R-squared of 0.96. These differences in correlation could be explained by the sampling used in the WI approach, as the measurement sampling can lead to inaccurate or even failed detection of undulations at specific periods, similar to aliasing. Additionally, Bosche et al. [14] found that the CWT output made it possible to identify both concave and convex undulations in the surface. Alhasan et al. [6] evaluated the performance of two algorithms for processing 3D stationary terrestrial laser scanned point clouds into surface maps that characterise roughness.
The advantage of these algorithms is that they provide a 2D analysis of the surface rather than the traditional 1D analysis along survey lines. Tang et al. [70] discussed three methods for analysing surface flatness, all following a similar three-step approach: set up a reference frame; smooth noise and calculate deviations between points and the reference; and identify regions deviating from the reference by more than a threshold value. The data were collected by a stationary 3D laser scanner and revealed deviations from flatness in the surface; however, further research into the effects of incident angle and other methodological factors is required.

2.6 Modelling Methods

Once a floor profile has been captured, the surface must be analysed to identify the optimum grinding specifications. This can be achieved using a mathematical model.

2.6.1 Grinding Model

Grinding is a process that involves many different factors. Research into grinding is vast, and many findings do not apply to concrete grinding, in particular floor grinding. However, some researchers have been successful in mathematically modelling the grinding action [69, 5, 52, 80, 49, 54, 66, 25]. Due to the complexity of the grinding process and the high number of variable factors, many models aim to reduce the complexity of the process to capture what they consider to be its important aspects. A common approach to analysing the grinding process is the single grit approach [66, 60]. It is suggested that by analysing the performance and forces involved for a single grit, the model is greatly simplified; once the single grit approach is validated, the model can be extended to accommodate the entire grinding wheel.

Single Grit Analysis

In general, grinding involves the sequential motion of individual grits cutting at a micron-level depth, which leads to a macroscopic-level removal of material [66].
Therefore, the grinding performance of a grinding wheel can be described by the behaviour of an individual grit. As a result, overall grinding performance is largely influenced by the number of active cutting edges (individual grits), and so the estimation of these active cutting edges is an important aspect. An exact determination is not possible due to a number of stochastic variables, such as the distribution of the grits and their position, number, and shape. Ma et al. [52] investigated the dynamic behaviour of grinding, in particular the dynamic interaction between the workpiece and the grinding wheel. Zhang et al. [80] analysed the cutting and frictional forces of a single grain with respect to the stress state of the grain, and then verified the results using scratch tests. Grain shape is considered to have a significant influence, and Lee et al. [48] classified each shape into four categories: conical, spherical, rounded table, and four-pyramid (Figure 2-4). Wang et al. [74] found that when the cut depth was large, the grains could be regarded as conical, which has become the common assumption in the literature. Zhang et al. [80] used a conical shape to distinguish grains that were active from grains that were not (Figure 2-5).

Figure 2-4: Grinding grain shapes [80]

Figure 2-5: Active grits based on depth of protrusion [80]

Stochastic Properties

Grinding is a stochastic process, in that it is impossible to ascertain exactly which grits are in which cutting phase at any given time. However, probabilistic approaches can help predict the cutting action and grit behaviour. Chang and Wang [20] introduced a stochastic grit distribution function to describe the random grit distribution in the rotating wheel. The dynamic grinding force was modelled as a convolution of a single-grit force and the grit density function.
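The convolution idea of Chang and Wang can be sketched numerically: a single-grit force profile (the force one grit contributes as it passes through the contact zone) is convolved with a grit density sequence (the expected number of grits newly engaging at each time step) to give the total dynamic force. The numeric values below are arbitrary illustrative choices, not fitted parameters from the cited model:

```python
def convolve(signal, kernel):
    """Full discrete convolution of two sequences (no NumPy needed)."""
    n, m = len(signal), len(kernel)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out


# Force of one grit over its engagement: ramps up, then releases.
single_grit_force = [0.2, 0.6, 1.0, 0.4]

# Expected number of grits newly engaging at each time step
# (a stand-in for samples of the stochastic grit density function).
grit_density = [3.0, 3.0, 2.0, 4.0, 3.0]

# Total dynamic force: overlapping single-grit contributions summed.
total_force = convolve(grit_density, single_grit_force)
```

The superposition is what makes the approach attractive: the hard stochastic part is isolated in the density sequence, while the deterministic single-grit profile is measured or modelled once.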
Setti et al. [66] used the Rabinowicz abrasive wear model to estimate the active grits, allowing the effects of depth of cut, grit size, and workpiece hardness to be considered simultaneously. The simulation of grinding is a complicated process; factors such as the grits' stochastic distribution, undefined geometry, and unknown number of cutting edges make the process difficult to analyse quantitatively. A particular challenge of concrete grinding is that as the concrete is ground, the concrete dust (removed material) begins to act as an additional abrasive. This makes accurate modelling of the concrete grinding process very difficult. Weihs et al. [75] addressed this problem by acknowledging that discrete simulation methods such as finite models cannot be applied due to the abrasive nature of the concrete material. Their approach was to subdivide both the material and the diamond into Delaunay tessellations. The resulting micropart connections could then be interpreted as predetermined breaking points, resulting in improved analysis of the surface geometry and cutting edge interaction. Similarly, Raabe et al. [59] proposed a geometrical simulation model that describes the forces affecting the workpiece, as well as the chip removal rate and the wear rate of the diamond, in terms of process parameters. The model treats both the material and diamond grain as tessellations of microparts connected by predetermined breaking points. The process was then iteratively simulated, with the forces calculated by interpreting the collisions of pairs of workpiece and grain microparts as force impacts.

Grinding Action

Grinding can be related to traditional milling in that the feed of the material and the speed of the grinding wheel (or head) greatly affect the rate of material removal. However, grinding differs from traditional milling in the method of material removal. Traditional milling relies on chip formation due to the tool edge to remove material at a macro scale.
Grinding involves the systematic removal of material at a micro scale, resulting in a change in the material at the macro scale. The grinding process is often broken down into different phases of chip formation. Rasim et al. [60] utilised a previously developed single grain scratching method which enabled the observation of chip formation in situ with a high-speed camera. This provided insight into the determination of specific grain engagement depths and transition points for the material (hardened steel in this case). In the study, Rasim et al. [60] aimed to develop a quantitative chip formation model that took the grain shape as well as cutting speed and lubrication into account. Grain shape in both the direction of motion and the transverse direction has a significant influence on chip formation. Throughout the grinding process it is common for grains to be considered as sliding, ploughing, and cutting. Zhang et al. [80] divided grains into just two categories, cutting and ploughing, on the basis of experimental results revealing the boundary between sliding and ploughing to be fuzzy and hard to distinguish. Durgumahanti et al. [28] categorised grinding forces into cutting (chip formation) force, frictional force, and ploughing force. As the cutting edges of abrasives come into contact with a workpiece, elastic deformation occurs. As they traverse further into the workpiece, this deformation continues. This phase of grinding is purely frictional. Tang et al. [69] divided the chip formation energy used in model calculations into static energy and dynamic energy, which is mainly influenced by shear strain, shear strain rate, and heat in the removal process. This study was performed on metal, so cannot be directly applied to concrete.

2.6.2 Physics Model

In order to determine the position of the grinding cutting edges with respect to a global coordinate system, the robot position must be known.
On a perfectly flat surface, the robot position is easy to determine from the robot's speed and orientation: integrating these gives the change in x and y position and thus the new position of the robot. However, it is not always possible to operate on a perfectly flat surface, and on an uneven surface the robot's position must be defined in terms of x, y, and z, as well as roll, pitch, and yaw. This is difficult to accurately determine for a grinding robot due to the changing points of contact.

2.6.3 Surface Dynamics

As a robot moves along the surface of the floor, it moves with 6 Degrees of Freedom (DOF): x, y, and z, and roll, pitch, and yaw. This is important for grinding because, unlike a CNC or milling machine, the 6 DOF of the cutting tool cannot be perfectly controlled. For a mobile robot grinding application, the cutting tool (diamond tool) will move with the surface, and therefore the diamond tool cutting position and orientation will be determined partly by the surface dynamics. The mobile robot dynamics along the surface can be captured in two ways: using a dynamic model of the surface, or through applying a physics engine to the model.

2.6.4 Model Validation

Often, a model simulation on its own is not sufficient; the model must be tested against real-world results to validate any assumptions and equations used. A common method of validation is to run a similar real-world test and find the error of the model compared with the real-world measurements [59, 54]. Li et al. [50] used confocal topography to confirm surface simulations, which provided an accurate method for analysing the scratches in the workpiece due to grinding. Guo et al. [39] compared the developed model predictions to experimental results using a stylus to measure the surface roughness.
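To make the 6-DOF pose dependence of Sections 2.6.2 and 2.6.3 concrete, the global position of a tool point fixed in the robot frame can be computed from the robot pose with a standard roll-pitch-yaw rotation. This is a generic sketch; the function name and frame conventions are assumptions for illustration, not part of the thesis software:

```python
import math

def tool_point_in_global(pose, tool_offset):
    """Transform a point fixed in the robot frame (e.g. a grinding
    cutting edge) into the global frame, given the robot's 6-DOF pose
    (x, y, z, roll, pitch, yaw).

    Rotation convention assumed: R = Rz(yaw) * Ry(pitch) * Rx(roll).
    """
    x, y, z, roll, pitch, yaw = pose
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    px, py, pz = tool_offset
    # Apply the rows of the combined rotation matrix, then translate.
    gx = x + (cy * cp) * px + (cy * sp * sr - sy * cr) * py + (cy * sp * cr + sy * sr) * pz
    gy = y + (sy * cp) * px + (sy * sp * sr + cy * cr) * py + (sy * sp * cr - cy * sr) * pz
    gz = z + (-sp) * px + (cp * sr) * py + (cp * cr) * pz
    return gx, gy, gz
```

On a flat floor the pose reduces to roll = pitch = 0 with constant z, and the expression collapses to the planar rotation-plus-translation case described above; on an uneven surface the full 6-DOF pose determines where each cutting edge actually sits.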
2.7 Opportunities for Improvement

This review has revealed a number of opportunities to improve upon the existing methods and approaches and utilise known information for this particular application. The design of an automated concrete grinding machine requires several areas of development: the autonomous system, capture of the floor profile, and use of the known floor profile for optimised grind control. There are a number of findings in the literature that can be applied directly. ROS is a commonly used research platform due to its open-source nature and modular, rapid development structure. In addition, floor profiles for a number of different applications have been previously captured with some degree of success. Finally, models of the grinding process have been proposed, particularly from a grinding wheel perspective, and a number of the key concepts can be considered in the development of a floor surface grind model. The sensors commonly used for creating accurate floor surface representations are expensive 3D terrestrial laser scanners, which are unfeasible for a low-cost application. There is therefore an advantage in identifying a fast and cheap method of scanning the floor, which can then be used as prior knowledge for a second task, such as automated grinding. Other applications, such as cleaning and navigation of uneven terrain, could use this same methodology. In addition, identifying the limitations of some sensors for this type of application can benefit the research field. Further advantage lies in using fundamental concepts to devise a model representing how a grinding machine removes material on a concrete floor based on the known floor profile. Little research has been performed in this area, although some concepts and approaches from related areas may be used.
An opportunity exists to develop a basic grinding model that considers the floor profile as an input, along with traditional factors such as grinding head speed and machine cut speed, to predict the post-grind surface of the floor.

2.8 Chapter Summary

This literature review has extensively covered current research in the areas that are relevant to this project. Analysis of a number of robotic platforms has helped confirm design decisions for the research platform. Research platforms with differential drive provide the advantage of utilising wheel odometry to assist with localisation. In addition, the open-source Robotic Operating System (ROS) can provide a fast and modular framework to develop from. Discussion of a number of 3D scanning technologies has illustrated the variety of sensors available and their limitations, which can aid decision making regarding sensor selection. Researchers have used a number of different methodologies to create a 3D map of an environment. These methodologies fall into the categories of stationary, Stop and Go, and continuous scanning methods, with the further option of sensor fusion. Continuous sensing can provide an efficient method for mapping a large area, such as a floor to be ground, and errors can be minimised through the use of sensor fusion. Following 3D data collection, the information must be processed and represented in a useful way. The state of the art of floor flatness analysis pivots on the application of a Continuous Wavelet Transform to the floor, providing insight in two dimensions across a number of wavelengths. Extensive research into the grinding process has highlighted the complexity of the process due to a number of stochastic variables; however, assumptions and predictive approaches can help overcome some of these complexities. A number of opportunities for further research have been identified.
This project will consider the floor surface capture process and sensor limitations, and research a basic grinding model that acts on the captured floor surface.

Chapter 3 Floor Surface Capture Platform

3.1 Chapter Overview

This chapter describes the development of a robotic research platform for testing the floor surface scanning and modelling capability. The research platform is designed to be of similar size and shape to a grinding machine to give insight into system dynamics. The development of the platform is discussed in terms of the mechanical, electrical, and software systems used. The algorithms for capturing the floor profile and for planning the coverage path are discussed. Initial tests of the floor surface capture system provide insight into further development and challenges. A number of the challenges are addressed, and the methodology for testing and improving the system is analysed.

3.2 Research Platform Development

The floor surface capture system aims to utilise cheap and accessible sensors. This can be achieved by using one sensor to localise the robot and a second sensor to capture floor surface data. The sensor can be moved through the environment and the resulting captured data stitched into a 3D profile. In order to move a sensor through the environment to capture the required data, a moving platform is required. This platform must be relatively robust and provide adequate support for the sensors mounted on it. Continuous scanning of an environment also requires accurate localisation of the robot. The robotic platform must therefore be capable of localising itself whilst capturing the floor surface profile data.

3.2.1 System Requirements

The research platform is designed to replicate the size and shape of the target grinding machine. Accordingly, the system provides insight into the dynamics of a machine of this size without carrying the excessive weight of the grinding motors.
This allows the robotic platform to be easily portable and enables fast development of control, as well as efficient testing of systems such as floor scanning.

3.2.2 Mechanical System

The robotic platform (Figure 3-2) is a differential drive robot with two drive motors at the rear of the machine. The front of the platform is supported by a castor wheel. The robot has two levels: one providing a base for a horizontal laser scanner for SLAM, and the other holding components for control and communication and providing a mount for a second sensor to scan the floor. Structurally, the system is of similar dimensions to a typical grinding machine. An adjustable sensor mount (Figure 3-1) was developed to provide the means for measuring a floor profile at various angles and with different sensors. The mount was designed to be relatively universal and sufficiently strong to hold a variety of floor scanners in position as the robot moves around the room. The mount was designed to hold components weighing at least 6 kg, and could easily be adjusted through 180 degrees in pitch and then locked in place. It was made out of 3 mm steel bent into shape. Figure 3-1 shows the adjustable mount with an Intel RealSense RGB-D camera on the underside and an IMU on the top. The camera is mounted on a second adjustable frame that can be manually tuned to ensure that the camera is level. The motors are mounted directly into the supports of the frame using four M5 bolts. This mounting is sufficient for the test platform, but it would need to be strengthened for the final machine due to additional loading from the weight of the machine and grinding motors.

3.2.3 Electrical System

The electrical system consists of power distribution and communication connections. A power distribution board was designed to provide power to each component, protected by fuses and controlled through relays. The system power architecture is shown in Figure 3-3.
Figure 3-1: Floor sensor adjustable mount

The main power demands are 24 V, 12 V, and 5 V. The 24 V system is limited to a maximum of 50 A from the battery and is protected by fuses. Each 12 V and 5 V component has a fuse to restrict current, ranging from 0.5 A to 5 A. Additional ports were supplied for future expansion. A schematic of the power distribution board was designed using Circuit Studio. The board takes in 24 V and provides 12 V via a 24 V to 12 V converter. The 12 V is then converted to 5 V to power a microcontroller board (Arduino Uno) for relay control. There are six relays on the distribution board, four 24 V and two 12 V, which can be turned on and off from the Arduino. Each output port is protected by a fuse to help keep components safe. For a commercialised product, more reliable control would be desirable; this could be achieved with a dedicated USB-controlled relay board or a PLC relay board. This component can easily be swapped out to achieve the required functionality at a later time.

3.2.4 Software System

The robot uses the Robotic Operating System (ROS) framework for internal communication and control [58]. ROS is an open-source system that allows for the creation of many nodes that can communicate efficiently through the use of topics and services. The ROS system begins with a core, which provides the base communication framework. Nodes can be added to the system and communicate through the roscore using topics and services. Any node can publish or subscribe to any topic or service, providing a highly modular system. Due to the open-source nature of ROS, the community has provided a number of existing solutions to common problems, such as Adaptive Monte Carlo Localisation (AMCL), Gmapping, and SLAM. This results in an efficient and proven framework.

Figure 3-2: Diagram of the mobile robot platform highlighting the laser scanner position and orientation
ROS was selected as the software framework because of its open-source nature, the modularity provided by nodes, and the ability to accelerate development using existing solutions. The ROS system for the research platform requires a number of components; the general architecture is shown in Figure 3-4.

Figure 3-3: Electrical power and data connection block diagram

Figure 3-4: ROS system architecture

3.2.5 Floor Profile Creation

A ROS integrated package was developed to capture laser scans of a floor profile and assemble these into a point cloud that could then be analysed and used as prior knowledge. First, the raw laser scan data is filtered so that only the floor in front of the robot, spanning a 60-degree angle, is captured (Figure 3-2). Each laser scan is then transformed through the robot's kinematic chain, relative to the robot's base_link, into the global space. As the robot moves through the environment, the base_link transform moves through the global coordinate system. This in turn moves the location of the laser scan and thus the laser scan data. Each laser scan provides a line scan of the floor profile at a point in the 3D global coordinate system. Assembling many of these single scan lines together therefore forms a series of lines and thus a surface of the floor profile. The assembled scans are captured by the laser_assembler package. A keyboard-controlled node calls the laser_assembler services to start and stop collecting data. Once the laser_assembler service is called to stop assembling, a point cloud of the assembled scans is published to the assembled_floor_scan topic. This topic can then be saved to a .pcd file for analysis. This is performed in real time; however, the point cloud can only be viewed and analysed once the full scan process has been completed. Alternative profile creation algorithms will be required for different scanning methods, such as RGB-D; this will be discussed in Section 3.6.5.
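The transform-and-assemble step can be sketched as follows for a vertically mounted 2D scanner. The scanner geometry, offsets, and function name here are illustrative assumptions, not the laser_assembler implementation:

```python
import math

def assemble_scan(cloud, pose, ranges, angles,
                  x_off=0.3, h_mount=0.4):
    """Append one floor line scan to a global point cloud.

    pose    : (x, y, yaw) of base_link in the global frame
    ranges  : beam ranges from a vertically mounted 2D scanner
    angles  : beam angles measured from straight down (rad)
    x_off   : assumed forward offset of the scanner from base_link (m)
    h_mount : assumed scanner height above the nominal floor plane (m)
    """
    x, y, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    for r, a in zip(ranges, angles):
        # Point in the robot frame: fixed forward offset, lateral
        # offset across the scan line, height from the beam drop.
        rx = x_off
        ry = r * math.sin(a)
        rz = h_mount - r * math.cos(a)
        # Rotate by yaw and translate into the global frame.
        cloud.append((x + c * rx - s * ry,
                      y + s * rx + c * ry,
                      rz))
    return cloud
```

Calling this for every incoming scan as the base_link pose evolves yields the series of scan lines that together form the floor surface, in the same spirit as the laser_assembler pipeline described above.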
3.2.6 Path Planning Program

A program designed to automatically create a coverage path for a set area on a map was created using OpenCV [17] and ROS integration. The program can operate in two modes. Each mode takes in a ROS map description consisting of a .yaml configuration file and a .pgm image of the map. The first mode allows the user to input desired points for the robot to move to. These are displayed for the user on the map, and the robot can then move to each point in the order in which they were placed (Figure 3-5). A point is added by left-clicking on the map; a previous point can be removed using the middle mouse button. Points can then be saved to a file by right-clicking. The points are saved to a text file relative to the global coordinate system of the map (robot-defined). The second program mode allows the user to define an area on the map, and then automatically creates a path to cover this area. The program was developed with the intended application in mind, in that the machine drives in straight lines at a set, but variable, distance apart. This creates the desired degree of grind overlap for the grinding process. Accordingly, the angle of the robot's parallel paths, and the separation between each pass, can be set by the user. The coverage path is achieved through a series of parallel lines and intersections. A large number of parallel lines are drawn on the map, starting at the desired offset from the first position, at the user-defined angle, and at the user-defined spacing. The program then calculates the intersection of each of these lines with the boundary lines defined by the user. Each point is stored in a list of the points required to cover the area. This method works well; however, it does require error checking and correction. For example, the order of points was not considered by the initial program, although this is crucial for successful coverage path generation.
Accordingly, a few steps were taken to consider the order of points:

∙ The first point of the path is chosen as the generated point closest to the first point clicked on the map by the user.

∙ Each line created by the path generation is compared with the user-defined path angle.

∙ If a line is on a diagonal (beyond some pre-determined threshold), then the points must not be in the correct order and are therefore swapped.

A later version of this program was implemented in RQT. RQT provides an easy interface with the ROS system and can run many different plugins at a time, including diagnostics and data plotting. This may be useful in later development, where many different aspects of the robot may need to be monitored at one time. A selection of paths created by the coverage generation program is shown in Figures 3-5 and 3-6. The interface allows the user to vary the spacing, angle, and offset of the generated path, and the path updates in real time to allow for fine control.

Figure 3-5: Created coverage path with vertical machine direction and small grind overlap

Figure 3-6: Created coverage path with horizontal machine direction and large grind overlap

The goal positions created by either the user or the program are saved to a text file that can then be read by the waypoint management node for controlling the robot. Each waypoint is saved as a global x, y, z coordinate, as well as a yaw for the orientation of the robot. For the 2D robot, the z coordinate is arbitrary. The waypoints generated from the program must be converted from the image coordinate frame into the robot's coordinate frame. The robot's coordinate frame is determined when the robot creates the initial map. The point (0, 0) in the robot's frame is taken to be the lower left pixel in the map [34], whereas (0, 0) in the image coordinate frame is the top left-hand corner. In addition, yaw is considered to be a counter-clockwise rotation.
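A minimal sketch of the sweep generation with serpentine point ordering, together with the image-to-map frame conversion implied by the conventions above, might look like the following. This is an axis-aligned simplification (the real program supports an arbitrary sweep angle), and the function names and parameters are assumptions; the origin/resolution parameters follow the ROS map_server convention of an origin pose for the lower-left pixel plus a metres-per-pixel resolution:

```python
def serpentine_waypoints(x0, y0, width, height, spacing):
    """Generate a boustrophedon coverage path over an axis-aligned
    rectangle, in map pixels. Each sweep line is a pair of end
    points; every second line is reversed so that consecutive goals
    stay adjacent (the point-order correction described above)."""
    points = []
    y = y0
    forward = True
    while y <= y0 + height:
        line = [(x0, y), (x0 + width, y)]
        if not forward:          # swap ends on alternate passes
            line.reverse()
        points.extend(line)
        forward = not forward
        y += spacing
    return points

def pixel_to_map(px, py, map_height_px, resolution, origin_x, origin_y):
    """Convert an image pixel (origin top-left, y pointing down) to
    map metres (origin lower-left, y pointing up), using an origin
    offset and a metres-per-pixel resolution."""
    mx = origin_x + px * resolution
    my = origin_y + (map_height_px - 1 - py) * resolution
    return mx, my
```

The y-axis flip in pixel_to_map is the key step: it reconciles the image convention (top-left origin) with the map convention (lower-left origin) before scaling pixels to metres.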
Therefore, each coordinate needs to be remapped and converted from pixels to metres. When a position is sent to the robot, the robot knows both the desired global position and its current position, so it can accurately plan a path from its current position to the goal. The orientation is determined such that the robot achieves its goal in x, y and then rotates in position to face the next goal.

3.3 Hardware Selection

3.3.1 Localisation Sensor

A sensor must be used to aid localisation and overcome the inherent accumulation of errors from wheel odometry. A number of different sensor technologies can be used, each offering different advantages and disadvantages. A selection of these technologies has been described in the literature review. Due to its affordability and relatively good accuracy, range, and resolution, a SICK LMS291 was selected for localisation of the robot. This scanner provides a 2D laser scan of the environment, and can produce 50 mm accuracy up to 80 m or 35 mm accuracy up to 8 m [3]. The laser scanner can easily be integrated into the ROS framework with an existing ROS package. The sensor data can be used by Gmapping [35, 36] to create a 2D map of the environment or by AMCL [71] to localise the robot in an already created map. AMCL uses a probabilistic approach to match the laser scans to likely positions in the map.

3.3.2 Floor Sensor

A sensor is required to capture the floor surface profile information. A number of technologies can be used for this task. Selected technologies are summarised in Table 3.1. A 2D laser scanner was selected for initial floor scanning, once again due to both accessibility and affordability. While 3D laser scanners have been used to perform accurate sensing of an environment, including the floor, they are very expensive, making them infeasible for some applications. The 2D laser scanner used for initial testing was a SICK LMS291 laser scanner, which is relatively cheap at around US$6000.
The SICK LMS291 has an aperture angle of 180 degrees, with an angular resolution of 0.25 degrees. At a range of up to 80 m the accuracy is ± 50 mm, reducing to ± 35 mm at a range of up to 8 m [3]. Additionally, a Hokuyo URG 2D laser scanner was used for further testing, due to its short-range design. The Hokuyo laser scanner has a detectable range of 20 mm to 5600 mm, with a field of view of 240 degrees at a resolution of 0.36 degrees [63]. However, despite having been designed for short-range use, the accuracy of this laser scanner is only ± 30 mm. The cost of the Hokuyo laser scanner is around US$1080, substantially less than the other sensors. An RGB-D camera was selected as a secondary sensor for testing and comparison. The RGB-D camera used was an Intel RealSense D435, which uses active IR stereo to produce a depth image alongside the RGB data from a 2 MP camera [45].

Figure 3-7: NextEngine scan of brick [31]

Table 3.1: Sensor Specifications

Sensor         | Range             | Accuracy          | Resolution       | Price
SICK LMS291    | 8 m or up to 80 m | ± 35 mm and 50 mm | 0.25 degrees     | US$6000
Hokuyo URG     | 20 mm to 5600 mm  | ± 30 mm           | 0.36 degrees     | US$1080
Intel D435     | 10 m              | not stated        | 640 x 480 pixels | US$180
NextEngine 3D  | 200 mm            | ± 0.30 mm         | 3.50             | US$2995

Optical interferometry can also provide detailed scans of a surface; however, such systems often have long scan times or require a textured surface for good performance. For example, a multi-laser-based scanner, the NextEngine 360 [31], performs very well with masonry (Figure 3-7). However, the sensor can take up to two minutes to perform a scan, rendering it unsuitable for this application. This sensor provides incredible accuracy, with up to ± 100 microns for a macro model and up to ± 300 microns for models with a wider field of view.

3.4 Initial Testing

The platform's ability to capture a floor surface profile was initially tested using two 2D laser scanners.
One laser scanner (mounted vertically) was used to capture the floor profile, while the other laser scanner (mounted horizontally) was used to localise the robot within the environment. The initial testing methodology and results were presented in the conference paper 'Floor Surface Scanning using a Mobile Robot and Laser Scanner' (Appendix A) [77].

3.4.1 Experiment Methodology

The scanning experiments were set in an area of 2 m x 2 m marked out with black electrical tape (Figures 3-8a, 3-8b, and 3-8c). This tape has low reflectivity and high absorbency, resulting in a poor laser scan measurement that helps to identify the boundaries of the scanned area in the final assembled point cloud. The robot was positioned outside the lower left-hand corner of the square and then followed a coverage path (Figure 3-9). This path provided sufficient space for the robot to perform a turn and record scans of the surface. The robot moved at a relatively slow velocity of 0.1 m/s. At the beginning of each test, all laser scans were recorded in a ROS bag file for later analysis if required, and the real-time laser scan to point cloud conversion was begun. This point cloud creation process involved capturing every laser scan and associated transform and placing them in a 3D coordinate system. The assembly of laser scans was then converted to a single point cloud of the floor, which was then saved as a .pcd file for analysis. In each test it took around six minutes to complete the coverage path.

Figure 3-8: Test surfaces used for mapping: (a) carpeted floor, (b) asphalt floor, (c) coated asphalt floor

The robotic platform was used to map three different surfaces: carpet flooring (Figure 3-8a), outdoor asphalt pavement (Figure 3-8b), and a coated asphalt floor (workshop floor) (Figure 3-8c). These surfaces were chosen to provide a representative sample of various types of indoor flooring.
The test surfaces were expected to give insight into how well the laser scanning could identify areas of interest for the different surfaces. Each surface was mapped three times and cross-analysed to determine accuracy. A contour plot was created from the resulting point cloud and used to identify high and low areas of the floor.

Figure 3-9: Coverage path for scan area

3.4.2 Measurement Methods

The captured point cloud of each floor surface was saved as a .pcd file. MATLAB was used for processing, which involved clipping the scanned area to the target size of 2 m x 2 m. The 'black tape' outliers were removed by applying a threshold to the point cloud data set, and a 5 x 5 Gaussian filter was then applied to the data to smooth the resulting surface and reduce noise. The point cloud was then presented as a contour plot, indicating high and low areas throughout the 2 m x 2 m area. The surfaces were inspected by touch and visually for any deviations in flatness at key areas. These areas were noted and compared to the resulting point cloud and contour plot.

3.4.3 Initial Floor Capture Results

The mobile robot system was able to successfully locate itself and use this information to create a surface profile of each floor. The odometry information provided sufficient pose and orientation estimation to capture the general surface profile of each floor. Odometry errors were observed consistently in all tests. Additionally, systematic errors from the laser scanner were observed in all tests, illustrated by a continuous low measurement near the centre of the scan.

3.4.4 Carpeted Floor

The carpeted floor was successfully mapped, despite being anticipated to be a difficult surface for consistent performance due to the fibre orientations of the carpet. The laser scan provided a reasonably thick surface measurement of 0.1 m. The carpet was difficult to inspect visually and appeared to be relatively flat.
The contour plot (Figure 3-10c) shows a relatively flat area (with the systematic centre scan error) and a slight high area towards the bottom of the target area.

3.4.5 Workshop Floor

The workshop floor is an example of an indoor surface covered with dust, cracks, and pits. This type of surface could be hard to map; however, from the results it is clear that the scanning system could successfully create a consistent point cloud of the coated asphalt (workshop) floor. Despite the consistent low centre measurement due to a systematic error, a high area was identified by the surface scanning system at the middle right of the target area (Figure 3-11c). This area was confirmed by visual inspection as a large step change in the floor. There are two coin-sized dents (30 mm diameter) in the floor of the workshop that the mapping system was unable to detect. However, the system did detect a general slope along the y axis.

3.4.6 Asphalt

Asphalt is another example of a difficult surface to map, as the colour and texture may vary due to weathering and wear and tear. This surface was also successfully mapped, demonstrating the strength of the developed system. Even though the entire surface was on a gradual slope, and no IMU data was available, the surface profile suggested a high point to the right and a low point to the left of the start position. A ridge in the surface was detected, similarly to the coated asphalt floor, but upon inspection this high region was due to a rougher area of asphalt. The contour plot (Figure 3-12c) illustrates the general slope of the surface, with some deviations of the slope due to surface roughness and the systematic errors.
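The thresholding and smoothing applied to each of the above point clouds (Section 3.4.2) was performed in MATLAB; an equivalent sketch in Python is shown below. The threshold value, the binomial approximation of the Gaussian kernel, and the function names are illustrative assumptions:

```python
def remove_outliers(grid, z_min):
    """Mark 'black tape' outliers (spuriously low height readings)
    as None so they are excluded from smoothing. z_min is an
    assumed threshold below which a reading is treated as invalid."""
    return [[z if z >= z_min else None for z in row] for row in grid]

def gaussian_5x5(grid):
    """Smooth a height grid with a normalised 5 x 5 Gaussian kernel
    (binomial approximation), clamping indices at the borders and
    skipping invalid (None) cells."""
    w = [1, 4, 6, 4, 1]                      # 1D binomial weights
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = wsum = 0.0
            for di in range(-2, 3):
                for dj in range(-2, 3):
                    ii = min(max(i + di, 0), rows - 1)   # clamp edges
                    jj = min(max(j + dj, 0), cols - 1)
                    z = grid[ii][jj]
                    if z is not None:
                        k = w[di + 2] * w[dj + 2]
                        acc += k * z
                        wsum += k
            out[i][j] = acc / wsum if wsum else None
    return out
```

Renormalising the kernel weights over only the valid cells means a removed outlier leaves a smoothly interpolated gap rather than dragging the surrounding surface estimate down.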
Figure 3-10: Results for carpeted surface: (a) carpet floor raw data, (b) clipped carpet floor scan showing target area, (c) contour plot of carpet floor

3.5 Initial Challenges

Although initial testing did prove to be successful, there are a number of improvements that can be made and challenges that can be overcome. The initial development and testing identified challenges including localisation, sensor accuracy, and 2D limitations. These challenges are discussed in the following sections.

Figure 3-11: Results for coated asphalt surface: (a) coated asphalt floor raw data, (b) clipped coated asphalt floor scan showing target area, (c) contour plot of coated asphalt floor

3.5.1 Localisation

A particular challenge for the robot platform was accurate localisation. Based on research conducted by Thrun et al. [71], the robot can use the horizontal laser scan for localisation through the Adaptive Monte Carlo Localisation (AMCL) ROS node. This node provides a laser scan matching and probabilistic approach for localising the robot from the 2D laser scans. The probability of the robot being in each of a number of candidate positions is calculated from the combined laser scan matching and wheel odometry information. The position with the highest probability is updated as the robot's current position in the map.

Figure 3-12: Results for asphalt surface: (a) asphalt floor raw data, (b) clipped asphalt floor scan showing target area, (c) contour plot of asphalt floor

This method works well, although due to odometry errors and drift, the robot will jump to the calculated 'correct' position every time the AMCL node updates.
These jumps are small and manageable in some applications, but for this particular application they are not desirable as this will result in a shift in the fl