If you’re a robotics engineer or an enthusiast in the field, you may have come across the term AMR, which stands for Autonomous Mobile Robot. The key functionality of these robots is to interact with their environment in the absence of a human operator. One of the most frequent requirements in AMR software development is to test the localization, planning and control algorithms in a real-world environment such as a factory floor, a hospital or a warehouse. However, it is generally recommended to test the algorithms in simulation before porting them to robot hardware: this enables rapid testing and weeds out many bugs early. It then becomes paramount to have a realistic simulation environment in which the development can be tested.
Maps for navigation
To perceive the environment around them and analyse the dimensions of their surroundings, AMRs generate two- or three-dimensional maps called “Occupancy Grid Maps” using onboard sensors such as a depth camera or a LiDAR. This process of recording the contours of the surrounding vicinity is known as “Mapping”. For indoor navigation applications, a 2D occupancy map is typically sufficient to capture most of the relevant information about the surroundings. The generated map is a grayscale image (a .png file) in which black pixels indicate the presence of an obstacle and white pixels its absence.
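The black/white convention above maps directly onto a boolean occupancy grid. The sketch below uses a tiny synthetic pixel array in place of a real map .png (the wall layout and the threshold of 128 are illustrative assumptions), showing how a planner would threshold the grayscale image into occupied and free cells:

```python
import numpy as np

# A miniature stand-in for a map .png: a grayscale pixel array where
# black (0) marks an obstacle and white (255) marks free space.
pixels = np.full((8, 8), 255, dtype=np.uint8)
pixels[2, 2:6] = 0                 # a short wall seen by the LiDAR

# Threshold into a boolean occupancy grid, as a planner's costmap would.
occupied = pixels < 128            # True where an obstacle was mapped
free = ~occupied

print(int(occupied.sum()))         # 4 occupied cells
print(int(free.sum()))             # 60 free cells
```

Real exports (e.g. from ROS `map_server`) also carry a YAML file with the map resolution and origin, which this sketch omits.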
Such a map typically serves two primary purposes:
- A reference for localization
- Costmap for planning and control modules
As seen in the image above, the generated 2D map saves only the outlines or contours of the models in the vicinity, as these are the points recorded in a laser scan reflection. Popular ROS mapping packages include RTAB-Map and gmapping; both are open-source and readily available online.
These generated maps help the robot localise itself within its field of interest. Algorithms such as Adaptive Monte Carlo Localization (AMCL) estimate the robot’s position in the generated map using LiDAR readings and probabilistic methods. Since AMCL depends on physical LiDAR readings to localise, we need the simulated environment to generate such readings in accordance with the map we are using.
Simulators such as Gazebo let users build 3D worlds, but the building blocks of such worlds are almost too geometrically perfect to represent the complexity of the real world. In the earlier image, for example, the world comprises perfect straight lines, circles and polygons. Even if the complexity of the real world is to be accounted for, a lot of time has to be invested in recreating realistic factory-floor scenarios. On the other hand, 2D maps of many real-world environments are readily available or can be obtained via a one-time mapping process.
Some readers may wonder what difference it makes if the world consists of simple shapes rather than capturing the full complexity. One might argue that a field-test terrain can be replicated in Gazebo using simple shapes and figures. This, however, reduces the sophistication of the testing ground, as it discards the irregularities and nuances of each surface. A key point is that localization, planning and control algorithms often give satisfactory results in ideal cases but expose several pain points when faced with a real-world situation. Therefore, the more “real” the environment we can operate in, the better prepared the application will be.
What if these map images were to be rendered as physical 3D models?
A significant result of this would be a reduced dependency on hardware and improved preliminary software testing: instead of repeatedly returning to the testing grounds for examination, a single mapping of the field would suffice.
Moreover, this would lead to more rigorous preliminary simulation training before deploying the AMR to the actual field.
Usually, these robotics simulations are done on platforms such as V-REP, Gazebo or RobotStudio. Here, we will tackle this problem in Gazebo, as it is the most popular and widely used ROS simulation platform.
Essentially, the way to proceed is to deploy DEMs (Digital Elevation Models) in Gazebo. These models omit physical structures such as buildings and vegetation, but those are not of interest for the simulation in mind. DEMs cater to terrain generation, and the data required to build one comes from sensors such as LiDARs and depth cameras. The sensor stream is processed by popular ROS mapping packages such as gmapping, which output a 2D grayscale image.
Intuitively, this works as follows: Gazebo attaches a gradient to the grayscale map image provided, translating each pixel’s intensity into an elevation. In other words, the brighter the pixel, the higher the terrain generated at that point in the Gazebo world.
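Since occupancy maps draw obstacles in black but Gazebo raises bright pixels, turning walls into terrain means inverting the image first. A minimal sketch of that brightness-to-height arithmetic (the 2 m wall height and the tiny synthetic map are assumptions for illustration):

```python
import numpy as np

# Gazebo scales each pixel's intensity (0-255) linearly up to the
# maximum height configured for the heightmap, so inverting an
# occupancy map turns black walls into bright peaks.
max_height_m = 2.0                           # assumed wall height

occupancy_png = np.full((8, 8), 255, dtype=np.uint8)   # white = free
occupancy_png[2, 2:6] = 0                              # black = wall

heightmap = 255 - occupancy_png              # walls become bright pixels
heights_m = heightmap / 255.0 * max_height_m

print(heights_m.max())                       # wall cells rise to 2.0 m
print(heights_m.min())                       # free space stays at 0.0 m
```

In practice the inverted image would also be resized before use, since Gazebo classic expects square heightmap images with a side length of (2^n)+1 pixels (129, 257, 513, …).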
The GitHub repository linked here provides fairly straightforward instructions for running this. Feel free to raise issues and contact us if any problem arises.
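For reference, this is roughly what the heightmap looks like once wired into a Gazebo world. The image path and the 50 m × 50 m × 2 m extents are placeholder assumptions; the `<size>` z-value sets the height a fully bright pixel reaches:

```xml
<model name="terrain">
  <static>true</static>
  <link name="link">
    <collision name="collision">
      <geometry>
        <heightmap>
          <uri>file://media/materials/textures/map_heightmap.png</uri>
          <!-- x and y extent in metres; z = height of a white pixel -->
          <size>50 50 2</size>
          <pos>0 0 0</pos>
        </heightmap>
      </geometry>
    </collision>
    <visual name="visual">
      <geometry>
        <heightmap>
          <uri>file://media/materials/textures/map_heightmap.png</uri>
          <size>50 50 2</size>
          <pos>0 0 0</pos>
        </heightmap>
      </geometry>
    </visual>
  </link>
</model>
```

Marking the model `<static>` keeps the physics engine from simulating the terrain itself, which is what we want for a fixed field.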
The height and width of the generated field are customisable and are fed as parametric arguments to the script. Your map image may produce jagged and uneven terrain; don’t worry, as this is all part of the process. Incomplete mapping or the dispersion of LiDAR rays may cause “cones” or “spikes” to appear.
This is due to stray pixels in the image: because these pixels are isolated, the simulator interpolates them into cones or spikes. The height of these cones can, however, be easily controlled through the parameters mentioned above. The result tends to replicate the tangible circumstances of a real-world scenario, mimicking the rough and irregular characteristics of the field area.
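If the spikes are unwanted, the stray pixels can also be filtered out before the heightmap is generated. One simple way, sketched below in pure NumPy (the `despeckle` helper and the toy grid are illustrative; a morphological opening from SciPy or OpenCV achieves the same effect), is to keep an obstacle pixel only if at least one of its 8 neighbours is also an obstacle:

```python
import numpy as np

def despeckle(occupied: np.ndarray) -> np.ndarray:
    """Drop isolated obstacle pixels that would become lone spikes."""
    padded = np.pad(occupied, 1, constant_values=False)
    # Count obstacle neighbours by summing the 8 shifted copies of the grid.
    neighbours = sum(
        padded[1 + dy : padded.shape[0] - 1 + dy,
               1 + dx : padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return occupied & (neighbours > 0)

grid = np.zeros((6, 6), dtype=bool)
grid[1, 1:4] = True      # a genuine wall segment
grid[4, 4] = True        # a stray pixel that would become a spike

cleaned = despeckle(grid)
print(int(cleaned.sum()))   # the 3-pixel wall survives, the speck is gone
```

This trades a little wall-edge fidelity for a terrain free of phantom obstacles; whether that trade is worth it depends on how noisy the original mapping run was.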
While a heightmap still does not replicate the real world perfectly, it does a good job of capturing its imperfections while saving the considerable effort involved in building a Gazebo world by inserting individual object models.