Week #5 (26-03) | Point cloud accumulation for road reconstruction

Following the discussion with the thesis advisor, prof. Vítor Santos, on Monday, it was agreed that the next step in development would be the accumulation of the point-cloud data gathered from the LIDAR laser scans and its representation relative to the "world" frame.

Following this guideline, a new package, "road_reconstruction", was developed, where the data from the LIDAR sensor is gathered and converted into a point cloud, which is then processed and accumulated over time to create a representation of the perceived environment.
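The accumulation over a sliding window can be sketched as follows. This is a minimal Python illustration, not the actual ROS node (which works on point-cloud messages); the class name and buffer size here are assumptions for the sake of the example:

```python
from collections import deque

class CloudAccumulator:
    """Accumulate incoming scans into one cloud, keeping only the most
    recent scans in a fixed-size ring buffer (the size is illustrative)."""

    def __init__(self, max_scans=400):
        # deque with maxlen silently drops the oldest scan when full
        self.scans = deque(maxlen=max_scans)

    def add_scan(self, points):
        """Store one scan, given as an iterable of (x, y, z) tuples."""
        self.scans.append(list(points))

    def cloud(self):
        """Flatten the buffered scans into a single point list."""
        return [p for scan in self.scans for p in scan]

# Tiny usage example with a buffer of 2 scans:
acc = CloudAccumulator(max_scans=2)
acc.add_scan([(0.0, 0.0, 0.0)])
acc.add_scan([(1.0, 1.0, 1.0)])
acc.add_scan([(2.0, 2.0, 2.0)])
merged = acc.cloud()
# the oldest scan was dropped; merged holds only the two newest scans
```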

In order for the point-cloud accumulation to occur, the vehicle needs to be moving relative to a fixed frame ("world") so that the gathered data doesn't "move" with the car. Following that, a simplification was considered for the time being: the car was given a straight-line movement based on the velocity data received from the GPS. With the car now moving, the point-cloud accumulation node was developed and the result was as follows:
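The straight-line simplification amounts to integrating the GPS speed along a fixed heading. Here is a hedged sketch of that idea (the function name, fixed heading, and time step are assumptions for illustration; in the real node the resulting pose would be broadcast as a transform to the "world" frame):

```python
import math

def integrate_position(x0, y0, heading_rad, speed_mps, dt):
    """Advance the vehicle pose along a straight line, assuming a fixed
    heading and a forward speed (in m/s) taken from the GPS."""
    x = x0 + speed_mps * math.cos(heading_rad) * dt
    y = y0 + speed_mps * math.sin(heading_rad) * dt
    return x, y

# Example: 10 m/s along the x axis (heading 0), stepped at 0.1 s
x, y = 0.0, 0.0
for _ in range(10):
    x, y = integrate_position(x, y, 0.0, 10.0, 0.1)
# after 1 s at 10 m/s the car has moved about 10 m along x
```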


The image above represents a reconstruction using only the bottom laser scan from the LIDAR. This reconstruction uses a buffer of 400 points, which corresponds to about 40-50 consecutive scans. But as we can see, there are two major problems. Firstly, despite this being data from the lowest laser scan of the sensor, much of the road is not mapped, which suggests that the sensor is pointing too high for this application and will need to be lowered. Secondly, there are a lot of points that are too far away from the vehicle and so are of no interest in this context. To solve that, a filter was developed to remove said points. The filter calculates the distance from each point of the cloud to the centre of the car, compares it to a defined distance, and eliminates the point if the distance is greater. The resulting cloud is a lot cleaner than the previous one:
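The distance filter described above can be sketched like this. It is a plain-Python illustration rather than the ROS node itself; the function name and the 20 m cut-off are assumptions for the example:

```python
import math

def filter_by_distance(points, car_centre, max_dist):
    """Keep only the points within max_dist metres of the car centre.

    points     -- iterable of (x, y, z) tuples in the fixed frame
    car_centre -- (x, y, z) of the car centre in the same frame
    max_dist   -- cut-off radius in metres
    """
    kept = []
    for p in points:
        # Euclidean distance from the point to the car centre
        if math.dist(p, car_centre) <= max_dist:
            kept.append(p)
    return kept

cloud = [(1.0, 0.0, 0.0), (30.0, 0.0, 0.0), (3.0, 4.0, 0.0)]
near = filter_by_distance(cloud, (0.0, 0.0, 0.0), 20.0)
# the point 30 m away is dropped; the two nearby points are kept
```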

But there are still some points that are not being filtered, most likely because the filter hasn't finished processing by the time a new point cloud arrives, so the code will need to be optimized to resolve this issue.


With the point-cloud accumulation nearly finished, during next week, in addition to solving the issues mentioned above, the focus will be primarily on the positioning of the vehicle relative to the fixed frame, so that the reconstruction correctly represents the topology of the road (turns, straight roads, roundabouts, etc.).

Week #3-4 (12-03 & 19-03) | Understanding developed code, thesis roadmap presentation and first development steps.

The third week of work was primarily spent on understanding the code previously created by former students that worked on the ATLASCAR project, mainly:

  • Visual perception unit, created by Diogo Correia as part of his master's thesis, more specifically the code that interfaces with the sensors, their calibration, and the positioning of their respective coordinate frames.
  • Inclination module, developed by Armindo Silva, to understand how it calculates the vehicle orientation components (roll and pitch).
  • Dynamic map reconstruction, created by Pedro Salvado in his master's thesis, which received the most thorough analysis, in order to understand and replicate how the local map reconstruction was achieved.

In this week there was also a small presentation for the LAR members and advisers, in which we presented a rough task roadmap for the master's thesis, followed by a discussion with the advisers.

In the fourth week, code development began.

The Arduino code created by Armindo Silva was rebuilt as a ROS node that uses RS232 communication to publish the orientation calculated from the sensor data.
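As an illustration of the kind of parsing such a node performs on each serial line before publishing, here is a hedged Python sketch. The "R:&lt;roll&gt;;P:&lt;pitch&gt;" message format is hypothetical and would have to be checked against the module's actual firmware:

```python
def parse_orientation_line(line):
    """Parse one RS232 text line into roll and pitch, in degrees.

    The 'R:<roll>;P:<pitch>' framing is an assumption for this sketch --
    the real inclination module may use a different format entirely.
    """
    fields = dict(part.split(":") for part in line.strip().split(";"))
    return float(fields["R"]), float(fields["P"])

# Example line as it might arrive over the serial port:
roll, pitch = parse_orientation_line("R:1.50;P:-0.75\n")
# roll == 1.5 and pitch == -0.75
```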

In this first approach it was considered that the road is always flat. This simplification allows us to ignore the spatial positioning of the vehicle in relation to the world frame, since calculating it requires IMU and odometry data that is not available at the moment. So the efforts were focused on the relation between the car and the road, with the possibility of defining the relation between the road and the world in the future.

To do that, two more coordinate frames were added to the car "robot" model: one representing the ground (static) and one for the centre car axis (whose transformation matrix is calculated with the data sent from the orientation module).
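Since only roll and pitch come from the orientation module, the car-axis transform reduces to a pure rotation over a fixed translation. A sketch of building that rotation matrix follows; the axis conventions and composition order (pitch about y after roll about x) are assumptions for illustration and must match the module's conventions:

```python
import math

def roll_pitch_matrix(roll, pitch):
    """3x3 rotation matrix of the centre-car-axis frame relative to the
    ground frame, from roll (about x) then pitch (about y), in radians."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    # R = Ry(pitch) @ Rx(roll), written out explicitly
    return [
        [cp,  sp * sr, sp * cr],
        [0.0, cr,      -sr],
        [-sp, cp * sr, cp * cr],
    ]

# Sanity check: with zero roll and pitch the rotation is the identity
R = roll_pitch_matrix(0.0, 0.0)
```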

Following that, all the sensor frames were also changed to depend on the centre car axis (which moves according to the rotation of the vehicle), so that the measurements depend on a moving frame.

A ROS bag with the orientation information was created to test the system and the result was quite good.


View of the car and sensor frames (with static STL model)

View of the car and sensor frames (without STL model)


As we can see, all the sensor frames move according to the centre car axis.

View of the car and all sensor frames motion (static STL model)

Lastly, a point-cloud dataset was also acquired from the Sick LD-MRS sensor in order to verify its movement in relation to the movement of the car, but there was a small problem with the program that led to some unexpected frame behaviour caused by the use of rosbag, which needs to be corrected.

LIDAR sensor readings and representation in reference to the respective moving frame (/ld-mrs)

A GitHub repository was also created, where the developed code will be regularly updated: https://github.com/TiagoSMarques/Masters-Thesis_ATLASCAR2



Week #1-2 (26-02 & 5-03) | ROS Workshop, project research and preliminary report writing and discussion.

The last couple of weeks were spent mostly on research and writing for the preliminary report. The research focused on gathering information about the state of the ATLASCAR2 project and the hardware and software solutions already implemented, especially the hardware to be used, the Sick LD-MRS, as well as on studying the problem for the thesis and the work that needs to be done to solve it. Naturally, there were some oversights and a little confusion in relation to the intended work and the main guidelines of the project, but after the report discussion with my adviser, professor Vítor Santos, those misconceptions were corrected and complemented, and my understanding of the overall task is now much clearer.

This week there was also a workshop on ROS (Robot Operating System) for the students that work at LAR, given by professor Miguel Oliveira, in which we acquired the basic knowledge to work with the ROS environment, as well as some of the tools this system provides that will be very useful in our projects.

Throughout the workshop all the members developed players for a game in the ROS environment as a way of learning how this system works and what its capabilities are. Here's a video showcasing the final game that was played, in which each moving arrow represents a player and the objective is to hunt one team without being killed by the other; the score is determined by the difference between kills and deaths for each player in a team.

In the end we gathered a good knowledge base of ROS and its tools: driver packages for the main sensors to be used, easy data flow between the nodes of a program, and tools to handle several data types such as point clouds, laser scans, and others. These ready-made solutions allow us to focus more on solving the problem and less on the hardware and the reading of its data, and are the main reason for using this environment.

Lastly, by the end of week #2, my colleague Ricardo Silva and I, with the valuable help of another colleague, Nuno Silva, were able to use the previous work done by Diogo Correia to communicate with and retrieve data from the sensors in ATLASCAR2 and visualize them in the Rviz tool from the ROS system.

In the following week(s) I will focus my efforts on the first step of the project, which is the creation of a ROS node for the integration of the vehicle rotation measurement module, as well as continuing to learn ROS syntax and taking an online C++ course from "http://www.learncpp.com/cpp-tutorial" that will be useful throughout the project.