Week #13 (28-05) | Results chapter and experiments to evaluate the developed solution

Despite a setback caused by my personal computer breaking down last Friday, this week was mostly focused on writing the results chapter of the master's dissertation and on performing some experiments to gather results regarding the developed system.

These experiments include a study of the accuracy with which roll and pitch are translated into the representation of the laser scans. For this, a hydraulic jack was placed at the rear of the AtlasCar, allowing the car's pitch to be slowly increased, and the car was placed in front of a flat surface. From this experiment the pitch and roll were gathered, along with the varying height of a laser scan point that intersected the plane. The setup can be seen in the figure below, along with a graph detailing the variation and a short video to help visualize the results.
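As a sanity check for this experiment, the expected height of the intersection point can be sketched with simple trigonometry. The sensor height and wall distance below are illustrative values, not the ones from the actual setup, and the beam is assumed to start out horizontal:

```python
import math

def wall_hit_height(sensor_height, wall_distance, pitch_deg):
    """Height at which an initially horizontal beam hits a vertical wall
    at wall_distance, after the car is pitched up by pitch_deg."""
    return sensor_height + wall_distance * math.tan(math.radians(pitch_deg))

# Illustrative values: sensor 0.4 m up, flat surface 5 m away.
rise_per_degree = wall_hit_height(0.4, 5.0, 1.0) - wall_hit_height(0.4, 5.0, 0.0)
# each degree of pitch raises the hit point by about 0.087 m here
```

Comparing this predicted linear-in-tan(pitch) rise with the measured point heights is what allows evaluating how accurately the roll/pitch compensation is applied to the scans.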




Other tests were made regarding the positioning accuracy of the AtlasCar, using the mapviz application to visualize a circuit travelled by the car. Regarding the road reconstruction module, some results have also been gathered and evaluated, along with some troubleshooting and subsequent corrections to the code, which was not correctly representing the points on the ground plane since it lacked a parameter for the orientation of the LIDAR sensor.

That said, as of this moment only half of the results chapter has been written; it is still missing the results and analysis of the experiments performed in the simulator and the actual real-world tests of the algorithm.

Week #12-13 (14-05 & 21-05) | Writing the dissertation and testing the accuracy of the algorithm.

These past two weeks were almost completely spent writing Chapters 2, 3, and 4 of the dissertation.

Despite this, with the help of colleague Pedro B. Nova, responsible for the global navigation of the AtlasCar2, the road edge detection package was integrated with the mapviz application used by Pedro. The process involved a slight tweak to the frame tree so that it receives the position published by the global navigation node.

The integration makes it possible to overlay the road edge point cloud on the satellite map provided by the mapviz application, and provides a way to qualitatively evaluate the accuracy of the boundary detection method. The images below show some of the obtained results.



From these results we can see that the detection is quite accurate and the resulting point cloud almost always overlaps the borders represented on the image. It even detects the two platforms on the same side (to the right), near Complexo Pedagógico.

There are still some outlier points that need to be removed but overall the quality of the road limits detection is very good.



Week #10-11 (30-04 & 07-05) | First chapter of the dissertation and simulation results.

Week 9 was primarily spent rebuilding the simulator and correcting some mistakes previously made, in order to more accurately measure density according to the method previously discussed. Following this, an analysis of the case study was performed considering the sensor static relative to the road. Several side-walk configurations were used, as well as several sensor configurations in terms of height and angle, though, as expected, the results were of little consequence given that this method heavily relies on accumulated data.

Also during this week, a "skeleton" of the thesis was written and presented to the thesis advisor, so that the dissertation structure could be discussed.

Week 10 was mostly focused on writing the first chapter of the master's dissertation and on researching the state of the art of road boundary detection.

In regards to the practical aspect of the thesis, the simulator was adapted to be able to quantify the measured density in the case of a moving vehicle and some results were obtained.

The case study involved several configurations of vehicle velocity, sensor placement, curb position and angle, and radius of the filtering circle. Some of the results are presented below in the form of graphs. These results are based on a 12 m displacement of the car with the filtering point at 14 m from the initial sensor position, so in these graphs the x-axis represents the displacement of the sensor (vehicle).


Since many parameters were analysed, here are some of the conclusions drawn from the results:

  • Vehicle velocity:
    • For smaller vehicle velocities the final density is much greater, but more time is required to reach the same density levels when compared to greater speeds.
  • Filtering parameters:
    • For the tested radii, 0.2 m seems to be the one that produces the highest density.
    • The density on the ground plane is much lower than on the vertical planes, and the radius of the filtering circle doesn't have any major influence on the density results.
  • Side-walk configurations:
    • For the various side-walk configurations (angle to the ground plane \____/ ), despite the similar curve behaviour, the left side of the road (farthest from the sensor) tends to accumulate points faster and ends up with a greater density at almost every angle (except those closer to ground level). Also, in these configurations, the more perpendicular the side-walk is to the ground, the greater the density.
    • For side-walk configurations which rotate inward along Z (  /   .   \ ), the left side produced much greater densities, but the behaviour remained the same on both sides, with 45º being the ideal angle for the greatest density.
  • Sensor placement:
    • The ideal height for maximum density seems to be between 0.35 and 0.4 m.
    • The ideal sensor angle appears to be between 0.6º and 1.2º.

Despite the limited number of cases studied (most of them with the accumulation area in the same position), these graphs show results coherent with expectations, and in some cases, such as sensor placement, they show that the configuration on the AtlasCar2 seems to be the ideal one for maximum density, which is really interesting.


Week #9 (23-04) | Mathematical model to determine the parameters for optimal point cloud density.

Following the weekly discussion with the thesis advisor, and in the context of the work developed in the previous weeks, it was suggested that a mathematical model of the LIDAR laser scans should be created. This model would incorporate the several parameters needed to correctly represent the problem at hand, such as the height of the sensor, the angles between the beams of a scan and between scans, etc. Next, a filter mechanism would be applied to the model with the intent of mathematically measuring the point cloud density in a given area. The chosen approach was to create a function which takes a point (xp, yp, zp), a normal vector (a, b, c), a radius r and the point cloud array (x, y, z) as its inputs, and returns the number of points that are inside the sphere centred at (xp, yp, zp) with radius r and that belong to the plane defined by the (a, b, c) normal.

The figures below represent the model of the laser scan and their respective mathematical equations, in which phi represents the angle between beams of the same scan (220 beams with 0.5º between them), alpha represents the angle between scans (4 scans with 0.8º between them) and h represents the height of the sensor.
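A minimal sketch of this scan model in code, using the parameters above; the beam direction convention (x forward, alpha tilting the layer down, phi sweeping horizontally) is an assumption, while the angles and sensor height come from the model:

```python
import math

def ground_hit(h, alpha_deg, phi_deg):
    """Intersection of one beam with the ground plane z = 0.
    h: sensor height; alpha_deg: downward tilt of the scan layer;
    phi_deg: horizontal angle of the beam within the scan."""
    a = math.radians(alpha_deg)
    p = math.radians(phi_deg)
    if a <= 0:
        return None  # a level or upward beam never reaches the ground
    t = h / math.sin(a)  # distance travelled along the beam
    return (t * math.cos(a) * math.cos(p),
            t * math.cos(a) * math.sin(p),
            0.0)

# One simulated layer: 220 beams, 0.5 deg apart, centred on the x axis
h, alpha = 0.4, 0.8
scan = [ground_hit(h, alpha, 0.5 * (i - 109.5)) for i in range(220)]
```

For a layer tilted 0.8º down from 0.4 m up, the central beam reaches the ground at roughly h / tan(alpha), i.e. close to 29 m ahead, which is why accumulation over time is needed to densify the cloud.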


Next, the limits of each beam were determined and represented by a blue point, and a side-walk was also simulated, with the collision equations implemented:


Now based on these points the filter was applied:



This system of equations now allows determining how many points are inside the sphere and coplanar with the plane.

After a brief analysis of the results, it was observed that it is almost impossible for a point from the point cloud to be exactly coplanar with the plane, so instead of requiring complete coplanarity (dot product between the normal vector and the vector formed by the point == 0), a "close enough" margin was implemented to allow quantifying the coplanar points. Following this, the next task is to integrate these equations over time to create a real point cloud from which density data can be gathered.
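The density filter with the "close enough" margin can be sketched as follows; the tolerance value shown is illustrative, not the one actually used:

```python
import math

def density_count(cloud, centre, normal, radius, tol=0.01):
    """Count points of `cloud` that lie inside the sphere centred at
    `centre` with radius `radius` AND are within `tol` metres of the
    plane through `centre` with the given `normal`."""
    a, b, c = normal
    n_len = math.sqrt(a * a + b * b + c * c)
    xp, yp, zp = centre
    count = 0
    for x, y, z in cloud:
        dx, dy, dz = x - xp, y - yp, z - zp
        if dx * dx + dy * dy + dz * dz > radius * radius:
            continue  # outside the sphere
        # "close enough" coplanarity: |signed distance to plane| <= tol
        if abs(a * dx + b * dy + c * dz) / n_len <= tol:
            count += 1
    return count
```

Shrinking `tol` towards zero recovers the strict coplanarity test, which, as noted above, almost never fires on real floating-point data.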

Also this week, work began on writing the skeleton for the dissertation, and some research was done to see if similar approaches have already been used for road edge detection.



Week #8 (16-04) | ATLASCAR2 processing unit setup and development of a simulator for road topology analysis

Last week's work proved to be an interesting way of approaching the road curb detection problem and, as such, after the weekly discussion with the thesis advisor, it was agreed that more work should be done towards better understanding and perfecting this approach. Since this study would involve a lot of sensor repositioning, varying both its height relative to the ground and its pitch, a simulator was built in Gazebo, in which the vehicle's 3D model was loaded and a simulated Sick LD-MRS 3D LIDAR was modelled, allowing the two degrees of freedom mentioned above to be easily adjusted. In this model a standard road (width: 6 m) with a side-walk (height: 0.15 m, width: 1.5 m) on each side was also modelled, in order to study the laser scan reflection on the side-walk curb. The images below show some examples of the various gathered scan configurations.


The next step was to gather the laser scan data for each case and create a node which converts the laser scan into a point cloud, to allow for the analysis of the data. The node is not yet fully operational, but this analysis will consist in evaluating the number of points on the vertical and horizontal sections of the side-walk and their distance from the car, for the various configurations. As a preliminary forecast, justified by last week's results, we expect to see a greater number of points along the vertical stretch of the curb. The next image shows the direct measurements of the LIDAR for one of the configurations.
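The conversion that node performs can be sketched as a generic range-and-angle to Cartesian transformation (the angle convention follows the usual planar LaserScan layout and is an assumption about the final node):

```python
import math

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert a planar laser scan (list of ranges) into 2D Cartesian
    points in the sensor frame, the way a scan-to-cloud node would."""
    pts = []
    for i, r in enumerate(ranges):
        if not (r > 0.0 and math.isfinite(r)):
            continue  # drop invalid or out-of-range returns
        ang = angle_min + i * angle_increment
        pts.append((r * math.cos(ang), r * math.sin(ang)))
    return pts

pts = scan_to_points([1.0, 2.0, float('inf')], 0.0, math.pi / 2)
```

With the points in Cartesian form, counting how many fall on the vertical versus horizontal sections of the side-walk becomes a simple geometric test.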


Despite this being an image of a single scan, and not the result of an accumulation, we can see that the points along the vertical side of the side-walk almost form a line that corresponds to the edge of the road, and with accumulation this line will become even more noticeable. If we consider 3D distances, the points on these lines will always be closer to each other than any others in the image, and this is what allows us to easily distinguish the sides of the road.

Aside from the work mentioned above, the bulk of this week was spent setting up the central processing unit of the ATLASCAR2. Most of the time was spent correctly installing the graphics drivers for the hardware and moving the already developed code to the unit, adapting it to work alongside the nodes developed by the other colleagues also working on the ATLASCAR2 project.

Week #7 (09-04) | New method for point-cloud accumulation and data pre-processing.

Following the previous week's post and the discussion with the thesis advisor, it was clear that the priority was to correct the delay in the point-cloud representation described in the last blog post, in order to advance to the next stage of the thesis roadmap.

After a fruitless attempt to identify the problem with the previously implemented accumulation node, a new approach was taken. The node was rebuilt to make use of the 'laser_assembler' package present in the ROS environment. This new method works similarly to the old one, but with the benefit that the aggregated point cloud is created directly from the provided raw laser data, thus eliminating the additional conversion step present in the old node. As before, there is a buffer_size parameter that determines the number of scans to aggregate into the point cloud. The assembler exposes the resulting point-cloud through a ROS service, which the node then calls. However, the laser_assembler package only takes one laser as input, so two instances have to be running to accommodate the two laser scans in use, and an additional step is required to merge the point-clouds from each laser into a single point-cloud. The result is a reconstruction (white) that happens in real time, with almost no delay relative to the data transmission of the laser scan (pink).
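The accumulation and merge logic can be sketched as follows; this is a simplified stand-in for the laser_assembler behaviour without the ROS service plumbing, and buffer_size is the only parameter taken from the actual node:

```python
from collections import deque

class CloudAccumulator:
    """Rolling buffer of the last `buffer_size` scans; the aggregated
    cloud is simply their concatenation in a common fixed frame."""
    def __init__(self, buffer_size):
        self.scans = deque(maxlen=buffer_size)  # old scans drop off automatically

    def add_scan(self, points):
        """`points` are assumed already transformed to the fixed frame."""
        self.scans.append(list(points))

    def cloud(self):
        return [p for scan in self.scans for p in scan]

def merge(cloud_a, cloud_b):
    """Merge the clouds assembled from the two lasers into one."""
    return cloud_a + cloud_b
```

One accumulator per laser mirrors the two running assembler instances, and `merge` is the extra step that joins their outputs into the single road point-cloud.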

Having the perceived roadway point-cloud is the first step towards the road edge detection objective. The next step is to apply some pre-processing to the cloud, namely some down-sampling techniques, since there are too many points, many of them irrelevant, and they can seriously hinder processing speed and accuracy further down the line. To remove these extra points some algorithms were tested. First, a Voxel filter was applied to reduce the number of points while still maintaining the underlying surface geometry with relative accuracy; this reduced the number of points, but most of the outlier points were still present.
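A minimal sketch of what the Voxel filter does, as a brute-force centroid-per-voxel reduction (not the actual PCL implementation):

```python
from collections import defaultdict

def voxel_filter(points, leaf_size):
    """Keep one representative point (the centroid) per cubic voxel of
    side `leaf_size`, in the spirit of PCL's VoxelGrid filter."""
    voxels = defaultdict(list)
    for x, y, z in points:
        # integer voxel coordinates of the point
        key = (int(x // leaf_size), int(y // leaf_size), int(z // leaf_size))
        voxels[key].append((x, y, z))
    out = []
    for pts in voxels.values():
        n = len(pts)
        out.append((sum(p[0] for p in pts) / n,
                    sum(p[1] for p in pts) / n,
                    sum(p[2] for p in pts) / n))
    return out
```

Because isolated outliers occupy voxels of their own, the filter reduces density but never removes them entirely, which is exactly the limitation observed above.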


Pre-processing stage: Voxel Filter

To eliminate these outliers and focus on the highest-density areas, a RadiusOutlierRemoval tool was applied to the filtered point cloud. This algorithm searches, for each point in the cloud, the number of neighbours within a given radius and, based on that criterion, keeps or discards the point. Since these three parameters (leaf size for the Voxel filter; radius and neighbours for the outlier removal) needed to be adjusted to find their optimal values, a GUI tool was developed to dynamically change them while the Rviz visualization was running.
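The outlier-removal criterion can be sketched like this; a brute-force version, whereas PCL's implementation uses a search tree for the neighbour queries:

```python
def radius_outlier_removal(points, radius, min_neighbours):
    """Keep a point only if it has at least `min_neighbours` other points
    within `radius`, in the spirit of PCL's RadiusOutlierRemoval."""
    r2 = radius * radius
    kept = []
    for i, (x, y, z) in enumerate(points):
        n = 0
        for j, (u, v, w) in enumerate(points):
            if i == j:
                continue
            if (x - u) ** 2 + (y - v) ** 2 + (z - w) ** 2 <= r2:
                n += 1
                if n >= min_neighbours:
                    kept.append((x, y, z))
                    break  # enough neighbours found; keep and move on
    return kept
```

Raising `min_neighbours` or shrinking `radius` prunes more aggressively, which is why these two knobs were exposed in the tuning GUI alongside the voxel leaf size.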


After some testing a reasonably clean point-cloud was obtained, and it should now be almost ready for an algorithm to determine the road edge. The images below show the results; some of them are coloured based on height.



However, because of the filtering process, the road reconstruction happens close to the sensor, about 12 m from it. This arises from the fact that the first laser scan hits the ground at approximately 8 m and the second at approximately 20 m, so they only overlap in the last 8 m from the car. Since the filters favour the denser areas of the cloud, the points further away tend to be eliminated, which could present a problem further down the line when this data is to be used. Considering that this occurs because there is too much space between laser scans, the most obvious solution is to mount the laser higher on the vehicle, pointing at more of an incline towards the road; this measure needs to be discussed, since it would change a lot of the previously developed work.

Week #6 (02-04) | Vehicle positioning and orientation for road reconstruction

As indicated in last week's post, the next task was to integrate the positioning of the car relative to the world frame into the road reconstruction process.

To achieve this, the global positioning data given by the GNSS system implemented in the vehicle was gathered and converted from geographic coordinates to Cartesian coordinates, to allow for representation and transform calculations between frames. This data appears to be reliable, as it already incorporates data from the on-board IMU, which is used to correct the GPS measurements, so the positioning is relatively accurate and usable. In addition to the global XYZ positioning, the rotation of the vehicle (yaw) was also gathered from the IMU, which is an important component to correctly represent the road. A marker was also added in Rviz to represent the path travelled by the car.
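The coordinate conversion can be sketched with a simple equirectangular approximation around a local reference point; the actual conversion used in the vehicle may well differ (e.g. a full UTM projection), so treat this as illustrative:

```python
import math

R_EARTH = 6378137.0  # WGS-84 equatorial radius, metres

def geo_to_local_xy(lat, lon, lat0, lon0):
    """Equirectangular approximation: project (lat, lon) in degrees to
    metres in a local frame centred at (lat0, lon0). Adequate over a
    campus-sized area; error grows with distance from the origin."""
    x = math.radians(lon - lon0) * R_EARTH * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * R_EARTH
    return x, y
```

Fixing (lat0, lon0) at the start of a run gives a stable Cartesian "world" frame in which the yaw from the IMU and the laser data can all be expressed.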

The result of all the steps mentioned above is a point-cloud that correctly represents the geometry of the road and the position of the vehicle on it:


However, there is an issue with the accumulation of the laser scans: as we can see in the image below, the scans are being collected at the right time (point-cloud towards the positive side of the frames), but by the time they are aggregated into the road point-cloud the vehicle is already some distance ahead, so the representation is not as it should be.


In an attempt to detect the origin of the problem, a filter was applied to the laser scan point-cloud (the one in front of the car) to reduce the number of points to be assembled. The result can be seen in the images below:


The images represent the reconstruction of the road at the exact same instant in time for the two study cases: the original scan on the left and, on the right, the filtered scan, which translates into a 72% reduction in the size of the laser point-cloud. The difference in the representation is noticeable (about 4.5 m closer) but not enough to completely solve the issue, so further work will need to be put into the cloud-assembling node in order to speed up the scan accumulation.

Next week will be focused on improving the point-cloud assembling node and preparing the resulting point cloud to extract the road profile from it.

As for the issues mentioned in the previous post, the orientation of the LIDAR was corrected, and a faster and more reliable method for trimming the point cloud was implemented; it now correctly cleans the edges of the point-cloud within a defined bounding box around the vehicle.

Week #5 (26-03) | Point cloud accumulation for road reconstruction

Following the discussion with the thesis advisor, prof. Vítor Santos, on Monday, it was agreed that the next step in development was the accumulation of the point-cloud data gathered from the LIDAR laser scans, and its representation relative to the "world" frame.

Following this guideline, a new package was developed, "road_reconstruction", where the data from the LIDAR sensor is gathered and converted into a point-cloud, which is then processed and accumulated over time to create a representation of the perceived environment.

In order for the point-cloud accumulation to occur, the vehicle needs to be moving relative to a fixed frame ("world") so that the gathered data doesn't "move" with the car. To that end, a simplification was adopted for the time being: the car was given a straight-line movement based on the velocity data received from the GPS. With the car now moving, the point-cloud accumulation node was developed and the result was as follows:


The image above represents a reconstruction using only the bottom laser scan from the LIDAR. This reconstruction uses a buffer of 400 points, which is about 40-50 consecutive scans. But as we can see, there are two major problems. Firstly, despite this being data from the lowest laser scan of the sensor, much of the road is not mapped, which suggests that the sensor is pointing too high for this application and will need to be lowered. Secondly, there are a lot of points that are too far away from the vehicle and so are of no interest in this context. To solve that, a filter was developed to remove those points: it calculates the distance from each point of the cloud to the centre of the car, compares it to a defined distance, and eliminates the point if the distance is greater. The resulting cloud is a lot cleaner than the previous one:
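The filter logic is simple enough to sketch directly; a brute-force pass, with an illustrative distance threshold:

```python
def trim_by_distance(points, car_centre, max_dist):
    """Discard every point farther than `max_dist` from the car centre
    (squared distances avoid a sqrt per point)."""
    cx, cy, cz = car_centre
    d2 = max_dist * max_dist
    return [(x, y, z) for (x, y, z) in points
            if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= d2]
```

Since the car centre moves, this test has to be re-evaluated against the current vehicle position each time a new cloud is processed, which is part of why the per-cloud cost matters.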

But there are still some points that are not being filtered, most likely because the filter hasn't finished its processing by the time a new point-cloud arrives, so the code will need to be optimized to resolve this issue.


With the point-cloud accumulation nearly finished, next week, in addition to solving the issues mentioned above, the focus will be primarily on the positioning of the vehicle relative to the fixed frame, so that the reconstruction correctly represents the topology of the road (turns, straight roads, roundabouts, etc.).

Week #3-4 (12-03 & 19-03) | Understanding developed code, thesis roadmap presentation and first development steps.

The third week of work was primarily spent on understanding the code previously created by former students that worked on the ATLASCAR project, mainly:

  • Visual perception unit, created by Diogo Correia as part of his master's thesis; more specifically the code that interfaces with the sensors, their calibration, and the positioning of their respective coordinate frames.
  • Inclination module, developed by Armindo Silva, to understand how it calculates the vehicle orientation components (roll and pitch).
  • Dynamic map reconstruction, created by Pedro Salvado in his master's thesis; this was the most thorough analysis, to understand and replicate how the local map reconstruction was achieved.

This week there was also a small presentation for the LAR members and advisers, in which we presented a crude task roadmap for the master's thesis, followed by a discussion with the advisers.

In the fourth week, code development began.

The Arduino code created by Armindo Silva was rebuilt as a ROS node that uses RS232 communication and publishes the orientation calculated from the sensor data.
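The parsing side of such a node can be sketched as follows; note that the "R…P…" line format below is purely a hypothetical placeholder, since the real serial protocol of the orientation module is not described here:

```python
def parse_orientation_line(line):
    """Parse one line received over RS232 into (roll, pitch) in degrees.
    The "R<roll>P<pitch>" format is a hypothetical placeholder; the
    actual firmware defines its own framing."""
    line = line.strip()
    r_idx = line.index('R')
    p_idx = line.index('P')
    roll = float(line[r_idx + 1:p_idx])
    pitch = float(line[p_idx + 1:])
    return roll, pitch
```

In the real node, each parsed (roll, pitch) pair would then be packed into a ROS message and published for the frame tree to consume.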

In this first approach it was considered that the road is always flat. This simplification allows ignoring the spatial positioning of the vehicle in relation to the world frame, since calculating it requires IMU and odometry data that is not available at the moment. So the efforts were focused on the relation between the car and the road, with the possibility of defining the relation between the road and the world in the future.

To do that, two more coordinate frames were added to the car "robot" model, representing the ground (static) and the centre car axis (with a transformation matrix calculated from the data sent by the orientation module).

Following that, all the sensor frames were also changed to depend on the centre car axis (which moves according to the rotation of the vehicle), so that the measurements depend on a moving frame.
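The transformation for the centre car axis can be sketched as a roll-and-pitch rotation; the rotation order shown is one common convention and an assumption, as the real frame tree may compose them differently:

```python
import math

def roll_pitch_matrix(roll_deg, pitch_deg):
    """3x3 rotation for the centre-car-axis frame: roll about x,
    then pitch about y, i.e. R = Ry(pitch) @ Rx(roll)."""
    r, p = math.radians(roll_deg), math.radians(pitch_deg)
    cr, sr = math.cos(r), math.sin(r)
    cp, sp = math.cos(p), math.sin(p)
    return [[cp,  sp * sr,  sp * cr],
            [0.0, cr,      -sr],
            [-sp, cp * sr,  cp * cr]]

def rotate(m, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return tuple(sum(m[i][k] * v[k] for k in range(3)) for i in range(3))
```

Since every sensor frame hangs off this axis, applying the matrix once at the root is enough for all the sensor measurements to follow the car's rotation.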

A ROS bag with the orientation information was created to test the system and the result was quite good.


View of the car and sensor frames (with static STL model)

View of the car and sensor frames (without STL model)


As we can see, all the sensor frames move according to the centre car axis.

View of the car and all sensor frames motion (static STL model)

Lastly, a point-cloud dataset was also acquired from the Sick LD-MRS sensor in order to verify its movement in relation to the movement of the car, but there was a small problem with the program that led to some unexpected frame behaviour caused by the use of rosbag, which needs to be corrected.

LIDAR sensor readings and representation in reference to the respective moving frame (/ld-mrs)

A GitHub repository was also created, where the developed code will be regularly updated: https://github.com/TiagoSMarques/Masters-Thesis_ATLASCAR2



Week #1-2 (26-02 & 5-03) | ROS Workshop, project research and preliminary report writing and discussion.

The last couple of weeks were spent mostly on research and writing for the preliminary report. The research focused on gathering information about the state of the ATLASCAR2 project and the hardware and software solutions already implemented, especially the hardware to be used, the Sick LD-MRS, as well as on studying the problem for the thesis and the work that needs to be done to solve it. Naturally, there were some oversights and a little confusion regarding the intended work and the main guidelines of the project, but after the report discussion with my adviser, professor Vítor Santos, those misconceptions were corrected and complemented, and my understanding of the overall task is now much clearer.

This week there was also a workshop on ROS (Robot Operating System) for the students that work at LAR, given by professor Miguel Oliveira, in which we acquired the basic knowledge to work in the ROS environment, as well as some of the tools this system provides that will be very useful for our projects.

Throughout the workshop, all the members developed players for a game in the ROS environment as a way of learning how this system works and what its capabilities are. Here's a video showcasing the final game, in which each moving arrow represents a player; the objective is to hunt one team without being killed by the other, and the score is determined by the difference between kills and deaths for each player in a team.

In the end we gathered a good knowledge base of ROS and its tools: features like driver packages for the main sensors to be used, easy data flow between the nodes of a program, and tools to easily deal with several data types such as point clouds and laser scans. These already-implemented solutions allow us to focus more on solving the problems and less on the hardware and the reading of its data, and are the main reason for using this environment.

Lastly, by the end of week #2, my colleague Ricardo Silva and I, with the valuable help of another colleague, Nuno Silva, were able to use the previous work done by Diogo Correia to communicate with and retrieve data from the sensors in ATLASCAR2, visualizing them in the Rviz tool from the ROS system.

In the following week(s) I will focus my efforts on the first step of the project, which is the creation of a ROS node for the integration of the vehicle rotation measurement module, as well as continuing to learn ROS syntax and taking an online C++ course at "http://www.learncpp.com/cpp-tutorial" that will be useful throughout the project.