Week #6 (02-04) | Vehicle positioning and orientation for road reconstruction

As indicated in last week's post, the next task was to integrate the positioning of the car relative to the world frame into the road reconstruction process.

To achieve this, the global positioning data provided by the GNSS system installed in the vehicle was gathered and converted from geographical coordinates to Cartesian coordinates, to allow for representation and transform calculations between frames. This data appears to be reliable, as it already incorporates data from the on-board IMU used to correct the GPS measurements, so the positioning is relatively accurate and usable. In addition to the global XYZ position, the rotation of the vehicle (yaw) was also gathered from the IMU, which is an important component for correctly representing the road. A marker was also added in Rviz to represent the path travelled by the car.
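As an illustration of this step, the sketch below converts a GNSS fix to local Cartesian coordinates with a UTM projection and combines it with the yaw taken from the IMU into a world-to-vehicle transform. This is only a minimal sketch: the topic names, frame names and UTM zone are assumptions, not the exact code used in the project.

```python
# Illustrative sketch only: converts a GNSS fix (lat/lon) into local Cartesian
# coordinates with a UTM projection and combines it with the IMU yaw into a
# world -> base_link transform. Topic names, frame names and the UTM zone are
# assumptions, not the project's actual configuration.
import rospy
import tf
import tf2_ros
from geometry_msgs.msg import TransformStamped
from sensor_msgs.msg import NavSatFix, Imu
from pyproj import Proj

rospy.init_node('gnss_to_tf_sketch')

utm = Proj(proj='utm', zone=29, ellps='WGS84')  # assumed UTM zone for the test site
broadcaster = tf2_ros.TransformBroadcaster()
origin = None      # the first fix becomes the origin of the local world frame
latest_yaw = 0.0   # yaw extracted from the IMU orientation

def imu_cb(msg):
    global latest_yaw
    q = msg.orientation
    _, _, latest_yaw = tf.transformations.euler_from_quaternion([q.x, q.y, q.z, q.w])

def fix_cb(msg):
    global origin
    east, north = utm(msg.longitude, msg.latitude)   # geographic -> Cartesian
    if origin is None:
        origin = (east, north)
    t = TransformStamped()
    t.header.stamp = msg.header.stamp
    t.header.frame_id = 'world'
    t.child_frame_id = 'base_link'
    t.transform.translation.x = east - origin[0]
    t.transform.translation.y = north - origin[1]
    t.transform.translation.z = msg.altitude
    qx, qy, qz, qw = tf.transformations.quaternion_from_euler(0.0, 0.0, latest_yaw)
    t.transform.rotation.x = qx
    t.transform.rotation.y = qy
    t.transform.rotation.z = qz
    t.transform.rotation.w = qw
    broadcaster.sendTransform(t)

rospy.Subscriber('/imu/data', Imu, imu_cb)
rospy.Subscriber('/gps/fix', NavSatFix, fix_cb)
rospy.spin()
```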

The result of all the steps mentioned above is a point-cloud that correctly represents the geometry of the road and the position of the vehicle in it:

[Screenshots: reconstructed road point-cloud with the vehicle's position and path in Rviz]

However, there is an issue with the accumulation of the laser scans. As we can see in the image below, the scans are being collected at the right time (the point-cloud towards the positive side of the frames), but by the time they are aggregated into the road point-cloud the vehicle is already some distance ahead, so the representation is not as it should be.

[Screenshot: lag between the collected scan and the aggregated road point-cloud]
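One possible way to look at this problem is that each scan should be anchored in the world frame using the vehicle pose at the scan's own timestamp, rather than the pose at assembly time. The fragment below sketches that idea with a tf2 lookup at the scan's header stamp; it assumes the scan is already available as a point-cloud, the topic and frame names are illustrative, and this is not the project's actual assembler node.

```python
# Sketch of the idea that each scan should be placed in the world frame using
# the vehicle pose at the scan's own timestamp, not the pose at assembly time.
# Topic and frame names are assumptions; this is not the assembler node itself.
import rospy
import tf2_ros
from tf2_sensor_msgs.tf2_sensor_msgs import do_transform_cloud
from sensor_msgs.msg import PointCloud2

rospy.init_node('scan_anchoring_sketch')
tf_buffer = tf2_ros.Buffer(rospy.Duration(30.0))
tf2_ros.TransformListener(tf_buffer)
pub = rospy.Publisher('/road_cloud_piece', PointCloud2, queue_size=10)

def scan_cb(cloud):
    # Look up world <- laser at the time the scan was taken, waiting briefly
    # for the transform to become available.
    transform = tf_buffer.lookup_transform('world', cloud.header.frame_id,
                                           cloud.header.stamp, rospy.Duration(0.2))
    pub.publish(do_transform_cloud(cloud, transform))

rospy.Subscriber('/laser_cloud', PointCloud2, scan_cb)
rospy.spin()
```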

In an attempt to detect the origin of the problem, a filter was applied to the laser scan point-cloud (the one in front of the car) to reduce the number of points to be assembled. The result can be seen in the images below:

[Screenshots: original scan vs. filtered scan at the same instant]

The images represent the reconstruction of the road at the exact same instant in time for the two cases: the original scan on the left and the filtered scan on the right, the latter corresponding to a 72% reduction in the size of the laser point-cloud. The difference in the representation is noticeable (about 4.5 m closer) but not enough to completely solve the issue, so further work will be needed on the cloud-assembling node in order to speed up the scan accumulation.
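The post does not detail the filter used; a downsampling of this kind could be sketched, for instance, as a simple voxel-grid style filter that keeps one point per voxel before the cloud is handed to the assembler. The topic names and the leaf size below are assumptions.

```python
# Minimal sketch of a voxel-grid style downsampling of the laser point-cloud,
# keeping one point per voxel to reduce the number of points handed to the
# assembler. Topic names and the 0.2 m leaf size are assumptions.
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

LEAF = 0.2  # voxel edge length in metres (assumed)

rospy.init_node('scan_downsample_sketch')
pub = rospy.Publisher('/laser_cloud_filtered', PointCloud2, queue_size=5)

def cloud_cb(cloud):
    kept, seen = [], set()
    for x, y, z in pc2.read_points(cloud, field_names=('x', 'y', 'z'), skip_nans=True):
        key = (int(x / LEAF), int(y / LEAF), int(z / LEAF))
        if key not in seen:  # keep only the first point that falls in each voxel
            seen.add(key)
            kept.append((x, y, z))
    pub.publish(pc2.create_cloud_xyz32(cloud.header, kept))

rospy.Subscriber('/laser_cloud', PointCloud2, cloud_cb)
rospy.spin()
```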

Next week will be focused on improving the point-cloud assembling node and preparing the resulting point-cloud to extract the road profile from it.

As for the issues mentioned in the previous post, the orientation of the LIDAR was corrected, and a faster and more reliable method for trimming the point-cloud was implemented; it now correctly cleans the edges of the point-cloud within a defined bounding box around the vehicle.
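For illustration, a bounding-box trim of this kind can be sketched as a pass-through over the cloud in the vehicle frame, dropping points outside a fixed box. The box limits and topic names below are assumptions, not the values used in the project.

```python
# Sketch of a bounding-box trim around the vehicle: points outside a fixed box
# in the vehicle frame are dropped. Box limits and topic names are assumptions.
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

X_LIM, Y_LIM, Z_LIM = (0.0, 30.0), (-6.0, 6.0), (-3.0, 1.0)  # assumed box, metres

rospy.init_node('cloud_trim_sketch')
pub = rospy.Publisher('/laser_cloud_trimmed', PointCloud2, queue_size=5)

def cloud_cb(cloud):
    inside = [(x, y, z) for x, y, z in
              pc2.read_points(cloud, field_names=('x', 'y', 'z'), skip_nans=True)
              if X_LIM[0] <= x <= X_LIM[1]
              and Y_LIM[0] <= y <= Y_LIM[1]
              and Z_LIM[0] <= z <= Z_LIM[1]]
    pub.publish(pc2.create_cloud_xyz32(cloud.header, inside))

rospy.Subscriber('/laser_cloud', PointCloud2, cloud_cb)
rospy.spin()
```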
