Annotating the vast amounts of video data produced by autonomous vehicles is still an extremely difficult and tedious task. Automated vehicles need a highly detailed view of the environment they operate in: unexpected obstacles such as potholes, work zones and lane restrictions can easily put them in a problematic situation. A fast, near real-time annotation of the video data captured by the vehicle's on-board cameras is therefore essential to realise the vision of automated transport in the near future.

On November 29, discover the latest video annotation technology developed by the EU-funded Cloud LSVA project at its final event, organised in Stuttgart, Germany.

Registration is open: click here to reserve your spot and view the agenda.

Participants will have the opportunity to experience first-hand the technology developed by European and international companies at the forefront of automation.

The Cloud LSVA project results represent a major step towards the introduction of Level 3, Level 4 and, further in the future, Level 5 automated cars. The recognition of vulnerable road users, such as pedestrians and cyclists, will be greatly improved, dramatically reducing casualties on European roads. Another important aspect is the Simultaneous Localisation and Mapping (SLAM) capability of autonomous vehicles: an accurate map, updated in real time, is indispensable for automated vehicles to operate. Combining input from cameras, lidars and radars with other sensors, internal or external, makes it possible for the car to assess exactly where it is and which objects and people surround it.
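To make the sensor-fusion idea behind this localisation a little more concrete, the minimal sketch below combines position estimates from several sensors by inverse-variance weighting. It is purely illustrative and not part of the Cloud LSVA toolchain; the function name, sensor readings and uncertainty values are all hypothetical assumptions.

```python
import numpy as np

def fuse_position_estimates(estimates, variances):
    """Fuse independent 2-D position estimates by inverse-variance weighting.

    estimates: list of (x, y) position guesses, one per sensor.
    variances: list of scalar variances expressing each sensor's uncertainty.
    Returns the fused (x, y) position and its combined variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = (weights[:, None] * estimates).sum(axis=0) / weights.sum()
    fused_variance = 1.0 / weights.sum()
    return fused, fused_variance

# Hypothetical readings: camera-, lidar- and GNSS-based position estimates
# (in metres, relative to the current map tile), each with its own uncertainty.
camera_xy, lidar_xy, gnss_xy = (10.2, 4.9), (10.0, 5.1), (10.6, 4.7)
fused_xy, fused_var = fuse_position_estimates(
    [camera_xy, lidar_xy, gnss_xy],
    variances=[0.25, 0.04, 1.0],  # lidar assumed most precise in this example
)
print(f"Fused position: {fused_xy}, variance: {fused_var:.3f}")
```

The sensor with the smallest variance dominates the fused estimate, which is why a precise lidar reading can sharpen a coarse GNSS fix; real localisation stacks extend this idea with filtering over time and full SLAM, but the weighting principle is the same.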