Annotating vast amounts of video data, amounting to petabytes, is today still a difficult, tedious and, at this scale, nearly impossible task. Yet automated vehicles, for example, need a very detailed view of the environment they operate in. Unexpected obstacles such as potholes, work areas and lane restrictions can easily put an automated vehicle in a problematic situation. Fast, near real-time annotation of the video data produced by the vehicle's on-board cameras, assisted by other sensors, is therefore imperative to realizing the vision of automated transport in the near future. The resulting annotated video streams improve Advanced Driver Assistance Systems (ADAS) and map creation, to the benefit of both cars and drivers. The goal of the EU-funded H2020 Cloud LSVA (Large Scale Video Annotation in the Cloud) project is to use sensor fusion technology and Artificial Intelligence deep-learning algorithms to realize semi-automated video annotation.

In the not so distant future, cars will likely be equipped with multiple cameras and Lidar scanners, just as most cars have integrated parking sensors today. The data and video streams produced by these sensors amount to terabytes after only a few hours of recording per vehicle. Making sense of this data is a serious technical challenge. In particular, annotating these large video streams and creating ground-truth data sets for real-time scenery recognition is still technically difficult. Because of the enormous amount of human interaction needed, the operation is expensive, impractical and very slow. In the automotive sector, where these data streams are meant to feed mapping and ADAS functionality, purely human annotation is not feasible.
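To make the annotation problem concrete, a ground-truth record for a single video frame might look like the following minimal sketch. The field names and schema here are illustrative assumptions for this article, not Cloud LSVA's actual data format:

```python
import json

# Hypothetical ground-truth annotation for one video frame.
# Field names and structure are illustrative, not the project's schema.
frame_annotation = {
    "frame_id": 1042,
    "timestamp_ms": 34733,
    "objects": [
        {"label": "pedestrian", "bbox": [412, 180, 58, 131], "confidence": 0.97},
        {"label": "pothole",    "bbox": [655, 420, 90, 40],  "confidence": 0.81},
    ],
}

# Annotations are typically serialized for storage and exchange.
serialized = json.dumps(frame_annotation)
print(serialized)
```

Producing records like this by hand, for every object in every frame of a terabyte-scale recording, is what makes purely manual annotation impractical.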

The use of AI analytics will improve this situation dramatically by applying technologies such as 3D point-cloud analysis, scene recognition, semantics, computer vision and deep learning. As such, the Cloud LSVA project results represent a large step toward the introduction of Level 3, Level 4 and eventually Level 5 automated cars. Recognition of vulnerable road users, such as pedestrians and cyclists, will be greatly improved, dramatically reducing accident casualties on European roads. Another important aspect is the Simultaneous Localization and Mapping (SLAM) capability of autonomous vehicles. A good, accurate map, updated in real time, is indispensable for automated vehicles to operate. Combining input from cameras, Lidars and radars with other sensors, internal or external, makes it possible for the car to assess exactly where it is and which objects and humans surround it.
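A minimal sketch of how two sensor readings can be combined is inverse-variance weighting, a standard building block of sensor fusion. The two-sensor setup and the numbers below are illustrative assumptions, not the project's actual algorithm:

```python
# Inverse-variance fusion of two independent Gaussian estimates,
# e.g. a camera-based and a Lidar-based distance to an obstacle.
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Return the fused estimate and its (smaller) variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_var = 1.0 / (w_a + w_b)
    fused_est = (est_a * w_a + est_b * w_b) * fused_var
    return fused_est, fused_var

# Camera says 20.5 m (noisy), Lidar says 20.1 m (precise).
distance, variance = fuse(20.5, 0.50, 20.1, 0.05)
print(round(distance, 3), round(variance, 4))  # 20.136 0.0455
```

The fused estimate leans toward the more precise sensor, and its variance is lower than either input's, which is the point of combining sensors in the first place.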

To process these huge amounts of data, Cloud LSVA has implemented a cloud-based system consisting of a number of layers that interact with each other through open, standardized interfaces. The system is driven by state-of-the-art PaaS and IaaS tools such as Docker and Apache Spark, which provide the required orchestration between the different components. The following figure shows the solution proposed by the Cloud LSVA project:
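The parallelism such a layer provides can be sketched in miniature: split a recording into chunks and annotate the chunks concurrently, in the map-style fashion that Apache Spark generalizes across a cluster. The chunking scheme and the placeholder detector below are illustrative assumptions, not the project's implementation:

```python
# Miniature sketch of map-style parallel annotation over chunks of frames.
# A real deployment would distribute this over a cluster via Spark.
from concurrent.futures import ThreadPoolExecutor

def annotate_chunk(frames):
    # Placeholder detector: a real worker would run a deep-learning model.
    return [f"frame-{f}:ok" for f in frames]

def chunks(seq, size):
    return [seq[i:i + size] for i in range(0, len(seq), size)]

frames = list(range(10))  # stand-in for a decoded video stream
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(annotate_chunk, chunks(frames, 3)))

# Flatten the per-chunk results back into one ordered annotation stream.
annotations = [a for chunk in results for a in chunk]
print(len(annotations))  # 10
```

Because `pool.map` preserves input order, the flattened output lines up with the original frame sequence, which matters when annotations must be merged back onto the video timeline.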

Figure 1 The Cloud LSVA Solution

Cloud LSVA is one of the projects enabling the mobility of the future. As a technical project it is closely related to EU H2020-funded projects such as inLane, SAFE STRIP, AUTOPILOT and Vi-DAS, and to initiatives such as SENSORIS.

The project is driven by the best specialists in their domain: Vicomtech, CEA, Dublin City University, Intel, ERTICO, IBM, Intempora, TASS, TomTom, Technische Universiteit Eindhoven, University of Limerick and Valeo.

The project is funded by the EC under the H2020 framework and runs until the end of 2018. Interested stakeholders may already free some room in their November calendar for the project's exciting final event and TESTFEST! Do not hesitate to contact ERTICO for specific dates and details regarding the final event and TESTFEST. This information will also be published on the Cloud LSVA website as soon as the event details become available.