
What have AVs struggled with so far, and what is likely to happen next?

Autonomous vehicles are in theory very safe... or at least they could be. Unfortunately, what's been blasted around our news feeds are spectacular examples of road scenarios which AVs fail to navigate, but which humans would handle with no trouble at all.


Ultimately, this comes down to an AV's familiarity with what it encounters on the road. AVs excel in traffic scenarios they were trained against, but struggle when faced with an object or set of environmental features they have never come across before.

Using the AV Sandbox Dataset, we are going to explore some examples of road scenarios which AVs have failed to navigate so far, and what they could struggle with next as they are rolled out in a wider range of locations across the globe.

Gather examples of incidents and all their information

To jump right in, open the "AV failures mentioned in the media" view by searching in the navigation drawer, or by clicking the following link:

Open Autonomous Vehicle Sandbox in conode →


Turning on Node Labels

Upon first opening the view, not all node labels will be visible. This is because only some nodes have a Show Label node property, whereas the others have Hide Label. There are two methods of toggling the visibility of node labels in conode:

Toggling label visibility at the Node level

We can easily update the label visibility for single or multiple nodes at a time using the following steps:

  1. Highlight the nodes
  2. Go to the Node(s) part of the menu bar and click Properties
  3. In the Node Properties window, select the Show Labels option, located just under the node label text box, then press OKAY to save your updates

The nodes you highlighted should now have visible labels!

Toggling label visibility at the View level

We can toggle the visibility for the whole view at once using the label visibility buttons in the view menu.


Use either method to turn on the label visibility for all nodes in this view.

Opening up the Spatial View

As the scenarios we will be exploring in this workflow were sourced from real life, we can use a spatial view to understand where they took place. For this, we will use the view "Example Data Sources - Spatial View".

Open Views in conode →


View Content Overview

The four blue nodes positioned down the center of the "AV failures mentioned in the media" view are examples of AV failures collected from news reports in the media, while the purple nodes on the left-hand side are taxonomy annotations which describe what happened in each scenario, e.g. the type of collision that was involved or features of the local environment.

All nodes contained in the Spatial View represent real-world incidents that have taken place, four of which you may notice are the AV failure incidents also present in the "AV failures mentioned in the media" view. Check out the previous workflow Exploring Heterogeneous Datasets from Across the Globe for a more in-depth exploration of this spatial view.


What has gone wrong already?

Watching examples of incidents and finding out where they took place

We can quickly find out where in the world these four scenarios took place by highlighting them in the "AV failures mentioned in the media" view and then observing where these same nodes light up in the Spatial view. To watch the videos attached to these nodes, use the Watch Media button in the Node(s) menu, or hit the letter w on your keyboard.

We find that these scenarios took place in California and Arizona, which perhaps we could have guessed due to the large number of autonomous vehicle deployments in these states!

Using annotations to understand the story behind each incident

We can easily navigate from any data point to all its descriptive annotations using the following two steps:

  1. Highlight the data point of interest, in this case any of the AV failures (blue nodes)
  2. Press the up-arrow key or the Select Predecessors button in the Select menu bar

You will now have all the annotations that connect to this data point highlighted in bright blue.

We can also explore more AV failure mode examples using such annotations:

  1. Select the header node Everything that has gone wrong with AVs in the Taxonomy view
  2. Press the down-arrow key or the Select Successors button in the Select menu bar

Step 2 highlights all nodes which are successors of Everything that has gone wrong with AVs - notice how 78 scenarios have been highlighted in the spatial view! Zoom in to see the spatial distribution of these incidents, and then hit w to watch their videos.
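Under the hood, Select Predecessors and Select Successors are just directed-graph traversals from a chosen node. As a rough illustration (not conode's actual implementation, and with mostly made-up node names), the two operations can be sketched as breadth-first searches over an adjacency map:

```python
# Sketch of "Select Successors" / "Select Predecessors" as graph traversals.
# Edge direction: header -> annotation -> data point. Node names below are
# illustrative; only the header and "AV hit by firetruck" come from the text.
from collections import deque

edges = {
    "Everything that has gone wrong with AVs": ["emergency vehicle", "construction zone"],
    "emergency vehicle": ["AV hit by firetruck"],
    "construction zone": ["AV stuck in wet concrete"],
}

def successors(graph, start):
    """All nodes reachable by following edges forward from `start`."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def predecessors(graph, start):
    """All nodes from which `start` is reachable (follow edges backward)."""
    reverse = {}
    for src, dsts in graph.items():
        for dst in dsts:
            reverse.setdefault(dst, []).append(src)
    return successors(reverse, start)

# Down-arrow from the header selects every annotation and scenario below it;
# up-arrow from a failure selects every annotation that describes it.
print(successors(edges, "Everything that has gone wrong with AVs"))
print(predecessors(edges, "AV hit by firetruck"))
```

This is why selecting the header node lights up all 78 scenarios at once: every one of them is a (transitive) successor of that single taxonomy node.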

What is going to go wrong next?

We can use these examples of challenging scenarios autonomous vehicles have faced on the road so far to discover more AV failure modes from within the knowledge graph. To do so, let's open up the view "Scenario Embedding". We will also no longer need the Spatial view, so go ahead and close it.


What is an embedding?

Scenario databases are high-dimensional: there are thousands of features that could describe each edge-case scenario that takes place on the road. Think of all the types of actors involved, and the properties they may have, on top of all the environment and trajectory based features that would be required to reconstruct each incident.

But in order to understand the failure modes of any system, we need to be able to visualise the similarities, differences, and patterns in the datasets used to test, train and validate them. So how can we cluster these scenarios when they each have so many features?

The similarity and difference between scenarios in such a database can be mapped out by analysing the clusters formed when the scenarios are plotted in this high-dimensional space. However, in order to visualise such clusters, we must reduce the data to two dimensions so that it can be plotted on your screen.

Through mathematical transformations and dimensionality reduction techniques, such as t-SNE (t-distributed Stochastic Neighbor Embedding) or PCA (Principal Component Analysis), data points can be strategically positioned in a two-dimensional plane. The positioning is such that similar data points are placed closer together, and dissimilar ones are spaced farther apart, reflecting the underlying relationships within the data.

Embeddings are the result of this dimensionality reduction; they allow us to condense and explore information in a more intuitive manner by representing complex data in a simplified, two-dimensional space which preserves the inherent relationships between data points.
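As a concrete, minimal illustration of dimensionality reduction (this is PCA via SVD on synthetic data, not the method actually used to build the "Scenario Embedding" view, which may well be t-SNE or similar):

```python
# Illustrative sketch only: project high-dimensional scenario feature
# vectors down to 2-D with PCA, implemented directly with NumPy's SVD.
# The scenario features here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))  # 200 hypothetical scenarios, 50 features each

def pca_2d(X):
    """Center the data and project onto the two directions of greatest variance."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are principal directions, ordered by decreasing singular value.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T  # shape: (n_scenarios, 2)

embedding = pca_2d(X)
print(embedding.shape)  # (200, 2)
```

Each row of `embedding` is a 2-D coordinate for one scenario, ready to plot; nonlinear methods like t-SNE serve the same purpose but are better at preserving local neighbourhood structure.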

In this embedding, scenarios are positioned based on their similarity, meaning that scenarios near each other will share features, while scenarios further apart are inherently different. Therefore, we make the assumption that scenarios positioned in the vicinity of one of our AV failures are scenarios which AVs may also struggle with if they were encountered during a deployment.

Scenarios with Similar Stories

Let's look at some examples of scenarios with similar stories to the four we've seen so far by highlighting all the nodes in the vicinity of each AV failure and hitting w to watch the available videos. Note that only some scenarios have their videos accessible in this sandbox dataset: those connected to the "Scenario video available" header node, which can be found in the "Full Scenario Taxonomy" view.
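Conceptually, "highlighting all the nodes in the vicinity" of a failure is a radius query over the 2-D embedding coordinates. A minimal sketch, with made-up coordinates and scenario names (only "AV hit by firetruck" appears in the dataset text):

```python
# Sketch: given 2-D embedding coordinates, list the scenarios within a
# chosen radius of a known AV failure. Coordinates are invented for
# illustration; names other than the failure are hypothetical.
import numpy as np

coords = np.array([
    [0.10, 0.20],  # "AV hit by firetruck" (failure of interest)
    [0.15, 0.25],  # nearby, similar story
    [0.12, 0.18],  # nearby, similar story
    [5.00, 5.00],  # far away, unrelated scenario
])
names = ["AV hit by firetruck", "scenario A", "scenario B", "scenario C"]

def neighbours_within(coords, names, index, radius):
    """Names of scenarios within `radius` of scenario `index` (excluding itself)."""
    dists = np.linalg.norm(coords - coords[index], axis=1)
    return [n for i, n in enumerate(names) if i != index and dists[i] <= radius]

print(neighbours_within(coords, names, 0, radius=0.5))
```

In conode you do this visually, by dragging a selection around the failure's neighbourhood, but the underlying assumption is the same: small embedding distance implies a similar scenario.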

We can also look at more examples of similar edge case scenarios by selecting the most important annotation for each AV failure mode (e.g. emergency vehicle for the scenario titled "AV hit by firetruck...") and highlight its successors using the down-arrow. The nodes highlighted in the embedding view are scenarios with a similar story.

Other Edge Cases

We have just explored clusters of scenarios which belong to the same families as failure modes we've seen AVs face so far, but what other scenarios should AVs be tested, trained, and validated against? Explore the rest of the embedding to find out!

Can you find the following scenarios?

  • Pedestrians moving with erratic behaviour (unclear intentions, running and tripping in the lane)
  • Pedestrians on roller skates
  • Multi-body objects falling out of leading vehicles

Last update: 2024-10-31