Detecting critical cut-in-scenarios

Whether for advanced driver assistance systems or robotaxis, safety plays a decisive role. Two aspects must be considered: Functional Safety (FuSi) [1] ensures that faults in the system itself are avoided, while Safety Of The Intended Functionality (SOTIF) [2] ensures that the system can master all situations for which the function was developed.

But how is the goal of SOTIF, the safety of the intended function [3], actually achieved? Testing every possible scenario is infeasible because their number is usually very large, if not infinite. However, since most situations are assumed to be unproblematic, the focus can shift to searching for critical scenarios in which the driving function fails or at least struggles. For test scenarios, the principle of "quality over quantity" applies: the number of kilometres driven, whether on the road or in simulation, matters less than which situations those kilometres cover.

The targeted search for critical test cases has been attracting growing interest for some time. For example, a group of researchers at the University of Michigan developed a method that optimises the behaviour of other road users so that the system under test struggles to cope with them [4]. Another group of scientists from Sweden used a genetic search procedure to find a large number of scenarios in which Baidu Apollo failed in simulation [5].

We are working to improve scenario search further by combining causality with machine learning. Classical machine learning (ML) typically predicts and detects purely on the basis of correlations; causality is missing. Such a system can still deliver good results, but it does not "know" how or why they come about.

We remedy this by modelling causal relationships explicitly. First, we arrange the various quantifiable variables and influencing factors in a causal graph, in which a directed edge always leads from a cause to an effect. In addition, each effect variable needs a mechanism that computes its value from the values of its cause variables. A mechanism can usually be split into a deterministic term and a probabilistic noise term; both can be learned from data, just as in classical machine learning.
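As a rough illustration, such a causal graph with mechanisms can be sampled forward from causes to effects. The variables below (ego speed, cut-in speed, gap, time to collision) and all numeric values are illustrative assumptions for a cut-in situation, not the authors' actual model; each mechanism is a deterministic term plus additive noise:

```python
import random

def sample_scenario(rng):
    """Sample one cut-in scenario from a toy structural causal model."""
    # Root causes: no parents in the graph, driven only by noise.
    ego_speed = 25.0 + rng.gauss(0.0, 3.0)       # m/s
    cut_in_speed = 20.0 + rng.gauss(0.0, 3.0)    # m/s
    # The gap the other vehicle leaves depends causally on its speed.
    gap = max(2.0, 0.8 * cut_in_speed + rng.gauss(0.0, 4.0))  # m
    # Effect variable: time to collision, computed from its causes
    # (deterministic term) plus a small noise term.
    closing = max(ego_speed - cut_in_speed, 0.1)  # m/s
    ttc = gap / closing + rng.gauss(0.0, 0.2)     # s
    return {"ego_speed": ego_speed, "cut_in_speed": cut_in_speed,
            "gap": gap, "ttc": ttc}

rng = random.Random(42)
scenarios = [sample_scenario(rng) for _ in range(1000)]
# A simple (assumed) criticality criterion: time to collision under 2 s.
critical = [s for s in scenarios if s["ttc"] < 2.0]
print(f"{len(critical)} of {len(scenarios)} sampled scenarios are critical")
```

Because the graph fixes the direction from cause to effect, sampling always proceeds in topological order, and the learned noise distributions capture the variability that the deterministic terms do not explain.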

One advantage of these methods is that the effect of setting a cause variable to a specific value (an intervention) can be determined more reliably. They can also answer so-called counterfactual questions, i.e. questions of the type: what would have happened if? Or, more precisely: in one very specific case, what would the effect have been if the cause had taken a different value?
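A counterfactual query is typically answered in three steps: recover the noise consistent with the observed case (abduction), fix the cause to its hypothetical value (action), and recompute the effect (prediction). The following sketch shows this on a minimal two-variable model, gap → time to collision; the constant closing speed and all numbers are illustrative assumptions:

```python
CLOSING_SPEED = 5.0  # m/s, assumed constant for simplicity

def ttc_mechanism(gap, noise):
    # Deterministic term (gap / closing speed) plus the noise term.
    return gap / CLOSING_SPEED + noise

# Factual observation: we saw a gap of 10 m and a TTC of 2.3 s.
gap_obs, ttc_obs = 10.0, 2.3

# Step 1 (abduction): recover the noise consistent with the observation.
noise = ttc_obs - gap_obs / CLOSING_SPEED   # 2.3 - 2.0 = 0.3

# Step 2 (action): intervene on the cause, do(gap = 5 m).
gap_cf = 5.0

# Step 3 (prediction): what would the TTC have been in this very case?
ttc_cf = ttc_mechanism(gap_cf, noise)       # 1.0 + 0.3 = 1.3
print(f"Counterfactual TTC: {ttc_cf:.1f} s")
```

Keeping the recovered noise fixed is what makes this a statement about that specific case rather than about the population average.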

Our goal is to build a causal model, with the help of subject-matter experts and traffic data sets, that helps us find critical cut-in scenarios. It will allow us to narrow down the scenario search space and find the so-called corner cases more easily.

Interested in this topic? Please feel free to send us a message.
