Artificial Intelligence can make the difference.
Although researchers have been investigating the field of artificial intelligence (AI) for many decades now, only within the past few years has AI gained considerable traction and widespread popularity. Today, AI can be found in many applications such as speech recognition, face detection, and industrial robotics. Furthermore, AI is often cited as the key technology to enable autonomous driving. But what are the major challenges in autonomous driving, and how can AI help to solve them?
While driver assistance functions today cover individual, dedicated scenarios, autonomous driving must also master complex situations and rare events while ensuring that its functions react safely under all circumstances. For example, a common urban scenario with several different traffic participants, traffic signs, and traffic lights can be challenging even for humans, especially if they have not driven there before (see Fig. 1). With AI methods such as “learning from examples” and an appropriate data set, we can develop systems that are able to detect and classify traffic participants regardless of the particular environment or city.
Unfortunately, these systems must work in challenging situations, too, where data collection may be difficult because such situations rarely occur during typical test runs. One major challenge, therefore, is to develop AI methods that provide uncertainty metrics in addition to discrete outputs or simple yes/no decisions. Looking at Fig. 2, how would you as a human driver decide whether there is a car at the location of the boxes or not? You would likely conclude that there is a car in the nearer box with a higher probability and a lower uncertainty, and vice versa for the more distant box. An AI system will have to provide such probabilities and uncertainties as well.
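As a minimal illustration of the idea, the sketch below converts raw detector scores into a class probability and uses predictive entropy as a simple uncertainty proxy. All numbers, class labels, and the two-class setup are invented for illustration; they are not Bosch's actual detector outputs.

```python
import math

def softmax(logits):
    """Convert raw detector scores into class probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Predictive entropy as a simple uncertainty proxy (in nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical logits for the classes ("car", "background"):
near_box = softmax([4.0, 0.5])  # nearby object, clearly visible
far_box = softmax([0.6, 0.2])   # distant object, only a few pixels

print(near_box[0], entropy(near_box))  # high probability, low uncertainty
print(far_box[0], entropy(far_box))   # probability near 0.5, high uncertainty
```

The point is that both boxes yield a "car" answer if we only threshold, but the entropy reveals how much the system actually knows in each case.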
Even if the system can detect and classify every relevant object perfectly, it will also have to analyze each object’s past behavior and predict its (short-term) future behavior. For example, in Fig. 3, would you assume that the second pedestrian is going to cross? While this is quite a simple example, typical scenes involve many more interacting objects such as vehicles, pedestrians, cyclists, and motorcyclists. Especially in situations with various possible actions, an AI system can outperform a human driver thanks to its ability to compute the outcome of nearly every possible event and estimate the most probable behavior for all traffic participants.
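To make the prediction step concrete, here is a toy sketch that enumerates a pedestrian's possible short-term actions, turns hand-picked scores into probabilities, and selects the most likely behavior. The action names and score values are assumptions made for this example, not a real behavior model.

```python
import math

# Hypothetical scores for a pedestrian's possible short-term actions,
# e.g. derived from heading, speed, and distance to the curb (made-up values).
action_scores = {
    "cross_road": 2.1,
    "wait_at_curb": 1.4,
    "walk_along_sidewalk": 0.3,
}

# Normalize the scores into a probability distribution (softmax).
total = sum(math.exp(s) for s in action_scores.values())
action_probs = {a: math.exp(s) / total for a, s in action_scores.items()}

# The planner would reason over all actions, weighted by probability,
# rather than committing to a single yes/no prediction.
most_likely = max(action_probs, key=action_probs.get)
print(most_likely, action_probs[most_likely])
```

In a real scene, such a distribution would be maintained for every traffic participant and updated continuously as new sensor data arrives.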
Furthermore, the third major challenge for an AI system for autonomous driving will be the choice of the correct driving policy. For example, there are many cities throughout the world where a purely defensive and conservative driving policy will bring the vehicle to a complete standstill, for instance because it never finds a gap to merge into dense traffic. Therefore, the system needs to observe and estimate the common driving policies of the surrounding drivers. A good system will compute the best mixture of different policies such as “go with the flow”, “stick to my route”, or “enforce merging”, for example, while still avoiding collisions – see Fig. 4.
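One way to picture "the best mixture of policies" is as a constrained trade-off: maximize progress while keeping collision risk below a hard limit. The sketch below evaluates a few candidate mixtures of the named policies; all progress/risk numbers, the risk limit, and the candidate weightings are invented for illustration.

```python
# Hypothetical per-policy estimates for the current scene (made-up numbers):
# expected progress toward the goal and expected collision risk.
policies = {
    "go_with_the_flow":  {"progress": 0.6, "risk": 0.010},
    "stick_to_my_route": {"progress": 0.8, "risk": 0.030},
    "enforce_merging":   {"progress": 0.9, "risk": 0.080},
}

RISK_LIMIT = 0.025  # hard safety constraint on acceptable collision risk

def blend(weights):
    """Expected progress and risk of a weighted policy mixture."""
    progress = sum(w * policies[p]["progress"] for p, w in weights.items())
    risk = sum(w * policies[p]["risk"] for p, w in weights.items())
    return progress, risk

# Evaluate a few candidate mixtures and keep the best admissible one.
candidates = [
    {"go_with_the_flow": 1.0, "stick_to_my_route": 0.0, "enforce_merging": 0.0},
    {"go_with_the_flow": 0.5, "stick_to_my_route": 0.5, "enforce_merging": 0.0},
    {"go_with_the_flow": 0.5, "stick_to_my_route": 0.3, "enforce_merging": 0.2},
]
best = max((c for c in candidates if blend(c)[1] <= RISK_LIMIT),
           key=lambda c: blend(c)[0])
print(best)
```

Even in this toy version, the purely aggressive mixture is rejected by the safety constraint, while the purely defensive one loses on progress, which is exactly the balance the article describes.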
We at Bosch have broad, long-standing experience in the automotive systems business, in driver assistance systems, and in embedded systems. With the foundation of the Bosch Center for Artificial Intelligence, Bosch has also made a considerable investment in building and extending competencies in AI technologies. On this foundation, we are currently developing systems for autonomous driving that utilize artificial intelligence. On the perception level, for example, we realize sensor data fusion in a generic, flexible, and efficient framework that scales easily from a single sensor up to the full sensor set.
Even if we apply AI methods to data from a limited field of view, e.g. a common front view, computational complexity becomes crucial. Assume a frontal RGB camera, a frontal radar sensor, and a frontal LiDAR sensor with the same field of view are fused pixel-wise; this leads to 10 channels at HD resolution and approximately 24 million input variables. For autonomous driving, however, the sensor set will be even larger and may contain up to 24 megapixels of image data and up to 1 million 3D points from LiDAR and radar. State-of-the-art AI methods, e.g. deep neural networks, can be applied to this data, but computation time increases considerably. We have therefore developed dedicated AI methods that perform best for various sensor sets, account for degraded sensors, and run on embedded hardware.
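The back-of-the-envelope arithmetic behind these figures can be sketched as follows. The exact resolution and channel layout are assumptions (the article only says "HD resolution" and "10 channels"), so the result is a ballpark figure, not Bosch's actual configuration.

```python
# Assumed "HD-class" camera resolution and fused channel count:
# 3 RGB channels plus radar/LiDAR layers (e.g. depth, velocity, intensity)
# projected pixel-wise into the image, 10 channels in total.
width, height = 1920, 1200  # assumed resolution
channels = 10

front_view_inputs = width * height * channels
print(front_view_inputs)  # on the order of the ~24 million variables cited

# Full autonomous-driving sensor set, using the figures from the text:
# up to 24 megapixels of image data plus up to 1 million 3D points.
full_set_inputs = 24_000_000 + 1_000_000
print(full_set_inputs)
```

This is why naive pixel-wise fusion with large networks quickly becomes infeasible on embedded hardware, and why the input representation itself is an engineering decision.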
Authors: Timm Fabian & Lothar Baum, Robert Bosch GmbH, Division of Chassis Systems Control