As vehicles become more autonomous, increasingly precise systems for intelligent environment perception are required. Because classical image processing algorithms are limited in their capabilities and accuracy, deep learning-based systems are increasingly used instead, which in turn push today's hardware to its limits.
Multi-task learning makes it possible to solve several computer vision tasks simultaneously and thus to combine several systems for intelligent environment perception into one. This reduces the required computational effort and can improve the quality of the results. Multi-task learning is therefore an integral part of efficiently realising deep learning-based systems for intelligent environment perception on today's embedded hardware.
One problem with computer vision-based environment perception is that classical algorithms are limited in their precision, robustness and capabilities. For this reason, newer approaches solve these image processing tasks with convolutional neural networks or other machine learning methods. The drawback is that convolutional neural networks usually demand large amounts of computing resources while solving only one task at a time.
A complex system for environment perception, in which multiple tasks must be solved simultaneously, would therefore require multiple convolutional neural networks. This multiplies the computational cost, so such a system is currently impractical for most real-world applications.
To reduce the required computational resources and to improve the results and overall performance, a growing number of researchers are experimenting with multi-task learning based on convolutional neural networks.
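The core idea behind this saving can be sketched in code: a single convolutional backbone is computed once per image, and several lightweight task-specific heads reuse its features. The following PyTorch sketch is illustrative only; the layer sizes, task choices (classification and segmentation) and all names are assumptions, not the architecture of any particular system from this work.

```python
import torch
import torch.nn as nn


class MultiTaskNet(nn.Module):
    """Minimal multi-task CNN sketch: one shared backbone, two task heads.

    All dimensions and task heads here are hypothetical examples.
    """

    def __init__(self, num_classes: int = 10, seg_classes: int = 3):
        super().__init__()
        # Shared feature extractor: its cost is paid once per image
        # and amortised across all tasks.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Task head 1: whole-image classification.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )
        # Task head 2: per-pixel semantic segmentation (1x1 conv).
        self.seg_head = nn.Conv2d(32, seg_classes, kernel_size=1)

    def forward(self, x: torch.Tensor):
        features = self.backbone(x)  # computed once, shared by both heads
        return self.cls_head(features), self.seg_head(features)


model = MultiTaskNet()
x = torch.randn(2, 3, 64, 64)          # batch of 2 RGB images, 64x64
cls_logits, seg_logits = model(x)
print(cls_logits.shape, seg_logits.shape)
# torch.Size([2, 10]) torch.Size([2, 3, 64, 64])
```

Compared with running two separate networks, the backbone's convolutions are executed only once per image; only the small heads are task-specific, which is precisely where the reduction in computational resources comes from.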
Possible applications:
- Environment perception for driver assistance systems and autonomous vehicles
- Improving the quality and runtime of multiple computer vision tasks