Data Simulation for Improving Autonomous Driving

Results of this project are explained in the final report.


Motivation: Autonomous vehicles must undergo extensive road testing before they can be deployed in real transportation. Optimizing autonomous driving algorithms through real road testing is time-consuming and capital-intensive, testing on open roads raises safety and regulatory concerns, and extreme weather conditions and complex traffic scenarios are difficult to reproduce in reality. As a result, simulation testing for autonomous driving is widely adopted by researchers and engineers; it is often estimated that about 90% of autonomous driving algorithm tests are carried out on simulation platforms. To facilitate the development and deployment of autonomous driving algorithms, we plan to develop a free, visually and physically realistic, fully functional, and easy-to-use simulator.

Goals: 1. Build a digital twin of a real city, such as Garching or Munich. The city should contain all static object classes annotated in the Cityscapes dataset [1], such as roads, traffic signs, roadside buildings, and plants. These objects should match their real counterparts as closely as possible in scale, shape, and appearance.
2. Collect or create 3D models of moving objects on the road. The model types should be consistent with the object classes annotated in the KITTI dataset [2].
3. Deploy traffic flow [3] in the city and make the vehicle models drive according to traffic rules.
4. Build a smart car model that can integrate algorithms for environment perception, decision making, and path planning, and verify the performance of these algorithms in the simulated city.
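In SUMO, the traffic flow mentioned in goal 3 is typically described by a routes file that defines vehicle types, routes, and departure times. The sketch below generates such a file in Python; the edge ids (`edge1`, `edge2`) and vehicle parameters are hypothetical placeholders, since the real edges would come from the road network exported from the city's digital twin.

```python
import xml.etree.ElementTree as ET

def build_routes(num_vehicles=5, depart_gap=2.0):
    """Build a minimal SUMO <routes> document as an ElementTree root."""
    root = ET.Element("routes")
    # A vehicle type with illustrative parameters (m/s^2, m, m/s).
    ET.SubElement(root, "vType", id="car", accel="2.6", decel="4.5",
                  length="4.5", maxSpeed="13.9")
    # A hypothetical route over two edges; real edge ids come from the
    # network file of the simulated city.
    ET.SubElement(root, "route", id="main", edges="edge1 edge2")
    # Vehicles departing at a fixed headway along the same route.
    for i in range(num_vehicles):
        ET.SubElement(root, "vehicle", id=f"veh{i}", type="car",
                      route="main", depart=f"{i * depart_gap:.1f}")
    return root

xml_text = ET.tostring(build_routes(), encoding="unicode")
```

Writing `xml_text` to a `.rou.xml` file and referencing it from a SUMO configuration would then let the simulator spawn these vehicles into the network.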

Requirements: 1. Proficient in 3D modeling software such as 3ds Max, Maya, and Blender.
2. Familiar with 3D animation engines, such as Unity3D and Unreal Engine.
3. Familiar with C# and Python programming languages.
4. Basic knowledge of deep learning.

[1] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The cityscapes dataset for semantic urban scene understanding,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 3213–3223.
[2] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite,” in 2012 IEEE conference on computer vision and pattern recognition. IEEE, 2012, pp. 3354–3361.

[3] Eclipse SUMO (Simulation of Urban MObility), www.eclipse.org/sumo/

Important notice

Students accepted to this project are expected to attend online workshops at the LRZ in April 2023, before the semester starts, unless they can demonstrate prior knowledge. More information will be provided to accepted students.