Reinforcement Learning in Transportation

Project duration: 2022-2025

Research Area: Engineering and Business

Our project aims to enhance autonomous driving through Deep Reinforcement Learning by addressing critical challenges in car following, obstacle avoidance, and simulation-to-real-world transfer. We tackle the issue of rare event handling in Deep Reinforcement Learning models by proposing novel training environments that generate diverse and challenging scenarios. Additionally, we introduce a two-step architecture for dynamic obstacle avoidance, combining supervised learning for collision risk estimation and Reinforcement Learning for enhanced situational awareness.
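
To make the two-step idea concrete, the following sketch shows one way such an architecture could be assembled: a supervised recurrent estimator maps obstacle trajectories to collision-risk scores, which are then appended to the Reinforcement Learning agent's observation. All names, dimensions, and the exact wiring are illustrative assumptions, not the published implementation.

```python
# Hypothetical sketch of the two-step architecture (PyTorch):
# step 1 estimates collision risk by supervised learning, step 2 feeds
# the risk estimates into the RL agent's observation.
import torch
import torch.nn as nn

class CollisionRiskEstimator(nn.Module):
    """Supervised model mapping an obstacle's recent trajectory
    (relative positions/velocities over time) to a risk score in [0, 1]."""
    def __init__(self, feature_dim: int = 4, hidden_dim: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, trajectories: torch.Tensor) -> torch.Tensor:
        # trajectories: (num_obstacles, time_steps, feature_dim)
        _, (h_n, _) = self.rnn(trajectories)
        return torch.sigmoid(self.head(h_n[-1]))  # (num_obstacles, 1)

def augment_observation(obs: torch.Tensor, trajectories: torch.Tensor,
                        estimator: CollisionRiskEstimator) -> torch.Tensor:
    """Append per-obstacle risk estimates to the agent's observation so
    the policy can prioritise dangerous obstacles."""
    with torch.no_grad():
        risks = estimator(trajectories).squeeze(-1)  # (num_obstacles,)
    return torch.cat([obs, risks], dim=-1)  # obs is a flat feature vector
```

In a setup like this, the estimator would be trained offline on labeled trajectories and frozen during Reinforcement Learning, so only its risk outputs enter the agent's observation.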

Leveraging vision-based Deep Reinforcement Learning, we develop an agent capable of simultaneous lane keeping and car following, demonstrating robust performance in both simulated and real-world environments. Furthermore, we present a platform-agnostic framework for effective Sim2Real transfer in autonomous driving tasks, narrowing the gap between simulation and real-world platforms and enabling seamless deployment of trained agents. Finally, we propose modified Deep Reinforcement Learning methods that integrate real-world human driving experience, enhancing agent performance and applicability in real traffic. Through rigorous testing and validation, the project advances the capabilities and reliability of autonomous driving systems.
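
As one illustration of how real-world human driving experience can enter an off-policy method such as DDPG, the sketch below mixes pre-recorded human transitions into each training batch. The buffer layout and mixing ratio are assumptions for exposition; the modified methods in the publications below may differ.

```python
# Minimal sketch: replay buffer mixing human-driving transitions with the
# agent's own experience (assumed transition format: (s, a, r, s', done)).
import random
from collections import deque

class MixedReplayBuffer:
    def __init__(self, capacity: int, human_transitions: list,
                 human_fraction: float = 0.25):
        self.agent_buffer = deque(maxlen=capacity)   # filled during training
        self.human_buffer = list(human_transitions)  # fixed, pre-recorded
        self.human_fraction = human_fraction         # share of human samples

    def push(self, transition):
        self.agent_buffer.append(transition)

    def sample(self, batch_size: int) -> list:
        n_human = int(batch_size * self.human_fraction)
        n_agent = batch_size - n_human
        batch = random.sample(self.human_buffer,
                              min(n_human, len(self.human_buffer)))
        batch += random.sample(list(self.agent_buffer),
                               min(n_agent, len(self.agent_buffer)))
        random.shuffle(batch)  # avoid ordering bias within the batch
        return batch
```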

Aims

Our project aims to improve autonomous driving using Deep Reinforcement Learning. We focus on enhancing car-following, obstacle avoidance, and Sim2Real transfer. We aim to advance the reliability and effectiveness of autonomous driving systems by:

  • addressing rare event handling,
  • integrating human driving experience, and
  • developing robust training frameworks.

Problem

The central question of our project is: How can Deep Reinforcement Learning be leveraged to improve autonomous driving by addressing challenges in car-following, obstacle avoidance, and Sim2Real transfer, ensuring robustness and effectiveness in real-world environments?

Practical example

The project’s results are applied in the development of self-driving vehicles, enhancing their ability to handle diverse and challenging real-world situations such as car-following in congested traffic and dynamic obstacle avoidance, and supporting smooth Sim2Real transfer of trained agents. These advances contribute to safer and more efficient autonomous transportation systems.

Technology

The project employs Deep Reinforcement Learning algorithms for training autonomous driving agents. It also utilizes vision-based systems for perception tasks, recurrent neural networks for collision risk estimation, and simulation environments like CARLA for training and testing. Additionally, it leverages stochastic processes to generate diverse training scenarios and platform-agnostic frameworks for seamless Sim2Real transfer.
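
The sketch below illustrates, under stated assumptions, how a randomized car-following scenario might be set up with the CARLA Python API: a leader and an ego vehicle are spawned with a random gap, and an Ornstein-Uhlenbeck process serves as one example of a stochastic leader speed profile. Host, port, gap range, and process parameters are placeholders, not the project's actual configuration.

```python
# Illustrative sketch only: assumes a CARLA server at localhost:2000.
import random
import carla

def make_random_scenario(host: str = "localhost", port: int = 2000):
    client = carla.Client(host, port)
    client.set_timeout(10.0)
    world = client.get_world()

    blueprints = world.get_blueprint_library().filter("vehicle.*")
    spawn = random.choice(world.get_map().get_spawn_points())
    leader = world.spawn_actor(random.choice(blueprints), spawn)

    # Place the ego vehicle a random gap behind the leader (assumed range).
    gap = random.uniform(8.0, 25.0)
    fwd = spawn.get_forward_vector()
    ego_loc = carla.Location(x=spawn.location.x - fwd.x * gap,
                             y=spawn.location.y - fwd.y * gap,
                             z=spawn.location.z)
    ego = world.spawn_actor(random.choice(blueprints),
                            carla.Transform(ego_loc, spawn.rotation))
    return world, leader, ego

def ou_speed_profile(n_steps: int, mu: float = 15.0, theta: float = 0.1,
                     sigma: float = 1.5, dt: float = 0.1) -> list:
    """Ornstein-Uhlenbeck process as one example of a stochastic process
    producing a diverse leader speed profile (in m/s)."""
    v, profile = mu, []
    for _ in range(n_steps):
        v += theta * (mu - v) * dt + sigma * dt ** 0.5 * random.gauss(0.0, 1.0)
        profile.append(max(v, 0.0))  # speeds cannot be negative
    return profile
```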

Outlook

The project’s impact lies in advancing the reliability and effectiveness of autonomous driving systems through innovative Deep Reinforcement Learning approaches. Future research could focus on refining training environments to capture even more diverse scenarios, enhancing Sim2Real transfer techniques for seamless deployment, and further integrating human driving experience to improve agent performance in real-world traffic. Additionally, exploring the application of these techniques to other domains beyond autonomous driving could broaden the project’s impact.

Publications

  • Hart, F., Okhrin, O., and Treiber, M., Towards robust car-following based on deep reinforcement learning, Transportation Research Part C: Emerging Technologies, 2024, 104486, DOI: 10.1016/j.trc.2024.104486
  • Hart, F., and Okhrin, O., Enhanced method for reinforcement learning based dynamic obstacle avoidance by assessment of collision risk, Neurocomputing, 568, 2024, 127097, DOI: 10.1016/j.neucom.2023.127097
  • Li, D., and Okhrin, O., Modified DDPG car-following model with a real-world human driving experience with CARLA simulator, Transportation Research Part C: Emerging Technologies, 147, 2023, 103987, DOI: 10.1016/j.trc.2022.103987
  • Paulig, N., and Okhrin, O., Robust path following on rivers using bootstrapped reinforcement learning, Ocean Engineering, 298, 2024, 117207, DOI: 10.1016/j.oceaneng.2024.117207
  • Hart, F., Okhrin, O., and Treiber, M., Vessel-following model for inland waterways based on deep reinforcement learning, Ocean Engineering, 281, 2023, 114679, DOI: 10.1016/j.oceaneng.2023.114679
  • Waltz, M., and Okhrin, O., Spatial–temporal recurrent reinforcement learning for autonomous ships, Neural Networks, 165, 2023, pp. 634-653, DOI: 10.1016/j.neunet.2023.06.015
  • Hart, F., Waltz, M., and Okhrin, O., Two-step dynamic obstacle avoidance, arXiv preprint arXiv:2311.16841, 2023
  • Li, D., and Okhrin, O., Vision-based DRL Autonomous Driving Agent with Sim2Real Transfer, arXiv preprint arXiv:2305.11589, 2023
  • Li, D., and Okhrin, O., A Platform-Agnostic Deep Reinforcement Learning Framework for Effective Sim2Real Transfer in Autonomous Driving, arXiv preprint arXiv:2304.08235, 2023
  • Waltz, M., and Okhrin, O., Two-sample testing in reinforcement learning, arXiv preprint arXiv:2201.08078, 2022

Team

Lead

Team Members

  • Dianzhao Li
  • Martin Waltz
Funded by:

  • Bundesministerium für Bildung und Forschung (Federal Ministry of Education and Research)
  • Freistaat Sachsen (Free State of Saxony)