Title: Reinforcement Learning in Transportation
Project duration: 2022-2025
Research Area: Engineering and Business
Our project aims to enhance autonomous driving through Deep Reinforcement Learning by addressing critical challenges in car-following, obstacle avoidance, and simulation-to-real-world (Sim2Real) transfer. We tackle rare-event handling in Deep Reinforcement Learning models by proposing novel training environments that generate diverse and challenging scenarios. Additionally, we introduce a two-step architecture for dynamic obstacle avoidance, combining supervised learning for collision-risk estimation with Reinforcement Learning for enhanced situational awareness.
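The two-step idea above can be illustrated with a minimal sketch: a collision-risk model is first fit with supervised labels, and its output is then appended to the observation the Reinforcement Learning agent sees. All names, features, and the logistic-regression estimator here are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_risk_estimator(X, y, lr=0.5, epochs=2000):
    """Step 1 (supervised): fit a logistic collision-risk model
    on labeled driving states (here: normalized distance and closing speed)."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad = p - y                       # gradient of the log-loss
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict_risk(w, b, state):
    """Estimated collision probability for a single state vector."""
    return float(sigmoid(state @ w + b))

def augment_observation(obs, risk):
    """Step 2 (RL): the agent's observation is the raw sensor vector
    extended with the estimated risk, improving situational awareness."""
    return np.append(obs, risk)
```

In a full system the estimator would be a learned network and the augmented observation would feed a DRL policy; the sketch only shows how the two steps connect.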
Leveraging vision-based Deep Reinforcement Learning, we develop an agent capable of simultaneous lane keeping and car-following, demonstrating robust performance in both simulated and real-world environments. Furthermore, we present a platform-agnostic framework for effective Sim2Real transfer in autonomous driving tasks, reducing the gap between platforms and enabling seamless deployment of trained agents. Finally, we propose modified Deep Reinforcement Learning methods that integrate real-world human driving experience, improving agent performance and applicability in real traffic. Through rigorous testing and validation, our project advances the capabilities and reliability of autonomous driving systems.
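One way such a platform-agnostic framework can be structured (the class and method names below are illustrative assumptions, not the project's actual API) is an adapter layer that maps each platform's raw sensors and actuators onto a single canonical observation/action interface, so the trained policy never touches platform-specific details:

```python
from abc import ABC, abstractmethod
import numpy as np

class PlatformAdapter(ABC):
    """Canonical interface: the same policy runs against any adapter,
    whether it wraps a simulator or a physical vehicle."""

    @abstractmethod
    def read_observation(self):
        """Return a canonical observation, e.g. [lane_offset_m, speed_mps]."""

    @abstractmethod
    def apply_action(self, steer, throttle):
        """Consume a canonical action: steer in [-1, 1], throttle in [0, 1]."""

class SimAdapter(PlatformAdapter):
    """Hypothetical simulator-side adapter; a real-vehicle adapter would
    implement the same two methods over its own sensor/actuator stack."""

    def __init__(self, sim_state):
        self.sim_state = sim_state

    def read_observation(self):
        return np.array([self.sim_state["offset"], self.sim_state["speed"]])

    def apply_action(self, steer, throttle):
        self.sim_state["last_cmd"] = (steer, throttle)

def run_policy(policy, adapter, steps=1):
    """Deployment loop: identical regardless of which platform is behind it."""
    for _ in range(steps):
        obs = adapter.read_observation()
        steer, throttle = policy(obs)
        adapter.apply_action(steer, throttle)
```

Because the policy only ever sees the canonical interface, moving from simulation to a physical platform reduces to writing one new adapter rather than retraining or rewiring the agent.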
Our project aims to improve autonomous driving using Deep Reinforcement Learning, focusing on car-following, obstacle avoidance, and Sim2Real transfer. We advance the reliability and effectiveness of autonomous driving systems by:
- proposing novel training environments that generate diverse, challenging scenarios to improve rare-event handling;
- introducing a two-step architecture for dynamic obstacle avoidance that combines supervised collision-risk estimation with Reinforcement Learning;
- developing a vision-based agent capable of simultaneous lane keeping and car-following;
- presenting a platform-agnostic framework for effective Sim2Real transfer across driving platforms;
- integrating real-world human driving experience into modified Deep Reinforcement Learning methods.
The central question of our project is: How can Deep Reinforcement Learning be leveraged to improve autonomous driving by addressing challenges in car-following, obstacle avoidance, and Sim2Real transfer, ensuring robustness and effectiveness in real-world environments?
The project’s results are utilized in developing self-driving vehicles, enhancing their ability to handle diverse and challenging real-world scenarios such as car-following in congested traffic, dynamic obstacle avoidance, and smooth Sim2Real transfer. These advancements contribute to safer and more efficient autonomous transportation systems.
The project employs Deep Reinforcement Learning algorithms for training autonomous driving agents. It also utilizes vision-based systems for perception tasks, recurrent neural networks for collision risk estimation, and simulation environments like CARLA for training and testing. Additionally, it leverages stochastic processes to generate diverse training scenarios and platform-agnostic frameworks for seamless Sim2Real transfer.
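The stochastic scenario generation mentioned above could, for instance, be sketched as a lead-vehicle speed profile driven by a mean-reverting random process with rare hard-braking events. This is an illustrative construction under assumed parameters, not the project's actual generator:

```python
import numpy as np

def generate_lead_profile(n_steps=600, dt=0.1, v_mean=15.0,
                          theta=0.3, sigma=1.5, brake_rate=0.02,
                          brake_decel=6.0, seed=0):
    """Lead-vehicle speed (m/s) via an Ornstein-Uhlenbeck process,
    interrupted by hard-braking episodes sampled from a Poisson process.
    The rare braking events are what exposes the agent to challenging,
    seldom-seen situations during training."""
    rng = np.random.default_rng(seed)
    v = np.empty(n_steps)
    v[0] = v_mean
    braking = 0                              # remaining braking steps
    for t in range(1, n_steps):
        if braking == 0 and rng.random() < brake_rate * dt:
            braking = int(2.0 / dt)          # trigger a 2 s hard-braking episode
        if braking > 0:
            v[t] = max(0.0, v[t - 1] - brake_decel * dt)
            braking -= 1
        else:
            # OU step: mean-reverting fluctuation around the cruise speed
            dv = theta * (v_mean - v[t - 1]) * dt \
                 + sigma * np.sqrt(dt) * rng.normal()
            v[t] = max(0.0, v[t - 1] + dv)
    return v
```

Varying the seed and the event rate yields a different, plausible lead-vehicle trajectory per training episode, which is the essential property a scenario generator needs for rare-event coverage.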
The project’s impact lies in advancing the reliability and effectiveness of autonomous driving systems through innovative Deep Reinforcement Learning approaches. Future research could focus on refining training environments to capture even more diverse scenarios, enhancing Sim2Real transfer techniques for seamless deployment, and further integrating human driving experience to improve agent performance in real-world traffic. Additionally, exploring the application of these techniques to other domains beyond autonomous driving could broaden the project’s impact.