Better Learning to Drive Autonomously with Proximal-Policy Reinforcement Learning and Visual Perception Representations
Abstract
This study presents a deep reinforcement learning approach to vision-based autonomous driving in the CARLA simulator, focusing on urban driving tasks. We implement a sub-policy Proximal Policy Optimization (PPO) algorithm and demonstrate its effectiveness in complex scenarios, including lane following, straight driving, and left/right turns, where it outperforms a single-policy approach on intersection maneuvers. To improve learning efficiency, our representation learning leverages a deep mobile network for state representation, which significantly reduces the complexity of the image features. Furthermore, integrating a single-shot multi-box detector enables the agent to perform realistic tasks, such as responding to traffic lights and maintaining a safe distance from leading vehicles, without slowing training. While the system drives stably across a variety of scenarios, its current limitations include handling highly complex decisions and adapting to diverse speed limits, owing to environmental constraints. Future work will expand the training environments and explore more advanced network architectures to improve real-world applicability and learning efficiency.
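The sub-policy dispatch described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: each high-level navigation command (the command set and names are assumptions) is mapped to its own policy, and the stub policies below stand in for the PPO-trained actor networks.

```python
import random

# Assumed set of high-level commands from a route planner; the paper's
# exact command vocabulary is not specified in the abstract.
COMMANDS = ("follow_lane", "straight", "turn_left", "turn_right")

class StubPolicy:
    """Placeholder for a PPO-trained actor network (hypothetical stub)."""

    def __init__(self, seed):
        rng = random.Random(seed)
        # One weight per feature of the (flattened) state representation;
        # a real system would use the mobile-network image embedding here.
        self.weights = [rng.uniform(-1.0, 1.0) for _ in range(4)]

    def act(self, state):
        # Toy linear map from state features to a (steer, throttle) action.
        raw = sum(w * x for w, x in zip(self.weights, state))
        steer = max(-1.0, min(1.0, raw))  # clamp to CARLA's steering range
        throttle = 0.5                    # fixed throttle for the sketch
        return steer, throttle

# One independently trained sub-policy per high-level command.
sub_policies = {cmd: StubPolicy(i) for i, cmd in enumerate(COMMANDS)}

def drive_step(state, command):
    """Dispatch the current state to the sub-policy matching the command."""
    return sub_policies[command].act(state)

steer, throttle = drive_step([0.1, -0.2, 0.3, 0.0], "turn_left")
```

The point of the dispatch is that each sub-policy only ever sees states from its own maneuver, which is one plausible reason the abstract reports better intersection performance than a single monolithic policy.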