Better Learning to Drive Autonomously with Proximal-Policy Reinforcement Learning and Visual Perception Representations

dc.contributor.author: Jirasak Sittigorn
dc.contributor.author: Racha Tungtrakool
dc.contributor.author: Jirayu Petchhan
dc.date.accessioned: 2026-05-08T19:25:45Z
dc.date.issued: 2025-11-12
dc.description.abstract: This study presents a deep reinforcement learning framework for vision-based autonomous driving in the CARLA environment, focusing on urban driving tasks. We implement a sub-policy Proximal Policy Optimization (PPO) algorithm and demonstrate its effectiveness in navigating complex scenarios, including lane following, straight driving, and left/right turns, where it outperforms a single-policy approach for intersection maneuvers. To improve learning efficiency, our representation learning leverages a deep mobile network for state representation, which significantly reduces image feature complexity. Furthermore, integrating a single-shot multi-box detector enables the agent to perform realistic tasks such as responding to traffic lights and maintaining safe distances from leading vehicles, without compromising training speed. While the system demonstrates stable driving across various scenarios, current limitations include handling highly complex decisions and adapting to diverse speed limits due to environmental constraints. Future work will focus on expanding the training environments and exploring more advanced network architectures to improve real-world applicability and learning efficiency.
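The abstract describes splitting driving control into per-maneuver sub-policies selected by a high-level command, rather than one monolithic policy. A minimal sketch of that dispatch structure is below; all names (`SubPolicyAgent`, the command strings, and the placeholder action values) are illustrative assumptions, not taken from the paper, and each stub stands in for a PPO-trained network.

```python
from typing import Callable, Dict, List, Tuple

# Each sub-policy maps a state feature vector to a (steer, throttle) action.
# In the paper's setting, each would be a separate PPO-trained network; here
# they are constant stubs so the dispatch logic is runnable on its own.
Policy = Callable[[List[float]], Tuple[float, float]]

def follow_lane(state: List[float]) -> Tuple[float, float]:
    return (0.0, 0.5)   # keep heading, moderate throttle

def go_straight(state: List[float]) -> Tuple[float, float]:
    return (0.0, 0.6)

def turn_left(state: List[float]) -> Tuple[float, float]:
    return (-0.3, 0.3)  # steer left, slow down for the intersection

def turn_right(state: List[float]) -> Tuple[float, float]:
    return (0.3, 0.3)

class SubPolicyAgent:
    """Dispatch to one sub-policy per high-level navigation command,
    mirroring the split into lane following, straight driving, and
    left/right turns described in the abstract."""

    def __init__(self) -> None:
        self.policies: Dict[str, Policy] = {
            "LANE_FOLLOW": follow_lane,
            "STRAIGHT": go_straight,
            "LEFT": turn_left,
            "RIGHT": turn_right,
        }

    def act(self, command: str, state: List[float]) -> Tuple[float, float]:
        # The high-level planner's command selects which sub-policy acts.
        return self.policies[command](state)

agent = SubPolicyAgent()
steer, throttle = agent.act("LEFT", [0.0] * 8)
```

The design choice this sketches is that each sub-policy only ever trains on its own maneuver, which narrows each policy's state distribution and is the paper's stated reason the sub-policy scheme outperforms a single policy at intersections.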
dc.identifier.doi: 10.1109/incit66780.2025.11276043
dc.identifier.uri: https://dspace.kmitl.ac.th/handle/123456789/20273
dc.subject: Autonomous Vehicle Technology and Safety
dc.subject: Advanced Neural Network Applications
dc.subject: Reinforcement Learning in Robotics
dc.title: Better Learning to Drive Autonomously with Proximal-Policy Reinforcement Learning and Visual Perception Representations
dc.type: Article
