Learning to Drive with Deep Reinforcement Learning

dc.contributor.author: Nut Chukamphaeng
dc.contributor.author: Kitsuchart Pasupa
dc.contributor.author: Martin Antenreiter
dc.contributor.author: Peter Auer
dc.date.accessioned: 2026-05-08T19:20:38Z
dc.date.issued: 2021-01-21
dc.description.abstract: Autonomous cars are important because they promise improved safety and fuel efficiency. Various techniques have been described that consider only a single task, for example, recognition, prediction, or planning with supervised learning techniques. Some limitations of previous studies are: (1) human bias introduced by human demonstrations; (2) the need for multiple components, such as localization and road mapping, combined with complicated fusion logic; (3) in reinforcement learning, the focus has mostly been on the learning algorithms rather than on evaluating different sensors and reward functions. We describe end-to-end reinforcement learning for an autonomous car, which uses only a single reinforcement learning model. Further, we designed a new, efficient reward function that makes the agent learn faster (an 18% improvement across all settings over the baseline reward function) and built the car with only the necessary perceptions and sensors. We show that it performs better with state-of-the-art off-policy reinforcement learning algorithms for continuous actions (SAC, TD3).
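The abstract highlights a custom reward function designed to speed up learning. As an illustrative sketch only (the paper's actual reward function is not given in this record), a shaped driving reward commonly combines forward progress with penalties for heading error and lateral offset from the track centre; all names and constants below are hypothetical:

```python
import math

def shaped_reward(speed: float, track_pos: float, angle: float) -> float:
    """Illustrative lane-keeping reward (NOT the paper's actual function).

    speed:     forward speed of the car (m/s)
    track_pos: lateral offset from the track centre, normalised to [-1, 1]
    angle:     heading error relative to the track axis (radians)
    """
    if abs(track_pos) >= 1.0:
        # Car has left the track: return a large fixed penalty.
        return -10.0
    # Reward progress along the track axis; penalise misalignment
    # and lateral offset (a common reward-shaping pattern).
    return speed * (math.cos(angle) - abs(math.sin(angle)) - abs(track_pos))
```

Driving straight down the centre (`track_pos = 0`, `angle = 0`) yields the full speed reward, while any deviation reduces it smoothly, giving the agent a denser learning signal than a sparse crash/finish reward.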
dc.identifier.doi: 10.1109/kst51265.2021.9415770
dc.identifier.uri: https://dspace.kmitl.ac.th/handle/123456789/17617
dc.subject: Reinforcement Learning in Robotics
dc.subject: Autonomous Vehicle Technology and Safety
dc.subject: Adversarial Robustness in Machine Learning
dc.title: Learning to Drive with Deep Reinforcement Learning
dc.type: Article