Learning to Drive with Deep Reinforcement Learning
| dc.contributor.author | Nut Chukamphaeng | |
| dc.contributor.author | Kitsuchart Pasupa | |
| dc.contributor.author | Martin Antenreiter | |
| dc.contributor.author | Peter Auer | |
| dc.date.accessioned | 2026-05-08T19:20:38Z | |
| dc.date.issued | 2021-01-21 | |
| dc.description.abstract | Autonomous cars are important because they promise improved safety and fuel efficiency. Prior techniques typically address only a single task, for example recognition, prediction, or planning, using supervised learning. Previous studies have several limitations: (1) human bias inherited from human demonstrations; (2) the need for multiple components, such as localization and road mapping, combined with complicated fusion logic; and (3) in reinforcement learning, the focus has mostly been on the learning algorithms rather than on evaluating different sensors and reward functions. We describe end-to-end reinforcement learning for an autonomous car that uses only a single reinforcement learning model. Further, we designed a new, efficient reward function that makes the agent learn faster (an 18% improvement across all settings compared to the baseline reward function) and built the car with only the necessary perceptions and sensors. We show that it performs better with state-of-the-art off-policy reinforcement learning algorithms for continuous action spaces (SAC, TD3). | |
| dc.identifier.doi | 10.1109/kst51265.2021.9415770 | |
| dc.identifier.uri | https://dspace.kmitl.ac.th/handle/123456789/17617 | |
| dc.subject | Reinforcement Learning in Robotics | |
| dc.subject | Autonomous Vehicle Technology and Safety | |
| dc.subject | Adversarial Robustness in Machine Learning | |
| dc.title | Learning to Drive with Deep Reinforcement Learning | |
| dc.type | Article |
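The abstract mentions a custom shaped reward function designed to speed up learning. The paper's actual reward is not given in this record, but a minimal sketch of what a shaped driving reward might look like, with hypothetical inputs (speed, heading angle relative to the track, and lateral distance from the lane center), is:

```python
import math

def driving_reward(speed, track_angle, dist_from_center, track_width):
    """Hypothetical shaped reward for a lane-keeping agent.

    Rewards forward progress along the track axis and penalizes
    sideways motion and drifting from the lane center. This is an
    illustrative sketch, not the reward function from the paper.
    """
    # Component of velocity along the track direction: maximal when aligned.
    progress = speed * math.cos(track_angle)
    # Sideways component of velocity: zero when driving straight along the track.
    lateral = speed * abs(math.sin(track_angle))
    # Normalized lateral offset in [0, 1] at the lane edge, scaled by speed
    # so off-center driving is penalized more at higher speeds.
    deviation = abs(dist_from_center) / (track_width / 2.0)
    return progress - lateral - deviation * speed

# Perfectly aligned and centered: the reward equals the speed.
print(driving_reward(speed=20.0, track_angle=0.0,
                     dist_from_center=0.0, track_width=10.0))  # → 20.0
```

A reward of this shape is dense (non-zero at every timestep), which is one common way shaped rewards accelerate learning for off-policy continuous-control algorithms such as SAC and TD3 compared to sparse rewards given only at lap completion or failure.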