Energy-Efficient Continual Learning for Autonomous Driving

dc.contributor.author: Qi Ding Ng
dc.contributor.author: Chu Kiong Loo
dc.contributor.author: Kitsuchart Pasupa
dc.contributor.author: Nat Dilokthanakul
dc.contributor.author: Jie Zhang
dc.date.accessioned: 2026-05-08T19:23:38Z
dc.date.issued: 2023-10-26
dc.description.abstract: Our work highlights a primary challenge of Autonomous Driving (AD): Catastrophic Forgetting (CF), in which the AD system loses previously acquired knowledge when it encounters new scenarios. Because retraining the model on past data is infeasible under the computational, power, and storage constraints of embedded devices, we design an experiment using Avalanche Continual Learning (CL) training strategies to identify which strategies excel at this task, and we combine the promising ones in pursuit of a more balanced and efficient trade-off between performance and energy consumption. Our experiment validates the candidates against a new benchmark that introduces natural distribution change and temporal correlation between input images. We find that although combining CL strategies yields higher resistance to CF, the slight accuracy gain does not justify the additional computation once energy consumption is accounted for, making a simple Replay strategy the best solution on the Continual Learning benchmark for Autonomous Driving: Online Continual Classification (CLAD-C). Our proposal delivers a 65.80% improvement over the baseline on our proposed accuracy-power ratio metric.
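The abstract's central trade-off can be illustrated with a minimal sketch. The record does not give the paper's exact formulation of the accuracy-power ratio metric, so the definition below (mean accuracy divided by energy consumed) and all numbers are illustrative assumptions, not the authors' results:

```python
def accuracy_power_ratio(accuracy_pct: float, energy_wh: float) -> float:
    """Hypothetical accuracy-power ratio: accuracy (%) per watt-hour.

    Higher is better: more accuracy delivered per unit of energy spent.
    The paper's actual metric definition may differ.
    """
    if energy_wh <= 0:
        raise ValueError("energy consumption must be positive")
    return accuracy_pct / energy_wh


# Made-up numbers comparing a plain Replay strategy with a heavier combined
# strategy: the combination is slightly more accurate but costs far more
# energy, so Replay wins on the ratio -- mirroring the abstract's conclusion.
replay = accuracy_power_ratio(accuracy_pct=70.0, energy_wh=50.0)
combined = accuracy_power_ratio(accuracy_pct=72.0, energy_wh=90.0)
print(replay > combined)  # True: Replay is the more energy-efficient choice
```

Under such a metric, a small accuracy gain is outweighed whenever the relative increase in energy cost is larger, which is the efficiency argument the abstract makes for plain Replay.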
dc.identifier.doi: 10.1109/icitee59582.2023.10317642
dc.identifier.uri: https://dspace.kmitl.ac.th/handle/123456789/19167
dc.subject: Domain Adaptation and Few-Shot Learning
dc.subject: Multimodal Machine Learning Applications
dc.subject: Advanced Neural Network Applications
dc.title: Energy-Efficient Continual Learning for Autonomous Driving
dc.type: Article
