
Multi-Head Attention for Joint Multi-Modal Vehicle Motion Forecasting

Abstract: This paper presents a novel vehicle motion forecasting method based on multi-head attention. It produces joint forecasts for all vehicles in a road scene as sequences of multi-modal probability density functions of their positions. Its architecture uses multi-head attention to account for complete interactions between all vehicles, and long short-term memory layers for encoding and forecasting. It relies solely on vehicle position tracks, needs no maneuver definitions, and does not represent the scene with a spatial grid. This makes it more versatile than similar models while combining several forecasting capabilities: joint forecasting with interactions, uncertainty estimation, and multi-modality. The resulting prediction likelihood outperforms state-of-the-art models on the same dataset.
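The attention mechanism the abstract refers to can be sketched in a few lines. The snippet below is a minimal NumPy illustration of scaled dot-product multi-head attention applied to a set of per-vehicle encodings, so that every vehicle attends to every other vehicle in the scene; the weight matrices, dimensions, and function name are illustrative assumptions, not the authors' implementation (which also includes LSTM encoder/decoder layers and multi-modal density output heads).

```python
import numpy as np

def multi_head_attention(x, wq, wk, wv, wo, n_heads):
    """Scaled dot-product multi-head attention over a set of vehicle
    encodings x of shape (n_vehicles, d_model). Every vehicle attends
    to every other, mixing information across the whole scene."""
    n, d_model = x.shape
    d_head = d_model // n_heads
    # Project to queries, keys, values and split into heads:
    # (n, d_model) -> (n_heads, n, d_head)
    q = (x @ wq).reshape(n, n_heads, d_head).transpose(1, 0, 2)
    k = (x @ wk).reshape(n, n_heads, d_head).transpose(1, 0, 2)
    v = (x @ wv).reshape(n, n_heads, d_head).transpose(1, 0, 2)
    # Scaled dot-product scores: (n_heads, n, n)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    # Row-wise softmax gives attention weights per head
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ v  # (n_heads, n, d_head)
    # Concatenate heads and apply the output projection
    return out.transpose(1, 0, 2).reshape(n, d_model) @ wo
```

Because the vehicles form an unordered set and no positional encoding is used, this layer is permutation-equivariant: reordering the input vehicles reorders the outputs identically, which is one reason set-attention suits joint scene forecasting without a spatial grid.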
Document type :
Conference papers

Cited literature: 27 references
Contributor: Guillaume Sandou
Submitted on : Monday, June 8, 2020 - 4:32:07 PM
Last modification on : Tuesday, July 20, 2021 - 3:06:27 AM




  • HAL Id : hal-02860895, version 1


Jean Mercat, Thomas Gilles, Nicole El Zoghby, Guillaume Sandou, Dominique Beauvois, et al.. Multi-Head Attention for Joint Multi-Modal Vehicle Motion Forecasting. IEEE International Conference on Robotics and Automation, May 2020, Paris, France. ⟨hal-02860895⟩


