Multi-Head Attention for Joint Multi-Modal Vehicle Motion Forecasting - Archive ouverte HAL
Conference Papers, Year: 2020



Abstract

This paper presents a novel vehicle motion forecasting method based on multi-head attention. It produces joint forecasts for all vehicles in a road scene as sequences of multi-modal probability density functions of their positions. Its architecture uses multi-head attention to account for complete interactions between all vehicles, and long short-term memory layers for encoding and forecasting. It relies solely on vehicle position tracks, does not need maneuver definitions, and does not represent the scene with a spatial grid. This allows it to be more versatile than similar models while combining several forecasting capabilities, namely joint forecasting with interactions, uncertainty estimation, and multi-modality. The resulting prediction likelihood outperforms state-of-the-art models on the same dataset.
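To illustrate the interaction mechanism the abstract describes, here is a minimal NumPy sketch of multi-head attention applied across per-vehicle feature vectors (for instance, the last hidden state of a per-vehicle LSTM encoder). All names, dimensions, and weight shapes are illustrative assumptions, not the paper's implementation; the actual model additionally uses LSTM layers for encoding and forecasting and a multi-modal output head.

```python
import numpy as np

def multi_head_attention(x, num_heads, Wq, Wk, Wv, Wo):
    """Scaled dot-product attention across vehicles, split into heads.

    x: (n_vehicles, d_model) -- one encoded feature vector per vehicle.
    Returns the attended features and the (num_heads, n, n) attention weights.
    """
    n, d = x.shape
    dh = d // num_heads
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Split the model dimension into heads: (num_heads, n_vehicles, dh).
    q = q.reshape(n, num_heads, dh).transpose(1, 0, 2)
    k = k.reshape(n, num_heads, dh).transpose(1, 0, 2)
    v = v.reshape(n, num_heads, dh).transpose(1, 0, 2)
    # Each head scores every vehicle against every other vehicle, so all
    # pairwise interactions are covered without any spatial grid.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(dh)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)           # softmax over vehicles
    out = (w @ v).transpose(1, 0, 2).reshape(n, d)  # concatenate heads
    return out @ Wo, w

# Illustrative dimensions: 5 vehicles, 16-dim features, 4 heads.
rng = np.random.default_rng(0)
n, d, heads = 5, 16, 4
x = rng.normal(size=(n, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))
attended, attn = multi_head_attention(x, heads, Wq, Wk, Wv, Wo)
```

In this sketch each vehicle attends to every other vehicle in one step, which is what allows joint forecasting of the whole scene from position tracks alone.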
Main file: ArticleICRA_social_attention_v2.pdf (1.45 MB). Origin: files produced by the author(s).

Dates and versions

hal-02860895, version 1 (08-06-2020)

Identifiers

  • HAL Id: hal-02860895, version 1

Cite

Jean Mercat, Thomas Gilles, Nicole El Zoghby, Guillaume Sandou, Dominique Beauvois, et al. Multi-Head Attention for Joint Multi-Modal Vehicle Motion Forecasting. IEEE International Conference on Robotics and Automation, May 2020, Paris, France. ⟨hal-02860895⟩