Update [15.04.2024]

This website is discontinued for now. For information on the benchmark, please refer to the main repository.

Update [31.03.2024]

We are happy to announce that we have released a preview version of a major revision of the benchmark.

The STSC Benchmark

Existing benchmarks targeting the overall performance of trajectory prediction models offer little insight into a model's behavior under specific conditions. To address this, we propose a new benchmark intended to complement existing ones. It consists of synthetically generated and modified real-world trajectories from established datasets, with scenario-dependent training and test splits. The benchmark defines a hierarchy of three inference tasks, representation learning, de-noising, and prediction, each comprising several test cases that target specific aspects of a given machine learning model. This enables a differentiated evaluation of a model's behavior and generalization capabilities. The result is a sanity check for single-trajectory models that helps prevent failure cases and highlights requirements for improving modeling capabilities.
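
As a rough illustration of how such a task-wise evaluation might be organized, the sketch below runs a model over test cases grouped by the three inference tasks and reports one displacement error per case. All names here (TASKS, evaluate, average_displacement_error, the test-case dictionary layout) are hypothetical assumptions made for illustration and do not reflect the benchmark's actual API.

```python
"""Hypothetical sketch of a per-task, per-test-case evaluation loop.
Names and metric choices are illustrative assumptions, not the
benchmark's actual interface."""
from typing import Callable, Dict, List

import numpy as np

# The three inference tasks, ordered from basic to advanced (assumed ordering).
TASKS: List[str] = ["representation_learning", "de-noising", "prediction"]


def average_displacement_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean Euclidean distance between predicted and ground-truth points."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())


def evaluate(
    model: Callable[[str, np.ndarray], np.ndarray],
    test_cases: Dict[str, List[dict]],
) -> Dict[str, float]:
    """Run each task's test cases and report one error score per case.

    `model(task, observation)` is assumed to return a trajectory of the
    same shape as the case's ground truth; `test_cases` maps each task
    name to a list of dicts with "name", "observation", "ground_truth".
    """
    scores: Dict[str, float] = {}
    for task in TASKS:
        for case in test_cases.get(task, []):
            pred = model(task, case["observation"])
            scores[f"{task}/{case['name']}"] = average_displacement_error(
                pred, case["ground_truth"]
            )
    return scores
```

Keeping one score per (task, test case) pair, rather than a single aggregate number, is what allows the differentiated view of model behavior described above.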

Coming Soon

The benchmark, including code and evaluation instructions, will be available soon.

Contact

Have questions about our benchmark? Feel free to contact Ronny Hug or Stefan Becker.