Microscopic Modelling of mixed traffic entities using Imitation Learning

In many road designs, different types of agents (pedestrians, cyclists, vehicles) share the same surface. Simulations can be used to study the safety implications of a road design for these agents. Since agent behavior plays a major role, it must be modeled accurately. In addition, the simulation should run fast enough to allow large-scale studies, and the model should be able to explain its decisions. In reality, many factors influence a traffic agent's navigation decisions, which makes traffic modelling a hard problem for simple rule-based solutions.

To address this, data-driven approaches such as supervised machine learning have been applied to model an agent's trajectory. While rule-based approaches like the Social Force Model offer better explanations than data-driven approaches, the latter provide more accurate solutions. Recently, Reinforcement Learning methods have been applied to traffic modelling; in this formulation, the modelling task is cast as a mapping from states to actions. One branch of Reinforcement Learning suited to this is Imitation Learning, which comprises two main methodologies: Behavior Cloning and Inverse Reinforcement Learning. Both extract a policy from a dataset of expert behavior in order to imitate it, yielding a policy that replicates the traffic entities' actions in any given state.
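As an illustration of the Behavior Cloning branch, the sketch below fits a small neural-network policy to expert state-action pairs by supervised regression. The state layout, the 2D-acceleration action, the network size, and the use of PyTorch are illustrative assumptions and not the project's actual setup.

# Minimal Behavior Cloning sketch (illustrative assumptions throughout):
# a policy network is fit to expert state-action pairs by regression.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

STATE_DIM = 8    # assumed: own position, velocity, and two nearest-neighbour features
ACTION_DIM = 2   # assumed: 2D acceleration command

# Placeholder data; in practice these would come from recorded mixed-traffic trajectories.
expert_states = torch.randn(1000, STATE_DIM)
expert_actions = torch.randn(1000, ACTION_DIM)

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, ACTION_DIM),
)

loader = DataLoader(TensorDataset(expert_states, expert_actions),
                    batch_size=64, shuffle=True)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):
    for states, actions in loader:
        optimizer.zero_grad()
        loss = loss_fn(policy(states), actions)  # imitate the expert's action
        loss.backward()
        optimizer.step()

# At simulation time, the learned policy maps any observed state to an action.
with torch.no_grad():
    predicted_action = policy(expert_states[:1])

Inverse Reinforcement Learning differs in that it first recovers a reward function explaining the expert data and then derives a policy from it, which can offer better generalization and more interpretable behavior than direct regression.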

In this doctoral project, an Imitation Learning approach to agent trajectory modelling will be explored and evaluated against performance, accuracy, and explainability metrics.

Researcher: Yasin Yousif, M. Sc.