About the Track

User simulation offers a scalable alternative to expensive user studies for evaluating interactive search systems. However, the community lacks standardized methods for validating simulators and for establishing trust in their results, which hinders widespread adoption and slows progress. This track aims to establish a systematic framework for evaluating user simulators, to identify the criteria that make a simulator "good enough," and to develop best practices for simulation-based evaluation.

Track Tasks

Evaluation Setup

Tasks

Develop simulators that can realistically mimic human behavior when interacting with a conversational search system (a sketch contrasting the two tasks follows this list).
  • Task 1: Turn-level next utterance prediction
  • Task 2: Session-level end-to-end conversation generation
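
To make the distinction concrete, here is a minimal Python sketch contrasting the two tasks. All names, signatures, and the agent interface are illustrative assumptions, not the official track interface; the released guidelines and skeleton code will define the actual formats.

    # Illustrative sketch only; function names and signatures are
    # hypothetical, not the official track API.

    def next_utterance(history: list[str]) -> str:
        """Task 1 (turn level): given the conversation so far,
        return the simulated user's next utterance."""
        # Trivial placeholder policy: always ask a generic follow-up.
        return "Could you tell me more about that?"

    def generate_session(agent, opening: str, max_turns: int = 5) -> list[str]:
        """Task 2 (session level): alternate simulated-user and agent
        turns to produce a complete conversation end to end."""
        history = [opening]
        for _ in range(max_turns):
            history.append(agent(history))           # agent responds
            history.append(next_utterance(history))  # simulated user replies
        return history

Note that a session-level simulator can be built by looping a turn-level one, but it must additionally decide when the user's information need is satisfied and the conversation should end; the fixed max_turns above is only a stand-in for that stopping decision.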

Setup

Organizers provide a set of conversational agents and an API that mediates interactions.
  • Participants can implement their simulators with their preferred technology stack
  • We will provide skeleton code including the required API routes and libraries (a minimal sketch of such a service follows this list)
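
To make this concrete, the following is a minimal sketch of a simulator service in Python, assuming a REST-style interface. The framework choice (FastAPI), the route name, and the request/response schema are all assumptions for illustration; the official routes and payload formats will come from the track's skeleton code.

    # Hypothetical simulator service; the route and schema below are
    # placeholders, not the official track specification.

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class TurnRequest(BaseModel):
        utterances: list[str]  # conversation so far, oldest turn first

    class TurnResponse(BaseModel):
        utterance: str  # the simulated user's next utterance

    @app.post("/next-utterance", response_model=TurnResponse)
    def next_utterance(req: TurnRequest) -> TurnResponse:
        # Trivial baseline: open with a fixed request, then always ask a
        # generic follow-up; a real simulator would condition on the full
        # history in req.utterances.
        if not req.utterances:
            return TurnResponse(utterance="I'm looking for an introduction to user simulation.")
        return TurnResponse(utterance="Can you tell me more about that?")

Served locally (e.g., uvicorn simulator:app if the code above is saved as simulator.py), this exposes a single POST endpoint that the track's mediating API could call.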

Timeline

Important Dates

• December 2025: Track announcement and call for participation
• April 2026: Release of detailed track guidelines and training dataset
• May 2026: Collection of test data for simulator evaluation begins
• August 2026: Track API opens for simulator development
• September 2026: Submission of simulations for evaluation
• November 2026: Presentation of results at the TREC conference

Resources

Getting Started

Access datasets, documentation, and related workshops to help you develop and evaluate your user simulation systems.

• Planning session: slides presented at the TREC 2026 User Simulation Track planning session
• Sim4IA Workshops: a related workshop series on Simulations for Information Access at SIGIR

Meet the Team

Organizers

• Krisztian Balog, University of Stavanger, Norway
• Nolwenn Bernard, TH Köln, Germany
• Timo Breuer, TH Köln, Germany
• Marcel Gohsen, Bauhaus-Universität Weimar, Germany
• Christin Katharina Kreutz, TH Mittelhessen, Germany
• Andreas Kruff, TH Köln, Germany
• Philipp Schaer, TH Köln, Germany
• Paul Thomas, Microsoft, Australia
• ChengXiang Zhai, University of Illinois at Urbana-Champaign, USA

Get Involved

Join the TREC User Simulation Mailing List

We welcome researchers from academia and industry who want to contribute to advancing user simulation evaluation. Join us in shaping the future of system evaluation.