TemporalBench

Overview

TemporalBench is a multi-domain benchmark for evaluating the temporal understanding and reasoning capabilities of large language models (LLMs) and agent-based systems over real numerical time-series.

Unlike traditional benchmarks that focus primarily on forecasting accuracy, TemporalBench is designed to diagnose how models interpret temporal structure, ground temporal patterns in context, and reason about future behavior under explicit events. To this end, the benchmark decomposes temporal intelligence into four complementary task families (T1–T4), each targeting a distinct temporal competency.

The benchmark spans four real-world domains—retail, healthcare, energy, and physical systems—and supports both multiple-choice reasoning tasks and numerical forecasting objectives.

The paper describing this benchmark is TemporalBench: A Benchmark for Evaluating LLM-Based Agents on Contextual and Event-Informed Time Series Tasks (https://arxiv.org/abs/2602.13272). We also maintain a public leaderboard and welcome submissions from state-of-the-art models: https://huggingface.co/spaces/Melady/TemporalBench_Leaderboard


Task Design

TemporalBench organizes evaluation tasks into four task families:

  • T1 – Historical Time-Series Understanding
    Interpretation of intrinsic temporal properties such as trends, volatility, seasonality, and anomalies.

  • T2 – Context-Free Future Prediction
    Prediction of future behavior based solely on historical temporal signals, using numerical forecasts and qualitative judgments.

  • T3 – Contextual Temporal Reasoning
    Reasoning over historical time-series grounded in domain-specific textual context.

  • T4 – Event-Informed Prediction
    Conditional and counterfactual reasoning about how future temporal behavior changes under explicitly specified events.

The four task families isolate distinct temporal competencies rather than forming an increasing-difficulty hierarchy.


Data Sources and Scope

TemporalBench is derived from existing real-world time-series datasets across four domains: retail, healthcare, energy, and physical systems.

This dataset does not redistribute any raw data from those source datasets.
Only derived task instances, annotations, prompts, and evaluation metadata are released.

In particular, no raw MIMIC-IV data, patient records, or identifiers are included.
Users must obtain access to the original datasets independently and comply with their respective licenses and data use agreements.


Annotations and Ground Truth

All ground-truth labels are generated automatically using unified, rule-based procedures that operate on historical and future time-series segments.

Key properties of the annotation process include:

  • No manual annotation
  • No model-in-the-loop labeling
  • Ground truth computation independent of contextual descriptions and event narratives
  • Explicit handling of uncertainty when signals are weak or ambiguous (see the illustrative sketch after this list)
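
As a rough, hypothetical illustration of this style of labeling (this is not the benchmark's actual labeling code), the sketch below assigns a coarse trend label to a segment and falls back to an explicit "uncertain" label when the fitted slope is too weak, relative to the series' variability, to call a direction:

    import numpy as np

    def label_trend(values, flat_threshold=1.0):
        """Assign a coarse trend label to a single time-series segment.

        Illustrative sketch: fit a least-squares slope and return
        "uncertain" when the scale-normalized slope is too weak to call.
        """
        y = np.asarray(values, dtype=float)
        x = np.arange(len(y))
        slope = np.polyfit(x, y, 1)[0]   # least-squares linear slope
        scale = np.std(y) + 1e-8         # series variability (avoid div-by-zero)
        # Total fitted change over the window, relative to variability.
        normalized = slope * (len(y) - 1) / scale
        if abs(normalized) < flat_threshold:
            return "uncertain"
        return "increasing" if normalized > 0 else "decreasing"

    print(label_trend([1.0, 1.1, 1.3, 1.6, 2.0]))   # -> "increasing"
    print(label_trend([2.0, 2.1, 1.9, 2.2, 1.8]))   # -> "uncertain"

The actual TemporalBench labels come from the unified procedures described above; this snippet only illustrates the general pattern of thresholded, rule-based labeling with an explicit uncertainty bucket.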

Dataset Files

TemporalBench provides all benchmark resources as a set of structured JSONL files organized into two non-overlapping data splits, along with evaluation utilities.

1. Main Benchmark Set (with labels)

  • task_merged_dev_with_labels.jsonl
  • task_merged_dev_with_labels_tiers.jsonl

This is the primary benchmark set. Each task instance includes the corresponding ground-truth answers or labels, enabling researchers to compute evaluation metrics locally.

Results reported on the leaderboard should be computed on this split.
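
As a minimal sketch of loading this split (assuming only that each line of the file is a standalone JSON object; inspect the actual schema before relying on specific field names), the standard library is sufficient:

    import json

    # Load the labeled benchmark split, one task instance per JSONL line.
    tasks = []
    with open("task_merged_dev_with_labels.jsonl", "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                tasks.append(json.loads(line))

    print(f"Loaded {len(tasks)} task instances")
    print(sorted(tasks[0].keys()))  # inspect the schema of one instance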


2. Evaluation Set (without labels)

  • task_merged_no_labels.jsonl
  • task_merged_no_labels_tiers.jsonl

This split does not include answers or labels and is intended for blind evaluation. It is smaller than the labeled benchmark set and does not overlap with it.

To submit results to the benchmark, participants are expected to:

  1. Report metrics computed on the labeled benchmark set
  2. Submit model predictions for this unlabeled split

We will use these predictions for verification and leaderboard maintenance.
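
As a minimal sketch of step 2 (the exact submission schema should be taken from the leaderboard instructions; the field names "task_id" and "prediction" and the my_model stub below are hypothetical placeholders), predictions can be written as one JSON object per line:

    import json

    def my_model(task: dict) -> str:
        """Placeholder for your model or agent; returns a prediction."""
        return "A"

    # Write one prediction record per task instance in the unlabeled split.
    # Field names here are illustrative, not the required submission schema.
    with open("task_merged_no_labels.jsonl", "r", encoding="utf-8") as fin, \
         open("predictions.jsonl", "w", encoding="utf-8") as fout:
        for line in fin:
            task = json.loads(line)
            record = {"task_id": task.get("task_id"),
                      "prediction": my_model(task)}
            fout.write(json.dumps(record) + "\n")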


3. Evaluation Utilities

  • forecast_metrics_utils.py

We provide this utility file as a reference implementation for computing forecasting metrics used in TemporalBench. Researchers are encouraged to use or adapt these functions to ensure consistent evaluation.
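
For orientation, the snippet below shows common forecasting metrics (MAE, RMSE, sMAPE) in the style such a utility might implement. It is an independent sketch, not the contents of forecast_metrics_utils.py, which remains the reference implementation for official evaluation:

    import numpy as np

    def mae(y_true, y_pred):
        """Mean absolute error."""
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        return float(np.mean(np.abs(y_true - y_pred)))

    def rmse(y_true, y_pred):
        """Root mean squared error."""
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

    def smape(y_true, y_pred, eps=1e-8):
        """Symmetric mean absolute percentage error, in percent."""
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        denom = (np.abs(y_true) + np.abs(y_pred)) / 2 + eps
        return float(100.0 * np.mean(np.abs(y_true - y_pred) / denom))

    print(mae([1, 2, 3], [1.1, 1.9, 3.2]),
          rmse([1, 2, 3], [1.1, 1.9, 3.2]),
          smape([1, 2, 3], [1.1, 1.9, 3.2]))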

Intended Use

TemporalBench is intended for:

  • Benchmarking LLMs and agent frameworks on time-series understanding and reasoning
  • Diagnostic evaluation of contextual and event-aware temporal reasoning
  • Comparative analysis of agent designs beyond numerical forecasting accuracy

Data Example

(Figure 3 in the paper) Example from the PSML dataset, where a simulated event (a heatwave) defines the history–future split. The figure shows the historical series, the simulated future, the ground-truth future, and the corresponding T1–T4 task formulations.

License

This dataset is released under the Apache License 2.0.


Contribution

We would love to hear from the broader machine learning research community and welcome contributions. To contribute, please open a pull request or file an issue; we will follow up shortly.

Contact person: Muyan Weng (muyanwen@usc.edu).


Citation

If you use TemporalBench in your work, please cite:

@misc{weng2026temporalbenchbenchmarkevaluatingllmbased,
      title={TemporalBench: A Benchmark for Evaluating LLM-Based Agents on Contextual and Event-Informed Time Series Tasks}, 
      author={Muyan Weng and Defu Cao and Wei Yang and Yashaswi Sharma and Yan Liu},
      year={2026},
      eprint={2602.13272},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2602.13272}, 
}