RespiCompass is ECDC’s scenario modelling hub for viral respiratory diseases. Unlike forecasts, which provide short-term predictions based on trends in observed data, scenario modelling explores mid- to long-term projections that incorporate specific uncertainties. These scenarios are designed to support policymakers by providing insights into the potential impact of public health interventions. RespiCompass brings together multiple modelling groups working collaboratively, with input from disease experts in respiratory virus surveillance. We apply ensemble approaches that combine projections from different modelling groups under a unified set of scenario parameters, which accounts for uncertainty across diverse modelling methodologies and embeds it directly in the ensemble results.


For the 2024-2025 Scenario Round 1, the objectives were threefold: (1) to anticipate, to the extent possible, the burden of COVID-19 and influenza in the EU/EEA as measured by selected surveillance indicators for individuals aged 65 years and older under pre-defined scenarios, (2) to estimate the impact of vaccination campaigns on these burden indicators in the same age group, and (3) to foster collaboration among modellers, policymakers, and advisors across the EU/EEA, building capacity and bringing modelling insights closer to practical decision-making. Descriptions of influenza and COVID-19 scenarios, along with assumptions shared across modelling teams, can be found under Insights.
Our scenario modelling hub operates on yearly cycles. First, policy-relevant questions are developed in collaboration with disease experts, and scenarios are designed to address these questions. Scenarios are then shared with participating modelling teams, who work on their own models and submit outputs in a pre-agreed format. In the next step, a model ensemble is created by combining outputs from all submitted models. All results are summarised and reported through various communication channels, including oral presentations to target audiences and a web report. The final step in the cycle is an evaluation of the round, assessing its overall impact and identifying areas for improvement.


Model ensemble methodology
Aggregating outputs from multiple models yields more robust predictions of outcomes and associated uncertainty [Howerton et al., 2023]. We used the linear opinion pool (LOP) method to aggregate model projections. Because model aggregates can be sensitive to outlying predictions, we applied robust trimming [Howerton et al., 2023]. Note, however, that trimming can introduce bias when between-model variability is high and/or the number of contributing models is small.
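The following is a minimal sketch of how an LOP ensemble with trimming could be computed from per-model projection samples. It assumes equal model weights and symmetric exterior trimming of the most extreme CDF values at each evaluation point; the function name, the synthetic sample arrays, and the exact trimming rule are illustrative assumptions, not the hub's actual implementation.

```python
import numpy as np


def lop_ensemble_quantiles(model_samples, quantile_levels, n_trim=1):
    """Linear opinion pool (LOP) of per-model projection samples with
    symmetric trimming of the most extreme CDF values at each grid point.

    model_samples   : list of 1-D arrays, one per contributing model
                      (hypothetical samples of a burden indicator).
    quantile_levels : quantile levels to report for the ensemble.
    n_trim          : CDF values dropped at each end before averaging
                      (requires len(model_samples) > 2 * n_trim).
    """
    # Common evaluation grid spanning all models' projections
    lo = min(s.min() for s in model_samples)
    hi = max(s.max() for s in model_samples)
    grid = np.linspace(lo, hi, 1000)

    # Empirical CDF of each model evaluated on the grid
    cdfs = np.array([
        np.searchsorted(np.sort(s), grid, side="right") / len(s)
        for s in model_samples
    ])  # shape: (n_models, n_grid)

    # Trimming: at each grid point, drop the n_trim lowest and highest CDF values
    cdfs_sorted = np.sort(cdfs, axis=0)
    trimmed = cdfs_sorted[n_trim:len(model_samples) - n_trim, :]

    # LOP: average the remaining CDFs with equal model weights
    ensemble_cdf = trimmed.mean(axis=0)

    # Invert the pooled CDF to obtain ensemble quantiles
    return np.interp(quantile_levels, ensemble_cdf, grid)


# Toy usage with four hypothetical models and synthetic samples
rng = np.random.default_rng(1)
toy_models = [rng.gamma(shape=5.0, scale=20.0, size=500) for _ in range(4)]
print(lop_ensemble_quantiles(toy_models, [0.05, 0.5, 0.95]))
```

Averaging on the CDF scale is what distinguishes the LOP from averaging quantiles directly; it tends to retain between-model disagreement in the width of the ensemble intervals rather than averaging it away.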
When combining ensemble estimates from more than one scenario, we first obtain the model samples from all relevant scenarios (a single-scenario analysis includes only that scenario's samples), then pool the samples together, and finally create the ensemble following the LOP and trimming method outlined above.
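As an illustration of that multi-scenario case, the sketch below reuses the lop_ensemble_quantiles function from the previous example. How each model's samples are pooled across scenarios (here, simple concatenation) is an assumption made for the example, as are the scenario and model identifiers.

```python
import numpy as np


def cross_scenario_ensemble(samples_by_scenario, quantile_levels, n_trim=1):
    """Combine projections across several scenarios into one ensemble.

    samples_by_scenario : dict mapping scenario id -> {model id: 1-D array}.
    Pooling rule (concatenating each model's samples across scenarios) is an
    illustrative assumption.
    """
    # Keep only models that contributed to every relevant scenario
    common_models = set.intersection(
        *(set(models) for models in samples_by_scenario.values())
    )

    # Pool each model's samples across the relevant scenarios
    pooled_per_model = [
        np.concatenate([samples_by_scenario[sc][m] for sc in samples_by_scenario])
        for m in sorted(common_models)
    ]

    # Build the ensemble exactly as in the single-scenario case
    # (lop_ensemble_quantiles is defined in the previous sketch)
    return lop_ensemble_quantiles(pooled_per_model, quantile_levels, n_trim=n_trim)


# Toy usage: two hypothetical scenarios, three hypothetical models each
rng = np.random.default_rng(2)
toy = {
    "scenario_A": {f"model_{i}": rng.gamma(5.0, 20.0, 500) for i in range(3)},
    "scenario_B": {f"model_{i}": rng.gamma(6.0, 20.0, 500) for i in range(3)},
}
print(cross_scenario_ensemble(toy, [0.05, 0.5, 0.95]))
```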