|
---
license: cc-by-sa-4.0
metrics:
- mse
pipeline_tag: graph-ml
---
|
|
|
# AIFS Single - v0.2.1 |
|
|
|
<!-- Provide a quick summary of what the model is/does. --> |
|
|
|
Here, we introduce the **Artificial Intelligence Forecasting System (AIFS)**, a data-driven forecast
model developed by the European Centre for Medium-Range Weather Forecasts (ECMWF).
|
|
|
 |
|
|
|
We show that AIFS produces highly skilful forecasts for upper-air variables, surface weather parameters and
tropical cyclone tracks. AIFS is run four times daily alongside ECMWF's physics-based NWP model, and its forecasts
are available to the public under ECMWF's open data policy.
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
<!-- Provide a longer summary of what this model is. --> |
|
|
|
AIFS is based on a graph neural network (GNN) encoder and decoder and a sliding-window transformer processor,
and is trained on ECMWF's ERA5 reanalysis and on ECMWF's operational numerical weather prediction (NWP) analyses.
|
|
|
<div style="display: flex;"> |
|
<img src="encoder_graph.jpeg" alt="Encoder graph" style="width: 50%;"/> |
|
<img src="decoder_graph.jpeg" alt="Decoder graph" style="width: 50%;"/> |
|
</div> |
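
Schematically, the encoder-processor-decoder layout can be pictured as follows; this is a simplified sketch with placeholder modules, not the actual Anemoi implementation:

```
import torch.nn as nn

class EncoderProcessorDecoder(nn.Module):
    # Simplified AIFS-style layout: a GNN encoder maps the lat/lon grid
    # onto a latent mesh, a sliding-window transformer processes the
    # latent states, and a GNN decoder maps them back to the grid.
    def __init__(self, encoder: nn.Module, processor: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder      # grid -> latent mesh
        self.processor = processor  # transformer on the latent mesh
        self.decoder = decoder      # latent mesh -> grid

    def forward(self, x):
        z = self.encoder(x)
        z = self.processor(z)
        return self.decoder(z)
```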
|
|
|
It has a flexible and modular design and supports several levels of parallelism to enable training on
high-resolution input data. AIFS forecast skill is assessed by comparing its forecasts to NWP analyses
and to direct observational data.
|
|
|
- **Developed by:** ECMWF |
|
- **Model type:** Encoder-processor-decoder model |
|
- **License:** CC BY-SA 4.0 |
|
|
|
### Model Sources |
|
|
|
<!-- Provide the basic links for the model. --> |
|
|
|
|
|
- **Repository:** [Anemoi](https://anemoi-docs.readthedocs.io/en/latest/index.html) |
|
- **Paper:** https://arxiv.org/pdf/2406.01465 |
|
|
|
## How to Get Started with the Model |
|
|
|
Use the code below to get started with the model. |
|
|
|
```
# Create and activate a fresh conda environment
export CONDA_ENV=aifs-env
conda create -n ${CONDA_ENV} python=3.10
conda activate ${CONDA_ENV}

# Install FlashAttention and the Anemoi inference packages
# (quotes keep the extras bracket from being expanded by the shell)
pip install flash-attn
pip install "anemoi-inference[plugin]" anemoi-models==0.2

# Run a forecast from the checkpoint, starting from the example initial conditions
ai-models anemoi --checkpoint aifs_single_v0.2.1.ckpt --file example_20241107_12_n320.grib
```
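
Note that `flash-attn` targets recent NVIDIA GPUs and may compile its CUDA kernels from source during installation, which can take some time.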
|
|
|
## Training Details |
|
|
|
### Training Data |
|
|
|
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> |
|
|
|
AIFS is trained to produce 6-hour forecasts. It receives as input a representation of the atmospheric states
at \\(t_{-6h}\\) and \\(t_{0}\\), and then forecasts the state at time \\(t_{+6h}\\).
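
For illustration, a minimal sketch of this autoregressive stepping; the `model` callable and the state objects are hypothetical stand-ins for the actual inference code:

```
def rollout(model, x_prev, x_now, n_steps):
    # model(x_prev, x_now) maps the states at (t-6h, t0) to the state
    # at t+6h; repeated application extends the forecast in 6-hour steps
    forecasts = []
    for _ in range(n_steps):
        x_next = model(x_prev, x_now)
        forecasts.append(x_next)
        x_prev, x_now = x_now, x_next
    return forecasts
```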
|
|
|
The full list of input and output fields is shown below: |
|
|
|
| Field | Level type | Input/Output |
|-------|------------|--------------|
| Geopotential, horizontal and vertical wind components, specific humidity, temperature | Pressure levels (hPa): 50, 100, 150, 200, 250, 300, 400, 500, 600, 700, 850, 925, 1000 | Both |
| Surface pressure, mean sea-level pressure, skin temperature, 2 m temperature, 2 m dewpoint temperature, 10 m horizontal wind components, total column water | Surface | Both |
| Total precipitation, convective precipitation | Surface | Output |
| Land-sea mask, orography, standard deviation of sub-grid orography, slope of sub-grid orography, insolation, latitude/longitude, time of day/day of year | Surface | Input |
|
|
|
Input and output states are normalised to zero mean and unit variance for each level. Some of
the forcing variables, such as orography, are min-max normalised.
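
A minimal sketch of these two normalisations, assuming per-level statistics precomputed over the training data (function and variable names are illustrative):

```
def standardise(field, mean, std):
    # Prognostic fields: zero mean, unit variance per level
    return (field - mean) / std

def min_max_normalise(field, fmin, fmax):
    # Forcing fields such as orography: rescaled to [0, 1]
    return (field - fmin) / (fmax - fmin)
```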
|
|
|
### Training Procedure |
|
|
|
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> |
|
|
|
- **Pre-training**: Pre-training was performed on ERA5 for the years 1979 to 2020 with a cosine learning rate (LR) schedule and a total
of 260,000 steps. The LR is increased from 0 to \\(10^{-4}\\) during the first 1000 steps and then annealed to a minimum
of \\(3 × 10^{-7}\\) (a sketch of this schedule follows the list).
- **Fine-tuning I**: Pre-training is followed by rollout training on ERA5 for the years 1979 to 2018, this time with an LR
of \\(6 × 10^{-7}\\). As in [Lam et al. [2023]](https://doi.org/10.48550/arXiv.2212.12794), we increase the
rollout every 1000 training steps up to a maximum of 72 h (12 auto-regressive steps).
- **Fine-tuning II**: Finally, to further improve forecast performance, we fine-tune the model on operational real-time IFS NWP
analyses. This is done via another round of rollout training, this time using IFS operational analysis data
from 2019 and 2020.
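
A minimal sketch of the pre-training LR schedule described above (linear warm-up followed by cosine annealing; the actual training code may differ in detail):

```
import math

def learning_rate(step, total_steps=260_000, warmup_steps=1_000,
                  lr_max=1e-4, lr_min=3e-7):
    # Linear warm-up from 0 to lr_max over the first 1000 steps
    if step < warmup_steps:
        return lr_max * step / warmup_steps
    # Cosine annealing from lr_max down to lr_min
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * progress))
```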
|
|
|
|
|
#### Training Hyperparameters |
|
|
|
- **Optimizer:** We use *AdamW* (Loshchilov and Hutter [2019]) with \\(\beta_1 = 0.9\\) and \\(\beta_2 = 0.95\\).
|
|
|
- **Loss function:** The loss function is an area-weighted mean squared error (MSE) between the target atmospheric state
and the prediction.
|
|
|
- **Loss scaling:** A loss scaling is applied for each output variable. The scaling was chosen empirically such that |
|
all prognostic variables have roughly equal contributions to the loss, with the exception of the vertical velocities, |
|
for which the weight was reduced. The loss weights also decrease linearly with height, which means that levels in |
|
the upper atmosphere (e.g., 50 hPa) contribute relatively little to the total loss value. |
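
For concreteness, here is a sketch of this objective; tensor shapes and weight names are assumptions for illustration, not the Anemoi implementation:

```
import torch

def area_weighted_mse(pred, target, area_weights, var_weights):
    # pred, target: [batch, grid_points, variables]
    # area_weights: [grid_points], proportional to grid-cell area
    # var_weights:  [variables], the per-variable loss scalings
    w = area_weights[None, :, None] * var_weights[None, None, :]
    return (w * (pred - target) ** 2).sum() / (w.sum() * pred.shape[0])

# Optimizer with the stated beta coefficients:
# optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.95))
```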
|
|
|
#### Speeds, Sizes, Times |
|
|
|
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> |
|
|
|
Data parallelism is used for training, with a batch size of 16. One model instance is split across four 40 GB A100
GPUs within one node. Training is done using mixed precision (Micikevicius et al. [2018]), and the entire process
takes about one week using 64 GPUs in total.
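
As an illustration of the mixed-precision setup, a single training step might look as follows; this is a generic PyTorch sketch (`model`, `loss_fn` and the data are placeholders), not the actual training loop:

```
import torch

scaler = torch.cuda.amp.GradScaler()

def train_step(model, inputs, targets, optimizer, loss_fn):
    optimizer.zero_grad()
    # Forward pass in reduced precision
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = loss_fn(model(inputs), targets)
    # Scale the loss to avoid gradient underflow in float16
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```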
|
|
|
## Evaluation |
|
|
|
<!-- This section describes the evaluation protocols and provides the results. --> |
|
|
|
### Testing Data, Factors & Metrics |
|
|
|
#### Testing Data |
|
|
|
<!-- This should link to a Dataset Card if possible. --> |
|
|
|
[More Information Needed]
|
|
|
#### Factors |
|
|
|
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> |
|
|
|
[More Information Needed]
|
|
|
#### Metrics |
|
|
|
<!-- These are the evaluation metrics being used, ideally with a description of why. --> |
|
|
|
[More Information Needed]
|
|
|
### Results |
|
|
|
See the [AIFS paper](https://arxiv.org/pdf/2406.01465) for a detailed evaluation of forecast skill against NWP analyses and direct observations.
|
|
|
#### Summary |
|
|
|
[More Information Needed]
|
|
|
## Model Examination
|
|
|
<!-- Relevant interpretability work for the model goes here --> |
|
|
|
[More Information Needed]
|
|
|
## Environmental Impact |
|
|
|
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> |
|
|
|
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). |
|
|
|
- **Hardware Type:** NVIDIA A100 (40 GB)
- **Hours used:** About one week of training on 64 GPUs (see Speeds, Sizes, Times above)
- **Cloud Provider:** N/A; trained on the Leonardo supercomputer at CINECA (PRACE allocation)
- **Compute Region:** Italy
- **Carbon Emitted:** [More Information Needed]
|
|
|
## Technical Specifications
|
|
|
### Model Architecture and Objective |
|
|
|
AIFS is an encoder-processor-decoder model: a GNN encoder and decoder coupled to a sliding-window transformer processor, trained with an area-weighted MSE objective (see Model Description and Training Details above).
|
|
|
### Compute Infrastructure |
|
|
|
Training was carried out on the Leonardo supercomputer at CINECA, Italy (see Hardware below).
|
|
|
#### Hardware |
|
|
|
One AIFS model instance is split across four NVIDIA A100 40 GB GPUs within a single node; training used 64 GPUs in total (see Speeds, Sizes, Times above).

We acknowledge PRACE for awarding us access to Leonardo, CINECA, Italy.
|
|
|
|
|
#### Software |
|
|
|
The model is built on ECMWF's [Anemoi](https://anemoi-docs.readthedocs.io/en/latest/index.html) framework (`anemoi-models`, `anemoi-inference`), which is PyTorch-based, together with FlashAttention (see How to Get Started with the Model above).
|
|
|
## Citation |
|
|
|
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> |
|
|
|
If you use this model in your work, please cite it as follows: |
|
|
|
**BibTeX:** |
|
|
|
```
@article{lang2024aifs,
  title={AIFS -- ECMWF's data-driven forecasting system},
  author={Lang, Simon and Alexe, Mihai and Chantry, Matthew and Dramsch, Jesper and Pinault, Florian and Raoult, Baudouin and Clare, Mariana CA and Lessig, Christian and Maier-Gerber, Michael and Magnusson, Linus and others},
  journal={arXiv preprint arXiv:2406.01465},
  year={2024}
}
```
|
|
|
**APA:** |
|
|
|
```
Lang, S., Alexe, M., Chantry, M., Dramsch, J., Pinault, F., Raoult, B., ... & Rabier, F. (2024). AIFS - ECMWF's data-driven forecasting system. arXiv preprint arXiv:2406.01465.
```
|
|
|
|
|
## More Information |
|
|
|
For further details, see the [AIFS paper](https://arxiv.org/pdf/2406.01465) and the [Anemoi documentation](https://anemoi-docs.readthedocs.io/en/latest/index.html).
|
|
|
|