|
|
--- |
|
|
license: cc-by-sa-4.0 |
|
|
metrics: |
|
|
- mse |
|
|
pipeline_tag: graph-ml |
|
|
--- |
|
|
|
|
|
# AIFS Single - v0.2.1 |
|
|
|
|
|
<!-- Provide a quick summary of what the model is/does. --> |
|
|
|
|
|
Here, we introduce the **Artificial Intelligence Forecasting System (AIFS)**, a data-driven forecast model developed by the European Centre for Medium-Range Weather Forecasts (ECMWF).
|
|
|
|
|
 |
|
|
|
|
|
We show that AIFS produces highly skilled forecasts for upper-air variables, surface weather parameters and |
|
|
tropical cyclone tracks. AIFS is run four times daily alongside ECMWF’s physics-based NWP model and forecasts |
|
|
are available to the public under ECMWF’s open data policy. |
|
|
|
|
|
## Model Details |
|
|
|
|
|
### Model Description |
|
|
|
|
|
<!-- Provide a longer summary of what this model is. --> |
|
|
|
|
|
AIFS is based on a graph neural network (GNN) encoder and decoder and a sliding-window transformer processor. It is trained on ECMWF's ERA5 reanalysis and ECMWF's operational numerical weather prediction (NWP) analyses.
|
|
|
|
|
<div style="display: flex;"> |
|
|
<img src="encoder_graph.jpeg" alt="Encoder graph" style="width: 50%;"/> |
|
|
<img src="decoder_graph.jpeg" alt="Decoder graph" style="width: 50%;"/> |
|
|
</div> |
|
|
|
|
|
It has a flexible, modular design and supports several levels of parallelism to enable training on high-resolution input data. AIFS forecast skill is assessed by comparing its forecasts to NWP analyses and direct observational data.
|
|
|
|
|
- **Developed by:** ECMWF |
|
|
- **Model type:** Encoder-processor-decoder model |
|
|
- **License:** CC BY-SA 4.0 |
|
|
|
|
|
### Model Sources |
|
|
|
|
|
<!-- Provide the basic links for the model. --> |
|
|
|
|
|
|
|
|
- **Repository:** [Anemoi](https://anemoi-docs.readthedocs.io/en/latest/index.html) |
|
|
- **Paper:** https://arxiv.org/pdf/2406.01465 |
|
|
|
|
|
## How to Get Started with the Model |
|
|
|
|
|
Use the code below to get started with the model. |
|
|
|
|
|
```shell
|
|
export CONDA_ENV=aifs-env |
|
|
conda create -n ${CONDA_ENV} python=3.10 |
|
|
conda activate ${CONDA_ENV} |
|
|
|
|
|
pip install torch==2.4
|
|
pip install anemoi-inference[plugin] anemoi-models==0.2 |
|
|
pip install ninja |
|
|
pip install flash-attn --no-build-isolation |
|
|
|
|
|
ai-models anemoi --checkpoint aifs_single_v0.2.1.ckpt --file example_20241107_12_n320.grib |
|
|
``` |
|
|
> **Note:** There is a known issue with PyTorch 2.5, flash-attention and CUDA 12.4. For now, keep PyTorch at version 2.4.
|
|
|
|
|
The above command writes the forecast results to `anemoi.grib`.
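The output file can then be opened and inspected in Python. A minimal sketch, assuming `xarray` with the `cfgrib` engine (backed by eccodes) is installed; the variable name `t2m` is illustrative, not guaranteed to be in the file:

```python
def open_forecast(path="anemoi.grib"):
    """Open a GRIB forecast file as an xarray Dataset.
    Assumes xarray + cfgrib (eccodes) are installed."""
    import xarray as xr  # deferred import: sketch has no hard dependency
    return xr.open_dataset(path, engine="cfgrib")

def plot_field(ds, var="t2m"):
    """Quick-look map of one surface field (variable name is illustrative)."""
    import matplotlib.pyplot as plt
    ds[var].plot()  # xarray's built-in matplotlib wrapper
    plt.show()
```

Depending on the GRIB contents, `cfgrib` may require `filter_by_keys` arguments to select a single hypercube; consult the cfgrib documentation if `open_dataset` raises a conflict error.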
|
|
|
|
|
|
|
## Training Details |
|
|
|
|
|
### Training Data |
|
|
|
|
|
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> |
|
|
|
|
|
AIFS is trained to produce 6-hour forecasts. It receives as input a representation of the atmospheric state at \\(t_{-6h}\\) and \\(t_{0}\\), and forecasts the state at time \\(t_{+6h}\\).
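Longer forecasts are produced autoregressively: each 6-hour prediction is fed back as input for the next step. A minimal sketch, where `step` is a stand-in for the real model (here a trivial linear extrapolation, for illustration only):

```python
def step(state_prev, state_now):
    """Stand-in for the AIFS model: maps (t-6h, t0) -> t+6h.
    A real model is learned; here we just extrapolate linearly."""
    return [2 * now - prev for prev, now in zip(state_prev, state_now)]

def rollout(state_prev, state_now, n_steps):
    """Autoregressively roll the model forward n_steps * 6 hours."""
    states = [state_prev, state_now]
    for _ in range(n_steps):
        states.append(step(states[-2], states[-1]))
    return states

# Four autoregressive steps = a 24-hour forecast from two initial states.
traj = rollout([0.0], [1.0], 4)
```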
|
|
|
|
|
The full list of input and output fields is shown below: |
|
|
|
|
|
| Field | Level type | Input/Output | |
|
|
|-------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------|--------------| |
|
|
| Geopotential, horizontal and vertical wind components, specific humidity, temperature | Pressure levels: 50, 100, 150, 200, 250, 300, 400, 500, 600, 700, 850, 925, 1000 hPa | Both |
|
|
| Surface pressure, mean sea-level pressure, skin temperature, 2 m temperature, 2 m dewpoint temperature, 10 m horizontal wind components, total column water | Surface | Both | |
|
|
| Total precipitation, convective precipitation | Surface | Output | |
|
|
| Land-sea mask, orography, standard deviation of sub-grid orography, slope of sub-scale orography, insolation, latitude/longitude, time of day/day of year | Surface | Input | |
|
|
|
|
|
Input and output states are normalised to unit variance and zero mean for each level. Some of |
|
|
the forcing variables, like orography, are min-max normalised. |
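The two normalisations above can be sketched as follows (statistics are computed per field/level; the sample values are illustrative):

```python
import numpy as np

def standardise(x, mean, std):
    """Zero-mean / unit-variance normalisation used for most fields."""
    return (x - mean) / std

def min_max(x, lo, hi):
    """Min-max normalisation used for some forcings, e.g. orography."""
    return (x - lo) / (hi - lo)

# Illustrative 2 m temperature samples in kelvin.
temps = np.array([250.0, 270.0, 290.0])
z = standardise(temps, temps.mean(), temps.std())  # mean 0, std 1
m = min_max(temps, temps.min(), temps.max())       # range [0, 1]
```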
|
|
|
|
|
### Training Procedure |
|
|
|
|
|
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> |
|
|
|
|
|
- **Pre-training**: Pre-training was performed on ERA5 for the years 1979 to 2020 with a cosine learning-rate (LR) schedule and a total of 260,000 steps. The LR is increased from 0 to \\(10^{-4}\\) over the first 1000 steps, then annealed to a minimum of \\(3 × 10^{-7}\\).
|
|
- **Fine-tuning I**: Pre-training is followed by rollout training on ERA5 for the years 1979 to 2018, this time with an LR of \\(6 × 10^{-7}\\). As in [Lam et al. [2023]](https://doi.org/10.48550/arXiv.2212.12794), we increase the rollout window every 1000 training steps up to a maximum of 72 h (12 auto-regressive steps).
|
|
- **Fine-tuning II**: Finally, to further improve forecast performance, we fine-tune the model on operational real-time IFS NWP analyses. This is done via another round of rollout training, this time using IFS operational analysis data from 2019 and 2020.
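The pre-training LR schedule can be sketched as below. The warmup length, peak and floor values are from the text; the exact interpolation shape between them is an assumption (linear warmup, standard cosine decay):

```python
import math

def lr_schedule(step, warmup=1000, total=260_000, peak=1e-4, floor=3e-7):
    """Linear warmup from 0 to `peak` over `warmup` steps,
    then cosine annealing down to `floor` at step `total`."""
    if step < warmup:
        return peak * step / warmup
    t = (step - warmup) / (total - warmup)        # progress in [0, 1]
    return floor + 0.5 * (peak - floor) * (1 + math.cos(math.pi * t))
```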
|
|
|
|
|
|
|
|
#### Training Hyperparameters |
|
|
|
|
|
- **Optimizer:** We use *AdamW* (Loshchilov and Hutter [2019]) with the \\(β\\)-coefficients set to 0.9 and 0.95. |
|
|
|
|
|
- **Loss function:** The loss function is an area-weighted mean squared error (MSE) between the target atmospheric state |
|
|
and prediction. |
|
|
|
|
|
- **Loss scaling:** A loss scaling is applied for each output variable. The scaling was chosen empirically such that |
|
|
all prognostic variables have roughly equal contributions to the loss, with the exception of the vertical velocities, |
|
|
for which the weight was reduced. The loss weights also decrease linearly with height, which means that levels in |
|
|
the upper atmosphere (e.g., 50 hPa) contribute relatively little to the total loss value. |
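A sketch of the scaled, area-weighted MSE. Using \\(\cos(\text{latitude})\\) as the grid-cell area proxy is an assumption about the exact weighting; the per-variable scale factors here are illustrative, not the operational values:

```python
import numpy as np

def area_weighted_mse(pred, target, lats_deg, var_scale):
    """MSE weighted by cos(latitude) (area proxy, an assumption) and by a
    per-variable loss scaling. Shapes: pred/target (n_vars, n_lat),
    lats_deg (n_lat,), var_scale (n_vars,)."""
    w = np.cos(np.deg2rad(lats_deg))           # area weight per latitude
    w = w / w.sum()                            # normalise weights to sum 1
    sq_err = (pred - target) ** 2              # squared error per grid point
    per_var = (sq_err * w).sum(axis=1)         # area-weighted MSE per variable
    return float((var_scale * per_var).sum())  # scaled sum over variables

pred = np.array([[1.0, 1.0], [0.0, 0.0]])
target = np.zeros((2, 2))
loss = area_weighted_mse(pred, target,
                         lats_deg=np.array([0.0, 60.0]),
                         var_scale=np.array([1.0, 0.5]))
```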
|
|
|
|
|
#### Speeds, Sizes, Times |
|
|
|
|
|
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> |
|
|
|
|
|
Data parallelism is used for training, with a batch size of 16. One model instance is split across four 40 GB A100 GPUs within one node. Training is done in mixed precision (Micikevicius et al. [2018]), and the entire process takes about one week on 64 GPUs. The checkpoint size is 1.19 GB; it does not include the optimizer state.
|
|
|
|
|
## Evaluation |
|
|
|
|
|
<!-- This section describes the evaluation protocols and provides the results. --> |
|
|
|
|
|
### Testing Data, Factors & Metrics |
|
|
|
|
|
#### Testing Data |
|
|
|
|
|
<!-- This should link to a Dataset Card if possible. --> |
|
|
|
|
|
[More Information Needed]
|
|
|
|
|
#### Factors |
|
|
|
|
|
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> |
|
|
|
|
|
[More Information Needed]
|
|
|
|
|
#### Metrics |
|
|
|
|
|
<!-- These are the evaluation metrics being used, ideally with a description of why. --> |
|
|
|
|
|
[More Information Needed]
|
|
|
|
|
### Results |
|
|
|
|
|
[More Information Needed]
|
|
|
|
|
#### Summary |
|
|
|
|
|
|
|
|
|
|
## Model Examination
|
|
|
|
|
<!-- Relevant interpretability work for the model goes here --> |
|
|
|
|
|
[More Information Needed]
|
|
|
|
|
## Technical Specifications |
|
|
|
|
|
### Hardware |
|
|
|
|
|
<!-- {{ hardware_requirements | default("[More Information Needed]", true)}} --> |
|
|
|
|
|
We acknowledge PRACE for awarding us access to Leonardo, CINECA, Italy. This AIFS version was trained on 64 A100 GPUs (40 GB).
|
|
|
|
|
### Software |
|
|
|
|
|
The model was developed and trained using the [AnemoI framework](https://anemoi-docs.readthedocs.io/en/latest/index.html). AnemoI is a framework for developing machine-learning weather forecasting models. It comprises packages for preparing training datasets, training ML models, and registering datasets and trained models. AnemoI provides tools for operational inference, including interfaces to verification software. As a framework, it seeks to handle many of the complexities that meteorological organisations share, allowing them to easily train models from existing recipes with their own data.
|
|
|
|
|
## Citation |
|
|
|
|
|
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> |
|
|
|
|
|
If you use this model in your work, please cite it as follows: |
|
|
|
|
|
**BibTeX:** |
|
|
|
|
|
``` |
|
|
@article{lang2024aifs, |
|
|
title={AIFS-ECMWF's data-driven forecasting system}, |
|
|
author={Lang, Simon and Alexe, Mihai and Chantry, Matthew and Dramsch, Jesper and Pinault, Florian and Raoult, Baudouin and Clare, Mariana CA and Lessig, Christian and Maier-Gerber, Michael and Magnusson, Linus and others}, |
|
|
journal={arXiv preprint arXiv:2406.01465}, |
|
|
year={2024} |
|
|
} |
|
|
``` |
|
|
|
|
|
**APA:** |
|
|
|
|
|
``` |
|
|
Lang, S., Alexe, M., Chantry, M., Dramsch, J., Pinault, F., Raoult, B., ... & Rabier, F. (2024). AIFS-ECMWF's data-driven forecasting system. arXiv preprint arXiv:2406.01465. |
|
|
``` |
|
|
|
|
|
|
|
|
## More Information |
|
|
|
|
|
For more details, see the [AIFS paper](https://arxiv.org/pdf/2406.01465).
|
|
|
|
|
|