Update README.md
---
tags:
- time-series
---

# Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting

![lag-llama architecture](images/lagllama.webp)

Lag-Llama is the <b>first open-source foundation model for time series forecasting</b>.

[[Tweet Thread](https://twitter.com/arjunashok37/status/1755261111233114165)]

[[Model Weights](https://huggingface.co/time-series-foundation-models/Lag-Llama)] [[Colab Demo 1: Zero-Shot Forecasting](https://colab.research.google.com/drive/1DRAzLUPxsd-0r8b-o4nlyFXrjw_ZajJJ?usp=sharing)] [[Colab Demo 2: Preliminary Finetuning](https://colab.research.google.com/drive/1uvTmh-pe1zO5TeaaRVDdoEWJ5dFDI-pA?usp=sharing)]

[[Paper](https://arxiv.org/abs/2310.08278)]

[[Video](https://www.youtube.com/watch?v=Mf2FOzDPxck)]

____

<b>Updates</b>:

* **16-Apr-2024**: Released pretraining and finetuning scripts to replicate the experiments in the paper. See [Reproducing Experiments in the Paper](https://github.com/time-series-foundation-models/lag-llama?tab=readme-ov-file#reproducing-experiments-in-the-paper) for details.
* **9-Apr-2024**: We have released a 15-minute video 🎥 on Lag-Llama on [YouTube](https://www.youtube.com/watch?v=Mf2FOzDPxck).
* **5-Apr-2024**: Added a [section](https://colab.research.google.com/drive/1DRAzLUPxsd-0r8b-o4nlyFXrjw_ZajJJ?authuser=1#scrollTo=Mj9LXMpJ01d7&line=6&uniqifier=1) in Colab Demo 1 on the importance of tuning the context length for zero-shot forecasting. Added a [best practices section](https://github.com/time-series-foundation-models/lag-llama?tab=readme-ov-file#best-practices) to the README, with recommendations for finetuning. These recommendations will be demonstrated with an example in [Colab Demo 2](https://colab.research.google.com/drive/1uvTmh-pe1zO5TeaaRVDdoEWJ5dFDI-pA?usp=sharing) soon.
* **4-Apr-2024**: We have updated our requirements file with new versions of certain packages. Please update/recreate your environment if you have previously used the code locally.

____

**Current Features**:

💫 <b>Zero-shot forecasting</b> on a dataset of <b>any frequency</b> for <b>any prediction length</b>, using <a href="https://colab.research.google.com/drive/1DRAzLUPxsd-0r8b-o4nlyFXrjw_ZajJJ?usp=sharing" target="_blank">Colab Demo 1</a>.<br/>

💫 <b>Finetuning</b> on a dataset using [Colab Demo 2](https://colab.research.google.com/drive/1uvTmh-pe1zO5TeaaRVDdoEWJ5dFDI-pA?usp=sharing).

💫 <b>Reproducing</b> experiments in the paper using the released scripts. See [Reproducing Experiments in the Paper](https://github.com/time-series-foundation-models/lag-llama?tab=readme-ov-file#reproducing-experiments-in-the-paper) for details.

**Note**: Please see the [best practices section](https://github.com/time-series-foundation-models/lag-llama?tab=readme-ov-file#best-practices) when using the model for zero-shot prediction and finetuning.
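Lag-Llama is probabilistic: it outputs sample paths over the prediction horizon rather than single point forecasts, and such forecasts are typically scored with quantile-based metrics (e.g. CRPS, as in the paper). The sketch below shows the general shape of that kind of scoring; the function name and exact normalization are a simplification of GluonTS-style weighted quantile loss written for illustration, not part of Lag-Llama's API.

```python
import numpy as np

def mean_weighted_quantile_loss(samples, target, quantiles=(0.1, 0.5, 0.9)):
    """Score sample-based probabilistic forecasts (lower is better).

    samples: array of shape (num_samples, prediction_length) -- forecast sample paths
    target:  array of shape (prediction_length,) -- observed values
    """
    losses = []
    for q in quantiles:
        pred_q = np.quantile(samples, q, axis=0)  # empirical q-quantile per time step
        diff = target - pred_q
        # pinball (quantile) loss, summed over the horizon
        losses.append(2.0 * np.sum(np.maximum(q * diff, (q - 1.0) * diff)))
    # average over quantiles, normalized by the magnitude of the target
    return float(np.mean(losses) / np.sum(np.abs(target)))

rng = np.random.default_rng(0)
target = np.array([10.0, 12.0, 11.0, 13.0])
tight = target + rng.normal(0.0, 0.1, size=(200, 4))    # samples concentrated near the target
diffuse = target + rng.normal(5.0, 3.0, size=(200, 4))  # biased, high-variance samples
assert mean_weighted_quantile_loss(tight, target) < mean_weighted_quantile_loss(diffuse, target)
```

A well-calibrated, concentrated forecast distribution scores lower; metrics of this style are what zero-shot and finetuned forecasts are compared on.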
____

## Reproducing Experiments in the Paper

To replicate the pretraining setup used in the paper, please see [the pretraining script](scripts/pretrain.sh). Once a model is pretrained, instructions to finetune it with the setup in the paper can be found in [the finetuning script](scripts/finetune.sh).
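Given the repository layout referenced above (`scripts/pretrain.sh` and `scripts/finetune.sh`), a typical invocation might look like the following. This is a hedged sketch: any flags, dataset paths, or environment details the scripts expect are assumptions and are not shown here.

```shell
# Sketch: clone the repository and run the released scripts.
# Assumes a working Python environment and whatever data the scripts require.
git clone https://github.com/time-series-foundation-models/lag-llama.git
cd lag-llama
pip install -r requirements.txt   # requirements were updated 4-Apr-2024; recreate old environments

# Pretrain with the paper's setup, then finetune the resulting checkpoint
bash scripts/pretrain.sh
bash scripts/finetune.sh
```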

## Best Practices