Adding new projects
README.md (changed)

@@ -12,15 +12,21 @@ Feel free to send in pull requests or use this code however you'd like.\
**For GitHub**: I'd recommend creating pull requests and discussions on the [official huggingface repo](https://huggingface.co/Anthonyg5005/hf-scripts)

## existing files
- [Manage branches (create/delete)](https://huggingface.co/Anthonyg5005/hf-scripts/blob/main/manage%20branches.py)
- [EXL2 Private Quant V1](https://colab.research.google.com/drive/1ssr_4iSHnfvusFLpJI-PyorzXuNpS5B5?usp=sharing) (wip, mostly working)
## work in progress/not tested ([unfinished](https://huggingface.co/Anthonyg5005/hf-scripts/tree/unfinished) branch)
- Auto exl2 upload
  - Will create a repo and upload quants from 2-6 bpw (or custom) to individual branches (see the sketch after this list)
- Upload folder
  - Will allow uploading a folder to an existing or new repo
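
Neither of these scripts is finished yet, but the branch-per-quant idea can be sketched with the huggingface_hub client. This is only an illustration of the plan, not the actual WIP script: the repo name, bpw list, and local folder layout below are placeholders.

```python
# Hypothetical sketch of a branch-per-quant upload (not the actual WIP script).
# Assumes quant folders like ./quants/4.0bpw already exist locally.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` or HF_TOKEN
repo_id = "YourUser/model-exl2"  # placeholder repo name

# Create the target repo if it doesn't exist yet
api.create_repo(repo_id, private=True, exist_ok=True)

for bpw in ["2.0", "3.0", "4.0", "5.0", "6.0"]:  # or a custom list
    branch = f"{bpw}bpw"
    # One branch per quant size
    api.create_branch(repo_id, branch=branch, exist_ok=True)
    # Upload that quant's folder to its own branch
    api.upload_folder(
        folder_path=f"./quants/{branch}",
        repo_id=repo_id,
        revision=branch,
        commit_message=f"Upload {bpw} bpw quant",
    )
```
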
## other recommended files
- [Download models (download HF Hub models) [Oobabooga]](https://github.com/oobabooga/text-generation-webui/blob/main/download-model.py)

@@ -28,6 +34,9 @@

- Manage branches
  - Run the script and follow the prompts. You must be logged in to HF Hub; if you aren't, you will need a WRITE token, which you can get in your [HuggingFace settings](https://huggingface.co/settings/tokens). It may get some updates in the future for handling more situations; all active updates will be on the [unfinished](https://huggingface.co/Anthonyg5005/hf-scripts/tree/unfinished) branch. Colab and Kaggle keys are supported. (A sketch of the equivalent API calls follows this list.)
- EXL2 Private Quant
  - Allows you to quantize to exl2 using Colab. This version creates an exl2 quant that you can download privately. It should work in any Linux ipynb environment as long as CUDA is installed (mostly tested).
- Download models
  - Make sure you have [requests](https://pypi.org/project/requests/) and [tqdm](https://pypi.org/project/tqdm/) installed. You can install them with `pip install requests tqdm`. To use the script, open a terminal and run `python download-model.py USER/MODEL:BRANCH`. There's also a `--help` flag to show the available arguments. To download from private repositories, make sure to log in using `huggingface-cli login` or (not recommended) the `HF_TOKEN` environment variable.
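
If you'd rather not keep a separate script around, roughly the same download can be done with the huggingface_hub library. This is just an alternative sketch, not how download-model.py itself works, and the repo/branch names are placeholders.

```python
# Alternative sketch using huggingface_hub instead of download-model.py.
# For private repos, log in with `huggingface-cli login` first (or pass token=...).
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="USER/MODEL",  # same USER/MODEL as the script's argument
    revision="BRANCH",     # branch, tag, or commit hash
)
print(f"Model downloaded to {local_path}")
```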
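
For reference, the branch management that the Manage branches script walks you through interactively comes down to a couple of huggingface_hub calls. A minimal sketch of the equivalent API usage (repo and branch names are placeholders, not the script's exact code):

```python
# Sketch of equivalent huggingface_hub calls for creating/deleting branches.
# Requires a WRITE token; login() prompts for one if you aren't logged in yet.
from huggingface_hub import HfApi, login

login()  # skip if already logged in via huggingface-cli
api = HfApi()

repo_id = "YourUser/your-model"  # placeholder

# Create a new branch from the current main revision
api.create_branch(repo_id, branch="new-branch", exist_ok=True)

# Delete a branch that is no longer needed
api.delete_branch(repo_id, branch="old-branch")
```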