Move to in-library checkpoint

README.md
For full details of this model please read the [white paper](https://arxiv.org/a…).

## Usage
### Prerequisites
In order to use Jamba, it is recommended you use `transformers` version 4.40.0 or higher (version 4.39.0 is the minimum required):
```bash
pip install transformers>=4.40.0
```

In order to run optimized Mamba implementations, you first need to install `mamba-ssm` and `causal-conv1d`:
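A typical way to install both is shown below; the `causal-conv1d` version pin is an assumption here (check each project's README for current requirements), and the quotes keep the shell from treating `>=` as a redirect:

```shell
pip install mamba-ssm "causal-conv1d>=1.2.0"
```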
You also have to have the model on a CUDA device.

You can run the model without the optimized Mamba kernels, but it is **not** recommended, as it will result in significantly higher latencies. To do so, specify `use_mamba_kernels=False` when loading the model.
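For illustration (assuming `transformers>=4.40.0`, where Jamba is supported in-library), disabling the kernels is a single extra argument to the loading call:

```python
from transformers import AutoModelForCausalLM

# Fall back to the pure-PyTorch Mamba path; slower, but avoids the
# mamba-ssm / causal-conv1d dependencies.
model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-v0.1",
                                             use_mamba_kernels=False)
```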
### Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-v0.1")
tokenizer = AutoTokenizer.from_pretrained("ai21labs/Jamba-v0.1")

input_ids = tokenizer("In the recent Super Bowl LVIII,", return_tensors='pt').to(model.device)["input_ids"]

outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.batch_decode(outputs))
# ["<|startoftext|>In the recent Super Bowl LVIII, the Kansas City Chiefs emerged victorious, defeating the San Francisco 49ers in a thrilling overtime showdown. The game was a nail-biter, with both teams showcasing their skills and determination.\n\nThe Chiefs, led by their star quarterback Patrick Mahomes, displayed their offensive prowess, while the 49ers, led by their strong defense, put up a tough fight. The game went into overtime, with the Chiefs ultimately securing the win with a touchdown.\n\nThe victory marked the Chiefs' second Super Bowl win in four years, solidifying their status as one of the top teams in the NFL. The game was a testament to the skill and talent of both teams, and a thrilling end to the NFL season.\n\nThe Super Bowl is not just about the game itself, but also about the halftime show and the commercials. This year's halftime show featured a star-studded lineup, including Usher, Alicia Keys, and Lil Jon. The show was a spectacle of music and dance, with the performers delivering an energetic and entertaining performance.\n"]
```

Please note that if you're using `transformers<4.40.0`, `trust_remote_code=True` is required for running the new Jamba architecture.
<details>
<summary><strong>Loading the model in half precision</strong></summary>

```python
from transformers import AutoModelForCausalLM
import torch
model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-v0.1",
                                             torch_dtype=torch.bfloat16)  # you can also use torch_dtype=torch.float16
```

When using half precision, you can enable the [FlashAttention2](https://github.c…) implementation:
```python
from transformers import AutoModelForCausalLM
import torch
model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-v0.1",
                                             torch_dtype=torch.bfloat16,
                                             attn_implementation="flash_attention_2",
                                             device_map="auto")
```

</details>

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch
quantization_config = BitsAndBytesConfig(load_in_8bit=True,
                                         llm_int8_skip_modules=["mamba"])
model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-v0.1",
                                             torch_dtype=torch.bfloat16,
                                             attn_implementation="flash_attention_2",
                                             quantization_config=quantization_config)
```
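To see why reduced precision and 8-bit quantization matter here, a rough back-of-envelope estimate, assuming Jamba-v0.1's roughly 52B total parameters (the helper below is illustrative, not part of the card):

```python
# Rough memory needed just for the weights, ignoring activations and caches.
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1024**3

print(round(weight_memory_gb(52e9, 2), 1))  # bf16/fp16 (2 bytes/param): 96.9 GB
print(round(weight_memory_gb(52e9, 1), 1))  # int8 (1 byte/param): 48.4 GB
```

Even in half precision the weights alone exceed a single 80GB GPU, which is why the snippets above use `device_map="auto"` or 8-bit quantization.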
```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("ai21labs/Jamba-v0.1")
model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-v0.1", device_map='auto')

dataset = load_dataset("Abirate/english_quotes", split="train")
training_args = TrainingArguments(