---
license: llama2
---

## Installation from source

```bash
git clone https://github.com/foundation-model-stack/fms-extras
cd fms-extras
pip install -e .
```
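
To confirm the editable install worked, a quick import check (the module name `fms_extras` is an assumption based on the repository name):

```python
# Sanity check for the editable install; the printed path should point
# into the cloned repository. (Module name `fms_extras` is an assumption.)
import fms_extras

print(fms_extras.__file__)
```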


## Description

This model is intended to be used as an accelerator for [llama 13B (code)](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) and takes inspiration from the Medusa speculative decoding architecture. This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts a single token in the draft based on both a state vector and the sampled token from the prior stage (the base model can be considered stage 0). The state vector from the base model provides contextual information to the accelerator, while conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.
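
To make the stage-wise prediction concrete, here is a minimal PyTorch sketch of the idea. All names, dimensions, and the greedy sampling are illustrative assumptions for exposition, not the actual fms-extras implementation:

```python
# Minimal sketch of a multi-stage MLP speculator (illustrative only).
import torch
import torch.nn as nn

class MLPSpeculatorSketch(nn.Module):
    def __init__(self, d_model: int, vocab_size: int, n_stages: int = 3):
        super().__init__()
        # One stage per draft token; each stage conditions on the running
        # state vector plus an embedding of the previously sampled token.
        self.embs = nn.ModuleList([nn.Embedding(vocab_size, d_model) for _ in range(n_stages)])
        self.projs = nn.ModuleList([nn.Linear(2 * d_model, d_model) for _ in range(n_stages)])
        self.heads = nn.ModuleList([nn.Linear(d_model, vocab_size) for _ in range(n_stages)])

    def forward(self, state: torch.Tensor, last_token: torch.Tensor) -> torch.Tensor:
        # state: (batch, d_model) hidden state from the base model (stage 0)
        # last_token: (batch,) token id the base model just sampled
        draft = []
        for emb, proj, head in zip(self.embs, self.projs, self.heads):
            # Fuse the prior state with the prior token, then predict the next one.
            state = torch.relu(proj(torch.cat([state, emb(last_token)], dim=-1)))
            last_token = head(state).argmax(dim=-1)  # greedy, for illustration
            draft.append(last_token)
        return torch.stack(draft, dim=-1)  # (batch, n_stages) draft token ids
```

The base model then verifies the whole draft in a single forward pass, accepting the longest prefix that matches what it would have generated itself.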

Note: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference. Training is lightweight and can be completed in only a few days, depending on base model size and speed.

## Repository Links

1. [Paged Attention KV-Cache / Speculator](https://github.com/foundation-model-stack/fms-extras)
2. [Production Server with speculative decoding](https://github.com/IBM/text-generation-inference.git)
3. [Speculator training](https://github.com/foundation-model-stack/fms-fsdp/pull/35)

## Samples

_Note: For all samples, your environment must have access to CUDA_
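
A quick way to verify this before running any sample (assumes PyTorch is installed):

```python
import torch

# The samples below require a CUDA-capable GPU visible to PyTorch.
assert torch.cuda.is_available(), "No CUDA device found"
print(torch.cuda.get_device_name(0))
```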

### Production Server Sample

*To try this out in a production-like environment, use the pre-built Docker image:*

#### Setup

```bash
docker pull quay.io/wxpe/text-gen-server:main.ee927a4
docker run -d --rm --gpus all \
    --name my-tgis-server \
    -p 8033:8033 \
    -v /path/to/all/models:/models \
    -e MODEL_NAME=/models/model_weights/llama/CodeLlama-13b-Instruct-hf \
    -e SPECULATOR_NAME=/models/speculator_weights/llama/codellama-13b-accelerator \
    -e FLASH_ATTENTION=true \
    -e PAGED_ATTENTION=true \
    -e DTYPE_STR=float16 \
    quay.io/wxpe/text-gen-server:main.ee927a4

# check logs and wait for "gRPC server started on port 8033" and "HTTP server started on port 3000"
docker logs my-tgis-server -f

# get the client sample
conda create -n tgis-client-env python=3.11
conda activate tgis-client-env
git clone --branch main --single-branch https://github.com/IBM/text-generation-inference.git
cd text-generation-inference/integration_tests
make gen-client
pip install . --no-cache-dir
```

#### Run Sample

```bash
python sample_client.py
```

_Note: the first prompt may be slower, as there is a slight warmup time_

### Minimal Sample

#### Install

```bash
git clone -b code_llama_variant --single-branch https://github.com/JRosenkranz/fms-extras.git
(cd fms-extras && pip install -e .)
pip install transformers==4.35.0 sentencepiece numpy
```

#### Run Sample

##### batch_size=1 (compile + cudagraphs)

```bash
python fms-extras/scripts/paged_speculative_inference.py \
    --variant=13b_code \
    --model_path=/path/to/llama/CodeLlama-13b-Instruct-hf \
    --model_source=hf \
    --tokenizer=/path/to/llama/CodeLlama-13b-Instruct-hf \
    --speculator_path=ibm-fms/codellama-13b-accelerator \
    --speculator_source=hf \
    --top_k_tokens_per_head=4,3,2,2,2,2,2 \
    --prompt_type=code \
    --compile \
    --compile_mode=reduce-overhead
```
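
For intuition about `--top_k_tokens_per_head`: it takes one value per speculator head, and a natural reading is that each head keeps that many candidate tokens, so the values above define a tree of candidate drafts. A back-of-the-envelope sketch under that assumption (not the fms-extras internals):

```python
import math

# One k per speculator head: head 1 keeps 4 candidates, head 2 keeps 3, ...
top_k_tokens_per_head = [4, 3, 2, 2, 2, 2, 2]

# Each candidate draft picks one token per head, so the number of drafts
# verified per step is the product of the per-head k values.
print(math.prod(top_k_tokens_per_head))  # 384
```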

##### batch_size=1 (compile)

```bash
python fms-extras/scripts/paged_speculative_inference.py \
    --variant=13b_code \
    --model_path=/path/to/llama/CodeLlama-13b-Instruct-hf \
    --model_source=hf \
    --tokenizer=/path/to/llama/CodeLlama-13b-Instruct-hf \
    --speculator_path=ibm-fms/codellama-13b-accelerator \
    --speculator_source=hf \
    --top_k_tokens_per_head=4,3,2,2,2,2,2 \
    --prompt_type=code \
    --compile
```

##### batch_size=4 (compile)

```bash
python fms-extras/scripts/paged_speculative_inference.py \
    --variant=13b_code \
    --model_path=/path/to/llama/CodeLlama-13b-Instruct-hf \
    --model_source=hf \
    --tokenizer=/path/to/llama/CodeLlama-13b-Instruct-hf \
    --speculator_path=ibm-fms/codellama-13b-accelerator \
    --speculator_source=hf \
    --batch_input \
    --top_k_tokens_per_head=4,3,2,2,2,2,2 \
    --prompt_type=code \
    --compile
```

Sample code can be found [here](https://github.com/foundation-model-stack/fms-extras/pull/18).