---
license: llama2
---

## Installation from source

```bash
git clone https://github.com/foundation-model-stack/fms-extras
cd fms-extras
pip install -e .
```

## Description

This model is intended to be used as an accelerator for [Llama 13B (code)](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) and takes inspiration
from the Medusa speculative decoding architecture. This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts
a single token in the draft based on both a state vector and the sampled token
from the prior stage (the base model can be considered stage 0).
The state vector from the base model provides contextual information to the accelerator,
while conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.

Note: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference.
Training is lightweight and can be completed in only a few days, depending on base model size and speed.
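
To make the multi-stage design concrete, here is a minimal, self-contained PyTorch sketch of the idea. It is not the fms-extras implementation: the class name, layer sizes, and wiring below are invented for illustration, and the real speculator differs in implementation details.

```python
# Minimal sketch of the multi-stage MLP speculator idea (illustrative only;
# names and sizes are invented here, the real implementation lives in fms-extras).
import torch
import torch.nn as nn


class MLPSpeculatorSketch(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int, inner_dim: int, n_predict: int):
        super().__init__()
        self.n_predict = n_predict  # number of draft tokens predicted per base-model step
        # one stage per draft position; stage i sees the running state and the token from stage i-1
        self.token_emb = nn.ModuleList([nn.Embedding(vocab_size, inner_dim) for _ in range(n_predict)])
        self.state_proj = nn.ModuleList(
            [nn.Linear(emb_dim if i == 0 else inner_dim, inner_dim) for i in range(n_predict)]
        )
        self.heads = nn.ModuleList([nn.Linear(inner_dim, vocab_size) for _ in range(n_predict)])
        self.activation = nn.GELU()

    def forward(self, state: torch.Tensor, last_token: torch.Tensor) -> torch.Tensor:
        # state: [batch, emb_dim] hidden state from the base model (stage 0)
        # last_token: [batch] token sampled by the base model
        draft, token = [], last_token
        for i in range(self.n_predict):
            # condition each stage on both the state vector and the previously sampled token
            state = self.activation(self.state_proj[i](state) + self.token_emb[i](token))
            logits = self.heads[i](state)
            token = logits.argmax(dim=-1)  # greedy draft here; sampling also works
            draft.append(token)
        return torch.stack(draft, dim=-1)  # [batch, n_predict] draft n-gram


# Toy sizes for illustration; the real accelerator operates at CodeLlama-13B scale.
spec = MLPSpeculatorSketch(vocab_size=320, emb_dim=64, inner_dim=32, n_predict=3)
draft = spec(torch.randn(2, 64), torch.tensor([7, 11]))
print(draft.shape)  # torch.Size([2, 3])
```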

## Repository Links

1. [Paged Attention KV-Cache / Speculator](https://github.com/foundation-model-stack/fms-extras)
2. [Production Server with speculative decoding](https://github.com/IBM/text-generation-inference/pull/78)
3. [Speculator training](https://github.com/foundation-model-stack/fms-fsdp/pull/35)

## Samples

_Note: For all samples, your environment must have access to CUDA._
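
A quick way to confirm that a CUDA device is visible (assuming PyTorch is installed, as it is for the samples below) is:

```python
import torch

# the samples below assume at least one visible CUDA device
assert torch.cuda.is_available(), "No CUDA device visible; the samples below will not run"
print(torch.cuda.get_device_name(0))
```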

### Production Server Sample

*To try this out running in a production-like environment, please use the pre-built docker image:*

#### Setup

```bash
docker pull quay.io/wxpe/text-gen-server:speculative-decoding.ecd73c4
docker run -d --rm --gpus all \
    --name my-tgis-server \
    -p 8033:8033 \
    -v /path/to/all/models:/models \
    -e MODEL_NAME=/models/model_weights/llama/codellama-13B-F \
    -e SPECULATOR_NAME=/models/speculator_weights/llama/codellama-13b-accelerator \
    -e FLASH_ATTENTION=true \
    -e PAGED_ATTENTION=true \
    -e DTYPE_STR=float16 \
    quay.io/wxpe/text-gen-server:speculative-decoding.ecd73c4

# check logs and wait for "gRPC server started on port 8033" and "HTTP server started on port 3000"
docker logs my-tgis-server -f

# get the client sample (Note: The first prompt will take longer as there is a warmup time)
conda create -n tgis-client-env python=3.11
conda activate tgis-client-env
git clone --branch speculative-decoding --single-branch https://github.com/tdoublep/text-generation-inference.git
cd text-generation-inference/integration_tests
make gen-client
pip install . --no-cache-dir
```

#### Run Sample

```bash
python sample_client.py
```

_Note: the first prompt may be slower, as there is a slight warmup time_

### Minimal Sample

*To try this out with the fms-native compiled model, please execute the following:*

#### Install

```bash
git clone https://github.com/foundation-model-stack/fms-extras
(cd fms-extras && pip install -e .)
pip install transformers==4.35.0 sentencepiece numpy
```

#### Run Sample

##### batch_size=1 (compile + cudagraphs)

```bash
python fms-extras/scripts/paged_speculative_inference.py \
    --variant=13b \
    --model_path=/path/to/model_weights/llama/codellama-13B-F \
    --model_source=hf \
    --tokenizer=/path/to/llama/13B-F \
    --speculator_path=ibm-fms/codellama-13b-accelerator \
    --speculator_source=hf \
    --compile \
    --compile_mode=reduce-overhead
```

##### batch_size=1 (compile)

```bash
python fms-extras/scripts/paged_speculative_inference.py \
    --variant=13b \
    --model_path=/path/to/model_weights/llama/codellama-13B-F \
    --model_source=hf \
    --tokenizer=/path/to/llama/13B-F \
    --speculator_path=ibm-fms/codellama-13b-accelerator \
    --speculator_source=hf \
    --compile
```

##### batch_size=4 (compile)

```bash
python fms-extras/scripts/paged_speculative_inference.py \
    --variant=13b \
    --model_path=/path/to/model_weights/llama/codellama-13B-F \
    --model_source=hf \
    --tokenizer=/path/to/llama/13B-F \
    --speculator_path=ibm-fms/codellama-13b-accelerator \
    --speculator_source=hf \
    --batch_input \
    --compile
```

Sample code can be found [here](https://github.com/foundation-model-stack/fms-extras/blob/main/scripts/paged_speculative_inference.py).
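
For orientation, the linked script follows the standard speculative decoding pattern: the speculator proposes a short draft n-gram, the base model scores all draft positions in a single forward pass, and the longest matching prefix is accepted. The toy sketch below only illustrates that accept/verify loop; the functions and numbers in it are stand-ins invented for illustration, not the fms-extras API.

```python
# Toy sketch of the draft-and-verify loop behind speculative decoding
# (illustrative only; see paged_speculative_inference.py in fms-extras for the
# real, paged-KV-cache implementation). The "models" below are stand-in functions.
import torch

VOCAB = 100


def base_model_next_tokens(seq: torch.Tensor) -> torch.Tensor:
    # stand-in for the base model: for every position, the token it would emit next
    return (seq * 31 + 7) % VOCAB


def speculator_draft(last_token: torch.Tensor, n_predict: int = 3) -> torch.Tensor:
    # stand-in for the MLP speculator: propose n_predict draft tokens;
    # deliberately wrong for "large" tokens so that some draft positions get rejected
    draft, tok = [], last_token
    for _ in range(n_predict):
        tok = (tok * 31 + 7) % VOCAB if tok < 80 else (tok + 1) % VOCAB
        draft.append(tok)
    return torch.stack(draft)


def speculative_step(seq: torch.Tensor) -> torch.Tensor:
    draft = speculator_draft(seq[-1])          # propose an n-gram
    candidate = torch.cat([seq, draft])        # append the unverified draft
    preds = base_model_next_tokens(candidate)  # one base-model pass over all positions
    # accept draft tokens only while they match what the base model would have produced
    n_accept = 0
    for i in range(len(draft)):
        if bool(draft[i] == preds[len(seq) - 1 + i]):
            n_accept += 1
        else:
            break
    # the base model always contributes one correct token at the first mismatch (or after a full accept)
    bonus = preds[len(seq) - 1 + n_accept].unsqueeze(0)
    return torch.cat([seq, draft[:n_accept], bonus])


seq = torch.tensor([1])
for _ in range(5):
    seq = speculative_step(seq)
    print(len(seq), seq.tolist())  # each step adds between 1 and n_predict + 1 tokens
```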