Update README.md
README.md CHANGED
@@ -97,10 +97,10 @@ model-index:
We measured the average inference speed (tokens/s) of generating 1024 new tokens and 5194 (8192-2998) tokens with the context of a video (which takes 2998 tokens) under BF16 precision.

-|Quantization | Speed (3022 tokens) | Speed (8192 tokens)|
-|--- |--- |---|
-|BF16 | 33.40 | 31.91 |
-|INT4 | - | 31.95 |
+|Quantization | Speed (3022 tokens) | Speed (8192 tokens) w/o vision | Speed (8192 tokens) w/ vision|
+|--- |--- |--- |---|
+|BF16 | 33.40 | 31.91 | 21.33 |
+|INT4 | - | 31.95 | - |

## 🚀 How to use the model
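For context on how a tokens/s figure like the ones above might be obtained, here is a minimal sketch of a throughput measurement using the Hugging Face `transformers` generate API under BF16. The model ID, prompt, and token counts below are placeholders and not part of this commit; a real measurement would feed the actual video context rather than a short text stand-in.

```python
# Minimal sketch of a tokens/s measurement (assumption: a causal LM served
# through the standard transformers API; the model ID is a placeholder).
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"  # placeholder, not the model this card describes
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16 precision, as in the table above
    device_map="auto",
)

# Stand-in prompt; the README's numbers use a ~2998-token video context instead.
prompt = "Describe the scene in detail."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

max_new_tokens = 1024  # e.g. the 1024-new-token setting from the table
start = time.perf_counter()
output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
elapsed = time.perf_counter() - start

# Count only newly generated tokens, then report decode throughput.
generated = output.shape[1] - inputs["input_ids"].shape[1]
print(f"{generated / elapsed:.2f} tokens/s")
```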