Running into an issue that was resolved in the Instruct model
#10 · opened by orr-tzafon
Hi,
I'm running into this issue when doing inference with vLLM: https://huggingface.co/moonshotai/Kimi-VL-A3B-Instruct/discussions/27. It seems it was fixed for the Instruct model (https://huggingface.co/moonshotai/Kimi-VL-A3B-Instruct/discussions/28). Is there a simple way to apply the same fix here?
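For context, this is roughly the setup where the error shows up; a minimal sketch assuming this repo's Thinking checkpoint and vLLM's standard offline API (the model ID and prompt are placeholders, not taken from the linked threads):

```python
from vllm import LLM, SamplingParams

# Assumed model ID for this repo; substitute the exact checkpoint you serve.
llm = LLM(
    model="moonshotai/Kimi-VL-A3B-Thinking",
    trust_remote_code=True,  # Kimi-VL ships custom model/processor code
)

# Ordinary generation call; the linked issue is hit during inference like this.
outputs = llm.generate(
    ["Describe what Kimi-VL can do."],
    SamplingParams(max_tokens=64, temperature=0.8),
)
print(outputs[0].outputs[0].text)
```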