ValueError: facebook/opt-125m is not a multimodal model

#22 by aberezin

ValueError: facebook/opt-125m is not a multimodal model

I got this error when I sent a request to a local vLLM server. Please help me out.
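Roughly, the request looked like this (a minimal sketch, assuming the OpenAI-compatible chat completions endpoint and the served model name from the command below; the image URL is just a placeholder):

```sh
# Multimodal chat-completions request against the local vLLM server.
# "model" matches --served-model-name in the docker command below.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "model",
        "messages": [{
          "role": "user",
          "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/page.png"}},
            {"type": "text", "text": "Extract the text from this document image."}
          ]
        }]
      }'
```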

The vLLM server started successfully using the following command:

sudo docker run --runtime nvidia --gpus all --name dotsocr --restart always -p 8000:8000 IMAGEID --served-model-name model --gpu_memory_utilization 0.3

where IMAGEID is the ID of the rednotehilab/dots.ocr Docker image that I pulled.
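For readability, here is the same command broken across lines (a sketch only; as I understand it, the flags before IMAGEID go to docker run itself, while everything after IMAGEID is forwarded as arguments to the container's entrypoint, which I assume is the vLLM server):

```sh
# Flags before IMAGEID: docker run options (GPU access, name, restart policy, port).
# Arguments after IMAGEID: forwarded to the container entrypoint (assumed to be vLLM).
sudo docker run \
  --runtime nvidia --gpus all \
  --name dotsocr --restart always \
  -p 8000:8000 \
  IMAGEID \
  --served-model-name model \
  --gpu_memory_utilization 0.3
```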
