Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!

#53
by Rath031
/usr/local/lib/python3.11/dist-packages/transformers/models/qwen2/modeling_qwen2.py in forward(self, hidden_state)
    221 
    222     def forward(self, hidden_state):
--> 223         return self.down_proj(self.act_fn(self.gate_proj(hidden_state)) * self.up_proj(hidden_state))
    224 
    225 

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!

I get this error when following the code from https://github.com/OpenBMB/MiniCPM-o/blob/main/docs/inference_on_multiple_gpus.md, with MODEL_PATH = snapshot_download(repo_id="openbmb/MiniCPM-o-2_6").
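
For reference, my setup looks roughly like the sketch below (this is a minimal sketch of the accelerate-style dispatch the guide describes; the per-GPU memory budget and the no_split_module_classes entry are my assumptions, not values copied from the guide):

```python
# Minimal sketch of accelerate-style multi-GPU dispatch for MiniCPM-o-2_6.
# The max_memory budget and the no_split_module_classes entry below are
# assumptions for illustration; they are not taken verbatim from the guide.
import torch
from accelerate import dispatch_model, infer_auto_device_map
from huggingface_hub import snapshot_download
from transformers import AutoModel

MODEL_PATH = snapshot_download(repo_id="openbmb/MiniCPM-o-2_6")

# Load on CPU first, then let accelerate place the modules across GPUs.
model = AutoModel.from_pretrained(
    MODEL_PATH,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)

# Keeping each decoder layer whole on one GPU should prevent gate_proj/up_proj/
# down_proj of the same MLP from landing on different devices, which is what the
# cuda:0 vs cuda:1 error above points to.
device_map = infer_auto_device_map(
    model,
    max_memory={0: "10GiB", 1: "10GiB"},            # assumed per-GPU budget
    no_split_module_classes=["Qwen2DecoderLayer"],  # assumed: don't split decoder layers
)
model = dispatch_model(model, device_map=device_map)
model.eval()
```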

Can anyone help?
