Deployment Scripts for Medguide (Built with Gradio)
This document provides instructions for deploying the Medguide model for inference using Gradio.
- Set up the Conda environment: Follow the instructions in the PKU-Alignment/align-anything repository to configure your Conda environment.
- Configure the model path: After setting up the environment, update the `MODEL_PATH` variable in `deploy_medguide_v.sh` to point to your local Medguide model directory.
- Verify inference script parameters: Check the following parameters in `multimodal_inference.py`:

  ```python
  # NOTE: Replace with your own model path if not loaded via the API base
  model = ''
  ```

These scripts use an OpenAI-compatible server approach. The `deploy_medguide_v.sh` script launches the Medguide model locally and exposes it on port 8231 for external access via the specified API base URL.
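For context, below is a minimal sketch of how `multimodal_inference.py` might reach the local deployment through the OpenAI Python client. The `/v1` endpoint path, the placeholder API key, and the model name `medguide` are assumptions for illustration, not values taken from the repository.

```python
# Minimal sketch: talk to the locally deployed Medguide server.
# The /v1 path, the dummy API key, and the model name are assumptions.
from openai import OpenAI

# deploy_medguide_v.sh exposes the model on port 8231 of the local machine.
client = OpenAI(base_url="http://localhost:8231/v1", api_key="EMPTY")

# NOTE: Replace with your own model path if not loaded via the API base
model = "medguide"

response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Briefly, what is anemia?"}],
)
print(response.choices[0].message.content)
```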
Running Inference:
- Streamed Output:
  ```bash
  # Launch the Medguide server (port 8231), then run the inference client.
  bash deploy_medguide_v.sh
  python multimodal_inference.py
  ```
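Under the assumption that `multimodal_inference.py` streams its reply through the OpenAI Python client, the sketch below shows how a streamed multimodal request against the local endpoint might be consumed. The image path, model name, and `/v1` path are illustrative, not values from the actual script.

```python
# Illustrative sketch of a streamed multimodal request to the local server.
# The model name, image path, and /v1 endpoint are assumptions.
import base64

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8231/v1", api_key="EMPTY")

# Encode a local image so it can be sent as a data URL.
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

stream = client.chat.completions.create(
    model="medguide",  # assumed name; use the model served by deploy_medguide_v.sh
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the findings in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
    stream=True,  # request token-by-token (streamed) output
)

# Print each chunk as it arrives instead of waiting for the full reply.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```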