---
library_name: transformers
license: apache-2.0
tags:
- vision
- image-captioning
- blip
- multimodal
- fashion
datasets:
- Marqo/fashion200k
base_model:
- Salesforce/blip-image-captioning-large
---
# Fine-Tuned BLIP Model for Fashion Image Captioning
This is a fine-tuned BLIP (Bootstrapping Language-Image Pre-training) model specifically designed for **fashion image captioning**. It was fine-tuned on the **Marqo Fashion200k** dataset to generate descriptive and contextually relevant captions for fashion-related images.
## Model Details
- **Model Type:** BLIP (Vision-Language Pretraining)
- **Architecture:** BLIP pairs a vision transformer image encoder with a transformer text decoder connected via cross-attention, jointly modeling visual and textual information.
- **Fine-Tuning Dataset:** [Marqo/fashion200k](https://huggingface.co/datasets/Marqo/fashion200k) (fashion product images paired with descriptive captions)
- **Task:** Fashion Image Captioning
- **License:** Apache 2.0
## Usage
You can use this model with the Hugging Face `transformers` library for fashion image captioning tasks.
### Installation
First, install the required libraries:
```bash
pip install transformers torch
```
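### Generating Captions
The sketch below shows a minimal inference loop using the standard BLIP classes from `transformers`. The repository id `your-username/blip-fashion-captioning` and the image path are placeholders; substitute this model's actual Hub id and your own image. Loading local images also requires `Pillow`.
```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Placeholder repository id -- replace with this model's actual Hub id.
model_id = "your-username/blip-fashion-captioning"

processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(model_id)

# Load a fashion image (example path).
image = Image.open("dress.jpg").convert("RGB")

# Preprocess the image and generate a caption.
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=50)
caption = processor.decode(output_ids[0], skip_special_tokens=True)
print(caption)
```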