---
license: mit
language:
  - en
base_model:
  - nlpconnect/vit-gpt2-image-captioning
pipeline_tag: image-to-text
library_name: transformers
tags:
  - image-captioning
---

# Image Captioning (CPU/GPU)

- **Model:** nlpconnect/vit-gpt2-image-captioning (MIT)
- **Task:** Generate a caption for a given image.
- **Note:** This repository only provides the resources to run the model on a laptop. We did not develop the model itself; it is an open-source model developed by nlpconnect, which we use here for experimentation.

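Since the model is published on the Hub, it can also be called directly through the transformers `image-to-text` pipeline. A minimal sketch (the `caption_image` helper name is ours, not part of main.py):

```python
from transformers import pipeline


def caption_image(image, model_id="nlpconnect/vit-gpt2-image-captioning"):
    # The image-to-text pipeline wraps the ViT encoder + GPT-2 decoder.
    # `image` may be a file path, a URL, or a PIL.Image instance.
    captioner = pipeline("image-to-text", model=model_id)
    result = captioner(image)
    return result[0]["generated_text"]


if __name__ == "__main__":
    # "sample.jpg" is a placeholder; use any local image file.
    print(caption_image("sample.jpg"))
```

The first call downloads the model weights from the Hub; later calls reuse the local cache.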
## Quick start (any project)

```bash
# 1) Create and activate a virtual environment
python -m venv .venv && source .venv/bin/activate  # Windows: .venv\Scripts\activate

# 2) Install dependencies
pip install -r requirements.txt

# 3) Run
python main.py --help
```

Tip: If you have a GPU with CUDA, PyTorch will use it automatically. Otherwise everything runs on the CPU (slower, but it works).
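The tip above can be verified explicitly; a small sketch (assuming PyTorch is installed) that selects the device the way most scripts do:

```python
import torch


def pick_device() -> str:
    # torch.cuda.is_available() is True only when a CUDA-capable GPU
    # and a matching CUDA runtime are both present.
    return "cuda" if torch.cuda.is_available() else "cpu"


if __name__ == "__main__":
    print(f"Running on: {pick_device()}")
```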


When running `main.py`, you must pass the `--image` argument to get any output, e.g. `python main.py --image remiai.png` or `python main.py --image sample.jpg`.

Otherwise you will get a usage error like:

```
usage: main.py [-h] --image IMAGE
error: the following arguments are required: --image
```
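That error is produced by `argparse` because `--image` is marked as required. A hypothetical minimal parser reproducing the behavior (the actual code in main.py may differ):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # With required=True, argparse exits with the usage error shown above
    # whenever --image is missing from the command line.
    parser = argparse.ArgumentParser(prog="main.py")
    parser.add_argument("--image", required=True, help="path to the input image")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(f"Captioning {args.image} ...")
```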