# Jina CLIP
The Jina CLIP implementation is hosted in this repository. The model uses:
- the EVA-02 architecture as the vision tower
- the Jina BERT with Flash Attention model as the text tower
To use the Jina CLIP model, the following packages are required:

- `torch`
- `timm`
- `transformers`
- `einops`
- `xformers` to use x-attention
- `flash-attn` to use flash attention
- `apex` to use fused layer normalization
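Since `xformers`, `flash-attn`, and `apex` are optional accelerators, it can be useful to check which ones are present before loading the model. The sketch below is a hypothetical helper (not part of this repository) that probes the current environment with the standard library's `importlib.util.find_spec`:

```python
from importlib.util import find_spec

# Core dependencies (module names as imported, not pip package names).
REQUIRED = ["torch", "timm", "transformers", "einops"]

# Optional accelerators and the feature each one enables.
OPTIONAL = {
    "xformers": "x-attention",
    "flash_attn": "flash attention",
    "apex": "fused layer normalization",
}

def check_dependencies():
    """Return a dict mapping each dependency name to True if importable."""
    status = {name: find_spec(name) is not None for name in REQUIRED}
    status.update({name: find_spec(name) is not None for name in OPTIONAL})
    return status

if __name__ == "__main__":
    for name, available in check_dependencies().items():
        print(f"{name}: {'found' if available else 'missing'}")
```

Note that the import name for `flash-attn` is `flash_attn`; missing optional packages simply disable the corresponding fast path rather than breaking the model.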