Exported with the following command (Inferentia 2 only):

```bash
optimum-cli export neuron \
  --model black-forest-labs/FLUX.1-Kontext-dev \
  --tensor_parallel_size 8 \
  --batch_size 1 \
  --height 1024 \
  --width 1024 \
  --num_images_per_prompt 1 \
  --sequence_length 512 \
  --torch_dtype bfloat16 \
  flux_kontext_neuron_1024_tp8/
```
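If you keep the compiled artifacts locally, the same pipeline class can load them straight from the export directory instead of the Hub. A minimal sketch, assuming the export above completed and wrote to `flux_kontext_neuron_1024_tp8/`:

```python
from optimum.neuron import NeuronFluxKontextPipeline

# Load the pre-compiled pipeline from the local export directory
# (flux_kontext_neuron_1024_tp8/ is the output path passed to optimum-cli above).
pipe = NeuronFluxKontextPipeline.from_pretrained("flux_kontext_neuron_1024_tp8/")
```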

Inference:

```python
from diffusers.utils import load_image

from optimum.neuron import NeuronFluxKontextPipeline

# Load the pre-compiled pipeline (tensor parallel degree 8, 1024x1024) from the Hub.
pipe = NeuronFluxKontextPipeline.from_pretrained("Jingya/Flux.1-Kontext-dev-1024x1024-neuronx-tp8")

# Reference image to edit.
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipe(
    image=input_image,
    prompt="Add a hat to the cat",
    guidance_scale=2.5,
).images[0]

# The pipeline is compiled for a fixed 1024x1024 resolution; resize the output back to the input size.
image = image.resize(input_image.size)
image.save("flux_kontext.png")
```