How to use AITRADER/FLUX2-klein-4B-mlx-4bit with MLX:

```sh
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir FLUX2-klein-4B-mlx-4bit AITRADER/FLUX2-klein-4B-mlx-4bit
```
FLUX.2 klein 4B (MLX 4-bit)
Pre-quantized MLX weights in the mflux format for faster local loading.
Usage (mflux)

```sh
uv tool install --upgrade mflux

mflux-generate-flux2 \
  --model AITRADER/FLUX2-klein-4B-mlx-4bit \
  --base-model flux2-klein-4b \
  --prompt "A puffin standing on a cliff" \
  --width 1024 --height 1024 \
  --steps 50 \
  --guidance 3.5 \
  --seed 42 \
  --output image.png
```
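Because every parameter is passed on the command line, runs are easy to script, for example to compare several seeds for the same prompt. A minimal sketch: it prints the commands it would run (pipe the output to `sh` to execute them), so it works even before mflux is installed; the flag set simply mirrors the command above.

```shell
# Sketch: sweep a few seeds with the flags documented above.
# Prints one mflux-generate-flux2 command per seed; pipe to `sh` to run.
MODEL="AITRADER/FLUX2-klein-4B-mlx-4bit"
PROMPT="A puffin standing on a cliff"
for seed in 1 2 3 4; do
  echo "mflux-generate-flux2 --model $MODEL --base-model flux2-klein-4b" \
       "--prompt \"$PROMPT\" --width 1024 --height 1024" \
       "--steps 50 --guidance 3.5 --seed $seed --output puffin-$seed.png"
done
```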
Notes
- This is not an official Black Forest Labs release; it is a convenience repackage for MLX.
- For the original model, see black-forest-labs/FLUX.2-klein-4B.