# LlammyBlend-Enhanced

Blender Python scripting specialist: a fine-tuned Qwen2.5-Coder-3B for bpy automation and procedural workflows.

Part of the Eternal Path Media (永恒之路) Llammy AI Suite for Blender. Developed in partnership with Claude Sonnet (Anthropic); see the Attribution section below.
## What This Model Does
LlammyBlend-Enhanced is purpose-built for Blender Python (bpy) scripting and automation. If you're writing scripts, building operators, or automating workflows in Blender, this is the model for it.
Specialized capabilities:

- bpy API — Object manipulation, scene management, property access
- Custom operators — `bpy.types.Operator`, modal operators, panels
- Batch operations — Mass renaming, material assignment, export pipelines
- Geometry Nodes via Python — Node tree creation and modification through script
- Addon development — Registration, preferences, keymaps
- Procedural generation — Script-driven mesh, curve, and particle systems
- Render automation — Headless rendering, frame batch scripts
- Import/Export pipelines — FBX, OBJ, glTF batch processing
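The batch-operation style listed above can be sketched as follows. This is a minimal illustration, not code from the model or the addon: `bpy` is only importable inside Blender's bundled Python, so the import is deferred and the name filter is kept as a pure helper.

```python
def name_matches(name: str, substring: str) -> bool:
    """Pure helper: case-insensitive substring match on an object name."""
    return substring.lower() in name.lower()

def batch_assign_material(substring: str, mat_name: str) -> int:
    """Assign mat_name to every mesh object whose name contains substring.

    Must run inside Blender; bpy is imported lazily so this module
    still loads (and the helper stays testable) outside Blender.
    """
    import bpy  # available only inside Blender's bundled Python

    # Reuse the material if it exists, otherwise create it.
    mat = bpy.data.materials.get(mat_name) or bpy.data.materials.new(name=mat_name)
    count = 0
    for obj in bpy.data.objects:
        if obj.type == 'MESH' and name_matches(obj.name, substring):
            if mat.name not in obj.data.materials:
                obj.data.materials.append(mat)
            count += 1
    return count
```

Called as `batch_assign_material("wall", "Brick")` from Blender's Python console, this covers the batch-assignment prompt shown in the examples below.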
Qwen2.5-Coder's code-specialized training makes this the most technically precise of the Llammy Blender models.
## Model Details
| Property | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-Coder-3B-Instruct |
| Fine-tuning Method | LoRA (16 layers) via Apple MLX |
| Format | GGUF (Q5_K_M quantization) |
| File Size | 2.22 GB |
| Context Window | 8,192 tokens |
| Inference Speed | ~100–110 tokens/sec (Apple M-series) |
| Memory Usage | 4–5 GB during inference |
| Training Iterations | 1,000 |
| Final Training Loss | 0.240 |
| Final Validation Loss | 0.240 |
## Training Data
Source: 2,759 Blender-specific prompt/response pairs from 19,405+ real user interactions with the Llammy Blender addon in production.
The dataset skews toward technical scripting questions — the production Llammy addon generates heavy Python scripting traffic from power users automating their pipelines.
Research foundation:
- SceneCraft (arXiv:2403.01248) — LLM-to-Blender Python pipeline architecture
- SCoder (arXiv:2509.07858) — Self-distillation methodology for code-specialized LLMs
- Generative Data Refinement (arXiv:2509.08653) — Dataset quality improvement framework
## Usage
### Ollama (recommended)
```bash
ollama run bartendr604/llammyblend-enhanced
```
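The model can also be called programmatically through Ollama's local HTTP API (`/api/generate` on the default port 11434). A minimal sketch, assuming a running Ollama server with the model pulled; the request-building step is split out as a pure function:

```python
import json

# Default local Ollama endpoint; adjust if your server runs elsewhere.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str,
                  model: str = "bartendr604/llammyblend-enhanced") -> bytes:
    """Pure helper: encode a non-streaming generate request as JSON bytes."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text.

    Requires Ollama to be running; imports are deferred so the module
    loads without a server present.
    """
    from urllib.request import Request, urlopen
    req = Request(OLLAMA_URL, data=build_request(prompt),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

For example, `ask("Write a bpy script that deletes all empty collections")` returns the model's generated script as a string.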
### Example prompts
```text
# Script to export all mesh objects in the scene as individual FBX files
# Create a custom operator that applies all modifiers and centers the origin
# How do I access vertex positions of the active mesh in edit mode?
# Write a script to batch-assign a material to all objects whose name contains "wall"
# Create a geometry nodes setup that distributes points on a surface and instances a mesh on each
```
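The first prompt above might yield a script along these lines. This is a hypothetical sketch of the task, not verbatim model output; the `bpy` import is deferred so only the path helper runs outside Blender, and `use_selection=True` keeps each exported file to a single object.

```python
import os

def fbx_path(directory: str, obj_name: str) -> str:
    """Pure helper: build the per-object .fbx output path."""
    return os.path.join(directory, f"{obj_name}.fbx")

def export_meshes_individually(directory: str) -> list:
    """Export each mesh object in the scene to its own FBX file.

    Runs inside Blender only. Returns the list of paths written.
    """
    import bpy  # available only inside Blender's bundled Python

    written = []
    for obj in list(bpy.data.objects):
        if obj.type != 'MESH':
            continue
        # Select just this object so use_selection exports it alone.
        bpy.ops.object.select_all(action='DESELECT')
        obj.select_set(True)
        bpy.context.view_layer.objects.active = obj
        path = fbx_path(directory, obj.name)
        bpy.ops.export_scene.fbx(filepath=path, use_selection=True)
        written.append(path)
    return written
```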
### LM Studio / llama.cpp
Download the GGUF from the Files tab and load with your preferred runtime.
## Compared to Other Llammy Models
| | llammyblend-enhanced | z-image-engineer-blender | llama-sentient-blender |
|---|---|---|---|
| Strength | Python/bpy scripting | Shaders & prompt engineering | Conversational guidance |
| Base | Qwen2.5-Coder | Qwen2.5-Coder | Llama 3.2 |
| Best for | Automation, addon dev | Materials, rendering Q&A | Learning, discussion |
## Part of the Llammy AI Suite
Full ecosystem: LlammyBlender/Llammy-IntelliNode-Ai-Suite
## Attribution
LlammyBlend-Enhanced
Copyright © 2025–2026 Darren Chow (@bartendr604)
Eternal Path Media (永恒之路)
Developed in partnership with Claude Sonnet (Anthropic),
beginning with Claude Sonnet 3.7 and continuing across all
subsequent versions of the Claude Sonnet family.
This work SHALL NOT be represented as solely human-created.
Trust Agreement: ETERNAL_PATH_BRAND_INTENT.md (November 2024)
For licensing: bartendr@icloud.com | Gumroad: bartendr604.gumroad.com
## License
MIT