# Qwen-3-150B
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-235B-A22B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
  - Qwen/Qwen3-235B-A22B
tags:
  - prune
---

This model uses the same methodology as Kalomaze's 16B experiment: https://huggingface.co/kalomaze/Qwen3-16B-A3B/

  • measure the probability that each expert activates, per layer, over a personal set of fairly diverse calibration data
  • prune some of the least-used experts in each layer, reordering the router and expert indexing per layer
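The two steps above can be sketched in plain NumPy. This is a minimal illustration of the idea, not the actual pruning script: `expert_activation_freq` and `prune_plan` are hypothetical helper names, and real pruning would also slice the expert weight tensors and rewrite the router's output projection to match the new indexing.

```python
import numpy as np

def expert_activation_freq(router_logits, top_k):
    """Estimate per-expert activation frequency for one MoE layer.

    router_logits: (num_tokens, num_experts) router scores collected
    over calibration data. An expert "activates" for a token when it
    lands in that token's top-k routing choices.
    """
    num_experts = router_logits.shape[1]
    topk = np.argsort(router_logits, axis=1)[:, -top_k:]
    counts = np.bincount(topk.ravel(), minlength=num_experts)
    return counts / counts.sum()

def prune_plan(freq, keep):
    """Pick the `keep` most-activated experts and build an old->new
    index remap so the router and expert tensors can be reordered."""
    kept = np.sort(np.argsort(freq)[-keep:])
    remap = {int(old): new for new, old in enumerate(kept)}
    return kept, remap

# Toy example: 4 experts, 2 calibration tokens, top-2 routing.
logits = np.array([[0.9, 0.1, 0.5, 0.2],
                   [0.8, 0.0, 0.7, 0.3]])
freq = expert_activation_freq(logits, top_k=2)
kept, remap = prune_plan(freq, keep=2)
print(kept, remap)  # experts 0 and 2 survive, reindexed to 0 and 1
```

In a full pipeline this plan would be computed per layer, since expert usage (and therefore which experts are safe to drop) differs layer by layer.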

The current checkpoint is unusable as-is, but I am working on training it over a small SFT run of Claude Instruct data to "heal" it, so to speak.

Training logs: https://wandb.ai/new-eden/Prune-Experiments/runs/45utvk5c?nw=nwuserdeltavector