---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B
pipeline_tag: text-generation
tags:
- not-for-all-audiences
language:
- en
library_name: transformers
---

## Model Description

This model was created by analyzing a pool of Qwen2.5-7B models and selecting, for each layer position, the layer with the best dimensional utilization efficiency, as measured by the Normalized Effective Rank (NER). The NER of a weight matrix is computed as follows:

Singular Value Decomposition:
   - Input: Weight matrix A ∈ R^(m×n) # m = number of output features, n = number of input features
   - Compute singular values σᵢ where σᵢ ≥ 0 # σᵢ represents the importance of each dimension
   - Keep only values above a numerical threshold (σᵢ > 1e-12) # removes numerical noise from the computation

Distribution Normalization:
   - Sum all singular values: S = Σσᵢ # S acts as normalization factor
   - Create probability distribution: pᵢ = σᵢ/S # converts singular values to probabilities summing to 1

Entropy Calculation:
   - Compute Shannon entropy: H = -Σ(pᵢ * log₂(pᵢ)) # measures information content of distribution
   - Calculate the maximum possible entropy: H_max = log₂(n), where n is the number of retained singular values # maximum entropy occurs when all dimensions contribute equally

Normalization:
   - Final NER score = H/H_max # normalizes score to [0,1] range
   - Results in value between 0 and 1 # 0 = single dimension dominance, 1 = perfect dimensional utilization
   - Higher scores indicate more uniform dimensional utilization
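The four steps above can be sketched in a few lines of NumPy; the function name and the `eps` threshold are illustrative, not the author's exact code:

```python
import numpy as np

def normalized_effective_rank(A: np.ndarray, eps: float = 1e-12) -> float:
    """Normalized Effective Rank (NER) of a 2-D weight matrix A."""
    # Singular Value Decomposition: only the singular values are needed
    sigma = np.linalg.svd(A, compute_uv=False)
    sigma = sigma[sigma > eps]            # drop numerical noise
    if sigma.size <= 1:
        return 0.0                        # a single dimension dominates
    # Distribution normalization: convert singular values to probabilities
    p = sigma / sigma.sum()
    # Shannon entropy of the singular-value distribution
    H = -np.sum(p * np.log2(p))
    # Normalize by the maximum possible entropy log2(n) -> score in [0, 1]
    return float(H / np.log2(sigma.size))

# An identity matrix uses all dimensions equally, so its NER is 1.0
print(normalized_effective_rank(np.eye(8)))  # -> 1.0
```

A rank-1 matrix, by contrast, has a single non-zero singular value and scores 0.0, matching the "single dimension dominance" end of the scale.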

## Creating the Composite Model

Layer Analysis:
   - Download base and fine-tuned models from Hugging Face Hub # fetches models using Hugging Face API
   - Calculate Normalized Effective Rank (NER) for each layer within each model # process each independently

Layer Selection:
   - Identify the layer structure common to all models in the pool
   - For each layer position, record the (model, layer) pair with the highest NER score

Model Composition:
   - Incrementally build a composite model, taking each layer from whichever model in the pool scored the highest NER at that position

Output Generation:
   - Save merge reports documenting layer sources 
   - Copy config and tokenizer files from base model
   - Save the composite model with complete weights # model ready to use
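The selection step can be sketched as follows, assuming the per-layer NER scores have already been computed into a `{model_name: {layer_name: ner}}` mapping; all names here are hypothetical:

```python
from typing import Dict

def select_layers(ner_scores: Dict[str, Dict[str, float]]) -> Dict[str, str]:
    """For each layer name shared by every model in the pool, pick the
    model whose copy of that layer has the highest NER score."""
    models = list(ner_scores)
    # Restrict to layer names present in every model
    common = set(ner_scores[models[0]])
    for m in models[1:]:
        common &= set(ner_scores[m])
    # Winning source model per layer position
    return {
        layer: max(models, key=lambda m: ner_scores[m][layer])
        for layer in sorted(common)
    }

scores = {
    "base": {"layers.0": 0.80, "layers.1": 0.70},
    "ft-a": {"layers.0": 0.85, "layers.1": 0.65},
}
print(select_layers(scores))  # {'layers.0': 'ft-a', 'layers.1': 'base'}
```

The resulting mapping doubles as the merge report: it documents exactly which source model contributed each layer of the composite.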