---
license: apache-2.0
datasets:
- HuggingFaceH4/ultrachat_200k
- yahma/alpaca-cleaned
language:
- en
pipeline_tag: text-generation
tags:
- mesh
- moe
- mesh-labs
- alpha
- preview
- research
- experiment
- routing
- innovative
- innovation
- mesh-moe
- custom_code
new_version: mesh-labs/v0.1-2x2-stage003
---

# Mesh-v0.1-2x2 (Stage 002)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6747320df82ae35f0327cdd3/2JPwH3coASgEc4vJvJVRt.png)

## Introducing mesh

This is our first ever model! Allow us to explain how the `mesh` architecture works in detail.

- Neural Mesh extends the Mixture of Experts concept by allowing bidirectional communication between experts.
- The experts are laid out in a two-dimensional grid (2x2, 4x4, etc.), which lets each expert communicate with its neighbors through the "Neighbor Exchange" method.
- Just like MoE models, Mesh models use dynamic routing; the `routing_k` parameter controls how many experts are active, and with them the number of active parameters (see the usage sketch after this list). For this model (2x2):
  - top-1 routing: 173M active parameters
  - top-2 routing: 242M active parameters (default)
  - dense routing: 302M active parameters
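
As a minimal usage sketch: the repo id below and the exact location of `routing_k` on the config are assumptions (inferred from this card and its `stage003` successor listed above); the custom modeling code shipped with the repo is authoritative.

```python
# Hedged usage sketch -- repo id and routing_k location are assumptions;
# check the repo's custom modeling/config code for the real names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mesh-labs/v0.1-2x2-stage002"  # assumed id for this Stage 002 card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # the mesh architecture ships as custom_code
)

# Assumed knob: switch from the default top-2 routing (242M active params)
# to top-1 routing (173M active params).
model.config.routing_k = 1

prompt = "Explain what a mixture-of-experts model is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```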

## Here's how the mesh architecture works
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6747320df82ae35f0327cdd3/WRpS2T5KBMPbacobfh0bw.png)
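
For intuition, here is a toy sketch of the Neighbor Exchange idea in plain PyTorch. This is not the released implementation: the expert modules, the learned mixing weight, and the routing details below are simplified assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class ToyMeshLayer(nn.Module):
    """Toy 2x2 mesh layer: MoE-style routing plus Neighbor Exchange (illustrative only)."""

    def __init__(self, dim: int, grid: int = 2):
        super().__init__()
        self.grid = grid
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(grid * grid))
        self.router = nn.Linear(dim, grid * grid)
        self.mix = nn.Parameter(torch.tensor(0.1))  # how strongly experts borrow from neighbors

    def neighbors(self, idx: int):
        # Up/down/left/right neighbors of expert `idx` on the grid.
        r, c = divmod(idx, self.grid)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < self.grid and 0 <= nc < self.grid:
                yield nr * self.grid + nc

    def forward(self, x: torch.Tensor, routing_k: int = 2) -> torch.Tensor:
        # Run every expert (a real implementation would only run the routed ones).
        expert_out = torch.stack([e(x) for e in self.experts])  # (E, B, D)

        # Neighbor Exchange: each expert blends in the mean output of its grid neighbors.
        exchanged = torch.stack([
            (1 - self.mix) * expert_out[i]
            + self.mix * expert_out[list(self.neighbors(i))].mean(dim=0)
            for i in range(len(self.experts))
        ])  # (E, B, D)

        # Standard top-k routing over the exchanged expert outputs.
        weights = self.router(x).softmax(dim=-1)              # (B, E)
        top_w, top_i = weights.topk(routing_k, dim=-1)        # (B, k)
        top_w = top_w / top_w.sum(dim=-1, keepdim=True)
        picked = exchanged[top_i.T, torch.arange(x.size(0))]  # (k, B, D)
        return (top_w.T.unsqueeze(-1) * picked).sum(dim=0)    # (B, D)

layer = ToyMeshLayer(dim=16)
out = layer(torch.randn(4, 16), routing_k=2)  # e.g. top-2 routing
print(out.shape)  # torch.Size([4, 16])
```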

## Evaluation
<img src="https://cdn-uploads.huggingface.co/production/uploads/6747320df82ae35f0327cdd3/gYBBCS2d7mUCvSFHE8fBc.png" width="512px"/>

## Disclaimer
This small language model is just a proof of concept, paving the way for the final release, which is likely to happen in Q4 2025 and will include more models as well as better support from external libraries such as Transformers and llama.cpp.