Update README.md

tags:
- symbioticai
- llm
- Symbols
---

# SymbioticLM-14B

**Model Type**: Hybrid Symbolic–Transformer with Persistent Memory
**Base Model**: Qwen-14B
**Framework**: PyTorch + HuggingFace Transformers
**Purpose**: Full-scale cognitive reasoning model with self-organizing memory and generative symbolic evolution

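Since the card lists PyTorch + HuggingFace Transformers as the framework, loading should follow the standard `from_pretrained` path. The sketch below is a minimal, unverified example: the repo id is a placeholder and `trust_remote_code=True` is an assumption made because the symbolic modules are custom code.

```python
# Minimal load-and-generate sketch. The repo id is a placeholder and
# trust_remote_code is an assumption (the symbolic modules are custom code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "SymbioticAI/SymbioticLM-14B"  # placeholder, not a confirmed repo id

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,   # transformer weights; the symbolic memory bank itself is FP32
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Sketch a proof plan for the dominated convergence theorem."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Sampling defaults (top-p, temperature) come from generation_config.json.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
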
---

## Overview

SymbioticLM-14B is a state-of-the-art, 17.8-billion-parameter symbolic–transformer hybrid model that tightly couples high-capacity neural representation with structured symbolic cognition. Designed to match or exceed the performance of top-tier LLMs in symbolic domains, it supports persistent memory, entropic recall, multi-stage symbolic routing, and self-organizing knowledge structures.

This model is well suited to advanced reasoning agents, research assistants, and symbolic math/code generation systems.

---

## Architecture Highlights

- **Backbone**: Qwen-14B transformer with rotary embeddings + FlashAttention
- **Symbolic Dim**: 8192
- **Symbolic Modules**:
  - ThoughtDynamicsLNN (multi-head LSTM attention)
  - LiquidThoughtProcessor
  - CrystallineProcessor (DNAConv GNN)
  - HelicalDNAProcessor (linear helical encoding)
- **Memory**: 4096 symbolic states stored in FP32, retrieved using entropy + contextual similarity (see the sketch after this list)
- **Dream Mode**: Background symbolic simulation for open-ended cognition
- **Router**: Intent classifier + entropy gating for processor path selection

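The entropy + contextual-similarity retrieval named above can be pictured with a small sketch. Everything in it — tensor shapes, the blend weight `alpha`, and the `retrieve` helper — is illustrative and hypothetical, not the model's actual implementation:

```python
# Illustrative sketch of retrieving symbolic memory slots by blending
# contextual (cosine) similarity with a per-slot entropy term.
# Shapes, the scoring formula, and alpha are assumptions, not SymbioticLM internals.
import torch
import torch.nn.functional as F

def retrieve(memory: torch.Tensor, query: torch.Tensor, k: int = 8, alpha: float = 0.7):
    """memory: (N, D) FP32 bank of symbolic states; query: (D,) context vector."""
    # Contextual similarity between the query and every stored slot.
    sim = F.cosine_similarity(memory, query.unsqueeze(0), dim=-1)        # (N,)
    # Entropy of each slot, treating its softmax over features as a distribution.
    p = F.softmax(memory, dim=-1)                                        # (N, D)
    entropy = -(p * p.clamp_min(1e-9).log()).sum(dim=-1)                 # (N,)
    entropy = entropy / entropy.max().clamp_min(1e-9)                    # scale to [0, 1]
    score = alpha * sim + (1 - alpha) * entropy                          # blended retrieval score
    top = score.topk(k)
    return memory[top.indices], top.values

# Sizes quoted in this card: 4096 slots, symbolic dim 8192.
bank = torch.randn(4096, 8192, dtype=torch.float32)
context = torch.randn(8192)
slots, scores = retrieve(bank, context)
```
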
---

## Files Included

| File                      | Description                                                |
|---------------------------|------------------------------------------------------------|
| `model.bin`               | Transformer weights (stored with Git LFS)                  |
| `model.safetensors`       | Memory-safe weights, optimized for loading                 |
| `memory.pt`               | Bank of 4096 symbolic memory vectors                       |
| `config.json`             | Model and architecture metadata                            |
| `generation_config.json`  | Decoding settings (top-p, temperature)                     |
| `tokenizer.json`          | Full tokenizer with symbolic tag support                   |
| `added_tokens.json`       | Symbolic tags such as `<D_LIM>`, `<PROOF>`, `<BY_MEASURE>` |
| `special_tokens_map.json` | Special-token mapping for the tokenizer                    |

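A short sketch of how the bundled files might be inspected. The internal layout of `memory.pt` is an assumption (it may be a raw tensor or a dict of tensors, so both cases are handled), and the tokenizer is loaded from a local checkout of this repository:

```python
# Inspect the symbolic memory bank and the added symbolic tags.
# The layout of memory.pt is assumed, not documented in this card.
import torch
from transformers import AutoTokenizer

memory = torch.load("memory.pt", map_location="cpu")
if isinstance(memory, dict):
    for name, value in memory.items():
        shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
        print(name, shape)
else:
    print("memory bank:", tuple(memory.shape), memory.dtype)  # expected around (4096, 8192), float32

tokenizer = AutoTokenizer.from_pretrained(".")  # local checkout containing tokenizer.json etc.
for tag in ["<D_LIM>", "<PROOF>", "<BY_MEASURE>"]:
    print(tag, "->", tokenizer.convert_tokens_to_ids(tag))
```
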
---

## Intended Uses

- Multi-step conversational agents with persistent memory
- Long-form symbolic theorem generation and proof planning
- Scientific dialogue, symbolic simulation, and math/code synthesis
- Reasoning in fuzzy, discontinuous, or non-smooth problem domains

---

## Limitations

- The symbolic memory bank requires curation and seeding for maximum utility (a hypothetical seeding sketch follows this list)
- Symbolic cognition is not instruction-tuned for general QA
- FlashAttention and the symbolic modules increase VRAM usage during generation

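There is no documented seeding API in this card, so the sketch below is purely hypothetical: it shows one way a curated symbolic state could be appended to the FP32 memory bank and written back, assuming `memory.pt` holds a single `(num_slots, 8192)` tensor.

```python
# Hypothetical memory-seeding sketch; the real seeding interface (if any)
# is not documented here, and the memory.pt layout is assumed.
import torch
import torch.nn.functional as F

bank = torch.load("memory.pt", map_location="cpu")          # assumed (num_slots, 8192) float32
seed = torch.randn(1, bank.shape[-1], dtype=torch.float32)  # stand-in for an encoded symbolic state
seed = F.normalize(seed, dim=-1)                            # keep the new slot on a comparable scale

bank = torch.cat([bank, seed], dim=0)
torch.save(bank, "memory.pt")
print("memory bank now holds", bank.shape[0], "slots")
```
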
---

## Citations

Please cite "SymbioticLM" when using the symbolic memory components in research or applications.