Prince-1 committed
Commit e338850 · verified · 1 Parent(s): dee15ac

Add files using upload-large-folder tool

Files changed (4)
  1. .gitattributes +1 -0
  2. README.md +152 -0
  3. RKllm.txt +19 -0
  4. Seed-Coder-8B-Instruct.rkllm +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Seed-Coder-8B-Instruct.rkllm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,152 @@
+ ---
+ license: mit
+ base_model:
+ - unsloth/Seed-Coder-8B-Instruct
+ pipeline_tag: text-generation
+ library_name: rkllm
+ tags:
+ - unsloth
+ - rkllm
+ - rk3588
+ base_model_relation: quantized
+ ---
+ <div>
+ <p style="margin-top: 0;margin-bottom: 0;">
+ <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
+ </p>
+ <div style="display: flex; gap: 5px; align-items: center; ">
+ <a href="https://github.com/unslothai/unsloth/">
+ <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
+ </a>
+ <a href="https://discord.gg/unsloth">
+ <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
+ </a>
+ <a href="https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune">
+ <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
+ </a>
+ </div>
+ </div>
+
+
+ # Seed-Coder-8B-Instruct
+
+ <div align="left" style="line-height: 1;">
+ <a href="https://bytedance-seed-coder.github.io/" target="_blank" style="margin: 2px;">
+ <img alt="Homepage" src="https://img.shields.io/badge/Seed--Coder-Homepage-a468fe?color=a468fe&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+
+ <a href="https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf" target="_blank" style="margin: 2px;">
+ <img alt="Technical Report" src="https://img.shields.io/badge/(upcoming)-Technical%20Report-brightgreen?logo=arxiv&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+
+ <a href="https://huggingface.co/ByteDance-Seed" target="_blank" style="margin: 2px;">
+ <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-ByteDance%20Seed-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+
+ <a href="https://github.com/ByteDance-Seed/Seed-Coder/blob/master/LICENSE" style="margin: 2px;">
+ <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?color=f5de53&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ </div>
+
+
+ ## Introduction
+ We are thrilled to introduce Seed-Coder, a powerful, transparent, and parameter-efficient family of open-source code models at the 8B scale, featuring base, instruct, and reasoning variants. Seed-Coder contributes to promoting the evolution of open code models through the following highlights:
+
+ - **Model-centric:** Seed-Coder predominantly leverages LLMs instead of hand-crafted rules for code data filtering, minimizing manual effort in pretraining data construction.
+ - **Transparent:** We openly share detailed insights into our model-centric data pipeline, including methods for curating GitHub data, commits data, and code-related web data.
+ - **Powerful:** Seed-Coder achieves state-of-the-art performance among open-source models of comparable size across a diverse range of coding tasks.
+
+ <p align="center">
+ <img width="100%" src="imgs/seed-coder_intro_performance.png">
+ </p>
+
+ This repo contains the **Seed-Coder-8B-Instruct** model, which has the following features:
+ - Type: Causal language models
+ - Training Stage: Pretraining & Post-training
+ - Data Source: Public datasets, synthetic data
+ - Context Length: 32,768
+
+
+ ## Model Downloads
+ | Model Name | Length | Download | Notes |
+ |---------------------------------------------------------|--------|------------------------------------|-----------------------|
+ | Seed-Coder-8B-Base | 32K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base) | Pretrained on our model-centric code data. |
+ | 👉 **Seed-Coder-8B-Instruct** | 32K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) | Instruction-tuned for alignment with user intent. |
+ | Seed-Coder-8B-Reasoning | 64K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning) | RL trained to boost reasoning capabilities. |
+ | Seed-Coder-8B-Reasoning-bf16 | 64K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning-bf16) | RL trained to boost reasoning capabilities. |
+
+ ## Requirements
+ You will need to install the latest versions of `transformers` and `accelerate`:
+
+ ```bash
+ pip install -U transformers accelerate
+ ```
+
+ ## Quickstart
+
+ Here is a simple example demonstrating how to load the model and generate code with the Hugging Face `transformers` API:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ model_id = "ByteDance-Seed/Seed-Coder-8B-Instruct"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
+
+ messages = [
+     {"role": "user", "content": "Write a quick sort algorithm."},
+ ]
+
+ input_ids = tokenizer.apply_chat_template(
+     messages,
+     tokenize=True,
+     return_tensors="pt",
+     add_generation_prompt=True,
+ ).to(model.device)
+
+ outputs = model.generate(input_ids, max_new_tokens=512)
+ response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
+ print(response)
+
+ ```
+
+ ## Evaluation
+
+ Seed-Coder-8B-Instruct has been evaluated on a wide range of coding tasks, including code generation, code reasoning, code editing, and software engineering, achieving state-of-the-art performance among ~8B open-source models.
+
+ | Model | HumanEval | MBPP | MHPP | BigCodeBench (Full) | BigCodeBench (Hard) | LiveCodeBench (2410 – 2502) |
+ |:-----------------------------:|:---------:|:----:|:----:|:-------------------:|:-------------------:|:-------------------------:|
+ | CodeLlama-7B-Instruct | 40.9 | 54.0 | 6.7 | 25.7 | 4.1 | 3.6 |
+ | DeepSeek-Coder-6.7B-Instruct | 74.4 | 74.9 | 20.0 | 43.8 | 15.5 | 9.6 |
+ | CodeQwen1.5-7B-Chat | 83.5 | 77.7 | 17.6 | 43.6 | 15.5 | 3.0 |
+ | Yi-Coder-9B-Chat | 82.3 | 82.0 | 26.7 | 49.0 | 17.6 | 17.5 |
+ | Llama-3.1-8B-Instruct | 68.3 | 70.1 | 17.1 | 40.5 | 13.5 | 11.5 |
+ | OpenCoder-8B-Instruct | 83.5 | 79.1 | 30.5 | 50.9 | 18.9 | 17.1 |
+ | Qwen2.5-Coder-7B-Instruct | **88.4** | 83.5 | 26.7 | 48.8 | 20.3 | 17.3 |
+ | Qwen3-8B | 84.8 | 77.0 | 32.8 | 51.7 | 23.0 | 23.5 |
+ | Seed-Coder-8B-Instruct | 84.8 | **85.2** | **36.2** | **53.3** | **26.4** | **24.7** |
+
+
+ For detailed benchmark performance, please refer to our [📑 Technical Report](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf).
+
+ ## License
+
+ This project is licensed under the MIT License. See the [LICENSE file](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/LICENSE) for details.
+
+ <!-- ## Citation
+
+ If you find our work helpful, feel free to give us a cite.
+
+ ```
+ @article{zhang2025seedcoder,
+   title={Seed-Coder: Let the Code Model Curate Data for Itself},
+   author={Xxx},
+   year={2025},
+   eprint={2504.xxxxx},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   url={https://arxiv.org/abs/xxxx.xxxxx},
+ }
+ ``` -->
RKllm.txt ADDED
@@ -0,0 +1,19 @@
+ INFO: rkllm-toolkit version: 1.2.1b1
+ Loading model
+ Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:02<00:00, 1.35it/s]
+ /teamspace/studios/this_studio/RK
+ /teamspace/studios/this_studio/RK/../data/datasets.json
+ Building model
+ Building model: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 423/423 [00:44<00:00, 9.53it/s]
+ Build model success!
+ Exporting model to: Seed-Coder-8B-Instruct.rkllm
+ INFO: Setting chat_template to "<[begin▁of▁sentence]>system\nYou are an AI programming assistant, utilizing the Seed-Coder model, developed by ByteDance Seed, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.\n\n<[end▁of▁sentence]><[begin▁of▁sentence]>user\n[content]<[end▁of▁sentence]><[begin▁of▁sentence]>assistant\n"
+ INFO: Setting token_id of bos to 0
+ INFO: Setting token_id of eos to 2
+ INFO: Setting token_id of sep to 6
+ INFO: Setting token_id of pad to 1
+ Converting model: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 291/291 [00:00<00:00, 2097152.00it/s]
+ INFO: Setting max_context_limit to 4096
+ INFO: Exporting the model, please wait ....
+ [=================================================>] 681/681 (100%)
+ INFO: Model has been saved to Seed-Coder-8B-Instruct.rkllm!
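
The log above comes from an export with rkllm-toolkit 1.2.1b1. For orientation only, a conversion of this kind is typically driven by a short script against the toolkit's Python API, following the `RKLLM.load_huggingface`, `build`, `export_rkllm` pattern from Rockchip's published rknn-llm examples. The sketch below is an assumption along those lines; the quantization dtype, dataset path, and other settings are not recorded in the log, so they are illustrative rather than the exact configuration used for this file.

```python
# Sketch of an rkllm-toolkit export script (assumed settings, not the exact ones used here).
from rkllm.api import RKLLM

MODEL_DIR = "."                               # directory holding the HF checkpoint (4 shards in the log)
EXPORT_PATH = "Seed-Coder-8B-Instruct.rkllm"  # output file named in the log

llm = RKLLM()

# Load the Hugging Face checkpoint.
if llm.load_huggingface(model=MODEL_DIR) != 0:
    raise SystemExit("load_huggingface failed")

# Quantize and build for the RK3588 NPU; "w8a8" and the calibration dataset path
# are assumptions (the log only shows ../data/datasets.json being referenced).
if llm.build(do_quantization=True,
             quantized_dtype="w8a8",
             target_platform="rk3588",
             dataset="../data/datasets.json") != 0:
    raise SystemExit("build failed")

# Write the .rkllm file consumed by the on-device RKLLM runtime.
if llm.export_rkllm(EXPORT_PATH) != 0:
    raise SystemExit("export_rkllm failed")
```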
Seed-Coder-8B-Instruct.rkllm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e197e04f718c6300a7487f5a3b9fda85185d5f7dc84674182b2eeaa2920dffa9
+ size 16533993358
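
The three lines above are a Git LFS pointer, not the model itself. After downloading the full ~16.5 GB artifact, it can be checked against the `oid` recorded in the pointer; a minimal sketch in Python, assuming the file keeps the repository's name:

```python
# Verify the downloaded .rkllm file against the SHA-256 recorded in the LFS pointer above.
import hashlib

EXPECTED = "e197e04f718c6300a7487f5a3b9fda85185d5f7dc84674182b2eeaa2920dffa9"

h = hashlib.sha256()
with open("Seed-Coder-8B-Instruct.rkllm", "rb") as f:
    # Hash in 1 MiB chunks to avoid loading the whole file into memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == EXPECTED, "checksum mismatch"
print("checksum OK")
```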