myselfsaurabh committed · Commit 8d7ac55 · verified · 1 Parent(s): 6d33066

Create README.md

updated model card

Files changed (1)
  1. README.md +93 -0

README.md ADDED
@@ -0,0 +1,93 @@
---
language:
- en
license: apache-2.0
tags:
- gpt-oss
- openai
- mxfp4
- mixture-of-experts
- causal-lm
- text-generation
- cpu-gpu-offload
- colab
datasets:
- openai/gpt-oss-training-data # Placeholder; replace if known
pipeline_tag: text-generation
---

# Model Card for gpt-oss-20b-offload

This is a CPU+GPU offload‑ready copy of **OpenAI’s GPT‑OSS‑20B**, an open‑weight Mixture‑of‑Experts large language model released in 2025.
It retains OpenAI’s original **MXFP4 quantization** and is configured for **memory‑efficient loading in Colab or similar GPU environments**.
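Because offload splits the weights between GPU VRAM and system RAM, it helps to know both budgets before loading. A small, model‑agnostic check (plain PyTorch; nothing here is specific to GPT‑OSS) can guide the `max_memory` map used later in this card:

```python
# Optional sanity check before loading: report the GPU and its VRAM so
# the max_memory budgets used later in this card can be sized sensibly.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1024**3:.1f} GiB VRAM")
else:
    print("No GPU visible; loading would fall back to CPU only (slow).")
```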

---

## Model Details

### Model Description

- **Developed by:** OpenAI
- **Shared by:** saurabh-srivastava (Hugging Face user)
- **Model type:** Decoder‑only transformer (Mixture‑of‑Experts) for causal language modeling
- **Active experts per token:** 4 (of 32 total experts; see the config check below)
- **Language(s):** English (with capability for multilingual text generation)
- **License:** Apache 2.0 (per OpenAI GPT‑OSS release)
- **Finetuned from model:** `openai/gpt-oss-20b` (no additional fine‑tuning performed)
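The expert counts above can be read off the checkpoint’s configuration. Printing the full config is the safest way to verify them, since exact attribute names vary between model families:

```python
# Inspect the MoE setup from the model config; look for the
# total-expert and experts-per-token fields in the printed output.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("openai/gpt-oss-20b")
print(config)
```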

### Model Sources

- **Original model repository:** [https://huggingface.co/openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
- **OpenAI announcement:** [https://openai.com/index/introducing-gpt-oss/](https://openai.com/index/introducing-gpt-oss/)

---

## Uses

### Direct Use
- Text generation, summarization, and question answering (a minimal pipeline sketch follows this list).
- Running inference in low‑VRAM environments using CPU+GPU offload.
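For quick experiments, the high‑level `pipeline` API works as well. A minimal sketch, assuming the placeholder repo id below is replaced with the real one:

```python
# Minimal direct-use sketch via the transformers pipeline API.
# "your-username/gpt-oss-20b-offload" is a placeholder repo id.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="your-username/gpt-oss-20b-offload",
    device_map="auto",  # let accelerate split layers across GPU and CPU
)
result = generator("Summarize what CPU+GPU offload does:", max_new_tokens=60)
print(result[0]["generated_text"])
```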

### Downstream Use
- Fine‑tuning for domain‑specific assistants (a hedged LoRA sketch follows this list).
- Integration into chatbots or generative applications.
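One common route is parameter‑efficient fine‑tuning with the `peft` library. The sketch below is illustrative only: the target module names are assumptions and must be checked against the actual GPT‑OSS layer names, and training on top of MXFP4 weights may require a dequantized or adapter‑compatible setup.

```python
# Hedged LoRA fine-tuning sketch using peft; target_modules are an
# assumption -- inspect model.named_modules() for the real names.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "your-username/gpt-oss-20b-offload",  # placeholder repo id
    torch_dtype="auto",
    device_map="auto",
)
lora_config = LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only adapter weights are trainable
```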

### Out‑of‑Scope Use
- Generating harmful, biased, or false information.
- Any high‑stakes decision‑making without human oversight.

---

## Bias, Risks, and Limitations

Like all large language models, GPT‑OSS‑20B can:
- Produce factually incorrect or outdated information.
- Reflect biases present in its training data.
- Generate harmful or unsafe content if prompted.

### Recommendations
- Always use with a moderation layer (an illustrative wrapper is sketched after this list).
- Validate outputs for factual accuracy before use in production.
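As an illustration only, a moderation layer can be as simple as a wrapper around the generated text. The deny‑list and function below are hypothetical; production systems should use a dedicated moderation model or API rather than keyword matching.

```python
# Hypothetical moderation wrapper: withhold generations that match a
# simple deny-list. Keyword matching is illustrative, not production-grade.
DENY_LIST = {"example_banned_term"}

def moderate(text: str) -> str:
    if any(term in text.lower() for term in DENY_LIST):
        return "[output withheld by moderation layer]"
    return text

print(moderate("A harmless generated sentence."))
```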

---

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-username/gpt-oss-20b-offload"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load with CPU+GPU offload: layers that exceed the GPU budget in
# max_memory are automatically placed on CPU by accelerate.
max_mem = {0: "20GiB", "cpu": "64GiB"}
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # keep the checkpoint's native dtypes
    device_map="auto",
    max_memory=max_mem,
)

# Move inputs to the model's input device rather than a hard-coded GPU
# index, so this also works if everything lands on CPU.
inputs = tokenizer("Explain GPT-OSS-20B in one paragraph.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
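Adjust the budgets in `max_mem` to your hardware: on a 16 GB Colab T4, for example, a smaller GPU budget (such as `"12GiB"`) leaves headroom for activations, and anything that does not fit is offloaded to CPU at the cost of generation speed.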