n1ck-guo committed · commit fd349f7 (verified) · parent: 97a051f

Create README.md

---
base_model:
- openai/gpt-oss-20b
---

## Model Details

This model is a GGUF q4_k_s quantization of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b), generated with the [intel/auto-round](https://github.com/intel/auto-round) algorithm.

Please follow the license of the original model.
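
For intuition about what a q4_k_s file holds: weights are stored as 4-bit integers in small blocks, each with its own scale. Below is a simplified round-trip sketch of symmetric 4-bit group quantization in Python. It is illustrative only; the real q4_k_s layout uses 256-element super-blocks with quantized sub-block scales.

```python
import numpy as np

def quantize_4bit_groups(w, group_size=32):
    """Toy symmetric 4-bit group quantization: one scale per group."""
    groups = w.reshape(-1, group_size)
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0                    # guard all-zero groups
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return (q * scale).reshape(-1)

w = np.random.default_rng(0).normal(size=256).astype(np.float32)
q, scale = quantize_4bit_groups(w)
w_hat = dequantize(q, scale)

# Each reconstructed value is off by at most half a quantization step (0.5 * scale).
print("max abs error:", float(np.abs(w - w_hat).max()))
```

Storage drops from 32 bits to roughly 4 bits per weight plus one scale per group, at the cost of the rounding error printed above.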

## How To Use

### llama.cpp inference

~~~bash
llama-cli -hf Intel/gpt-oss-20b-gguf-q4ks-AutoRound
~~~

Example output (truncated):

~~~
> Write a quick sort algorithm.
        // put pivot into final place
        T tmp = a[left]; a[left] = a[j]; a[j] = tmp;
        return j;
    }
```

---

## 4. JavaScript (in‑place)

```javascript
function quickSort(arr, left = 0, right = arr.length - 1) {
  if (left >= right) return;

  const pivot = arr[left];
  let i = left + 1, j = right;

  while (true) {
    while (i <= right && arr[i] < pivot) i++;
    while (j >= left + 1 && arr[j] > pivot) j--;
    if (i >= j) break;
    [arr[i], arr[j]] = [arr[j], arr[i]];
  }
  [arr[left], arr[j]] = [arr[j], arr[left]]; // pivot in place

  quickSort(arr, left, j - 1);
  quickSort(arr, j + 1, right);
}

// ---- Example ----------------------------------------------------
let data = [3, 6, 8, 10, 1, 2, 1];
quickSort(data);
console.log(data); // [1, 1, 2, 3, 6, 8, 10]
```

---

### Quick‑Sort Tips

| Problem | Fix |
|---------|-----|
| **Worst‑case O(n²)** when the pivot is always the smallest/largest element | Pick the middle element or use a median‑of‑three pivot. |
| **Stack overflow** on very deep recursion | Convert recursion to iteration (explicit stack) or switch to an iterative algorithm. |
| **Unstable sorting** | If stability matters, use a stable algorithm (e.g., merge‑sort) or add an index to each element and compare it as a tie‑breaker. |
| **Many duplicates** | Use "Dutch‑flag" partitioning that groups `< pivot`, `== pivot`, `> pivot`. |

---

### Final Word

Quick‑sort is a classic divide‑and‑conquer algorithm that works well for average‑case sorting.
The snippets above are short, in‑place, and can be dropped into most code bases.

Happy coding!

~~~

### Generate the model

Here is a sample command to reproduce the model:

```bash
auto_round --format gguf:q4_k_s --iters 0 --nsamples 512 --model openai/gpt-oss-20b --output_dir tmp_autoround
```
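
With `--iters 0` the tuning loop is skipped and rounding falls back to plain round-to-nearest. When `iters > 0`, AutoRound instead learns a per-weight rounding offset with signed gradient descent against calibration data, as described in the cited paper. The following toy numpy sketch illustrates that idea only; it is not Intel's implementation, and the layer shapes, learning rate, and step count are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))    # toy layer weights
X = rng.normal(size=(16, 64))   # toy calibration activations

scale = np.abs(W).max(axis=1, keepdims=True) / 7.0  # symmetric 4-bit scale per row
grid = W / scale                                    # weights in integer-grid units

def output_error(v):
    """Layer-output reconstruction error for rounding offsets v."""
    Wq = np.clip(np.round(grid + v), -8, 7) * scale
    return float(np.mean(((W - Wq) @ X) ** 2))

v = np.zeros_like(W)                       # learnable offsets, kept in [-0.5, 0.5]
best_v, best = v.copy(), output_error(v)   # v = 0 is plain round-to-nearest

lr = 0.005
for _ in range(300):
    Wq = np.clip(np.round(grid + v), -8, 7) * scale
    # Straight-through estimate of dLoss/dv: treat round() as identity.
    g = ((Wq - W) @ X) @ X.T * scale
    v = np.clip(v - lr * np.sign(g), -0.5, 0.5)    # signed gradient step
    if output_error(v) < best:
        best_v, best = v.copy(), output_error(v)

print(f"round-to-nearest error: {output_error(np.zeros_like(W)):.4f}, tuned: {best:.4f}")
```

The signed update keeps steps bounded, and clipping keeps each offset within half a quantization step, so the result is still a valid rounding of the original weights.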

## Ethical Considerations and Limitations

The model can produce factually incorrect output and should not be relied on for factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.

Therefore, developers should perform safety testing before deploying any application of the model.

## Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

Here is a useful link to learn more about Intel's AI software:

- Intel Neural Compressor: [link](https://github.com/intel/neural-compressor)

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Cite

~~~
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
~~~

[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)