dereklck committed
Commit e0c54cd · verified · 1 Parent(s): 1be7e21

Update README.md

Files changed (1)
  1. README.md +40 -20

README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-base_model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit
 tags:
 - text-generation-inference
 - transformers
@@ -13,11 +13,11 @@ language:
 
 ---
 
-# kubectl Operator Model
 
 - **Developed by:** dereklck
 - **License:** Apache-2.0
-- **Fine-tuned from model:** [unsloth/Llama-3.2-1B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-bnb-4bit)
 - **Model type:** GGUF (compatible with Ollama)
 - **Language:** English
 
@@ -27,7 +27,10 @@ This Llama-based model was fine-tuned to assist users with Kubernetes commands a
 2. **Providing concise explanations about Kubernetes** for general queries.
 3. **Politely requesting additional information** if the instruction is incomplete or ambiguous.
 
 The model was trained efficiently using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
 ---
 
 ## How to Use the Model
@@ -92,17 +95,17 @@ This section provides instructions on how to run the model using Ollama and the
 Open your terminal and run the following command to create the model:
 
 ```bash
-ollama create kubernetes_operator -f Modelfile
 ```
 
-This command tells Ollama to create a new model named `kubernetes_operator` using the configuration specified in `Modelfile`.
 
 3. **Run the Model**
 
 Start interacting with your model:
 
 ```bash
-ollama run kubernetes_operator
 ```
 
 This will initiate the model and prompt you for input based on the template provided.
@@ -110,7 +113,7 @@ This section provides instructions on how to run the model using Ollama and the
 Alternatively, you can provide an instruction directly:
 
 ```bash
-ollama run kubernetes_operator "Create a namespace called 'production'."
 ```
 
 **Example Output:**
@@ -128,8 +131,8 @@ This section provides instructions on how to run the model using Ollama and the
 The model assists users by:
 
 - **Generating accurate `kubectl` commands** based on natural language descriptions.
-- **Providing brief explanations about Kubernetes** for general queries.
-- **Requesting additional information** if the instruction is incomplete or ambiguous.
 
 ### Intended Users
 
@@ -139,21 +142,35 @@ The model assists users by:
 
 ### Training Process
 
-- **Base Model:** Unsloth's Llama-3.2-1B-Instruct-bnb-4bit
 - **Fine-tuning:** Leveraged the Unsloth framework and Hugging Face's TRL library for efficient training.
-- **Training Data:** Customized dataset focused on Kubernetes operations and `kubectl` command usage, containing approximately 200 entries.
 
 ---
 
 ## Model Features
 
-### Command Generation in Bash Format
-
-When the model generates CLI commands, it provides them in `bash` format, enclosed within code blocks for easy execution and clarity. This helps users to copy and paste the commands directly into their terminal.
-
-### Examples
-
-#### **Example 1: Generating a Command**
 
 **Instruction:**
 
@@ -169,7 +186,7 @@ kubectl create namespace development
 
 ---
 
-#### **Example 2: Handling Ambiguity**
 
 **Instruction:**
 
@@ -185,7 +202,7 @@ Which pod's logs would you like to retrieve?
 
 ---
 
-#### **Example 3: Providing Explanations**
 
 **Instruction:**
 
@@ -203,8 +220,7 @@ A Deployment provides declarative updates for Pods and ReplicaSets, allowing you
 
 ## Limitations and Considerations
 
-- **Accuracy:** The model may occasionally produce incorrect or suboptimal commands. Always review the output before execution.
-- **Hallucinations:** In rare cases, the model might generate irrelevant or incorrect information. If the response seems off-topic, consider rephrasing your instruction.
 - **Security:** Be cautious when executing generated commands, especially in production environments.
 
 ---
@@ -218,4 +234,8 @@ We welcome any comments or participation to improve the model and dataset. If yo
 
 ---
 
-**Note:** This model provides assistance in generating `kubectl` commands based on user input. Always verify the generated commands in a safe environment before executing them in a production cluster.
 ---
+base_model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit
 tags:
 - text-generation-inference
 - transformers
 
 ---
 
+# Hybrid Kubernetes Feature Model
 
 - **Developed by:** dereklck
 - **License:** Apache-2.0
+- **Fine-tuned from model:** [unsloth/Llama-3.2-3B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct-bnb-4bit)
 - **Model type:** GGUF (compatible with Ollama)
 - **Language:** English
 
 2. **Providing concise explanations about Kubernetes** for general queries.
 3. **Politely requesting additional information** if the instruction is incomplete or ambiguous.
 
+**Update:** Compared to the previous 1B model, the **3B model significantly reduces hallucinations**, so users can expect improved accuracy and reliability.
+
 The model was trained efficiently using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
+
 ---
 
 ## How to Use the Model
 
 Open your terminal and run the following command to create the model:
 
 ```bash
+ollama create hybrid_kubernetes_feature_model -f Modelfile
 ```
 
+This command tells Ollama to create a new model named `hybrid_kubernetes_feature_model` using the configuration specified in `Modelfile`.
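The `Modelfile` referenced here is not included in the diff. A minimal sketch of what it might contain, assuming the GGUF weights have been downloaded locally (the filename and parameter values below are illustrative assumptions, not the author's actual configuration):

```
# Hypothetical Modelfile -- point FROM at your downloaded GGUF file
FROM ./hybrid_kubernetes_feature_model.gguf

# A low temperature keeps generated kubectl commands more deterministic (illustrative value)
PARAMETER temperature 0.2
```

`FROM` and `PARAMETER` are standard Ollama Modelfile directives; a `TEMPLATE` block matching the model's prompt format would normally be added as well.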
 
 3. **Run the Model**
 
 Start interacting with your model:
 
 ```bash
+ollama run hybrid_kubernetes_feature_model
 ```
 
 This will initiate the model and prompt you for input based on the template provided.
 
 Alternatively, you can provide an instruction directly:
 
 ```bash
+ollama run hybrid_kubernetes_feature_model "Create a namespace called 'production'."
 ```
 
 **Example Output:**
 
 The model assists users by:
 
 - **Generating accurate `kubectl` commands** based on natural language descriptions.
+- **Providing concise explanations about Kubernetes** for general queries.
+- **Politely requesting additional information** if the instruction is incomplete or ambiguous.
 
 ### Intended Users
 
 
 
 ### Training Process
 
+- **Base Model:** Unsloth's Llama-3.2-3B-Instruct-bnb-4bit
 - **Fine-tuning:** Leveraged the Unsloth framework and Hugging Face's TRL library for efficient training.
+- **Training Data:** Customized dataset focused on Kubernetes operations and features, including `kubectl` command usage and general Kubernetes concepts, containing approximately 1,500 entries.
 
 ---
 
 ## Model Features
 
+### 1. Command Generation in Bash Format
+
+When the model generates CLI commands, it provides them in `bash` format, enclosed within code blocks for easy execution and clarity. This allows users to copy and paste the commands directly into their terminal.
+
+### 2. Handling Ambiguity with Polite Clarifications
+
+If the instruction is incomplete or ambiguous, the model politely asks for the specific missing information instead of making assumptions. This ensures accuracy and prevents the execution of incorrect commands.
+
+### 3. Providing Concise Explanations
+
+For general Kubernetes questions, the model offers concise and accurate explanations without unnecessary detail, helping users understand concepts quickly.
+
+### 4. Enhanced Accuracy with the 3B Model
+
+The transition to the 3B model has significantly **reduced hallucinations** that sometimes occurred in the 1B model, providing more accurate and reliable responses and improving the overall user experience.
+
+---
+
+## Examples
+
+### Example 1: Generating a Command
 
 **Instruction:**
 
 
 ---
 
+### Example 2: Handling Ambiguity
 
 **Instruction:**
 
 
 
 ---
 
+### Example 3: Providing Explanations
 
 **Instruction:**
 
 
 
 ## Limitations and Considerations
 
+- **Accuracy:** While the 3B model improves accuracy, it may still occasionally produce incorrect or suboptimal commands. Always review the output before execution.
 - **Security:** Be cautious when executing generated commands, especially in production environments.
 
 ---
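One practical way to review a generated command before it touches a live cluster is a client-side dry run, assuming `kubectl` is installed and configured (the namespace name below is taken from the earlier example):

```bash
# Validate the generated command locally without creating anything in the cluster
kubectl create namespace production --dry-run=client -o yaml
```

`--dry-run=client` prints the object that would be created, so the output can be inspected before the command is run for real.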
 
 ---
 
+**Note:** This model provides assistance in generating Kubernetes commands and explanations based on user input. Always verify the generated commands in a safe environment before executing them in a production cluster.