qt-spyro-hf committed · Commit bb4c8df · verified · 1 Parent(s): d544a68

Update README.md

Files changed (1): README.md (+9 −7)
README.md CHANGED

@@ -12,16 +12,18 @@ tags:
 ## Description:
 CodeLlama-13B-QML is a large language model customized by the Qt Company for Fill-In-The-Middle code completion tasks in the QML programming language, especially for Qt Quick Controls compliant with Qt 6 releases. The CodeLlama-13B-QML model is designed for companies and individuals that want to self-host their LLM for HMI (Human Machine Interface) software development instead of relying on third-party hosted LLMs. It can be run via cloud services or locally, via Ollama.
 
-This model reaches a score of 86% on the QML100 Fill-In-The-Middle code completion benchmark for Qt 6-compliant code. In comparison, other models scored:
+This model reaches a score of 89% on the QML100 Fill-In-The-Middle code completion benchmark for Qt 6-compliant code. In comparison, other models scored:
 
-* CodeLlama-7B-QML: 79%
+* CodeLlama-7B-QML: 80%
+* DeepSeek V3: 87%
+* Claude 4 Sonnet: 81%
 * Claude 3.7 Sonnet: 76%
-* Claude 3.5 Sonnet: 68%
+* Codestral: 69%
 * CodeLlama 13B: 66%
 * GPT-4o: 62%
 * CodeLlama 7B: 61%
 
-This model was fine-tuned on raw data from over 5000 human-created QML code snippets using the LoRA fine-tuning method. CodeLlama-13B-QML is not optimised for the creation of Qt5-release compliant, C++, or Python code.
+This model was fine-tuned on raw data from over 5500 human-created QML code snippets using the LoRA fine-tuning method. CodeLlama-13B-QML is not optimised for the creation of Qt5-release compliant, C++, or Python code.
 
 ## Terms of use:
 By accessing this model, you are agreeing to the Llama 2 terms and conditions of the [license](https://github.com/meta-llama/llama/blob/main/LICENSE), [acceptable use policy](https://github.com/meta-llama/llama/blob/main/USE_POLICY.md) and [Meta’s privacy policy](https://www.facebook.com/privacy/policy/). By using this model, you are furthermore agreeing to the [Qt AI Model terms & conditions](https://www.qt.io/terms-conditions/ai-services/model-use).

@@ -61,8 +63,8 @@ curl -X POST http://localhost:11434/api/generate -d '{
   "Prompt": "<SUF>\n    title: qsTr(\"Hello World\")\n}<PRE>import QtQuick\n\nWindow {\n    width: 640\n    height: 480\n    visible: true\n<MID>",
   "stream": false,
   "temperature": 0,
-  "top_p": 0.9,
-  "repeat_penalty": 1.1,
+  "top_p": 1,
+  "repeat_penalty": 1.05,
   "num_predict": 500,
   "stop": ["<SUF>", "<PRE>", "</PRE>", "</SUF>", "< EOT >", "\\end", "<MID>", "</MID>", "##"]
 }'

@@ -83,7 +85,7 @@ If there is no suffix, please use:
 The HuggingFace repository contains all necessary components including the .safetensors files and tokenizer configurations, giving you everything needed to modify the model across various environments and better suit your specific requirements or train it on your custom dataset.
 
 ## Model Version:
-v2.0
+v3.0
 
 ## Attribution:
 CodeLlama-13B is a model of the Llama 2 family. Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
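The curl call in the diff posts a Fill-In-The-Middle request to a local Ollama server with the updated sampling parameters (top_p 1, repeat_penalty 1.05). A minimal Python sketch of the same payload follows; it only builds the request dict, the `build_fim_prompt`/`build_request` helpers and the model name `codellama-13b-qml` are hypothetical (use whatever name you pulled the model under), and the capitalised `"Prompt"` key simply mirrors the README's curl example:

```python
import json
import urllib.request

def build_fim_prompt(prefix: str, suffix: str) -> str:
    # FIM layout shown in the README: suffix first, then prefix,
    # then <MID>, where the completion is generated.
    return f"<SUF>{suffix}<PRE>{prefix}<MID>"

def build_request(prefix: str, suffix: str) -> dict:
    # Sampling values mirror the updated README (top_p 1, repeat_penalty 1.05).
    return {
        "model": "codellama-13b-qml",  # assumed local Ollama model name
        "Prompt": build_fim_prompt(prefix, suffix),
        "stream": False,
        "temperature": 0,
        "top_p": 1,
        "repeat_penalty": 1.05,
        "num_predict": 500,
        "stop": ["<SUF>", "<PRE>", "</PRE>", "</SUF>", "< EOT >",
                 "\\end", "<MID>", "</MID>", "##"],
    }

payload = build_request(
    prefix='import QtQuick\n\nWindow {\n    width: 640\n'
           '    height: 480\n    visible: true\n',
    suffix='\n    title: qsTr("Hello World")\n}',
)
body = json.dumps(payload).encode()
# With a running Ollama server, POST `body` to
# http://localhost:11434/api/generate, e.g.:
# urllib.request.urlopen(urllib.request.Request(
#     "http://localhost:11434/api/generate", data=body,
#     headers={"Content-Type": "application/json"}))
```

The stop list matches the README's curl example, so generation ends as soon as the model tries to emit another FIM marker.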