Update README.md
README.md (changed):
```diff
@@ -12,15 +12,17 @@ tags:
 ## Description:
 CodeLlama-7B-QML is a fine-tuned model for code completion tasks in Qt's Markup Language (QML). The CodeLlama-7B-QML model is designed for software developers who want to run their code completion LLM locally on their computer.
 
-This model reaches a score of …
-- CodeLlama-13B-QML: …
+This model reaches a score of 80% on the QML100 Fill-In-the-Middle code completion benchmark for Qt 6-compliant code. In comparison, other models scored:
+- CodeLlama-13B-QML: 89%
+- DeepSeek V3: 87%
+- Claude 4 Sonnet: 81%
 - Claude 3.7 Sonnet: 76%
+- Codestral: 69%
 - CodeLlama 13B: 66%
 - GPT-4o: 62%
 - CodeLlama 7B: 61%
 
-This model was fine-tuned based on raw data from over …
+This model was fine-tuned based on raw data from over 5500 human-created QML code snippets using the LoRA fine-tuning method. CodeLlama-7B-QML is not optimized for generating Qt 5-compliant, C++, or Python code.
 
 ## Terms of use:
 By accessing this model, you are agreeing to the Llama 2 terms and conditions of the [license](https://github.com/meta-llama/llama/blob/main/LICENSE), [acceptable use policy](https://github.com/meta-llama/llama/blob/main/USE_POLICY.md) and [Meta’s privacy policy](https://www.facebook.com/privacy/policy/). By using this model, you are furthermore agreeing to the [Qt AI Model terms & conditions](https://www.qt.io/terms-conditions/ai-services/model-use).

@@ -51,8 +53,9 @@ curl -X POST http://localhost:11434/api/generate -d '{
   "model": "theqtcompany/codellama-7b-qml",
   "prompt": "<SUF>\n title: qsTr(\"Hello World\")\n}<PRE>import QtQuick\n\nWindow {\n width: 640\n height: 480\n visible: true\n<MID>",
   "stream": false,
-  "temperature": 0
-  "top_p": …
+  "temperature": 0,
+  "top_p": 1,
+  "repetition_penalty": 1.05,
   "num_predict": 500,
   "stop": ["<SUF>", "<PRE>", "</PRE>", "</SUF>", "<EOT>", "\\end", "<MID>", "</MID>", "##"]
 }'

@@ -74,7 +77,7 @@ The HuggingFace repository contains all necessary components including the .safe
 
 ## Model Version:
-…
+v2.0
 
 ## Attribution:
 CodeLlama-7B is a model of the Llama 2 family. Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
```
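For illustration, the request body from the curl example above can be assembled in Python. This is a sketch, not part of the model card: `build_fim_prompt` is a hypothetical helper that mirrors the card's prompt ordering (suffix, then prefix, then the `<MID>` marker), and nesting the sampling parameters under `"options"` (with Ollama's `repeat_penalty` name) follows Ollama's generate API convention rather than the flat layout shown in the diff.

```python
import json


def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt in the order the model card
    uses: suffix first, then prefix, then the <MID> cursor marker."""
    return f"<SUF>{suffix}<PRE>{prefix}<MID>"


# Code before and after the cursor, taken from the README's example.
prefix = 'import QtQuick\n\nWindow {\n width: 640\n height: 480\n visible: true\n'
suffix = '\n title: qsTr("Hello World")\n}'

# Request body for POST http://localhost:11434/api/generate.
# Ollama reads sampling parameters (temperature, top_p, repeat_penalty,
# num_predict, stop, ...) from the "options" object, not the top level.
payload = {
    "model": "theqtcompany/codellama-7b-qml",
    "prompt": build_fim_prompt(prefix, suffix),
    "stream": False,
    "options": {
        "temperature": 0,
        "top_p": 1,
        "repeat_penalty": 1.05,
        "num_predict": 500,
        "stop": ["<SUF>", "<PRE>", "</PRE>", "</SUF>", "<EOT>",
                 "\\end", "<MID>", "</MID>", "##"],
    },
}

body = json.dumps(payload)  # serialized body, ready to send
```

Sending `body` to a locally running Ollama server would then return the completion for the `<MID>` position, analogous to the curl invocation in the README.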