Text Classification · Transformers · Safetensors · roberta · Generated from Trainer

Commit 1809d88 (verified) by cedricbonhomme · 1 parent: ab225d6

Update README.md

Files changed (1): README.md (+42 −13)
README.md CHANGED

@@ -1,6 +1,6 @@
  ---
  library_name: transformers
- license: mit
  base_model: roberta-base
  tags:
  - generated_from_trainer
@@ -9,30 +9,59 @@ metrics:
  model-index:
  - name: vulnerability-severity-classification-roberta-base
    results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # vulnerability-severity-classification-roberta-base

- This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.5104
- - Accuracy: 0.8285

  ## Model description

  More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

  ## Training procedure

  ### Training hyperparameters
@@ -62,4 +91,4 @@ The following hyperparameters were used during training:
  - Transformers 4.55.2
  - Pytorch 2.8.0+cu128
  - Datasets 4.0.0
- - Tokenizers 0.21.4
 
---
library_name: transformers
license: cc-by-4.0
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
model-index:
- name: vulnerability-severity-classification-roberta-base
  results: []
datasets:
- CIRCL/vulnerability-scores
---

# VLAI: A RoBERTa-Based Model for Automated Vulnerability Severity Classification

## Severity classification

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the dataset [CIRCL/vulnerability-scores](https://huggingface.co/datasets/CIRCL/vulnerability-scores).

The model was presented in the paper [VLAI: A RoBERTa-Based Model for Automated Vulnerability Severity Classification](https://huggingface.co/papers/2507.03607) [[arXiv](https://arxiv.org/abs/2507.03607)].

**Abstract:** VLAI is a transformer-based model that predicts software vulnerability severity levels directly from text descriptions. Built on RoBERTa, VLAI is fine-tuned on over 600,000 real-world vulnerabilities and achieves over 82% accuracy in predicting severity categories, enabling faster and more consistent triage ahead of manual CVSS scoring. The model and dataset are open source and integrated into the Vulnerability-Lookup service.

See [this page](https://www.vulnerability-lookup.org/user-manual/ai/) for more information.

## Model description

It is a classification model intended to assist in triaging vulnerabilities by severity based on their descriptions. It achieves the following results on the evaluation set:

- Loss: 0.5104
- Accuracy: 0.8285
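
Accuracy here is simply the fraction of evaluation examples whose predicted severity matches the reference label. A toy illustration with hypothetical predictions (not the real evaluation data):

```python
# Hypothetical model outputs vs. reference severity labels (illustrative only)
predictions = ["high", "critical", "low", "medium", "high"]
references = ["high", "critical", "medium", "medium", "high"]

# Accuracy = correct predictions / total examples
accuracy = sum(p == r for p, r in zip(predictions, references)) / len(references)
print(f"Accuracy: {accuracy:.2f}")  # Accuracy: 0.80
```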

## How to get started with the model

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

labels = ["low", "medium", "high", "critical"]

model_name = "CIRCL/vulnerability-severity-classification-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

test_description = (
    "SAP NetWeaver Visual Composer Metadata Uploader is not protected with a proper "
    "authorization, allowing unauthenticated agent to upload potentially malicious "
    "executable binaries that could severely harm the host system. This could "
    "significantly affect the confidentiality, integrity, and availability of the "
    "targeted system."
)
inputs = tokenizer(test_description, return_tensors="pt", truncation=True, padding=True)

# Run inference
with torch.no_grad():
    outputs = model(**inputs)
    predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)

# Print results
print("Predictions:", predictions)
predicted_class = torch.argmax(predictions, dim=-1).item()
print("Predicted severity:", labels[predicted_class])
```
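
The post-processing at the end of that snippet (softmax over the logits, then argmax) can be sanity-checked in isolation with plain Python and hypothetical logits, no model download required:

```python
import math

labels = ["low", "medium", "high", "critical"]

# Hypothetical logits for one description (what outputs.logits would hold)
logits = [-1.2, 0.3, 2.1, 0.8]

# Softmax: exponentiate and normalize so the scores sum to 1
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Argmax picks the highest-probability class
predicted_class = max(range(len(probs)), key=probs.__getitem__)
print("Predicted severity:", labels[predicted_class])  # high
```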

## Training procedure

### Training hyperparameters

[…]

- Transformers 4.55.2
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4