---
license: apache-2.0
base_model: microsoft/deberta-v3-base
datasets:
- Lakera/gandalf_ignore_instructions
- rubend18/ChatGPT-Jailbreak-Prompts
- imoxto/prompt_injection_cleaned_dataset-v2
- hackaprompt/hackaprompt-dataset
- fka/awesome-chatgpt-prompts
- teven/prompted_examples
- Dahoas/synthetic-hh-rlhf-prompts
- Dahoas/hh_prompt_format
- MohamedRashad/ChatGPT-prompts
- HuggingFaceH4/instruction-dataset
- HuggingFaceH4/no_robots
- HuggingFaceH4/ultrachat_200k
language:
- en
tags:
- prompt-injection
- injection
- security
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
- f1
pipeline_tag: text-classification
model-index:
- name: deberta-v3-base-prompt-injection
  results:
  - task:
      type: text-classification
      name: Prompt Injection Detection
    metrics:
      - type: precision
        value: 0.9998
      - type: f1
        value: 0.9998
      - type: accuracy
        value: 0.9999
      - type: recall
        value: 0.9997
co2_eq_emissions:
  emissions: 0.9990662916168788
  source: "code carbon"
  training_type: "fine-tuning"
---

# Model Card for deberta-v3-base-prompt-injection

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on multiple combined datasets of prompt injections and normal prompts.

It aims to identify prompt injections, classifying inputs into two categories: `0` for no injection and `1` for injection detected.

It achieves the following results on the evaluation set:
- Loss: 0.0010
- Accuracy: 0.9999
- Recall: 0.9997
- Precision: 0.9998
- F1: 0.9998

## Model details

- **Fine-tuned by:** Laiyer.ai
- **Model type:** deberta-v3
- **Language(s) (NLP):** English
- **License:** Apache license 2.0
- **Finetuned from model:** [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base)

## Intended Uses & Limitations

It aims to identify prompt injections, classifying inputs into two categories: `0` for no injection and `1` for injection detected.

The model's performance is dependent on the nature and quality of the training data. It might not perform well on text styles or topics not represented in the training set.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
import torch

tokenizer = AutoTokenizer.from_pretrained("laiyer/deberta-v3-base-prompt-injection")
model = AutoModelForSequenceClassification.from_pretrained("laiyer/deberta-v3-base-prompt-injection")

classifier = pipeline(
  "text-classification",
  model=model,
  tokenizer=tokenizer,
  truncation=True,
  max_length=512,
  device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)

text = "Your prompt injection is here"

print(classifier(text))
```

## Training and evaluation data

The model was trained on a custom dataset combining multiple open-source datasets, composed of roughly 30% prompt injections and 70% benign prompts.
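A class-balanced mix like this can be assembled with a short helper. The sketch below is illustrative only; the `injections` and `benign` lists are hypothetical stand-ins for the combined source datasets, and the exact sampling used for this model is not published:

```python
import random

def build_mix(injections, benign, injection_ratio=0.3, seed=42):
    """Combine labeled examples so injections (label 1) make up
    ~injection_ratio of the total, with benign prompts labeled 0."""
    rng = random.Random(seed)
    # Cap the benign sample so the target ratio holds.
    n_benign = min(len(benign), int(len(injections) * (1 - injection_ratio) / injection_ratio))
    data = [(text, 1) for text in injections]
    data += [(text, 0) for text in rng.sample(benign, n_benign)]
    rng.shuffle(data)
    return data

mix = build_mix([f"inj{i}" for i in range(30)], [f"ok{i}" for i in range(100)])
```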

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
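The linear scheduler with 500 warmup steps follows the standard ramp-then-decay shape, which can be sketched as a plain function (total step count taken from the 3-epoch run in the results table; this is an illustrative reimplementation, not the trainer's internal code):

```python
def linear_warmup_lr(step, base_lr=2e-5, warmup_steps=500, total_steps=108390):
    """Learning rate at a given optimizer step: linear ramp from 0 to
    base_lr over warmup_steps, then linear decay back to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```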

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy | Recall | Precision | F1     |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0038        | 1.0   | 36130  | 0.0026          | 0.9998   | 0.9994 | 0.9992    | 0.9993 |
| 0.0001        | 2.0   | 72260  | 0.0021          | 0.9998   | 0.9997 | 0.9989    | 0.9993 |
| 0.0           | 3.0   | 108390 | 0.0015          | 0.9999   | 0.9997 | 0.9995    | 0.9996 |


### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0

## Citation

```
@misc{deberta-v3-base-prompt-injection,
  author = {Laiyer.ai},
  title = {Fine-Tuned DeBERTa-v3 for Prompt Injection Detection},
  year = {2023},
  publisher = {HuggingFace},
  url = {https://huggingface.co/laiyer/deberta-v3-base-prompt-injection},
}
```