---
library_name: peft
license: llama3.2
base_model:
- meta-llama/Llama-3.2-1B-Instruct
pipeline_tag: text-classification
---
# Llama-3.2-1B-Instruct LoRA Instruction Classifier
### Model Description
- **Base Model:** [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
- **Adapter Method:** LoRA (Low-Rank Adaptation)
- **Task:** Instruction classification into 10 labels
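The exact training configuration for this adapter is not published. For orientation, a minimal PEFT setup for sequence classification might look like the following sketch; the rank, alpha, dropout, and target modules shown are illustrative assumptions, not the values used for this checkpoint:

```python
from peft import LoraConfig, TaskType

# Hypothetical adapter configuration -- r, lora_alpha, lora_dropout, and
# target_modules are illustrative assumptions, not this checkpoint's values.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,           # sequence-classification head
    r=8,                                  # low-rank dimension of the update matrices
    lora_alpha=16,                        # scaling factor applied to the LoRA update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)
```

With such a config, `get_peft_model(base_model, lora_config)` would wrap the base model so that only the low-rank adapter weights are trained.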
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("Turalll/llama-1b-lora-instruct-classifier")

# Load the base model (you must have access to meta-llama/Llama-3.2-1B-Instruct)
base_model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B-Instruct", num_labels=10
)

# Llama tokenizers ship without a padding token; reuse EOS so padding works
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
base_model.config.pad_token_id = tokenizer.pad_token_id

# Load the LoRA adapter
model = PeftModel.from_pretrained(base_model, "Turalll/llama-1b-lora-instruct-classifier")
model.eval()

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Example inference
text = "Your input text here"

# Custom id -> label map
id2label = {
    0: "Health and Wellbeing",
    1: "Cinema",
    2: "Environmental Science",
    3: "Software Development",
    4: "Fashion",
    5: "Career Development",
    6: "Culinary Guide",
    7: "Cybersecurity",
    8: "Economics",
    9: "Music",
}

# Tokenize the input
inputs = tokenizer(
    text,
    padding="max_length",
    truncation=True,
    max_length=128,
    return_tensors="pt",
)

# Move inputs to the same device as the model
inputs = {k: v.to(device) for k, v in inputs.items()}

# Get predictions
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits
predicted_class_id = logits.argmax(dim=-1).item()

# Map predicted class ID to label
predicted_label = id2label[predicted_class_id]
print(f"Predicted label: {predicted_label}")
```
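
The final argmax-to-label step can be sanity-checked without loading the model at all. This standalone sketch uses made-up logits (the values are purely illustrative) to show how a class id is mapped to its label:

```python
# Standalone check of the label-mapping step; the logits below are made up.
id2label = {
    0: "Health and Wellbeing", 1: "Cinema", 2: "Environmental Science",
    3: "Software Development", 4: "Fashion", 5: "Career Development",
    6: "Culinary Guide", 7: "Cybersecurity", 8: "Economics", 9: "Music",
}

logits = [0.1, -0.3, 0.2, 2.7, -1.0, 0.4, 0.0, 1.1, -0.5, 0.3]

# argmax over a plain list: index of the largest logit
predicted_class_id = max(range(len(logits)), key=logits.__getitem__)
print(id2label[predicted_class_id])  # -> Software Development
```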