# object_counting LoRA Models

This repository contains LoRA (Low-Rank Adaptation) adapters trained on the object_counting dataset.

## Models in this repository

All adapters were trained with r=16, alpha=32, dropout=0.05, data_size=1000, and seed=123; they differ only in learning rate and maximum training steps, as encoded in the folder names. A sketch for downloading a single adapter folder follows the list.

- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0005_data_size1000_max_steps=100_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr5e-05_data_size1000_max_steps=500_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0003_data_size1000_max_steps=500_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0004_data_size1000_max_steps=500_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0004_data_size1000_max_steps=100_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0008_data_size1000_max_steps=100_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0002_data_size1000_max_steps=100_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0001_data_size1000_max_steps=500_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0008_data_size1000_max_steps=500_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0006_data_size1000_max_steps=500_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0006_data_size1000_max_steps=100_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0001_data_size1000_max_steps=100_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0005_data_size1000_max_steps=500_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0003_data_size1000_max_steps=100_seed=123/`
- `llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0002_data_size1000_max_steps=500_seed=123/`
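
If you only need one of these adapters, you can download just that folder rather than the whole repository. The snippet below is a minimal sketch using `huggingface_hub`'s `snapshot_download` with an `allow_patterns` filter; the folder name shown is simply the first variant in the list above and can be swapped for any other.

```python
from huggingface_hub import snapshot_download

# Fetch only one adapter folder from the repository
# (the pattern is illustrative; substitute any folder name from the list above).
local_path = snapshot_download(
    repo_id="supergoose/object_counting",
    allow_patterns=[
        "llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0005_data_size1000_max_steps=100_seed=123/*"
    ],
)
print(local_path)  # local snapshot directory containing just that adapter folder
```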

## Usage

To use these LoRA adapters, you'll need the `peft` library along with `transformers` and `torch`:

```bash
pip install peft transformers torch
```

Example usage:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and its tokenizer
base_model_name = "your-base-model"  # Replace with the actual base model
model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Load a LoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    model,
    "supergoose/object_counting",
    subfolder="model_name_here",  # Replace with a specific adapter folder from the list above
)

# Run generation with the adapted model
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
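
If you prefer to run inference without `peft` installed, the adapter can be merged into the base weights. The snippet below is a sketch that continues from the example above; `merge_and_unload()` is the `peft` call that folds a LoRA adapter into the base model, and the output directory name is only an illustration.

```python
# Fold the LoRA weights into the base model and drop the peft wrappers.
merged_model = model.merge_and_unload()

# Save a plain transformers checkpoint (directory name is illustrative).
merged_model.save_pretrained("object_counting_merged")
tokenizer.save_pretrained("object_counting_merged")
```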

## Training Details

- Dataset: object_counting
- Training framework: LoRA/PEFT (a configuration sketch follows this list)
- Models included: 15 variants, differing only in learning rate and maximum training steps
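
The folder names imply the LoRA hyperparameters r=16, lora_alpha=32, and lora_dropout=0.05. The snippet below is an illustrative reconstruction of such a configuration with `peft`'s `LoraConfig`; the `target_modules` shown are a common choice for Llama-style models and are an assumption, not taken from the original training script. Check each adapter's `adapter_config.json` for the exact values.

```python
from peft import LoraConfig

# Illustrative LoRA configuration matching the hyperparameters encoded in the folder names.
# target_modules is an assumption (typical attention projections for Llama-style models).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```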

## File Structure

Each model folder contains:
- `adapter_config.json`: LoRA configuration (it can be inspected without downloading any weights; see the sketch below)
- `adapter_model.safetensors`: LoRA weights
- `tokenizer.json`: Tokenizer configuration
- Additional training artifacts
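
To check an adapter's configuration, including which base model it was trained on, before loading anything heavy, the `adapter_config.json` can be read directly with `peft`'s `PeftConfig`. This is a sketch; the subfolder shown is again the first variant from the model list.

```python
from peft import PeftConfig

# Read adapter_config.json only; no base-model or adapter weights are loaded.
config = PeftConfig.from_pretrained(
    "supergoose/object_counting",
    subfolder="llama_finetune_object_counting_r16_alpha=32_dropout=0.05_lr0.0005_data_size1000_max_steps=100_seed=123",
)
print(config.peft_type)                # e.g. LORA
print(config.base_model_name_or_path)  # the base model the adapter was trained on
```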

---
*Generated automatically by LoRA uploader script*