Nellyw888 committed on
Commit 4d284a3 · verified · 1 Parent(s): da6a8a0

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -12,7 +12,7 @@ base_model:
 - Qwen/Qwen2.5-Coder-3B
 ---
 
-# VeriReason-Qwen2.5-3B-Verilog-RTL-GRPO-reasoning-tb
+# VeriReason-Qwen2.5-3b-RTLCoder-Verilog-GRPO-reasoning-tb
 
 For implementation details, visit our GitHub repository: [VeriReason](https://github.com/NellyW8/VeriReason)
 
@@ -42,7 +42,7 @@ You can use the model with the transformers library:
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-model_name = "Nellyw888/VeriReason-Qwen2.5-3B-Verilog-RTL-GRPO-reasoning-tb"
+model_name = "Nellyw888/VeriReason-Qwen2.5-3b-RTLCoder-Verilog-GRPO-reasoning-tb"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
 model.eval()
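
For reference, a minimal end-to-end sketch that loads the checkpoint under its new repository id from the diff above and runs a single generation. The example prompt and the generation settings (`max_new_tokens`, greedy decoding) are illustrative assumptions and are not part of the README change.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Renamed repository id introduced by this commit.
model_name = "Nellyw888/VeriReason-Qwen2.5-3b-RTLCoder-Verilog-GRPO-reasoning-tb"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()

# Illustrative RTL prompt; not taken from the README.
prompt = "Write a Verilog module for a 4-bit synchronous counter with active-high reset."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```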