Xinging / mistral-24b_sft_alpaca_gpt4_random_ratio_0.1_lora_adapter
Tags: PEFT · Safetensors · llama-factory · lora · Generated from Trainer
License: other
mistral-24b_sft_alpaca_gpt4_random_ratio_0.1_lora_adapter / checkpoint-325 at commit d280e77
37.1 kB · 1 contributor · History: 3 commits
Latest commit: Xinging · "Upload checkpoint-325/rng_state_1.pth with huggingface_hub" · d280e77 (verified) · 5 months ago
adapter_config.json · Safe · 738 Bytes · "Upload checkpoint-325/adapter_config.json with huggingface_hub" · 5 months ago
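The adapter_config.json above stores the PEFT/LoRA hyperparameters for this adapter (rank, alpha, target modules, base model). A minimal sketch of reading such a file; the field values below are hypothetical, since the page lists only the file's size (738 bytes), not its contents:

```python
import json
from pathlib import Path

# Hypothetical adapter_config.json contents for illustration only;
# the actual values in this repository are not shown on the page.
sample = {
    "peft_type": "LORA",
    "base_model_name_or_path": "some-org/some-mistral-24b-base",  # placeholder
    "r": 8,
    "lora_alpha": 16,
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "v_proj"],
}
path = Path("adapter_config.json")
path.write_text(json.dumps(sample, indent=2))

# Read the config back and derive the effective LoRA scaling factor
# (lora_alpha / r), a common way to compare adapters of different rank.
cfg = json.loads(path.read_text())
scaling = cfg["lora_alpha"] / cfg["r"]
print(cfg["peft_type"], cfg["r"], cfg["lora_alpha"], scaling)
```

The keys shown (`peft_type`, `r`, `lora_alpha`, `target_modules`, `base_model_name_or_path`) are standard PEFT `LoraConfig` fields; everything else here is placeholder data.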
rng_state_1.pth · pickle · 15 kB · LFS · "Upload checkpoint-325/rng_state_1.pth with huggingface_hub" · 5 months ago
Detected Pickle imports (7): torch._utils._rebuild_tensor_v2, _codecs.encode, torch.ByteStorage, numpy.core.multiarray._reconstruct, numpy.ndarray, numpy.dtype, collections.OrderedDict
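The "Detected Pickle imports" warning on rng_state_1.pth comes from scanning the pickle's opcodes for the objects it would import on load, without actually unpickling it. A simplified, stdlib-only sketch of that idea, using `pickletools.genops` (the STACK_GLOBAL lookup here is naive and can mis-resolve names that go through the pickle memo; a real scanner is more careful):

```python
import collections
import pickle
import pickletools

def pickle_imports(data: bytes) -> set[str]:
    """List module.attr references in a pickle stream without loading it.

    GLOBAL, INST, and STACK_GLOBAL opcodes name the objects the pickle
    would import when unpickled, which is what scanners like the Hub's
    report as "Detected Pickle imports".
    """
    imports = set()
    ops = list(pickletools.genops(data))
    for i, (op, arg, _pos) in enumerate(ops):
        if op.name in ("GLOBAL", "INST"):
            # arg is "module name" joined by a space
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL":
            # Naive: take the two most recent string pushes as module/name.
            strings = [a for o, a, _ in ops[:i]
                       if o.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE")]
            module, name = strings[-2], strings[-1]
            imports.add(f"{module}.{name}")
    return imports

# Example: an OrderedDict pickle references collections.OrderedDict,
# one of the seven imports flagged for rng_state_1.pth above.
data = pickle.dumps(collections.OrderedDict(a=1))
print(sorted(pickle_imports(data)))
```

Because unpickling executes arbitrary constructors, inspecting imports this way (or preferring safetensors files, which this repo also tags) is the safe way to vet a `.pth` file before loading it.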
special_tokens_map.json · Safe · 21.3 kB · "Upload checkpoint-325/special_tokens_map.json with huggingface_hub" · 5 months ago