spow12/MK_Nemo_12B

Model Description

This model is a supervised fine-tuned (SFT) version of Qwen/Qwen2.5-72B-Instruct, trained for Korean with DeepSpeed and TRL.

Merge Method

The model was merged with the following configuration:

merge_method: model_stock
name: ChatWaifu_72B_V2.4
models:
    - model: Nexusflow/Athene-V2-Chat
    - model: Nexusflow/Athene-V2-Agent
    - model: Qwen/Qwen2.5-72B-Instruct_instruction_tunned(private)
    - model: anthracite-org/magnum-v4-72b
base_model: Qwen/Qwen2.5-72B-Instruct
dtype: bfloat16
tokenizer_source: base
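
The model_stock method above interpolates between the base model's weights and the centroid (average) of the fine-tuned checkpoints. As a rough illustration only: the sketch below shows that interpolation step with a fixed ratio t in plain numpy, whereas the real algorithm derives t per layer from the geometry of the task vectors.

```python
import numpy as np

def model_stock_step(base, finetuned, t):
    """Move the base weights toward the centroid of the fine-tuned weights.

    base:      a weight tensor of the base model
    finetuned: list of the corresponding tensors from the fine-tuned models
    t:         interpolation ratio in [0, 1] (a free parameter here; the
               actual method computes it from angles between task vectors)
    """
    centroid = np.mean(finetuned, axis=0)   # average of the fine-tuned weights
    return t * centroid + (1.0 - t) * base  # linear interpolation toward it

base = np.zeros(4)
models = [np.array([1.0, 0.0, 1.0, 0.0]),
          np.array([0.0, 1.0, 1.0, 0.0])]
print(model_stock_step(base, models, t=0.5))  # base pulled halfway to the centroid
```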

Training Data

  • Trained on a mix of public and private data (about 500K examples)

Usage

import torch
from transformers import TextStreamer, pipeline, AutoTokenizer, AutoModelForCausalLM

model_id = 'spow12/KoQwen_72B_v5.0'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # optional, requires flash-attn
    device_map='auto',
)
model.eval()

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device_map='auto')

generation_configs = dict(
    max_new_tokens=2048,
    num_return_sequences=1, 
    temperature=0.75,
    # repetition_penalty=1.1,
    do_sample=True,
    top_k=20,
    top_p=0.9,
    min_p=0.1,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    streamer=TextStreamer(tokenizer),  # optional; streaming requires num_beams=1
)

# System prompt (Korean): "As a friendly chatbot, you must answer the user's
# requests as thoroughly and kindly as possible. Carefully analyze the
# information the user provides, quickly grasp their intent, and generate your
# answer accordingly. Always respond in very natural Korean."
sys_message = """당신은 친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답해야합니다. 
사용자가 제공하는 정보를 세심하게 분석하여 사용자의 의도를 신속하게 파악하고 그에 따라 답변을 생성해야합니다.  

항상 매우 자연스러운 한국어로 응답하세요."""

message = [
    {
        'role': "system",
        'content': sys_message
    },
    {
        'role': 'user',
        # "What do you think about the current economic situation?"
        'content': "현재의 경제상황에 대해 어떻게 생각해?"
    }
]
conversation = pipe(message, **generation_configs)
print(conversation[-1])
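
The sampling parameters above (top_k, top_p, min_p) each prune the token distribution before a token is drawn. A simplified sketch of how the three filters compose on a toy probability vector, in plain numpy (not the actual transformers logits-processor implementation, which applies each filter to logits separately):

```python
import numpy as np

def filter_probs(probs, top_k=20, top_p=0.9, min_p=0.1):
    """Zero out tokens removed by top-k, nucleus (top-p), and min-p filtering,
    then renormalize. `probs` is a post-softmax probability vector."""
    keep = np.zeros_like(probs, dtype=bool)
    order = np.argsort(probs)[::-1]        # token indices, most likely first
    # top-k: keep only the k most likely tokens
    keep[order[:top_k]] = True
    # top-p (nucleus): keep the smallest prefix whose cumulative mass covers top_p
    cum = np.cumsum(probs[order])
    nucleus = np.zeros_like(keep)
    nucleus[order[: np.searchsorted(cum, top_p) + 1]] = True
    keep &= nucleus
    # min-p: drop tokens below min_p times the top token's probability
    keep &= probs >= min_p * probs.max()
    filtered = np.where(keep, probs, 0.0)
    return filtered / filtered.sum()       # renormalize the survivors

probs = np.array([0.6, 0.25, 0.1, 0.05])
print(filter_probs(probs))  # the 0.05 token is cut by both top-p and min-p
```

With the card's settings, min_p=0.1 means any token with less than 10% of the top token's probability is discarded regardless of top_p, which keeps sampling focused when the model is confident.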