---
license: mit
language:
  - en
  - ko
tags:
  - KT
  - K-intelligence
  - Mi:dm
pipeline_tag: text-generation
library_name: transformers
---
# Mi:dm 2.0-Base
🤗 Mi:dm 2.0 Models | 📜 Mi:dm 2.0 Technical Report* | 📕 Mi:dm 2.0 Technical Blog*
*To be released soon
## News 📢
- 🔜 (Coming Soon!) GGUF format model files will be available soon for easier local deployment.
- ⚡️2025/07/04: Released Mi:dm 2.0 Model collection on Hugging Face🤗.
 
## Table of Contents
- Overview
- Usage
- More Information
## Overview

### Mi:dm 2.0
Mi:dm 2.0 is a "Korea-centric AI" model developed using KT's proprietary technology. The term "Korea-centric AI" refers to a model that deeply internalizes the unique values, cognitive frameworks, and commonsense reasoning of Korean society. It goes beyond simply processing or generating Korean text; it reflects a deeper understanding of the socio-cultural norms and values that define Korean society.
Mi:dm 2.0 is released in two versions:

- **Mi:dm 2.0-Base**
  An 11.5B-parameter dense model designed to balance model size and performance. It extends an 8B-scale model by applying the Depth-up Scaling (DuS) method (a minimal sketch of the idea follows below), making it suitable for real-world applications that require both performance and versatility.
- **Mi:dm 2.0-Mini**
  A lightweight 2.3B-parameter dense model optimized for on-device environments and systems with limited GPU resources. It was derived from the Base model through pruning and distillation to enable compact deployment.
Neither the pre-training nor the post-training data includes KT users' data.
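KT's exact DuS recipe for Mi:dm 2.0 is not spelled out here; as a rough illustration only, the following hypothetical sketch shows the general depth-up scaling idea of concatenating overlapping copies of a pretrained decoder's layer stack to build a deeper model. The stand-in checkpoint, layer layout, and overlap size are all assumptions for illustration, not Mi:dm 2.0's actual training procedure.

```python
import copy

import torch
from transformers import AutoModelForCausalLM

# Hypothetical illustration of depth-up scaling (DuS): deepen a pretrained
# decoder by stacking two overlapping copies of its layer stack.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",      # stand-in 8B-scale model (assumption)
    torch_dtype=torch.bfloat16,
)

layers = base.model.layers          # decoder layers (Llama-style layout)
n = len(layers)                     # e.g., 32 layers in an 8B-scale model
k = n // 4                          # overlap dropped from each copy (assumption)

# Keep the first (n - k) layers, then append a copy of the last (n - k)
# layers, producing a deeper (2n - 2k)-layer model. A real implementation
# would also re-index per-layer cache attributes and continually pre-train
# the scaled model to recover and improve quality.
new_layers = list(layers[: n - k]) + [copy.deepcopy(layer) for layer in layers[k:]]
base.model.layers = torch.nn.ModuleList(new_layers)
base.config.num_hidden_layers = len(new_layers)
print(f"scaled from {n} to {len(new_layers)} layers")
```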
### Quickstart
Here is the code snippet to run conversational inference with the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_name = "K-intelligence/Midm-2.0-Base-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
generation_config = GenerationConfig.from_pretrained(model_name)

prompt = "KT에 대해 소개해줘"  # "Introduce KT"

# messages for inference; the system prompt says
# "Mi:dm is an AI-based assistant developed by KT."
messages = [
    {"role": "system",
     "content": "Mi:dm(믿:음)은 KT에서 개발한 AI 기반 어시스턴트이다."},
    {"role": "user", "content": prompt}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
)

output = model.generate(
    input_ids.to("cuda"),
    generation_config=generation_config,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=128,
    do_sample=False,
)
print(tokenizer.decode(output[0]))
```
The `transformers` library should be version `4.45.0` or higher.
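To stream tokens to the console as they are generated instead of waiting for the full completion, you can pass a `TextStreamer` to `generate`. The optional variation below reuses `model`, `tokenizer`, `generation_config`, and `input_ids` from the quickstart above:

```python
from transformers import TextStreamer

# Print tokens as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    input_ids.to("cuda"),
    generation_config=generation_config,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=128,
    do_sample=False,
    streamer=streamer,
)
```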
### Evaluation

#### Korean
Columns are grouped as **Society & Culture** (K-Refer*, K-Refer-Hard*, Ko-Sovereign*, HAERAE, Avg.), **General Knowledge** (KMMLU, Ko-Sovereign*, Avg.), and **Instruction Following** (Ko-IFEval, Ko-MTBench, Avg.).

| Model | K-Refer* | K-Refer-Hard* | Ko-Sovereign* | HAERAE | Avg. | KMMLU | Ko-Sovereign* | Avg. | Ko-IFEval | Ko-MTBench | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Qwen3-4B | 53.6 | 42.9 | 35.8 | 50.6 | 45.7 | 50.6 | 42.5 | 46.5 | 75.9 | 63.0 | 69.4 |
| Exaone-3.5-2.4B-inst | 64.0 | 67.1 | 44.4 | 61.3 | 59.2 | 43.5 | 42.4 | 43.0 | 65.4 | 74.0 | 68.9 |
| Mi:dm 2.0-Mini-inst | 66.4 | 61.4 | 36.7 | 70.8 | 58.8 | 45.1 | 42.4 | 43.8 | 73.3 | 74.0 | 73.6 |
| Qwen3-14B | 72.4 | 65.7 | 49.8 | 68.4 | 64.1 | 55.4 | 54.7 | 55.1 | 83.6 | 71.0 | 77.3 |
| Llama-3.1-8B-inst | 43.2 | 36.4 | 33.8 | 49.5 | 40.7 | 33.0 | 36.7 | 34.8 | 60.1 | 57.0 | 58.5 |
| Exaone-3.5-7.8B-inst | 71.6 | 69.3 | 46.9 | 72.9 | 65.2 | 52.6 | 45.6 | 49.1 | 69.1 | 79.6 | 74.4 |
| Mi:dm 2.0-Base-inst | 89.6 | 86.4 | 56.3 | 81.5 | 78.4 | 57.3 | 58.0 | 57.7 | 82.0 | 89.7 | 85.9 |
Columns are grouped as **Comprehension** (K-Prag*, K-Refer-Hard*, Ko-Best, Ko-Sovereign*, Avg.) and **Reasoning** (Ko-Winogrande, Ko-Best, LogicKor, HRM8K, Avg.).

| Model | K-Prag* | K-Refer-Hard* | Ko-Best | Ko-Sovereign* | Avg. | Ko-Winogrande | Ko-Best | LogicKor | HRM8K | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|
| Qwen3-4B | 73.9 | 56.7 | 91.5 | 43.5 | 66.6 | 67.5 | 69.2 | 5.6 | 56.7 | 43.8 |
| Exaone-3.5-2.4B-inst | 68.7 | 58.5 | 87.2 | 38.0 | 62.5 | 60.3 | 64.1 | 7.4 | 38.5 | 36.7 |
| Mi:dm 2.0-Mini-inst | 69.5 | 55.4 | 80.5 | 42.5 | 61.9 | 61.7 | 64.5 | 7.7 | 39.9 | 37.4 |
| Qwen3-14B | 86.7 | 74.0 | 93.9 | 52.0 | 76.8 | 77.2 | 75.4 | 6.4 | 64.5 | 48.8 |
| Llama-3.1-8B-inst | 59.9 | 48.6 | 77.4 | 31.5 | 51.5 | 40.1 | 26.0 | 2.4 | 30.9 | 19.8 |
| Exaone-3.5-7.8B-inst | 73.5 | 61.9 | 92.0 | 44.0 | 67.2 | 64.6 | 60.3 | 8.6 | 49.7 | 39.5 |
| Mi:dm 2.0-Base-inst | 86.5 | 70.8 | 95.2 | 53.0 | 76.1 | 75.1 | 73.0 | 8.6 | 52.9 | 44.8 |
\* indicates KT proprietary evaluation resources.
#### English
Columns are grouped as **Instruction** (IFEval), **Reasoning** (BBH, GPQA, MuSR, Avg.), **Math** (GSM8K), **Coding** (MBPP+), and **General Knowledge** (MMLU-pro, MMLU, Avg.).

| Model | IFEval | BBH | GPQA | MuSR | Avg. | GSM8K | MBPP+ | MMLU-pro | MMLU | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|
| Qwen3-4B | 79.7 | 79.0 | 39.8 | 58.5 | 59.1 | 90.4 | 62.4 | - | 73.3 | 73.3 |
| Exaone-3.5-2.4B-inst | 81.1 | 46.4 | 28.1 | 49.7 | 41.4 | 82.5 | 59.8 | - | 59.5 | 59.5 |
| Mi:dm 2.0-Mini-inst | 73.6 | 44.5 | 26.6 | 51.7 | 40.9 | 83.1 | 60.9 | - | 56.5 | 56.5 |
| Qwen3-14B | 83.9 | 83.4 | 49.8 | 57.7 | 63.6 | 88.0 | 73.4 | 70.5 | 82.7 | 76.6 |
| Llama-3.1-8B-inst | 79.9 | 60.3 | 21.6 | 50.3 | 44.1 | 81.2 | 81.8 | 47.6 | 70.7 | 59.2 |
| Exaone-3.5-7.8B-inst | 83.6 | 50.1 | 33.1 | 51.2 | 44.8 | 81.1 | 79.4 | 40.7 | 69.0 | 54.8 |
| Mi:dm 2.0-Base-inst | 84.0 | 77.7 | 33.5 | 51.9 | 54.4 | 91.6 | 77.5 | 53.3 | 73.7 | 63.5 |
## Usage

### Run on Friendli.AI

You can try our model immediately via Friendli.AI. Simply click **Deploy** and then **Friendli Endpoints**. Please note that a login to Friendli.AI is required after your fifth chat interaction.
### Run on Your Local Machine

We provide detailed instructions for running Mi:dm 2.0 on your local machine with llama.cpp, LM Studio, and Ollama. Please check our GitHub for more information.
### Deployment

To serve Mi:dm 2.0 using vLLM (>= 0.8.0) with an OpenAI-compatible API:

```bash
vllm serve K-intelligence/Midm-2.0-Base-Instruct
```
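Once the server is running, you can query it with any OpenAI-compatible client. Below is a minimal sketch using the official `openai` Python package, assuming vLLM's default local address `http://localhost:8000/v1` (no real API key is required locally):

```python
from openai import OpenAI

# Point the client at the local vLLM server; the URL and dummy key
# are assumptions matching vLLM's defaults.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="K-intelligence/Midm-2.0-Base-Instruct",
    messages=[
        {"role": "system", "content": "Mi:dm is an AI-based assistant developed by KT."},
        {"role": "user", "content": "Introduce KT."},
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```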
### Tutorials

To help end-users easily use Mi:dm 2.0, we provide comprehensive tutorials on GitHub.
## More Information

### Limitations
- The training data for both Mi:dm 2.0 models consists primarily of English and Korean. Understanding and generation in other languages are not guaranteed. 
- The model is not guaranteed to provide reliable advice in fields that require professional expertise, such as law, medicine, or finance. 
- Researchers have made efforts to exclude unethical content from the training data, such as profanity, slurs, bias, and discriminatory language. Despite these efforts, the model may still produce inappropriate expressions or factual inaccuracies.
### License

Mi:dm 2.0 is licensed under the MIT License.

### Contact

Mi:dm 2.0 Technical Inquiries: [email protected]
