Basically-Human-4B-f32-GGUF

Basically-Human-4B is a 4-billion-parameter language model built on the Qwen3 4B architecture and fine-tuned for immersive, emotionally resonant roleplay and character interaction. It maintains in-character consistency, crafts believable dialogue, and drives dynamic storytelling, making it well suited to text-based roleplay, NPC simulation, and interactive fiction. The model uses the ChatML instruction format to structure multi-turn conversations with clear role delineation. It was fine-tuned on a diverse mix of curated and cleaned instruction and roleplaying datasets from multiple sources. This repository provides GGUF quantized versions for easy deployment, offering compact yet capable performance in roleplay scenarios.
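The ChatML turn structure mentioned above can be sketched as follows. The `build_chatml_prompt` helper and the example persona are illustrative, not part of the model release; most inference frontends apply this template for you via the model's chat template.

```python
# Minimal sketch of the ChatML turn format used by Qwen3-style models.
# The helper function and the blacksmith persona below are illustrative.

def build_chatml_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Assemble a multi-turn ChatML prompt ending with an open assistant turn."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, content in turns:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # the model continues from here
    return "\n".join(parts)

prompt = build_chatml_prompt(
    "You are Kael, a gruff blacksmith NPC. Stay in character.",
    [("user", "Do you have any swords for sale?")],
)
print(prompt)
```

When driving the model directly (e.g. through llama.cpp's raw completion API), a prompt shaped like this keeps each role's text clearly delimited across turns.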

Model Files

| File Name | Quant Type | File Size |
|---|---|---|
| Basically-Human-4B.BF16.gguf | BF16 | 8.05 GB |
| Basically-Human-4B.F16.gguf | F16 | 8.05 GB |
| Basically-Human-4B.F32.gguf | F32 | 16.1 GB |
| Basically-Human-4B.Q2_K.gguf | Q2_K | 1.67 GB |
| Basically-Human-4B.Q3_K_L.gguf | Q3_K_L | 2.24 GB |
| Basically-Human-4B.Q3_K_M.gguf | Q3_K_M | 2.08 GB |
| Basically-Human-4B.Q3_K_S.gguf | Q3_K_S | 1.89 GB |
| Basically-Human-4B.Q4_K_M.gguf | Q4_K_M | 2.5 GB |
| Basically-Human-4B.Q4_K_S.gguf | Q4_K_S | 2.38 GB |
| Basically-Human-4B.Q5_K_M.gguf | Q5_K_M | 2.89 GB |
| Basically-Human-4B.Q5_K_S.gguf | Q5_K_S | 2.82 GB |
| Basically-Human-4B.Q6_K.gguf | Q6_K | 3.31 GB |
| Basically-Human-4B.Q8_0.gguf | Q8_0 | 4.28 GB |
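A rough way to compare these files is effective bits per weight: file size times eight, divided by the parameter count (4.02B). This is a sketch, not an exact figure; GGUF files also carry metadata, and some tensors are kept at higher precision than the headline quant type.

```python
# Approximate effective bits-per-weight for quants in the table above.
# File sizes are in GB (10^9 bytes); figures are rough estimates because
# GGUF files include metadata and mixed-precision tensors.

N_PARAMS = 4.02e9  # parameter count from the model card

def bits_per_weight(size_gb: float, n_params: float = N_PARAMS) -> float:
    return size_gb * 1e9 * 8 / n_params

quants = {
    "Q2_K": 1.67, "Q3_K_S": 1.89, "Q4_K_M": 2.5,
    "Q6_K": 3.31, "Q8_0": 4.28, "F16": 8.05,
}

for name, size_gb in quants.items():
    print(f"{name:8s} ~{bits_per_weight(size_gb):.2f} bits/weight")
```

For example, Q8_0 works out to roughly 8.5 bits per weight and F16 to roughly 16, which matches their nominal precision and suggests the rule of thumb is sound for this table.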

Quants Usage

(Sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![Quant type comparison graph](image.png)

Downloads last month: 472
Model size: 4.02B params
Architecture: qwen3


Model tree for prithivMLmods/Basically-Human-4B-f32-GGUF

Base model: Qwen/Qwen3-4B-Base (this repository is one of its quantized derivatives)