JangJaewon (Be2Jay)
17 followers · 9 following
AI & ML interests
None yet
Recent Activity
- liked a Space 7 days ago: ginigen-ai/site-agent
- liked a Space 12 days ago: ginigen-ai/smol-worldcup
- reacted to SeaWolf-AI's post with ❤️ 12 days ago:
🏟️ Smol AI WorldCup: A 4B Model Just Beat 8B — Here's the Data

We evaluated 18 small language models from 12 makers on 125 questions across 7 languages. The results challenge the assumption that bigger is always better.

Community Article: https://huggingface.co/blog/FINAL-Bench/smol-worldcup
Live Leaderboard: https://huggingface.co/spaces/ginigen-ai/smol-worldcup
Dataset: https://huggingface.co/datasets/ginigen-ai/smol-worldcup

What we found:
→ Gemma-3n-E4B (4B, 2GB RAM) outscores Qwen3-8B (8B, 5.5GB). Doubling the parameter count gained only 0.4 points at 2.75x the RAM cost.
→ GPT-OSS-20B fits in 1.5GB yet matches Champions-league dense models requiring 8.5GB. MoE architecture is the edge-AI game-changer.
→ Thinking models hurt structured output: DeepSeek-R1-7B scores 8.7 points below the same-size Qwen3-8B and runs 2.7x slower.
→ A 1.3B model fabricates confident fake content 80% of the time when prompted with nonexistent entities; the Qwen3 family hits 100% trap detection across all sizes.
→ Qwen3-1.7B (1.2GB) outscores Mistral-7B, Llama-3.1-8B, and DeepSeek-R1-14B. The latest architecture at 1.7B beats older architectures at 14B.

What makes this benchmark different? Most benchmarks ask "how smart?" — we measure five axes simultaneously: Size, Honesty, Intelligence, Fast, Thrift (SHIFT). Our ranking metric WCS = sqrt(SHIFT x PIR_norm) rewards models that are both high-quality AND efficient. Smart but massive? Low rank. Tiny but poor? Also low.

Top 5 by WCS:
1. GPT-OSS-20B — WCS 82.6 — 1.5GB — Raspberry Pi tier
2. Gemma-3n-E4B — WCS 81.8 — 2.0GB — Smartphone tier
3. Llama-4-Scout — WCS 79.3 — 240 tok/s — Fastest model
4. Qwen3-4B — WCS 76.6 — 2.8GB — Smartphone tier
5. Qwen3-1.7B — WCS 76.1 — 1.2GB — IoT tier

Built in collaboration with the FINAL Bench research team. Interoperable with the ALL Bench Leaderboard for full small-to-large model comparison. The dataset is open under Apache 2.0 (125 questions, 7 languages). We welcome new model submissions.
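The WCS ranking metric from the post can be sketched directly. A minimal illustration, assuming SHIFT and PIR_norm are both on a 0-100 scale (the post does not state their ranges); the function name and inputs here are for illustration only:

```python
import math

def wcs(shift: float, pir_norm: float) -> float:
    """World Cup Score: the geometric mean of the SHIFT quality
    score and the normalized PIR efficiency score. A model must
    score well on BOTH axes; a weak axis drags the geometric
    mean down more than it would an arithmetic mean."""
    return math.sqrt(shift * pir_norm)

# Two hypothetical models with the same arithmetic mean (82):
# the lopsided one ranks below the balanced one.
print(wcs(100.0, 64.0))  # 80.0 — smart but inefficient
print(wcs(82.0, 82.0))   # 82.0 — balanced
```

This is why "smart but massive" and "tiny but poor" both land low: the geometric mean penalizes imbalance between quality and efficiency.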
Organizations

Spaces (2)
- VIDraft Shrimp Detection 🦐 (Sleeping): Find and label shrimp in images
- neuroai 🐳 (Running)
Models (1)
- Be2Jay/AETHER-Micro-0.5B — Text Generation • 2B • Updated 28 days ago • 188
Datasets (0)
None public yet