AI & ML interests
NLP | CV | LLM
Recent Activity
reacted to SeaWolf-AI's post with 👍 4 days ago:
🚀 Introducing MARL — Runtime Middleware That Reduces LLM Hallucination Without Fine-Tuning
Now available on PyPI · GitHub · ClawHub · HuggingFace
AI models can sense they might be wrong, but they can't actually fix what's broken.
🤗 Live A/B test: https://huggingface.co/spaces/VIDraft/MARL
We evaluated 9 SOTA models (GPT-5.2, Claude Opus 4.6, Gemini 3 Pro, etc.) across 1,800 assessments in FINAL Bench and found a 39.2-percentage-point gap between "recognizing potential errors" (MA = 0.694) and "actually finding and fixing them" (ER = 0.302).
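The reported gap is just the difference between the two scores, expressed in percentage points; a quick check:

```python
# MA (metacognitive awareness) and ER (error resolution) scores as
# reported in the post; the gap is their difference in percentage points.
ma, er = 0.694, 0.302
gap_pp = round((ma - er) * 100, 1)
print(gap_pp)  # 39.2
```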
MARL (Model-Agnostic Runtime Middleware for LLMs) was built to close this metacognitive gap. It decomposes a single LLM call into a 5-stage expert pipeline (Hypothesis → Solver → Auditor → Adversarial Verifier → Synthesizer), transforming "answer in one shot" into "think, doubt, correct, and rewrite."
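A toy sketch of that five-stage decomposition, with each stage as a stub over a plain dict. The stage names come from the post; in the real middleware each stage would presumably be a separate LLM call, so the logic below is illustrative only:

```python
# Illustrative sketch of a Hypothesis -> Solver -> Auditor ->
# Adversarial Verifier -> Synthesizer pipeline. All stage bodies are
# hypothetical stand-ins, not MARL's actual implementation.
def hypothesis(question):
    return {"question": question, "plan": f"outline an answer to: {question}"}

def solver(state):
    return {**state, "draft": f"draft answer for {state['question']}"}

def auditor(state):
    # Flag anything in the draft that looks unsupported.
    return {**state, "issues": ["unchecked claim in draft"]}

def adversarial_verifier(state):
    # Actively try to break the draft; it passes only with no open issues.
    return {**state, "verified": len(state["issues"]) == 0}

def synthesizer(state):
    suffix = "" if state["verified"] else " (revised after audit)"
    return state["draft"] + suffix

def marl_pipeline(question):
    state = hypothesis(question)
    for stage in (solver, auditor, adversarial_verifier, synthesizer):
        state = stage(state)
    return state  # final rewritten answer (a string)

print(marl_pipeline("What is the correct dosage?"))
```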
No weight modification — works instantly with GPT-5.4, Claude, Gemini, Llama, or any OpenAI API-compatible LLM by changing one line: base_url. Ships with 9 domain-specific emergence engines (invention, pharma, genomics, chemistry, ecology, law, and more — 5,538 expert data items) activated by a simple tag like model="gpt-5.4::pharma".
pip install marl-middleware
MARL is also officially registered on ClawHub, the skill marketplace of OpenClaw — an AI agent platform with 260K+ developers and 3,200+ skills. It's the first middleware in the Reasoning Enhancement category. One command — clawhub install marl-middleware — gives your AI agent a metacognition upgrade.
📝 Technical deep dive: https://huggingface.co/blog/FINAL-Bench/marl-middleware
📦 PyPI: https://pypi.org/project/marl-middleware/
🐙 GitHub: https://github.com/Vidraft/MARL
🦀 ClawHub: https://clawhub.ai/Cutechicken99/marl-middleware
#MARL #LLM #Hallucination #Metacognition #MultiAgent #AIMiddleware #FINALBench #OpenClaw #ClawHub #PyPI #AGI #HuggingFace #ReasoningAI #SelfCorrection #GlassBoxAI
Models
kimleang123/Gemma-SEA-LION-v4-27B-IT-bnb-4bit • Text Generation • 27B • Updated • 4
kimleang123/bge-m3-reranker-onnx
kimleang123/Qwen2.5-14B-bnb-4bit • Text Generation • 4B • Updated
kimleang123/Qwen2.5-72B-Instruct-bnb-4bit • Text Generation • 21B • Updated • 2
kimleang123/Mixtral-8x7B-v0.1-bnb-4bit • 13B • Updated • 4
kimleang123/QwQ-32B-bnb-4bit • Text Generation • 10B • Updated • 3
kimleang123/Mistral-Nemo-Base-2407-bnb-4bit • 12B • Updated • 5
kimleang123/Qwen2.5-72B-Instruct-bnb • Text Generation • 21B • Updated
kimleang123/sft-100-full-tuned-qwen2.5-0.5b • 0.5B • Updated • 1
kimleang123/fine-tuned-KQA-gemma-7B-QLora-64-128 • Text Generation • 9B • Updated
kimleang123/full-fine-tuned-KQA-gemma2-2B • Text Generation • 3B • Updated
kimleang123/fine-tuned-KQA-mistral-7B-v0.3-lora-128-256 • Text Generation • 7B • Updated
kimleang123/fine-tuned-KQA-qwen2-7B-QLoRA • Text Generation • 8B • Updated
kimleang123/fine-tuned-KQA-qwen2-7B-lora-128-256 • Text Generation • 8B • Updated
kimleang123/full-fine-tuned-KQA-qwen2-0.5B • Text Generation • 0.5B • Updated
kimleang123/full-fine-tuned-KQA-qwen2-1.5B • Text Generation • 2B • Updated
kimleang123/fine-tuned-KQA-mistral-7B-v3-QLoRA • Text Generation • 7B • Updated
kimleang123/fine-tuned-KQA-seallm-7B-v2.5 • Text Generation • 9B • Updated