---
license: other
base_model: Qwen/Qwen3-4B-Thinking-2507
tags:
  - qwen
  - qwen3
  - thinking
  - chain-of-thought
  - safety-tuned
  - instruction-tuned
  - fine-tuned
  - axion
---

AdvRahul/Axion-Thinking-4B

A safety-enhanced model with transparent reasoning capabilities, based on Qwen3-4B-Thinking-2507. 💡

Axion-Thinking-4B is a fine-tuned version of Qwen/Qwen3-4B-Thinking-2507, designed to provide both strong reasoning and enhanced safety. The model tackles complex tasks by first generating a step-by-step "thought process" and only then delivering a final answer.

🚀 Model Details

  • Model Creator: AdvRahul
  • Base Model: Qwen/Qwen3-4B-Thinking-2507
  • Fine-tuning Focus: Enhanced Safety & Harmlessness
  • Special Feature: Explicit Chain-of-Thought (CoT) Reasoning via an automatic <think> process.
  • Architecture: Qwen3
  • Context Length: 262,144 tokens
  • License: Tongyi Qianwen LICENSE AGREEMENT

📝 Model Description

Transparent Reasoning Meets Enhanced Safety

Axion-Thinking-4B combines two powerful features for building advanced and trustworthy AI applications:

  1. Transparent Reasoning: This model is a "Thinking" model. When given a complex prompt, it first generates its step-by-step thought process. This chain-of-thought, which concludes with a </think> tag, is invaluable for debugging, understanding the model's logic, and verifying its reasoning path.
  2. Enhanced Safety: On top of this reasoning ability, Axion-Thinking-4B has undergone additional safety-focused fine-tuning informed by red-team testing. This improves its safety alignment, reducing the generation of harmful, biased, or inappropriate content in both the thought process and the final answer.

This makes the model an excellent choice for applications where both high performance on complex tasks and a high degree of safety and transparency are required.


💻 How to Use

Using this model requires a specific step to parse its output, separating the thought process from the final answer.

Quickstart with transformers

The following code demonstrates how to run the model and correctly parse its unique output structure.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Your safety-tuned model name
model_name = "AdvRahul/Axion-Thinking-4B"

# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Prepare the model input
prompt = "A bat and a ball cost ₹110 in total. The bat costs ₹100 more than the ball. How much does the ball cost?"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# === CRUCIAL: Parse the 'thinking' content ===
try:
    # Find the index of the closing </think> token (ID: 151668)
    think_end_index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    # If the token isn't found, assume no thinking content
    think_end_index = 0

# Decode the thinking part and the final answer separately
thinking_content = tokenizer.decode(output_ids[:think_end_index], skip_special_tokens=True).strip()
final_content = tokenizer.decode(output_ids[think_end_index:], skip_special_tokens=True).strip()

print("🤔 Thinking Content:\n", thinking_content)
print("\n✅ Final Content:\n", final_content)
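For generation settings, the base Qwen3-4B-Thinking-2507 model card recommends sampling rather than greedy decoding for thinking mode. The values below are carried over from the base model's guidance as an assumption; they have not been separately validated for this fine-tune:

```python
# Sampling settings suggested by the base Qwen3 thinking model card
# (assumed to carry over to this fine-tune; adjust to taste).
generation_kwargs = {
    "do_sample": True,
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
    "max_new_tokens": 32768,
}

# These can be passed straight through to generate(), e.g.:
# generated_ids = model.generate(**model_inputs, **generation_kwargs)
```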

Best Practices for Multi-Turn Chat
Important: In multi-turn conversations, the historical model output fed back into the prompt should only include the final answer, not the thinking content. The official chat template is designed to handle this, but developers using custom frameworks must ensure this practice is followed to maintain conversation quality.
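For custom frameworks, one simple way to follow this practice is to drop everything up to and including the closing </think> tag before appending the assistant turn to the history. A minimal sketch (the helper name `strip_thinking` is hypothetical, not part of any library):

```python
def strip_thinking(raw_output: str) -> str:
    """Return only the final answer, dropping the chain-of-thought.

    Hypothetical helper: keeps the text after the last </think> tag;
    if no tag is present, the text is returned unchanged.
    """
    return raw_output.split("</think>")[-1].strip()

# Append only the final answer to the conversation history
messages = [{"role": "user", "content": "How much does the ball cost?"}]
raw = "Let x be the ball's price. Then x + (x + 100) = 110...</think>The ball costs ₹5."
messages.append({"role": "assistant", "content": strip_thinking(raw)})
```

This keeps later prompts short and mirrors what the official chat template does automatically.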

⚠️ Ethical Considerations and Limitations
Safety-Tuned, Not Perfect: While this model is fine-tuned for safety, no AI is completely free from risk. Developers must implement their own safety layers and content moderation.

Monitor the Thoughts: The transparent reasoning process is a powerful feature but also requires monitoring. The "thinking" content should be subject to the same safety and content policies as the final output.
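In practice, this means running the same moderation check over both decoded segments before returning anything to the user. A minimal sketch with a placeholder keyword filter (`violates_policy`, `moderate`, and the blocklist are illustrative stand-ins for a real moderation system):

```python
# Placeholder blocklist; substitute a real moderation service in production.
BLOCKLIST = {"example-banned-term"}

def violates_policy(text: str) -> bool:
    """Illustrative check: flag text containing any blocklisted term."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def moderate(thinking_content: str, final_content: str) -> str:
    # Apply the same policy to the reasoning trace and the final answer.
    if violates_policy(thinking_content) or violates_policy(final_content):
        return "Response withheld by content policy."
    return final_content
```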

Inherited Biases: The model may still reflect biases from the base model's training data. The ability to inspect the chain-of-thought can help in identifying and mitigating such biases.