How to use with SGLang

Install from pip and serve the model
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "codewithdark/deepmath-7b-m" \
    --host 0.0.0.0 \
    --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "codewithdark/deepmath-7b-m",
        "messages": [
            {
                "role": "user",
                "content": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
            }
        ]
    }'
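# The same endpoint can be called from Python. A minimal sketch using the openai
# client is below; the base_url and the placeholder api_key="EMPTY" are assumptions
# that match the server started above, not settings documented for this model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="codewithdark/deepmath-7b-m",
    messages=[
        {"role": "user", "content": "A train travels 60 km in 45 minutes. What is its average speed in km/h?"}
    ],
)
print(response.choices[0].message.content)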
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "codewithdark/deepmath-7b-m" \
        --host 0.0.0.0 \
        --port 30000
# Call the server with the same OpenAI-compatible curl command shown above.

DeepMath-7B-M

Model Overview

DeepMath-7B-M is a fine-tuned version of DeepSeek-R1-Distill-Qwen-1.5B on the GSM8K dataset. This model is designed for mathematical reasoning and problem-solving, excelling in arithmetic, algebra, and word problems.

Model Details

  • Base Model: DeepSeek-R1-Distill-Qwen-1.5B
  • Fine-Tuning Dataset: GSM8K
  • Parameters: 1.5 Billion
  • Task: Mathematical Question Answering (Math QA)
  • Repository: codewithdark/deepmath-7b-m
  • Commit Message: "Full merged model for math QA"

Training Details

  • Dataset: GSM8K (Grade School Math 8K) - a high-quality dataset for mathematical reasoning
  • Fine-Tuning Framework: Hugging Face Transformers & PyTorch
  • Optimization Techniques:
    • AdamW Optimizer
    • Learning rate scheduling
    • Gradient accumulation
    • Mixed precision training (FP16)
  • Training Steps: Multiple epochs on a high-performance GPU cluster (a sketch of comparable settings follows this list)
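
A minimal sketch of fine-tuning settings of the kind listed above (AdamW, learning rate scheduling, gradient accumulation, FP16) using Hugging Face TrainingArguments. The specific hyperparameter values are illustrative assumptions, not the values used to train DeepMath-7B-M.

# Illustrative sketch only: the hyperparameter values below are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deepmath-7b-m",
    num_train_epochs=3,                  # multiple epochs over GSM8K
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,       # gradient accumulation
    learning_rate=2e-5,
    lr_scheduler_type="cosine",          # learning rate scheduling
    warmup_ratio=0.03,
    optim="adamw_torch",                 # AdamW optimizer
    fp16=True,                           # mixed precision training
    logging_steps=10,
    save_strategy="epoch",
)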

Capabilities & Performance

DeepMath-7B-M excels in:

  • Solving word problems with step-by-step reasoning
  • Performing algebraic and arithmetic computations
  • Understanding complex problem structures
  • Generating structured solutions with explanations

Usage

You can load and use the model via the Hugging Face transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("codewithdark/deepmath-7b-m")
model = AutoModelForCausalLM.from_pretrained("codewithdark/deepmath-7b-m")

# Ask a GSM8K-style word problem
input_text = "A farmer has 5 chickens and each lays 3 eggs a day. How many eggs in total after a week?"
inputs = tokenizer(input_text, return_tensors="pt")

# Generate a step-by-step solution; max_new_tokens bounds the length of the answer
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
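
Because the base model is a distilled chat/reasoning model, prompts formatted with the tokenizer's chat template often produce better-structured, step-by-step answers. A minimal sketch, assuming the fine-tuned checkpoint keeps the base model's chat template:

# Sketch: format the question with the chat template before generating.
# Assumes the checkpoint ships the base model's chat template.
messages = [{"role": "user", "content": input_text}]
chat_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(chat_inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))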

Limitations

  • May struggle with extremely complex mathematical proofs
  • Performance is limited to the scope of GSM8K-type problems
  • Potential biases in training data

Future Work

  • Extending training to more diverse math datasets
  • Exploring larger models for improved accuracy
  • Fine-tuning on physics and higher-level mathematical reasoning datasets

License

This model is released under the MIT License.

Citation

If you use this model, please cite:

@misc{DeepMath-7B-M,
  author = {Ahsan},
  title = {DeepMath-7B-M: Fine-Tuned DeepSeek-R1-Distill-Qwen-1.5B on GSM8K},
  year = {2025},
  url = {https://huggingface.co/codewithdark/deepmath-7b-m}
}