Qwen2.5-3B-DataFusion-Instruct GGUF Model

Model Overview

Model Name: Qwen2.5-3B-DataFusion-Instruct
Model Type: Fine-tuned Large Language Model
Base Model: Qwen2.5-3B
Specialization: DataFusion SQL Engine and Rust Programming
Format: GGUF (the llama.cpp/GGML model file format)
License: Apache 2.0

Model Description

This is a fine-tuned version of the Qwen2.5-3B model, trained on comprehensive DataFusion ecosystem data to excel at Rust programming, DataFusion SQL queries, and data processing tasks. The model has been optimized to provide accurate, idiomatic code examples and clear technical explanations.

Model Files

Main Model

  • File: model.gguf (5.8GB)
  • Type: Full precision GGUF model
  • Use Case: Production environments, highest accuracy requirements
  • Recommended For: Development, debugging, complex queries

Quantized Model

  • File: qwen2.5-3B-datafusion.gguf (1.8GB)
  • Type: Quantized GGUF model (optimized for inference)
  • Use Case: Resource-constrained environments, faster inference
  • Recommended For: Deployment, testing, resource-limited scenarios

Training Data

Dataset Composition

  • Total QA Pairs: 265,180
  • Source Projects: 36 different repositories
  • Content Types: Code implementation, documentation, usage examples
  • Coverage: Comprehensive DataFusion ecosystem

Training Projects

  • Core DataFusion: datafusion, datafusion-ballista, datafusion-federation
  • DataFusion Extensions: datafusion-functions-json, datafusion-postgres, datafusion-python
  • Arrow Ecosystem: arrow-rs, arrow-zarr
  • Related Tools: blaze, exon, feldera, greptimedb, horaedb, influxdb
  • Modern Data Stack: iceberg-rust, LakeSoul, lance, openobserve, parseable

Data Quality Features

  • Structured JSONL format with source attribution
  • Code examples with best practices and common pitfalls
  • Error handling guidance and troubleshooting solutions
  • Performance optimization tips and best practices
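
The exact JSONL schema is not reproduced here; purely as a hypothetical illustration of how source-attributed records could be parsed in Rust with serde, where every field name and the file path are assumptions rather than the dataset's real layout:

use serde::Deserialize;

// Hypothetical record shape; the actual dataset's fields may differ.
#[derive(Debug, Deserialize)]
struct QaRecord {
    instruction: String,
    response: String,
    source: String, // e.g. the originating repository or file
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let data = std::fs::read_to_string("dataset.jsonl")?; // placeholder path
    for line in data.lines().filter(|l| !l.trim().is_empty()) {
        let record: QaRecord = serde_json::from_str(line)?;
        println!("[{}] {}", record.source, record.instruction);
    }
    Ok(())
}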

Model Capabilities

Primary Strengths

  1. Rust Programming Expertise

    • Idiomatic Rust code generation
    • DataFusion API usage patterns
    • Error handling and testing best practices
    • Performance optimization techniques
  2. DataFusion SQL Mastery

    • Complex SQL query construction
    • Table provider implementations
    • UDF (User-Defined Function) development
    • Query optimization and execution planning
  3. Data Processing Knowledge

    • Arrow format operations
    • Parquet file handling
    • Data transformation pipelines
    • Streaming and batch processing
  4. System Architecture Understanding

    • Distributed query execution
    • Federation and integration patterns
    • Observability and tracing
    • Performance monitoring
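
To make the UDF point above concrete, here is a minimal sketch of registering a scalar UDF with a DataFusion SessionContext and calling it from SQL. It is written against a recent DataFusion release; the create_udf helper's exact signature has changed over time (older versions take the return type as Arc<DataType>), so treat it as an illustration rather than a drop-in snippet.

use std::sync::Arc;

use datafusion::arrow::array::{ArrayRef, Float64Array};
use datafusion::arrow::datatypes::DataType;
use datafusion::error::Result;
use datafusion::logical_expr::{ColumnarValue, Volatility};
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = SessionContext::new();

    // A scalar UDF that doubles a Float64 argument.
    // NOTE: signature details vary across DataFusion versions.
    let double = create_udf(
        "double",
        vec![DataType::Float64],
        DataType::Float64,
        Volatility::Immutable,
        Arc::new(|args: &[ColumnarValue]| -> Result<ColumnarValue> {
            let arrays = ColumnarValue::values_to_arrays(args)?;
            let input = arrays[0]
                .as_any()
                .downcast_ref::<Float64Array>()
                .expect("double() expects a Float64 argument");
            let doubled: Float64Array = input.iter().map(|v| v.map(|x| x * 2.0)).collect();
            Ok(ColumnarValue::Array(Arc::new(doubled) as ArrayRef))
        }),
    );
    ctx.register_udf(double);

    // Call the UDF from SQL.
    let df = ctx.sql("SELECT double(2.5) AS result").await?;
    df.show().await?;
    Ok(())
}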

Technical Domains

  • SQL Engine Internals: Query planning, optimization, execution
  • Data Formats: Arrow, Parquet, JSON, CSV, Avro
  • Storage Systems: Object storage, databases, file systems
  • Distributed Computing: Ray, Ballista, cluster management
  • Streaming: Real-time data processing, windowing, aggregations
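
As an illustration of the data-format and query-planning domains listed above, the sketch below registers a Parquet file with a SessionContext, runs a SQL aggregation over it, and prints the plan DataFusion chooses. The table name, file path, and column names are placeholders, not part of this model or its training data.

use datafusion::error::Result;
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = SessionContext::new();

    // Register a Parquet file as a queryable table (path and name are placeholders).
    ctx.register_parquet("trips", "data/trips.parquet", ParquetReadOptions::default())
        .await?;

    // Run a SQL aggregation over the registered table.
    let df = ctx
        .sql("SELECT vendor_id, COUNT(*) AS n FROM trips GROUP BY vendor_id ORDER BY n DESC")
        .await?;
    df.show().await?;

    // Inspect the logical and physical plans produced by the optimizer.
    let plan = ctx
        .sql("EXPLAIN SELECT vendor_id, COUNT(*) FROM trips GROUP BY vendor_id")
        .await?;
    plan.show().await?;

    Ok(())
}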

Usage Instructions

System Prompt

The model is configured with a specialized system prompt:

You are a helpful, concise, and accurate coding assistant specialized in Rust and the DataFusion SQL engine. Always provide high-level, idiomatic Rust code, DataFusion SQL examples, clear documentation, and robust test cases. Your answers should be precise, actionable, and end with '### End'.

Prompt Template

### Instruction:
{{ .Prompt }}

### Response:

Stop Sequences

  • ### Instruction:
  • ### Response:
  • ### End

Generation Parameters

  • num_predict: 1024 (maximum tokens to generate)
  • repeat_penalty: 1.2 (prevents repetitive output)
  • temperature: 0.7 (balanced creativity vs consistency)
  • top_p: 0.9 (nucleus sampling for quality)
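
These values map onto Ollama request options, so they can also be supplied explicitly when calling the local Ollama HTTP API. The sketch below assumes Ollama is running on its default port 11434 with the model pulled as jaro/qwen_datafusion, and uses the reqwest (with the blocking and json features) and serde_json crates:

use serde_json::{json, Value};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Request body mirroring the generation parameters documented above.
    let body = json!({
        "model": "jaro/qwen_datafusion",
        "prompt": "How do I register a Parquet file with DataFusion?",
        "stream": false,
        "options": {
            "num_predict": 1024,
            "repeat_penalty": 1.2,
            "temperature": 0.7,
            "top_p": 0.9
        }
    });

    // POST to Ollama's local generate endpoint.
    let resp: Value = reqwest::blocking::Client::new()
        .post("http://localhost:11434/api/generate")
        .json(&body)
        .send()?
        .error_for_status()?
        .json()?;

    println!("{}", resp["response"].as_str().unwrap_or_default());
    Ok(())
}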

Performance Characteristics

Accuracy

  • Code Generation: High accuracy for Rust and DataFusion patterns
  • SQL Queries: Correct syntax and best practices
  • Documentation: Clear, actionable explanations
  • Error Handling: Comprehensive coverage of common issues

Efficiency

  • Main Model: Highest accuracy, larger memory footprint
  • Quantized Model: Optimized inference, reduced memory usage
  • Response Time: Fast generation with proper stop sequences
  • Memory Usage: Efficient token management

Installation and Setup

Ollama (Recommended)

# Pull the model
ollama pull jaro/qwen_datafusion

# Run inference
ollama run jaro/qwen_datafusion

Direct GGUF Usage

# Using llama.cpp or compatible tools
./llama-cli -m model.gguf -p "How do I create a custom UDF in DataFusion?"
# (older llama.cpp builds name the binary ./main)

Model Comparison

Aspect           | Main Model (5.8GB)     | Quantized Model (1.8GB)
Accuracy         | Highest                | High (slight degradation)
Memory Usage     | Higher                 | Lower
Inference Speed  | Standard               | Faster
Deployment       | Development/Production | Production/Resource-constrained
Use Case         | Maximum quality        | Balanced performance

Resources

Citation

When using this model in research or publications, please cite:

@software{qwen2.5_3b_datafusion_instruct,
  title={Qwen2.5-3B-DataFusion-Instruct: A Specialized Model for DataFusion Ecosystem},
  author={Fine-tuned on DataFusion Ecosystem QA Dataset},
  year={2025},
  url={https://github.com/yarenty/trainer},
  license={Apache-2.0}
}

License

This model is licensed under the Apache 2.0 License. See the LICENSE file for full details.


This model represents a significant advancement in specialized AI assistance for the DataFusion ecosystem, combining the power of large language models with domain-specific expertise in data processing and Rust programming.

GGUF metadata: 3.09B parameters, qwen2 architecture, quantized from the base model Qwen/Qwen2.5-3B.