# Model Card for Isaac Sim Robotics Qwen2.5-Coder-7B-Instruct
## Model Details
### Model Description
- **Model Type**: Fine-tuned causal language model
- **Base Model**: Qwen/Qwen2.5-Coder-7B-Instruct
- **Architecture**: Qwen2 architecture with 7B parameters
- **Training Method**: LoRA (Low-Rank Adaptation) fine-tuning
- **License**: MIT License
- **Repository**: [Qwen2.5-Coder-7B-Instruct-Omni1.1](https://huggingface.co/TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1)
### Intended Use
This model is specifically designed for Isaac Sim 5.0 robotics development tasks, including:
- Robot simulation setup and configuration
- Computer vision and sensor integration
- Robot control programming
- Simulation environment design
- Troubleshooting Isaac Sim issues
- Code generation for robotics workflows
### Training Data
- **Source**: Isaac Sim 5.0 Synthetic Dataset
- **Total Samples**: 2,000 carefully curated examples
- **Training Split**: 1,800 training, 200 evaluation
- **Data Types**:
- Robot creation and configuration
- Sensor setup and data processing
- Physics parameter tuning
- Environment design
- Troubleshooting scenarios
- **Curriculum Learning**: Applied (examples sorted by output length; see the sketch below)
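A minimal sketch of that curriculum ordering, assuming each example is a dict with an `output` field (the field name, file name, and split logic here are illustrative, not the repository's actual preprocessing script):
```python
import json

# Load the curated examples; the file name and schema are assumptions.
with open("isaac_sim_dataset.json") as f:
    examples = json.load(f)

# 1,800 training / 200 evaluation split for the 2,000 curated samples.
split = int(len(examples) * 0.9)
train_set, eval_set = examples[:split], examples[split:]

# Curriculum learning: order training examples by output length so the
# model sees shorter, simpler completions before longer ones.
train_set.sort(key=lambda ex: len(ex["output"]))
```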
### Training Configuration
- **LoRA Rank**: 64
- **LoRA Alpha**: 128
- **Learning Rate**: 1e-05
- **Batch Size**: 1
- **Gradient Accumulation Steps**: 8
- **Max Training Steps**: 300
- **Warmup Steps Ratio**: 0.03
- **Optimizer**: AdamW
- **Scheduler**: Linear with warmup
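These hyperparameters map onto a PEFT + Transformers setup roughly as sketched below; this is not the repository's training script, and values not listed above (e.g. `target_modules`, `lora_dropout`) are assumptions:
```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# LoRA settings from the table above; target_modules and dropout are
# assumptions (typical attention projections for Qwen2-style models).
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-7B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = get_peft_model(base_model, lora_config)

# Optimizer and schedule settings from the table above.
training_args = TrainingArguments(
    output_dir="outputs",
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    max_steps=300,
    warmup_ratio=0.03,
    lr_scheduler_type="linear",
    optim="adamw_torch",
)
```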
### Hardware Requirements
- **Training GPU**: NVIDIA GeForce RTX 4070 Laptop GPU
- **VRAM**: 8.5GB
- **Inference Requirements**:
- **HuggingFace**: 8GB+ VRAM (full precision)
- **CTransformers**: 4GB+ VRAM (optimized)
- **GGUF**: 2GB+ VRAM (when conversion is fixed)
## Performance
### Evaluation Metrics
- **Training Loss**: Converged within the 300-step training budget
- **Domain Accuracy**: Responses remain specialized to the Isaac Sim robotics domain
- **Code Quality**: Generated code follows Isaac Sim best practices
- **Response Relevance**: High relevance to robotics queries in the 200-example evaluation split
### Limitations
1. **Domain Specificity**: Limited to Isaac Sim robotics context
2. **GGUF Conversion**: Currently has metadata compatibility issues
3. **Hardware Requirements**: Requires significant VRAM for full precision
4. **Training Data Size**: Limited to 2,000 examples
### Known Issues
- **GGUF Loading Error**: Missing `qwen2.context_length` metadata field
- **Workaround**: Use HuggingFace or CTransformers formats
- **Status**: Under investigation for future updates
## Usage
### Input Format
The model expects Isaac Sim-specific queries in the following format:
```
<|im_start|>user
[Your Isaac Sim robotics question here]
<|im_end|>
<|im_start|>assistant
```
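With the `transformers` library, the tokenizer's chat template produces this `<|im_start|>`/`<|im_end|>` format automatically. A minimal sketch (generation parameters are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "user", "content": "How do I create a differential drive robot in Isaac Sim?"}
]
# apply_chat_template wraps the message in the prompt format shown above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512, temperature=0.2, do_sample=True)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```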
### Example Queries
1. **Robot Creation**: "How do I create a differential drive robot in Isaac Sim?"
2. **Sensor Setup**: "How to add a depth camera and process depth data?"
3. **Physics Configuration**: "What physics parameters should I use for a manipulator?"
4. **Environment Design**: "How to create a warehouse environment with obstacles?"
5. **Troubleshooting**: "Why is my robot falling through the ground?"
### Output Characteristics
- **Code Generation**: Python scripts ready for Isaac Sim
- **Explanation**: Detailed step-by-step instructions
- **Best Practices**: Follows Isaac Sim development guidelines
- **Error Prevention**: Includes common pitfalls and solutions
## Model Variants
### 1. HuggingFace Format (Primary)
- **Location**: `models/huggingface/`
- **Size**: 5.3GB
- **Format**: Standard HuggingFace model files
- **Usage**: Direct integration with transformers library
- **Advantages**: Full compatibility, easy integration
### 2. CTransformers Format (Alternative)
- **Location**: `models/ctransformers/`
- **Size**: 5.2GB
- **Format**: Optimized for CTransformers library
- **Usage**: Lightweight inference with reduced memory
- **Advantages**: Lower memory usage, faster inference
### 3. GGUF Format (Experimental)
- **Location**: `models/gguf/`
- **Size**: 616MB (base) + quantization variants
- **Format**: llama.cpp compatible
- **Usage**: Server deployment and edge inference
- **Status**: Metadata issues, conversion scripts provided
## Ethical Considerations
### Bias and Fairness
- **Training Data**: Focused on technical robotics content
- **Domain Limitation**: May not generalize to other robotics platforms
- **Cultural Bias**: Minimal; the dataset is technical and contains little culturally sensitive content
### Safety
- **Content Filtering**: No additional safety filters applied
- **Use Case**: Intended for robotics development only
- **Misuse Prevention**: Technical domain limits potential misuse
### Privacy
- **Training Data**: Synthetic data, no personal information
- **Inference**: No data collection or logging
- **Compliance**: Follows standard AI model privacy practices
## Technical Specifications
### Model Architecture
- **Base**: Qwen2.5-Coder-7B-Instruct
- **Parameters**: 7 billion
- **Context Length**: 32,768 tokens
- **Vocabulary**: 152,064 tokens
- **Embedding Dimension**: 3,584
- **Attention Heads**: 28 query heads (4 key-value heads, GQA)
- **Layers**: 28
### Quantization Support
- **FP16**: Full precision (default)
- **INT8**: 8-bit quantization support
- **INT4**: 4-bit quantization (experimental)
- **GGUF**: Conversion scripts provided
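For the INT8/INT4 options listed above, one common route is bitsandbytes quantization through the Transformers loader; the snippet below is a sketch under that assumption, not a repository-provided script:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1"

# 4-bit NF4 quantization (the experimental INT4 path); use load_in_8bit=True
# instead for INT8. Requires the bitsandbytes package and a CUDA GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
```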
### Integration
- **HuggingFace**: Native support
- **Isaac Sim**: Direct Python integration
- **CTransformers**: Optimized inference
- **llama.cpp**: When GGUF issues resolved
## Deployment
### Local Development
```bash
# Clone repository
git clone https://github.com/your-username/isaac-sim-robotics-qwen.git
cd isaac-sim-robotics-qwen
# Install dependencies
pip install -r requirements.txt
# Download model
huggingface-cli download your-username/isaac-sim-robotics-qwen
```
### Production Deployment
- **HuggingFace Hub**: Direct model hosting
- **Docker**: Containerized deployment
- **API Server**: RESTful inference endpoints (see the sketch after this list)
- **Edge Deployment**: GGUF format (when fixed)
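For the API-server option, a minimal REST endpoint could look like the sketch below; FastAPI is an assumption here, since the repository does not prescribe a serving framework:
```python
# Minimal sketch of a REST inference endpoint (FastAPI + Transformers assumed).
import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

app = FastAPI()

class Query(BaseModel):
    prompt: str
    max_new_tokens: int = 512

@app.post("/generate")
def generate(query: Query):
    messages = [{"role": "user", "content": query.prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=query.max_new_tokens)
    text = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return {"response": text}
```
Serve it with, for example, `uvicorn server:app --host 0.0.0.0 --port 8000` (module name assumed).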
## Maintenance
### Updates
- **Training Data**: Expandable dataset for future versions
- **Model Architecture**: Base model updates as available
- **Bug Fixes**: Regular repository updates
- **Community**: Open source maintenance
### Support
- **Documentation**: Comprehensive guides and examples
- **Issues**: GitHub issue tracking
- **Discussions**: Community support forum
- **Examples**: Working code samples
## Citation
If you use this model in your research or development, please cite:
```bibtex
@misc{qwen2.5_coder_7b_instruct_omni1.1,
  title={Qwen2.5-Coder-7B-Instruct-Omni1.1: Isaac Sim Robotics Specialized Model},
  author={TomBombadyl},
  year={2025},
  url={https://huggingface.co/TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1}
}
```
## License
This model is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
## Contact
- **Repository**: [Hugging Face Hub](https://huggingface.co/TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1)
- **Issues**: [GitHub Issues](https://github.com/TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1/issues)
- **Discussions**: [GitHub Discussions](https://github.com/TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1/discussions)
---
**Note**: This model is specifically trained for Isaac Sim 5.0 robotics development. For general coding tasks, consider using the base Qwen2.5-Coder-7B-Instruct model.