								---
language:
- en
license: mit
library_name: transformers
tags:
- robotics
- isaac-sim
- code-generation
- simulation
- qwen2
- causal-lm
- text-generation
- text2text-generation
- omni
- nvidia
- robotics-simulation
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
---

# Isaac Sim Robotics Qwen2.5-Coder-7B-Instruct-Omni1.1
[Model on Hugging Face](https://huggingface.co/TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1) · [License](LICENSE) · [Isaac Sim Documentation](https://docs.omniverse.nvidia.com/isaacsim/)

A fine-tuned Qwen2.5-Coder-7B-Instruct model specialized for Isaac Sim 5.0 robotics development, computer vision, and simulation tasks.
## 🚀 Quick Start
### Option 1: HuggingFace Transformers (Recommended)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
# Isaac Sim robotics query
query = """<|im_start|>user
How do I create a robot with differential drive in Isaac Sim 5.0?
<|im_end|>
<|im_start|>assistant"""
inputs = tokenizer(query, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
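The prompt above follows the ChatML format used by the Qwen model family. As a reference, the small helper below reproduces that layout from a list of messages; with `transformers` installed, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` builds an equivalent prompt for you.

```python
# Minimal illustration of the ChatML prompt layout used in the example above.
def build_chatml_prompt(messages):
    """messages: list of {'role': ..., 'content': ...} dicts."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant")  # open the assistant turn for generation
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "user", "content": "How do I create a robot with differential drive in Isaac Sim 5.0?"}
])
print(prompt)
```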
### Option 2: CTransformers (Lightweight)
```python
from ctransformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1",
    model_type="qwen2",
    gpu_layers=0  # 0 = CPU-only inference; increase to offload layers to the GPU
)

# ctransformers models are called directly with a prompt string (no separate tokenizer)
response = model("How do I create a robot with differential drive in Isaac Sim 5.0?", max_new_tokens=256)
print(response)
```
### Option 3: GGUF Conversion (Advanced)
```bash
# Convert to GGUF format for llama.cpp
python scripts/convert_to_gguf.py
# Use with llama.cpp
./llama-server --model models/gguf/isaac_sim_qwen2.5_coder_q4_0.gguf --port 8080
```
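Recent llama.cpp builds of `llama-server` expose an OpenAI-compatible `/v1/chat/completions` endpoint. A hedged client sketch, assuming the server above is listening on `localhost:8080` (the request is only constructed here; the comment shows how to send it):

```python
import json

# Build an OpenAI-style chat request for the llama-server started above.
payload = {
    "messages": [
        {"role": "user", "content": "How do I create a robot with differential drive in Isaac Sim 5.0?"}
    ],
    "max_tokens": 512,
}
body = json.dumps(payload)
print(body)

# To send it (server must be running):
#   curl http://localhost:8080/v1/chat/completions \
#     -H "Content-Type: application/json" -d "$body"
```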
## 🎯 Model Capabilities
- **Isaac Sim 5.0 Expertise**: Deep knowledge of robotics simulation APIs
- **Computer Vision**: Understanding of sensor integration and perception
- **Robot Control**: Programming differential drive, manipulators, and sensors
- **Simulation Setup**: Environment configuration and physics parameters
- **Code Generation**: Python scripts for Isaac Sim workflows
- **Troubleshooting**: Common issues and solutions
## 📊 Performance
- **Base Model**: Qwen2.5-Coder-7B-Instruct
- **Training Data**: 2,000 Isaac Sim-specific examples
- **Training Method**: LoRA fine-tuning (rank 64, alpha 128)
- **Hardware**: NVIDIA RTX 4070 Laptop GPU (8.5GB VRAM)
- **Training Steps**: 300 with curriculum learning
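For reference, LoRA applies a low-rank update scaled by `alpha / r`, so rank 64 with alpha 128 gives an effective scaling of 2. A sketch of the reported hyperparameters (the `target_modules` list is an assumption, a common choice for Qwen2-style models, not taken from the actual training config):

```python
# Hedged sketch of the LoRA hyperparameters reported above.
lora_config = {
    "r": 64,            # adapter rank
    "lora_alpha": 128,  # scaling numerator
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed, not confirmed
}

# Effective scaling applied to the low-rank update BA: alpha / r
scaling = lora_config["lora_alpha"] / lora_config["r"]
print(scaling)  # 2.0
```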
## 🔧 Installation
```bash
# Clone repository
git clone https://github.com/TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1.git
cd Qwen2.5-Coder-7B-Instruct-Omni1.1
# Install dependencies
pip install -r requirements.txt
# Download models (choose one)
# Option 1: HuggingFace (5.3GB)
huggingface-cli download TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1 --local-dir models/huggingface
# Option 2: CTransformers (5.2GB)
huggingface-cli download TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1 --local-dir models/ctransformers
# Option 3: GGUF (616MB + conversion)
huggingface-cli download TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1 --local-dir models/gguf
```
## 📚 Examples
### Isaac Sim Robot Creation
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1")
tokenizer = AutoTokenizer.from_pretrained("TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1")
query = """<|im_start|>user
Create a Python script to spawn a UR5 robot in Isaac Sim 5.0 with proper physics properties.
<|im_end|>
<|im_start|>assistant"""
# Generate response
inputs = tokenizer(query, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Sensor Integration
```python
query = """<|im_start|>user
How do I add a depth camera to my robot and process the depth data in Isaac Sim?
<|im_end|>
<|im_start|>assistant"""
# Tokenize, generate, and decode as in the robot-creation example above
```
## ⚠️ Known Limitations
### GGUF Conversion Issues
The GGUF conversion currently has metadata compatibility issues:
- **Error**: Missing `qwen2.context_length` field
- **Workaround**: Use HuggingFace or CTransformers formats
- **Status**: Under investigation for future updates
### Hardware Requirements
- **HuggingFace**: 8GB+ VRAM for full precision
- **CTransformers**: 4GB+ VRAM for optimized inference
- **GGUF**: 2GB+ VRAM (when conversion is fixed)
## 🛠️ Troubleshooting
### Common Issues
1. **Out of Memory Errors**
   ```python
   # Use 8-bit quantization (requires the bitsandbytes and accelerate packages)
   from transformers import AutoModelForCausalLM, BitsAndBytesConfig

   model = AutoModelForCausalLM.from_pretrained(
       "TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1",
       quantization_config=BitsAndBytesConfig(load_in_8bit=True),
       device_map="auto"
   )
   ```
2. **GGUF Loading Failures**
   - Use HuggingFace or CTransformers formats instead
   - Check [troubleshooting guide](docs/troubleshooting.md)
3. **Isaac Sim Integration Issues**
   - Ensure Isaac Sim 5.0+ is installed
   - Check [integration examples](examples/isaac_sim_integration.py)
## 📖 Documentation
- [Model Card](model_card.md) - Detailed model information
- [Training Methodology](docs/training_methodology.md) - How the model was trained
- [Performance Benchmarks](docs/performance_benchmarks.md) - Evaluation results
- [Troubleshooting Guide](docs/troubleshooting.md) - Common issues and solutions
## 🤝 Contributing
We welcome contributions! Please see our [contributing guidelines](CONTRIBUTING.md) for details.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- **NVIDIA Isaac Sim Team** for the simulation platform
- **Qwen Team** for the base model
- **Hugging Face** for the training infrastructure
- **Open Source Community** for tools and libraries
## 📞 Support
- **Issues**: [GitHub Issues](https://github.com/TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1/issues)
- **Discussions**: [GitHub Discussions](https://github.com/TomBombadyl/Qwen2.5-Coder-7B-Instruct-Omni1.1/discussions)
- **Documentation**: [Full Documentation](docs/)
---
**Note**: This model is specifically trained for Isaac Sim 5.0 robotics development. For general coding tasks, consider using the base Qwen2.5-Coder-7B-Instruct model.