YOLOv12-N
Model Description
YOLOv12-N is the nano variant of the 12th-generation YOLO (You Only Look Once) family of real-time object detectors.
It builds on prior YOLO models with improved backbone/neck architectures, updated training strategies, and optimizations for both high-performance GPUs and edge devices.
Features
- Real-time object detection optimized for low-latency inference.
- High accuracy across diverse categories and challenging environments.
- Lightweight variants suitable for mobile and embedded deployment.
- Scalable: runs from smartphones to multi-GPU servers.
- Extensible: fine-tuning is supported for domain-specific datasets (see the sketch after this list).
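The card lists fine-tuning as supported but does not name a training toolchain. A minimal sketch, assuming the Ultralytics API (whose AGPL-3.0 license this model shares) and hypothetical file names `yolo12n.pt` and `my_dataset.yaml`:

```python
# Hedged sketch: fine-tuning a YOLO12 nano checkpoint with the Ultralytics API.
# The checkpoint name "yolo12n.pt" and the dataset config "my_dataset.yaml"
# are assumptions, not taken from this card.
from ultralytics import YOLO

# Load pretrained nano weights as the starting point.
model = YOLO("yolo12n.pt")

# Fine-tune on a domain-specific dataset described by a YOLO-format YAML file.
model.train(data="my_dataset.yaml", epochs=50, imgsz=640)

# Evaluate on the dataset's validation split.
metrics = model.val()
print(metrics.box.map)  # mAP50-95
```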
Use Cases
- Autonomous driving and ADAS (Advanced Driver Assistance Systems)
- Surveillance and security monitoring
- Industrial automation and defect detection
- Retail analytics and inventory monitoring
- Sports analytics and event detection
Inputs and Outputs
Input:
- RGB images or video frames (any resolution; auto-resized during preprocessing).
Output:
- Bounding boxes (x, y, w, h), as illustrated in the sketch after this list
- Class labels
- Confidence scores
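The card does not specify whether (x, y) is the box center or a corner; the sketch below assumes the YOLO convention of center coordinates. The `Detection` tuple and the 0.5 confidence threshold are both illustrative assumptions:

```python
# Hedged sketch: converting (x, y, w, h) center-format boxes to corner
# coordinates and filtering by confidence. All names and values here are
# illustrative; the actual output structure is defined by the runtime.
from typing import List, Tuple

Detection = Tuple[float, float, float, float, str, float]  # x, y, w, h, label, score

def to_corners(x: float, y: float, w: float, h: float) -> Tuple[float, float, float, float]:
    """Convert a center-format box to (x1, y1, x2, y2) corners."""
    return x - w / 2, y - h / 2, x + w / 2, y + h / 2

def keep_confident(dets: List[Detection], threshold: float = 0.5) -> List[Detection]:
    """Drop detections whose confidence score falls below the threshold."""
    return [d for d in dets if d[5] >= threshold]

# Example: one detection centered at (320, 240), 100x80 px, 91% confidence.
dets: List[Detection] = [(320.0, 240.0, 100.0, 80.0, "person", 0.91)]
for x, y, w, h, label, score in keep_confident(dets):
    print(label, score, to_corners(x, y, w, h))
```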
How to use
⚠️ Hardware requirement: this build currently runs only on Qualcomm NPUs (e.g., Snapdragon-powered AI PCs). Apple NPU support is planned next.
1) Install Nexa-SDK
2) Get an access token
Create a token in the Model Hub, then log in:
```bash
nexa config set license '<access_token>'
```
3) Run the model
```bash
nexa infer NexaAI/yolov12-npu
```
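To script the same step, a minimal sketch that shells out to the command above (taken verbatim from step 3; no extra flags are assumed):

```python
# Minimal sketch: running the documented CLI command from Python.
# Output streams to the terminal; check=True raises on a non-zero exit code.
import subprocess

subprocess.run(["nexa", "infer", "NexaAI/yolov12-npu"], check=True)
```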
License
- Licensed under AGPL-3.0 (same as Ultralytics YOLO).