---
license: llama3.2
language:
- en
tags:
- llama
- llama-3.2
- text-generation
- AMD
- Ryzen
- NPU
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---

# 🦙 LLaMA 3.2 (3B) – Optimized for FastFlowLM on AMD Ryzen™ AI NPU (XDNA2 Only)

## Model Summary

This model is a variant of Meta AI’s **LLaMA 3.2 3B Instruct** release. It preserves the original architecture and weights, with potential optimizations via quantization, low-level tuning, or runtime enhancements delivered through FastFlowLM and tailored to AMD Ryzen™ AI NPUs.

> ⚠️ **This model is subject to Meta’s LLaMA 3.2 Community License. You must accept Meta’s terms to use or download it.**

## 📝 License & Usage Terms

### Meta LLaMA 3.2 Community License

- Governed by Meta AI’s LLaMA 3.2 Community License:
  👉 https://ai.meta.com/llama/license/
- Key restrictions include:
  - **Large-scale commercial use** (services exceeding 700 million monthly active users) requires a separate license from Meta
  - Redistribution must follow Meta’s guidelines
  - Attribution to Meta is required

### Redistribution Notice

- This repository does **not** contain Meta’s original weights.
- You must obtain the base weights directly from Meta:
  👉 https://huggingface.co/meta-llama
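
For reference, one way to fetch the gated base weights is the `huggingface_hub` Python client, after accepting Meta’s terms on the model page. This is a minimal sketch, not part of this repository; the local directory path is an arbitrary example.

```python
# Minimal sketch: download the gated base weights with huggingface_hub.
# Prerequisites: accept Meta's license on the Hugging Face model page and
# authenticate first (e.g., `huggingface-cli login` or the HF_TOKEN env var).
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="meta-llama/Llama-3.2-3B-Instruct",
    local_dir="./Llama-3.2-3B-Instruct",  # example path; choose your own
)
print(f"Base weights downloaded to: {local_path}")
```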

### If Fine-tuned

If this version includes any fine-tuning or post-training modification:

- **Base Model License**: Meta’s LLaMA 3.2 Community License
- **Derivative Weights License**: [e.g., CC-BY-NC-4.0, MIT, custom]
- **Training Dataset License(s)**:
  - [Dataset A] – [license]
  - [Dataset B] – [license]

Users are responsible for verifying the legality of dataset use and redistribution.

## Intended Use

- **Target Applications**: On-device experimentation, local LLM inference (see the sketch below), academic research
- **Exclusions**: Do **not** use in commercial products, production systems, or critical tasks without proper evaluation and license compliance
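
As an illustration of local inference, the snippet below uses the Hugging Face `transformers` text-generation pipeline with the gated base checkpoint. It is a minimal CPU/GPU sketch and does not exercise the FastFlowLM NPU runtime; the prompt, dtype, and token budget are illustrative assumptions, not recommended settings.

```python
# Minimal sketch: local chat-style generation with the base
# Llama 3.2 3B Instruct checkpoint (requires access to the gated repo).
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-3B-Instruct",
    torch_dtype=torch.bfloat16,  # example dtype; use float32 on older CPUs
    device_map="auto",           # requires the accelerate package
)

messages = [
    {"role": "user", "content": "Summarize what an NPU is in two sentences."}
]
output = generator(messages, max_new_tokens=128)
# The pipeline returns the full conversation; the last message is the reply.
print(output[0]["generated_text"][-1]["content"])
```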

## Limitations & Risks

- May hallucinate or output biased content
- Knowledge is frozen as of the base model's training cutoff
- Not evaluated for high-stakes or real-time applications

## Citation

```bibtex
@misc{touvron2024llama3,
  title={LLaMA 3: Open Foundation and Instruction Models},
  author={Touvron, Hugo and others},
  year={2024},
  url={https://ai.meta.com/llama/}
}
```