---
license: llama3
language:
- en
tags:
- llama
- llama-3.2
- text-generation
- AMD
- Ryzen
- NPU
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---
# 🦙 LLaMA 3.2 (1B) – Optimized for FastFlowLM on AMD Ryzen™ AI NPU (XDNA2 Only)
## Model Summary
This model is a variant of Meta AI’s **LLaMA 3.2 1B Instruct** release. It preserves the original architecture and weights; optimizations, where applied, take the form of quantization, low-level tuning, or runtime enhancements tailored to NPU inference with FastFlowLM.
> ⚠️ **This model is subject to Meta’s LLaMA 3 license. You must accept Meta’s terms to use or download it.**
## 📝 License & Usage Terms
### Meta LLaMA 3 License
- Governed by Meta AI's LLaMA 3 license:
👉 https://ai.meta.com/llama/license/
- Key restrictions include:
  - **Commercial use limits**: entities with more than 700 million monthly active users must obtain a separate license from Meta
- Redistribution must follow Meta’s guidelines
- Attribution to Meta is required
### Redistribution Notice
- This repository does **not** contain Meta’s original weights.
- You must obtain the base weights directly from Meta:
👉 https://huggingface.co/meta-llama
### If Fine-tuned
If this version includes any fine-tuning or post-training modification:
- **Base Model License**: Meta’s LLaMA 3 License
- **Derivative Weights License**: [e.g., CC-BY-NC-4.0, MIT, custom]
- **Training Dataset License(s)**:
- [Dataset A] – [license]
- [Dataset B] – [license]
Users are responsible for verifying the legality of dataset use and redistribution.
## Intended Use
- **Target Applications**: On-device experimentation, local LLM inference, academic research
- **Exclusions**: Do **not** use in commercial products, production systems, or critical tasks without proper evaluation and license compliance
## Limitations & Risks
- May hallucinate or output biased content
- Knowledge is frozen as of the base model's training cutoff
- Not evaluated for high-stakes or real-time applications
## Citation
```bibtex
@misc{dubey2024llama3,
      title={The Llama 3 Herd of Models},
      author={Dubey, Abhimanyu and others},
      year={2024},
      eprint={2407.21783},
      archivePrefix={arXiv},
      url={https://arxiv.org/abs/2407.21783}
}
```