---
license: llama3
language:
- en
tags:
- llama
- llama-3.1
- text-generation
- AMD
- Ryzen
- NPU
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---

# 🦙 LLaMA 3.1 (8B) – Optimized for FastFlowLM on AMD Ryzen™ AI NPU (XDNA2 Only)

## Model Summary

This is a derivative of Meta AI's LLaMA 3.1 base model. It retains the core architecture and weights from Meta's release and may include fine-tuning, quantization, or adaptation for specific applications.

> ⚠️ **This model is subject to Meta's Llama 3.1 Community License. You must accept Meta's terms to use or download it.**

## 📝 License & Usage Terms

### Meta Llama License

- The base model is governed by Meta's Llama 3.1 Community License:
  👉 https://ai.meta.com/llama/license/

- You must agree to Meta's license terms to access and use the weights. Key conditions include:
  - Commercial use is permitted, but services exceeding 700 million monthly active users require a separate license from Meta
  - Redistribution is only allowed under specific conditions
  - Attribution is required (e.g., "Built with Llama")

### Redistribution Notice

- This repository does **not** include the original Meta weights.
- You must obtain the base weights directly from Meta:
  👉 https://huggingface.co/meta-llama

### If Fine-tuned

If this model has been fine-tuned, the downstream weights are provided under the following conditions:

- **Base Model License**: Meta's Llama 3.1 Community License
- **Derivative Weights License**: [e.g., CC-BY-NC-4.0, MIT, custom, etc.]
- **Training Dataset License(s)**:
  - [Dataset A] – [license]
  - [Dataset B] – [license]

Make sure you have the rights to use and distribute any data used in fine-tuning.

## Intended Use

- **Use Cases**: Research, experimentation, academic NLP, code generation (if applicable)
- **Not Intended For**: Use in production systems without further evaluation, sensitive applications, or commercial deployments that have not been checked against Meta's license terms
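Llama 3.1 Instruct models expect prompts in Meta's Llama 3 chat format, built from documented special tokens such as `<|begin_of_text|>` and `<|eot_id|>`. As a minimal sketch, the helper below (`build_prompt` is a hypothetical name, not part of `transformers` or FastFlowLM) assembles a single-turn prompt by hand:

```python
# Sketch: hand-build a Llama 3.1 Instruct chat prompt.
# `build_prompt` is a hypothetical helper for illustration only;
# the special tokens are those documented for the Llama 3 chat format.

def build_prompt(system: str, user: str) -> str:
    """Format one system + user turn and cue the assistant to respond."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("You are a helpful assistant.", "What is an NPU?")
print(prompt)
```

In practice, `tokenizer.apply_chat_template(...)` from Hugging Face `transformers` produces this formatting for you, and generation is stopped at the `<|eot_id|>` token.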

## Limitations & Risks

- May generate incorrect or harmful content
- Has no knowledge of events after its training cutoff
- Biases present in the training data may persist

## Citation

```bibtex
@misc{grattafiori2024llama3,
  title={The Llama 3 Herd of Models},
  author={Grattafiori, Aaron and others},
  year={2024},
  eprint={2407.21783},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2407.21783}
}
```