---
license: llama3
language:
- en
tags:
- llama
- llama-3.1
- text-generation
- AMD
- Ryzen
- NPU
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
# 🦙 LLaMA 3.1 (8B) – Optimized for FastFlowLM on AMD Ryzen™ AI NPU (XDNA2 Only)
## Model Summary
This is a derivative of Meta AI’s LLaMA 3.1 8B Instruct model, optimized to run with FastFlowLM on AMD Ryzen™ AI NPUs (XDNA2). It retains the core architecture and weights from Meta’s release and may include quantization or adaptation for NPU execution.
> ⚠️ **This model is subject to Meta’s LLaMA 3.1 Community License. You must accept Meta’s terms to use or download it.**
## 📝 License & Usage Terms
### Meta LLaMA 3.1 License
- The base model is governed by Meta AI's license:
  👉 https://ai.meta.com/llama/license/
- You must agree to the license terms to access and use the weights. Key conditions include:
  - Commercial use is permitted, but services exceeding 700 million monthly active users require a separate license from Meta
  - Redistribution is only allowed under specific conditions, including passing on the license terms
  - Attribution is required
  - Use must comply with Meta's Acceptable Use Policy
### Redistribution Notice
- This repository does **not** include original Meta weights.
- You must obtain base weights directly from Meta:
👉 https://huggingface.co/meta-llama
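Once you have accepted Meta's license on Hugging Face, the gated base weights can be fetched programmatically. The sketch below is one possible approach using `huggingface_hub` (`pip install huggingface_hub`); it assumes you have exported a valid access token as the `HF_TOKEN` environment variable, and the local directory name is arbitrary.

```python
# Sketch: downloading the gated LLaMA 3.1 base weights from Meta's HF org.
# Assumes Meta's license has been accepted and HF_TOKEN is set in the environment.
import os

REPO_ID = "meta-llama/Llama-3.1-8B-Instruct"

def download_base_weights(local_dir: str = "llama-3.1-8b-instruct") -> str:
    """Download the full model snapshot and return the local path."""
    from huggingface_hub import snapshot_download

    return snapshot_download(
        repo_id=REPO_ID,
        local_dir=local_dir,
        token=os.environ["HF_TOKEN"],  # gated repo: authentication is required
    )

if __name__ == "__main__":
    print(download_base_weights())
```

Without an accepted license and a valid token, the call will fail with a gated-repo access error, which is the expected behavior for these weights.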
### If Fine-tuned
If this model has been fine-tuned, the downstream weights are provided under the following conditions:
- **Base Model License**: Meta’s LLaMA 3.1 Community License
- **Derivative Weights License**: [e.g., CC-BY-NC-4.0, MIT, custom, etc.]
- **Training Dataset License(s)**:
- [Dataset A] – [license]
- [Dataset B] – [license]
Make sure you have the rights to use and distribute any data used in fine-tuning.
## Intended Use
- **Use Cases**: Research, experimentation, academic NLP, code generation (if applicable)
- **Not Intended For**: Production systems without further evaluation, sensitive applications, or commercial deployments that fall outside the terms of Meta’s license
## Limitations & Risks
- May generate incorrect or harmful content
- Has no knowledge of events past its training cutoff
- Biases in training data may persist
## Citation
```bibtex
@misc{llama3herd2024,
  title={The Llama 3 Herd of Models},
  author={Grattafiori, Aaron and others},
  year={2024},
  eprint={2407.21783},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2407.21783}
}
```