🦙 LLaMA 3.1 (8B) – Optimized for FastFlowLM on AMD Ryzen™ AI NPU (XDNA2 Only)
Model Summary
This is a derivative of Meta AI’s LLaMA 3.1 (8B) base model, repackaged for the FastFlowLM runtime on AMD Ryzen™ AI NPUs (XDNA2 only). It retains the core architecture and weights from Meta’s release and may include quantization or other adaptation for NPU execution.
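As a quick orientation, the snippet below is a minimal sketch of fetching this repository’s files with the standard huggingface_hub client; the local directory name is an arbitrary choice. The downloaded files are intended to be served through the FastFlowLM runtime on the NPU rather than loaded with generic PyTorch code, so consult the FastFlowLM documentation for the actual launch command.

```python
# Minimal sketch: download this repository's NPU-packaged files.
# Assumes `pip install huggingface_hub`; the local_dir path is arbitrary.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="FastFlowLM/Llama-3.1-8B-NPU2",
    local_dir="./llama-3.1-8b-npu2",  # any writable directory
)
print(f"Model files downloaded to: {local_path}")
# These files target the FastFlowLM runtime on XDNA2 NPUs; see the
# FastFlowLM project documentation for how to point it at this path.
```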
⚠️ This model is subject to Meta’s LLaMA 3.1 license. You must accept Meta’s terms to use or download it.
📝 License & Usage Terms
Meta LLaMA 3.1 License
The base model is governed by Meta AI’s license:
👉 https://ai.meta.com/llama/license/
You must agree to Meta’s license terms to access and use the weights, which include:
- No commercial use without permission
- Redistribution only allowed under specific conditions
- Attribution required
Redistribution Notice
- This repository does not include original Meta weights.
- You must obtain the base weights directly from Meta (see the download sketch after this list):
👉 https://huggingface.co/meta-llama
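If you also need the original Meta base weights (for comparison or to reproduce this build), the sketch below shows one assumed workflow with huggingface_hub: accept the license on the meta-llama repository page first, then authenticate and download. The token string and local directory are placeholders.

```python
# Sketch of pulling the gated Meta base weights after accepting the license
# on https://huggingface.co/meta-llama/Llama-3.1-8B (token is a placeholder).
from huggingface_hub import login, snapshot_download

login(token="hf_your_access_token_here")  # or run `huggingface-cli login` once

base_path = snapshot_download(
    repo_id="meta-llama/Llama-3.1-8B",
    local_dir="./llama-3.1-8b-base",  # arbitrary local directory
)
print(f"Base weights downloaded to: {base_path}")
```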
If Fine-tuned
If this model has been fine-tuned, the downstream weights are provided under the following conditions:
- Base Model License: Meta’s LLaMA 3.1 License
- Derivative Weights License: [e.g., CC-BY-NC-4.0, MIT, custom, etc.]
- Training Dataset License(s):
- [Dataset A] – [license]
- [Dataset B] – [license]
Make sure you have rights to use and distribute any data used in fine-tuning.
Intended Use
- Use Cases: Research, experimentation, academic NLP, and code generation where applicable (see the evaluation sketch after this list)
- Not Intended For: Use in production systems without further evaluation, sensitive applications, or commercial deployments without Meta’s explicit permission
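For the research and experimentation use cases above, one common evaluation pattern is to prompt the upstream instruct model through the transformers library. The sketch below assumes access to the gated meta-llama/Llama-3.1-8B-Instruct repository and sufficient GPU/CPU memory; it does not load the NPU-packaged files in this repository, which target the FastFlowLM runtime instead.

```python
# Research/evaluation sketch using the upstream instruct model via transformers.
# Assumes the gated meta-llama repo is accessible with your HF credentials.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize the LLaMA 3.1 license terms in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```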
Limitations & Risks
- May generate incorrect or harmful content
- Does not have knowledge past its training cutoff
- Biases in training data may persist
Citation
@misc{touvron2024llama3,
  title={LLaMA 3: Open Foundation and Instruction Models},
  author={Touvron, Hugo and others},
  year={2024},
  url={https://ai.meta.com/llama/}
}
Model tree for FastFlowLM/Llama-3.1-8B-NPU2
- Base model: meta-llama/Llama-3.1-8B
- Finetuned: meta-llama/Llama-3.1-8B-Instruct