---
license: llama3
---

## Overview

Meta Llama 3 is a family of large language models released by Meta. This repository provides the Llama 3 8B model converted to several runtime formats (ONNX, TensorRT-LLM, GGUF) for use with Jan and Cortex.
## Variants

| No | Variant |
| --- | --- |
| 1 | [8B-onnx](https://huggingface.co/cortexhub/llama3/tree/8B-onnx) |
| 2 | [8B-tensorrtllm-linux-ada](https://huggingface.co/cortexhub/llama3/tree/8B-tensorrtllm-linux-ada) |
| 3 | [8B-tensorrtllm-windows-ada](https://huggingface.co/cortexhub/llama3/tree/8B-tensorrtllm-windows-ada) |
| 4 | [8B-gguf](https://huggingface.co/cortexhub/llama3/tree/8B-gguf) |
## Use it with Jan (UI)

1. Install Jan using the [Quickstart](https://jan.ai/docs/quickstart).
2. Use `cortexhub/llama3` in the Jan Model Hub.
## Use it with Cortex (CLI)

1. Install Cortex using the [Quickstart](https://cortex.jan.ai/docs/quickstart).
2. Run the model with the command: `cortex run llama3` (see the sketch below).
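
A minimal sketch of the CLI flow is shown below. The `model:branch` form for selecting a specific variant is an assumption based on the branch names in the table above; consult the Cortex docs if it does not resolve.

```sh
# Download (if needed) and run the default llama3 build
cortex run llama3

# Assumed syntax for requesting a specific variant branch, e.g. the GGUF build
cortex run llama3:8B-gguf
```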
## Credits

- **Author:** meta-llama
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [License](https://llama.meta.com/llama3/license/)
- **Papers:** N/A