aws-neuron/optimum-neuron-cache
License: apache-2.0
optimum-neuron-cache / neuronxcc-2.13.66.0+6dfecc895 / 0_REGISTRY / 0.0.22 / inference / llama
Revision 6f20ccf - 58.5 kB - 3 contributors - History: 74 commits
Latest commit: jburtoft   Synchronizing local compiler cache.   f77ce95 (verified)   over 1 year ago
01-ai              Synchronizing local compiler cache.   over 1 year ago
HuggingFaceTB      Synchronizing local compiler cache.   over 1 year ago
LargeWorldModel    Synchronizing local compiler cache.   over 1 year ago
NousResearch       Synchronizing local compiler cache.   over 1 year ago
abacusai           Synchronizing local compiler cache.   over 1 year ago
defog              Synchronizing local compiler cache.   over 1 year ago
elyza              Synchronizing local compiler cache.   over 1 year ago
gorilla-llm        Synchronizing local compiler cache.   over 1 year ago
ibm                Synchronizing local compiler cache.   over 1 year ago
llm-jp             Synchronizing local compiler cache.   over 1 year ago
m-a-p              Synchronizing local compiler cache.   over 1 year ago
meta-llama         Remove cached bf16 Llama3 entry       over 1 year ago
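
To pull the same listing without the web UI, the short sketch below enumerates this registry path with the huggingface_hub client. It assumes only the repository id and the path prefix shown in the breadcrumb above, and uses the standard HfApi.list_repo_files call; it is an illustrative query, not part of the Neuron cache tooling itself.

from huggingface_hub import HfApi

REPO_ID = "aws-neuron/optimum-neuron-cache"
# Path prefix copied from the breadcrumb on this page.
PREFIX = "neuronxcc-2.13.66.0+6dfecc895/0_REGISTRY/0.0.22/inference/llama/"

api = HfApi()
# List every file tracked in the repo, then keep only the llama inference entries.
for path in api.list_repo_files(REPO_ID):
    if path.startswith(PREFIX):
        print(path)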