✨ Model overview

HyperNova 60B is a large language model developed by Multiverse Computing with a focus on compute efficiency and deployability.

The model is designed to provide strong reasoning and text generation capabilities while significantly reducing compute and memory requirements compared to conventional large-scale language models.

HyperNova 60B is intended for real-world deployment scenarios where cost, latency, and infrastructure constraints are critical, enabling high-performance inference without requiring frontier-scale hardware.

🚀 Architecture

HyperNova 60B's base architecture is gpt-oss-120b.

  • 59B total parameters, with 4.8B active parameters per token
  • MXFP4 quantization
  • Configurable reasoning effort (low, medium, high)
  • Runs in less than 40 GB of GPU memory
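The sub-40 GB figure is consistent with a back-of-the-envelope estimate. MXFP4 packs 4-bit values in blocks of 32 with one shared 8-bit scale, i.e. roughly 4.25 bits per parameter. The sketch below is ours, not from the model card, and assumes all 59B parameters are quantized (in practice some layers may remain in BF16, raising the total somewhat):

```python
# Rough weight-memory estimate for MXFP4 quantization.
# MXFP4 stores 32 four-bit values plus one shared 8-bit (E8M0) scale,
# i.e. about 4 + 8/32 = 4.25 bits per parameter.
# Assumption: all 59B parameters are stored in MXFP4.

PARAMS = 59e9
BITS_PER_PARAM = 4 + 8 / 32  # 4.25 bits

weight_bytes = PARAMS * BITS_PER_PARAM / 8
weight_gb = weight_bytes / 1e9

print(f"~{weight_gb:.1f} GB of weights")  # ~31.3 GB, leaving headroom under 40 GB
```

Activations, the KV cache, and any non-quantized layers add to this, which is why the card quotes "less than 40 GB" rather than ~31 GB.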

For inference examples, please refer to the model card of the base model, gpt-oss-120b.
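As a minimal sketch, the model should load through the standard Hugging Face `transformers` chat pipeline, like its base model. The reasoning-effort hint via the system message is an assumption based on the gpt-oss convention; check the base model card for the exact mechanism:

```python
# Hypothetical inference sketch, assuming the Hugging Face `transformers`
# chat pipeline API used by the base model gpt-oss-120b.

MODEL_ID = "MultiverseComputingCAI/HyperNova-60B"

def build_messages(question, effort="medium"):
    # gpt-oss-style models read the reasoning effort from the system
    # message ("Reasoning: low|medium|high") -- an assumption here.
    return [
        {"role": "system", "content": f"Reasoning: {effort}"},
        {"role": "user", "content": question},
    ]

def generate(question, effort="medium", max_new_tokens=256):
    # Heavy import kept inside the function so the sketch can be read
    # (and the message builder reused) without transformers installed.
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model=MODEL_ID,
        torch_dtype="auto",   # keep the checkpoint's native dtypes
        device_map="auto",    # spread layers across available GPUs
    )
    out = pipe(build_messages(question, effort), max_new_tokens=max_new_tokens)
    return out[0]["generated_text"]
```

With `device_map="auto"`, a single GPU with 40 GB of memory should suffice given the figures above.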

Evaluation & Performance

HyperNova 60B has been evaluated against other SoTA models on general reasoning benchmarks using lighteval>=0.12.0, following the Artificial Analysis Intelligence Benchmarking methodology. The results shown correspond to reasoning_effort = medium.

Accuracy

Intended Use

HyperNova 60B is a general-purpose reasoning and conversational model designed for use in English and programming languages. It also supports several non-English languages, including German, French, Italian, Spanish, and Japanese. The model is well suited for developers building AI agent systems, chatbots, RAG pipelines, and other AI-powered applications, as well as for standard instruction-following tasks.

Model Release Date

January 2, 2026.

Safety & Responsible Use

HyperNova 60B has been developed using a novel compression technology, and as with any emerging technology, its use involves inherent risks. While testing has been performed, it cannot encompass every possible scenario or use case. Multiverse Computing is committed to continuously evaluating, improving, and responsibly deploying HyperNova 60B, and encourages users to apply appropriate safeguards and judgment when integrating the model into their applications.

License

This model is licensed under the Apache 2.0 License.
