---
language:
  - en
license: apache-2.0
library_name: transformers
tags:
  - code
  - QA
  - reasoning
---

# Model Card for nextai-team/Moe-4x7b-reason-code-qa

## Model Details

### Model Description

A powerful 4x7B mixture-of-experts (MoE) Mixtral of Mistral models, built from HuggingFaceH4/zephyr-7b-beta, mistralai/Mistral-7B-Instruct-v0.2, teknium/OpenHermes-2.5-Mistral-7B, and Intel/neural-chat-7b-v3-3 for improved accuracy and precision in general reasoning, QA, and code.

- **Developed by:** NEXT AI
- **Funded by:** Zpay Labs Pvt Ltd.
- **Model type:** Mixtral of Mistral 4x7b (see the config sketch below)
- **Language(s) (NLP):** Code-Reasoning-QA
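
As a quick sanity check of the expert layout, the merged model's configuration can be inspected before downloading the weights. This is a minimal sketch: it assumes the merge exposes a standard Mixtral-style config, which the card does not explicitly state.

```python
from transformers import AutoConfig

# Sketch: inspect the MoE layout. Assumes a standard Mixtral-style
# config; the attribute names below are Mixtral defaults, not
# something this card guarantees.
config = AutoConfig.from_pretrained("nextai-team/Moe-4x7b-reason-code-qa")
print(config.model_type)           # expected "mixtral" for a 4x7b merge
print(config.num_local_experts)    # expected 4, one per source model
print(config.num_experts_per_tok)  # experts routed per token (Mixtral default: 2)
```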

## Model Sources

- https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
- https://huggingface.co/Intel/neural-chat-7b-v3-3
- https://huggingface.co/HuggingFaceH4/zephyr-7b-beta
- https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B

## Instructions to run the model

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "nextai-team/Moe-4x7b-reason-code-qa"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    # 4-bit loading requires the bitsandbytes package
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

def generate_response(query):
    messages = [{"role": "user", "content": query}]
    # Format the conversation with the model's chat template
    prompt = pipeline.tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    outputs = pipeline(
        prompt, max_new_tokens=256, do_sample=True,
        temperature=0.7, top_k=50, top_p=0.95,
    )
    return outputs[0]["generated_text"]

response = generate_response("How to start learning GenAI")
print(response)
```
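
Note that the pipeline echoes the prompt back inside `generated_text`. If you want only the model's reply, `return_full_text=False` is a standard text-generation pipeline argument; the variation below is a small convenience sketch, not part of the original card.

```python
def generate_response_only(query):
    # Same as generate_response above, but strips the echoed prompt.
    messages = [{"role": "user", "content": query}]
    prompt = pipeline.tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    outputs = pipeline(
        prompt, max_new_tokens=256, do_sample=True,
        temperature=0.7, top_k=50, top_p=0.95,
        return_full_text=False,  # drop the prompt from generated_text
    )
    return outputs[0]["generated_text"]
```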

- **Demo:** https://nextai.co.in