---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
language:
- en
tags:
- GGUF
- llamafile
model_creator: TinyLlama
model_name: TinyLlama-1.1B-Chat v1.0
model_type: llama
quantized_by: jartine
---

# TinyLlama-1.1B-Chat v1.0 w/ GGUF + llamafile

- Model creator: [TinyLlama](https://huggingface.co/TinyLlama)
- Original model: [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)

<!-- description start -->
## Description

This repo contains both:

- Prebuilt llamafiles for each quantization format, which can be executed directly to launch a web server or a command-line interface

- GGUF weight files for each quantization format, which require either the [llamafile](https://github.com/mozilla-Ocho/llamafile) or [llama.cpp](https://github.com/ggerganov/llama.cpp) software to run (an additional, optional way to load them from Python is sketched below)
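
As a rough, optional sketch, the GGUF files can also be loaded from Python via the `llama-cpp-python` bindings (a separate package that wraps llama.cpp; it is not part of this repo, and the filename below is illustrative):

```python
# Optional: `pip install llama-cpp-python`. The model_path is an assumed example
# filename; substitute the quantization you actually downloaded from this repo.
from llama_cpp import Llama

llm = Llama(model_path="TinyLlama-1.1B-Chat-v1.0.Q4_K_M.gguf", n_ctx=2048)

# The chat model expects the ChatML template shown in the next section.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is a llamafile?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```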
					
						

## Prompt Template: ChatML

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
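
For reference, here is a minimal sketch (plain string formatting, no particular library assumed) of assembling a prompt in this template; the messages are placeholders:

```python
def chatml_prompt(system_message: str, prompt: str) -> str:
    # Build the ChatML layout shown above: system block, user block,
    # then an opening assistant tag for the model to complete.
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "What is a llamafile?"))
```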
					
						

---

# TinyLlama-1.1B

https://github.com/jzhang38/TinyLlama

The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
					
						

We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into and used by many open-source projects built upon Llama. Moreover, TinyLlama is compact, with only 1.1B parameters, which suits the many applications that demand a restricted computation and memory footprint.
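
If you want to sanity-check these claims locally, a small sketch like the following (using the model identifier referenced in this card) prints the architecture family and an exact parameter count; note that it downloads the full weights:

```python
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

config = AutoConfig.from_pretrained(model_id)
print(config.model_type)  # expected: "llama", i.e. the Llama 2 architecture family
print(config.hidden_size, config.num_hidden_layers, config.num_attention_heads)

model = AutoModelForCausalLM.from_pretrained(model_id)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")  # roughly 1.1B
```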
					
						

#### This Model
This is the chat model fine-tuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). **We follow [HF's Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)'s training recipe.** The model was "initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4."
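
To make the DPO step concrete, here is a minimal, self-contained sketch of the Direct Preference Optimization objective that `DPOTrainer` optimizes (an illustration of the loss only, not the actual training script; the tensors are toy stand-ins for summed completion log-probabilities):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss for a batch of preference pairs: push the policy's chosen-vs-rejected
    log-prob margin above the frozen reference model's margin, scaled by beta."""
    policy_margin = policy_chosen_logps - policy_rejected_logps
    reference_margin = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_margin - reference_margin)).mean()

# Toy example: the chosen completion is already more likely under the policy.
loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.0]))
print(loss)  # shrinks toward zero as the policy margin grows past the reference margin
```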
					
						

#### How to use
You will need transformers>=4.34. Check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.

					
						
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# ...
```