Model Card for ExplainIt-Phi-GGUF

This repository contains GGUF versions of a microsoft/phi-2 model fine-tuned using QLoRA to explain complex topics in simple, ELI5-style terms.

Model Overview

ExplainIt-Phi is a 2.7B parameter causal language model designed to be a clear and concise explainer. It was fine-tuned on a curated subset of the ELI5 dataset to excel at breaking down complex ideas.

Intended Uses & Limitations

This model is intended for direct use as a question-answering assistant. It is well-suited for generating content for educational materials, blogs, and chatbots. For best results, prompts should follow the format: Instruct: <your question>\nOutput:.
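
As a minimal illustration of that format (assuming a Python workflow; the helper name is hypothetical), a prompt can be assembled like this:

```python
def build_prompt(question: str) -> str:
    # Wrap a plain question in the Instruct/Output format the model was tuned on.
    return f"Instruct: {question}\nOutput:"

prompt = build_prompt("Why is the sky blue?")
```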

The model is not designed for creative writing or complex multi-turn conversations and may reflect the biases of its training data (the ELI5 subreddit). Always fact-check critical outputs.

How to Get Started

These GGUF models are designed for use with llama.cpp.

  1. Download a model file: Q4_K_M is recommended for general use.
  2. Run with llama.cpp:
    ./llama-cli -m ./ExplainIt-Phi-Q4_K_M.gguf -p "Instruct: Why is the sky blue?\nOutput:" -n 256
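
Alternatively, the same files can be loaded from Python through the llama-cpp-python bindings. This is a minimal sketch, assuming the Q4_K_M file and default sampling settings; adjust the path and parameters for your setup:

```python
from llama_cpp import Llama

# Load the 4-bit GGUF file (point this at wherever you downloaded it).
llm = Llama(model_path="./ExplainIt-Phi-Q4_K_M.gguf", n_ctx=2048)

# Use the same Instruct/Output prompt format the model was fine-tuned on.
prompt = "Instruct: Why is the sky blue?\nOutput:"
result = llm(prompt, max_tokens=256, stop=["Instruct:"])

print(result["choices"][0]["text"].strip())
```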
    

Available Files

This repository provides multiple quantization levels to suit different hardware needs.

| File Name | Quantization | Use Case |
| --- | --- | --- |
| ExplainIt-Phi-Q4_K_M.gguf | Q4_K_M (4-bit) | Default. Balanced quality and size. |
| ExplainIt-Phi-Q5_K_M.gguf | Q5_K_M (5-bit) | Higher quality for systems with more RAM. |
| ExplainIt-Phi-Q8_0.gguf | Q8_0 (8-bit) | Near-lossless, best for GPU execution. |
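
If you prefer to fetch a file programmatically rather than through the browser, a short huggingface_hub sketch looks like this (the Q4_K_M file is assumed; swap in any file name from the table above):

```python
from huggingface_hub import hf_hub_download

# Download one quantization level from this repository to the local HF cache.
model_path = hf_hub_download(
    repo_id="simraann/ExplainIt-Phi-GGUF",
    filename="ExplainIt-Phi-Q4_K_M.gguf",
)
print(model_path)  # pass this path to llama-cli or llama-cpp-python
```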

Evaluation: Before vs. After

The fine-tuning process significantly improved the model's ability to provide simple, analogy-driven explanations.

Prompt: What is an API and what does it do, in simple terms?

Base Phi-2 Model (Before):
"An API, or Application Programming Interface, is a set of rules and protocols that allows different software applications to communicate with each other. It acts as a bridge between two applications, allowing them to exchange data and functionality."

Fine-Tuned ExplainIt-Phi (After):
"An API is like a waiter in a restaurant. You (an application) don't need to know how the kitchen works. You just give your order (a request) to the waiter (the API), and the waiter brings you your food (the data)."

Training Details

The model was fine-tuned using the QLoRA technique on a curated subset of the sentence-transformers/eli5 dataset. For a full breakdown of the training procedure, hyperparameters, and infrastructure, please see the project's GitHub repository.
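
For orientation, a QLoRA setup for phi-2 along these lines typically looks like the sketch below. This is not the exact training configuration used for this model; the rank, alpha, and target modules shown are illustrative assumptions, so refer to the GitHub repository for the real hyperparameters:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit NF4 precision (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Attach low-rank adapters; only these small matrices are trained.
base = prepare_model_for_kbit_training(base)
lora_config = LoraConfig(
    r=16,                      # illustrative rank, not the project's actual value
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # assumed phi-2 attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```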
