---
language: en
tags:
  - coffeechat-ai
  - text-generation
  - gpt2
  - chatbot
  - side-project
license: apache-2.0
datasets:
  - openwebtext
model-index:
  - name: CoffeeChatAI
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          type: wikitext
          name: WikiText-103
        metrics:
          - type: perplexity
            name: Perplexity
            value: 21.1
co2_eq_emissions: 149200
---

# ☕ CoffeeChatAI

CoffeeChatAI is a lightweight GPT-2–based English language model.
It was developed and customized by Adrian Charles and his team Bluckhut as a side project, with the goal of making an accessible, branded chatbot-style AI for text generation.

CoffeeChatAI can be used to generate text for creative, academic, or entertainment purposes.


## Model Details

  • Developed by: Adrian Charles & Team Bluckhut
  • Base model: GPT-2
  • Model type: Transformer-based causal language model
  • Language: English
  • Parameters: ~1.6M
  • License: Apache 2.0
  • Description:
    CoffeeChatAI is a branded and documented model, designed to serve as the backbone of the CoffeeChat project.
    It is compact, fast, and intended for experimentation and educational side projects.
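The ~1.6M parameter count can be sanity-checked by instantiating a similarly sized GPT-2 configuration from scratch. The hyperparameters below are illustrative guesses at what a model of this size might look like, not CoffeeChatAI's actual settings:

```python
# Sketch: a tiny GPT-2-style causal LM built from scratch with transformers.
# All config values are assumptions for illustration only.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    vocab_size=6000,   # small vocabulary (assumption)
    n_positions=256,   # short context window
    n_embd=128,        # hidden size
    n_layer=4,         # transformer blocks
    n_head=4,          # attention heads
)
model = GPT2LMHeadModel(config)

# Tied input/output embeddings are counted once.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # roughly 1.6M with these values
```

Randomly initializing a config like this (no pretrained weights) is also a cheap way to explore the architecture offline.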

## Intended Uses

### Possible Applications

  • Writing assistance (autocompletion, idea generation, grammar help)
  • Creative text generation (stories, poetry, dialogue)
  • Entertainment (chatbots, games, roleplay scenarios)
  • Educational demos (exploring transformers, model compression, and fine-tuning)
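For the fine-tuning demo use case, here is a minimal sketch of a single training step. It uses a tiny, randomly initialized GPT-2 and dummy token data so it runs anywhere; swap in CoffeeChatAI and a real tokenized dataset for actual fine-tuning:

```python
# Hedged sketch of one causal-LM training step (dummy model and data).
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Stand-in model; replace with
# AutoModelForCausalLM.from_pretrained("topboykrepta/CoffeeChatAI") for real use.
model = GPT2LMHeadModel(GPT2Config(vocab_size=100, n_positions=32,
                                   n_embd=32, n_layer=2, n_head=2))
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)

batch = torch.randint(0, 100, (4, 16))      # 4 dummy sequences of 16 token ids
out = model(input_ids=batch, labels=batch)  # labels=input_ids -> shifted LM loss
out.loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"loss: {out.loss.item():.2f}")
```

Passing `labels=input_ids` makes the model compute the standard next-token cross-entropy internally, which is why no explicit loss function appears in the loop.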

## ⚠️ Limitations & Risks

  • May produce biased, offensive, or inaccurate content
  • Not suitable for tasks requiring factual correctness (e.g., news, medical, legal advice)
  • Small model size means weaker performance than larger GPT-2/GPT-3 models

## How to Use

You can load and use CoffeeChatAI directly with Hugging Face transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("topboykrepta/CoffeeChatAI")
model = AutoModelForCausalLM.from_pretrained("topboykrepta/CoffeeChatAI")

inputs = tokenizer("Hello, I am CoffeeChat AI,", return_tensors="pt")
outputs = model.generate(**inputs, max_length=30, num_return_sequences=2, do_sample=True)

for i, output in enumerate(outputs):
    print(f"Generated {i+1}: {tokenizer.decode(output, skip_special_tokens=True)}")
```

Or with the Hugging Face pipeline:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="topboykrepta/CoffeeChatAI")
print(generator("Hello, I am CoffeeChat AI,", max_length=30, num_return_sequences=2))
```
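Output quality depends heavily on decoding settings. The sketch below shows common `generate` parameters on a tiny, randomly initialized GPT-2 so it runs without downloading weights; the values are illustrative, not tuned for CoffeeChatAI:

```python
# Hedged sketch of common decoding knobs for model.generate().
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Dummy stand-in model; replace with
# AutoModelForCausalLM.from_pretrained("topboykrepta/CoffeeChatAI") for real output.
model = GPT2LMHeadModel(GPT2Config(vocab_size=100, n_positions=64,
                                   n_embd=32, n_layer=2, n_head=2))

prompt_ids = torch.tensor([[1, 2, 3]])  # stand-in for a tokenized prompt
output = model.generate(
    prompt_ids,
    max_new_tokens=20,       # length of the continuation
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.8,         # <1.0 sharpens the next-token distribution
    top_p=0.9,               # nucleus sampling: keep the top 90% probability mass
    repetition_penalty=1.2,  # discourage loops, helpful for small models
    pad_token_id=0,
)
print(output.shape)  # torch.Size([1, 23]): 3 prompt tokens + 20 new tokens
```

For a small model like this one, sampling with a moderate `temperature` and `repetition_penalty` usually reads better than greedy decoding, which tends to loop.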

## Citation

If you use this model, please cite:

```bibtex
@misc{CoffeeChatAI2025,
  author = {Adrian Charles and Team Bluckhut},
  title = {CoffeeChatAI: A Tiny Chat Application},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/topboykrepta/CoffeeChatAI}},
}
```

