Update README.md
README.md
CHANGED
@@ -1,143 +1,119 @@
---
license: mit
datasets:
language:
base_model:
tags:
- mind-extension
- philosophy
- sharegpt
- alignment
- language-model
- multi-turn
- q-and-a
- shareable
- teachable
- human-feedback
- deep-learning
- ai-model
- research
- synthetic-data
---

# 🧠 DeepQ

[](LICENSE)
[](https://huggingface.co/StableChatAI/DeepQ)
[](https://huggingface.co/datasets/kulia-moon/DeepRethink)
[](https://pypi.org/project/transformers/)
[](https://pypi.org/project/datasets/)
[](https://huggingface.co/spaces)

---

## 🔍 What is DeepQ?

---

- 🤔 **Deep reasoning** before answering
- 🧩 Trained with ShareGPT-style conversations + DeepRethink Q&A
- 🧠 Designed for philosophical, logical, and emotional introspection
- 🔄 Multi-turn dialogue support
- ⚡️ Lightweight GPT-2 base for fast inference
- 🧪 Works on CPU + GPU
- 🔌 Hugging Face Transformers compatible
- 🧬 Great base for alignment research or dialog tuning

---

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the same checkpoint
tokenizer = AutoTokenizer.from_pretrained("kulia-moon/DeepQ")
model = AutoModelForCausalLM.from_pretrained("kulia-moon/DeepQ")

input_text = "..."
inputs = tokenizer(input_text, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
```

---

## 🧪

---

```
author = {Kulia Moon},
title = {DeepQ: A Deep Thinking Conversational Model},
year = {2025},
howpublished = {\url{https://huggingface.co/kulia-moon/DeepRethink}},
}
```

---
---
license: mit
datasets:
- kulia-moon/DeepRethink
language:
- en
base_model:
- openai-community/gpt2-medium
tags:
- DeepQ
- DeepRethink integrated
- QFamily
- Hugging Face
- NLP
- AI Research
- Reasoning
- Cognitive Simulation
- Transformers
- StableChatAI
- MultiVendor Deployments
- Region-Based Scaling
- Production Ready
---

# 🌌 DeepQ












---

## 🤯 What is DeepQ?

**DeepQ** is a deep reasoning language model that combines the **QFamily** architecture with the **DeepRethink** dataset. It is built for context-rich inference, explanation generation, and reflective response modeling, with the goal of simulating human-like deliberation.

It inherits the base architecture of `gpt2-medium` and is fine-tuned on the **DeepRethink** dataset (`kulia-moon/DeepRethink`), which focuses on multi-perspective reasoning, contradictory thought, question decomposition, and hypothetical situations, all geared toward a model that *rethinks before responding*.
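This recipe can be reproduced at a small scale. Below is a minimal sketch of loading the base model and the DeepRethink dataset for causal-LM fine-tuning; the dataset column name (`"text"`) and the training hyperparameters are assumptions for illustration, not the settings used to train DeepQ.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-medium")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2-medium")

# Assumed column name: inspect the dataset to find its actual text field.
dataset = load_dataset("kulia-moon/DeepRethink", split="train")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="deepq-ft", per_device_train_batch_size=2, num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # plain causal-LM objective
)
trainer.train()
```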

---

## 📦 Key Features

| Feature | Description |
| ----------------------- | ------------------------------------------------------------------------ |
| 🧠 DeepRethink Data | Trained on thousands of synthetic and real thought chains |
| 🧬 Cognitive Patterns | Simulates re-evaluation and critical thinking behaviors |
| 🏗 GPT2 Foundation | Built on `openai-community/gpt2-medium` |
| 🌎 Regional Scaling | Deploys across regions for low-latency use |
| 💬 Reflective Responses | Handles contradiction, dilemma, and uncertainty contexts |
| 🛠 Use Case Ready | Research, chatbots, simulators, tutoring systems, AI ethics discussions |
| ☁️ Multi-vendor Support | Optimized for deployment on Hugging Face, Vercel, AWS, GCP, Azure |
| 🚀 Streaming Compatible | Full support for SSE and WebSocket-based AI pipelines (see the streaming sketch below) |
| 📚 Licensing | MIT license, open and production-friendly |
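For the streaming row above, here is a minimal sketch of token-by-token generation with Transformers' `TextIteratorStreamer`. The SSE or WebSocket transport itself is not shown; how you forward chunks to clients is up to your server stack.

```python
from threading import Thread
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

tokenizer = AutoTokenizer.from_pretrained("StableChatAI/DeepQ")
model = AutoModelForCausalLM.from_pretrained("StableChatAI/DeepQ")

inputs = tokenizer("Why do people sometimes change their beliefs?", return_tensors="pt")
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Run generation in a background thread so tokens can be consumed as they are produced.
Thread(target=model.generate, kwargs={**inputs, "max_new_tokens": 100, "streamer": streamer}).start()

for chunk in streamer:
    print(chunk, end="", flush=True)  # forward each chunk to an SSE or WebSocket client here
```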

---

## 🚀 Deployments

| Region | Vendor | Endpoint | Deployment Badge |
| ---------------------- | ------------ | ---------------------------------------------------------------------------------- | ---------------- |
| US East (VA) | Hugging Face | [US East](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| EU West (Ireland) | Hugging Face | [EU West](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| Asia (Singapore) | Hugging Face | [Asia](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| Global CDN | Vercel | [Vercel CDN](https://deepq.vercel.app) |  |
| US West (Oregon) | AWS | [AWS](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| EU Central (Frankfurt) | AWS | [AWS EU](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| Tokyo | GCP | [GCP JP](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| Sydney | Azure | [Azure AU](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| São Paulo | Hugging Face | [Brazil](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| India (Mumbai) | Hugging Face | [India](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| Canada (Montreal) | Hugging Face | [Canada](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| Africa (Cape Town) | Hugging Face | [Africa](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
| Middle East (Bahrain) | Hugging Face | [Middle East](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |  |
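Each link above opens the Hugging Face Inference Endpoints creation page for `StableChatAI/DeepQ`. Once an endpoint is running, it can be queried with `huggingface_hub`; this is a minimal sketch, and the endpoint URL and token below are placeholders you copy from your own endpoint page.

```python
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="https://<your-endpoint>.endpoints.huggingface.cloud",  # placeholder: your endpoint URL
    token="hf_xxx",  # placeholder: your Hugging Face access token
)

# Send a prompt to the deployed endpoint and print the generated continuation.
print(client.text_generation("Why do people sometimes change their beliefs?", max_new_tokens=100))
```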

---

## 🧪 Use Cases

* **AI Research**: Foundation for studying multi-layered logic simulation and AI explainability
* **Reflective Chatbots**: For applications needing nuanced, multi-turn understanding (see the prompt sketch after this list)
* **Tutoring Systems**: Where feedback loops and re-evaluation are essential
* **Debate Engines**: The model holds internal opposition to simulate conflict and resolution
* **Philosophical AI**: Explore cognitive dissonance, ethics, duality, and hypothetical constructs
* **Medical/Ethical Simulators**: With dilemma-aware prompts and double-sided scenarios
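For the reflective, multi-turn chatbot case, note that DeepQ is a GPT-2 derivative and this card does not define a dedicated chat template. A minimal sketch is plain turn concatenation, where the `User:`/`DeepQ:` markers and the sampling settings are an illustrative convention rather than an official format.

```python
from transformers import pipeline

chat = pipeline("text-generation", model="StableChatAI/DeepQ")

history = [
    ("User", "Why do people sometimes change their beliefs?"),
    ("DeepQ", "Often because new evidence or experience conflicts with what they already hold."),
    ("User", "Does that make the earlier belief irrational?"),
]
# Concatenate the turns into a single prompt and let the model continue as "DeepQ".
prompt = "\n".join(f"{speaker}: {text}" for speaker, text in history) + "\nDeepQ:"

print(chat(prompt, max_new_tokens=100, do_sample=True, temperature=0.8)[0]["generated_text"])
```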

---

## 🧭 Quickstart

```bash
pip install transformers
```

```python
from transformers import pipeline

qa = pipeline("text-generation", model="StableChatAI/DeepQ")
print(qa("Why do people sometimes change their beliefs?")[0]["generated_text"])
```
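For more control than the pipeline offers, the model can also be driven directly through `AutoModelForCausalLM`. A minimal sketch, with sampling settings chosen for illustration rather than taken from this card:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("StableChatAI/DeepQ")
model = AutoModelForCausalLM.from_pretrained("StableChatAI/DeepQ")

inputs = tokenizer("Why do people sometimes change their beliefs?", return_tensors="pt")
# Sample a longer, more exploratory answer; tune these values for your use case.
output = model.generate(**inputs, max_new_tokens=150, do_sample=True, top_p=0.9, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```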

---

## 🌐 Links

* **Model Card**: [https://huggingface.co/StableChatAI/DeepQ](https://huggingface.co/StableChatAI/DeepQ)
* **Dataset**: [https://huggingface.co/datasets/kulia-moon/DeepRethink](https://huggingface.co/datasets/kulia-moon/DeepRethink)
* **Deploy Model**: [https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ)
* **GitHub**: [https://github.com/StableChatAI/DeepQ](https://github.com/StableChatAI/DeepQ)
* **License**: MIT

---

> *“DeepQ isn't just another language model — it's a new frontier of thought.”*
> — QFamily Lab 🧪