---
license: other
language:
- en
base_model:
- meta-llama/Llama-3.1-8B
new_version: shiv1119/ollama-model-shiv
pipeline_tag: text-generation
tags:
- ollama
- local-llm
- docker
- offline
- shiv
---
# 🦙 Shiv's Custom Local LLM (Ollama Ready)

Welcome to **`ollama-model-shiv`**, a custom-built local language model designed to run **completely offline** using [Ollama](https://ollama.com/). This repo packages everything — from `Modelfile` to model blobs — ready for Docker-based deployment and local inference.

---

## Link to model - https://huggingface.co/shiv1119/ollama-model-shiv

## 📁 Directory Structure

```text
OllamaModelBuild/
├── model_test.py              # Sample Python script to interact with the model
├── .gitattributes             # Git LFS or text handling config
├── ollama_clean/
│   ├── docker-compose.yml     # Docker config to run Ollama with this model
│   ├── Modelfile              # Ollama build instructions
│   └── models/                # Model weights & metadata
│       ├── blobs/             # Binary blobs of the model
│       └── manifests/         # Manifest describing model structure
```

---

## 🚀 Features

- ✅ 100% Offline — No internet or API key needed
- 🐋 Docker-ready with `docker-compose.yml`
- ⚡ Works with [Ollama CLI](https://ollama.com)
- 🔁 Full model reproducibility via blobs and manifests
- 🧠 Based on `Llama 3.1 8B` / open-weight LLM architecture

---

## 🛠️ Getting Started

### 🔧 1. Install Prerequisites

- [Install Docker](https://docs.docker.com/get-docker/)
- (Optional) [Install Ollama CLI](https://ollama.com/download) — used during build

### 🐋 2. Run the Model Using Docker

In the root of the project:

```bash
cd ollama_clean
docker-compose up --build
```

This builds and runs the model container locally with your custom blobs and Modelfile.
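
The compose file that actually runs is `ollama_clean/docker-compose.yml`; as a rough orientation only, a minimal compose setup for serving Ollama usually looks something like the sketch below (the service name, port mapping, and volume path here are illustrative assumptions, not the contents of this repo's file):

```yaml
# Hypothetical sketch of a compose file for a local Ollama service;
# the real ollama_clean/docker-compose.yml may differ.
services:
  ollama:
    image: ollama/ollama               # official Ollama server image
    ports:
      - "11434:11434"                  # Ollama's default HTTP API port
    volumes:
      - ./models:/root/.ollama/models  # mount the packaged blobs/ and manifests/
    restart: unless-stopped
```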

### 🧪 3. Test the Model (Optional)

```bash
docker-compose exec ollama ollama run tinyllama "Hello"
```

Use the included Python script to test interaction:

```bash
python model_test.py
```

Customize it to query your local Ollama model running at http://localhost:11434.
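
Before opening `model_test.py`, the following minimal sketch (not the bundled script itself) shows how a Python script can query the Ollama HTTP API at `http://localhost:11434`; the model name `tinyllama` is borrowed from the test command above and should be swapped for whatever name your Modelfile is registered under:

```python
# Minimal sketch of querying a local Ollama server; the bundled
# model_test.py may be structured differently.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "tinyllama") -> str:
    """Send a single non-streaming generation request to Ollama."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body["response"]

if __name__ == "__main__":
    print(ask("Hello"))
```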

## 🧰 Model Components

- `Modelfile`: Blueprint for Ollama to build the model (a minimal example of the format follows this list)
- `blobs/`: Raw model weights
- `manifests/`: Metadata describing model format/version
- `docker-compose.yml`: Encapsulates build/run config
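
The `Modelfile` shipped in `ollama_clean/` is the source of truth for this repo; for readers new to the format, a typical Ollama Modelfile is only a few directives, along the lines of this hypothetical sketch (the base model tag, parameters, and system prompt below are illustrative, not this repo's actual values):

```
# Hypothetical example of the Modelfile format; see ollama_clean/Modelfile
# for what this repository actually builds.

# Base weights: an Ollama model tag or a path to local GGUF/blob files
FROM llama3.1:8b

# Sampling and context-window parameters
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

# System prompt baked into the packaged model
SYSTEM """You are a helpful assistant that runs fully offline."""
```

The file is registered locally with `ollama create <model-name> -f Modelfile`, after which `ollama run <model-name>` serves it.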

## 🧠 About Ollama

Ollama makes it simple to run LLMs locally on your own machine — private, fast, and API-free.

## 📦 Repo Purpose

This repository was created to:

- Host a working local LLM solution
- Enable offline inference
- Serve as a template for packaging custom models with Ollama

## 📜 License

This repo is for educational/research purposes only. Please ensure you comply with the license of any base models used (e.g., LLaMA2, Mistral, etc.).

## 🙌 Credits

Crafted with ❤️ by Shiv Nandan Verma