codeShare committed
Commit e8ebd72 · verified · 1 Parent(s): 86e1820

Upload goonsai_civitprompt_LLM.ipynb

Files changed (1)
  1. goonsai_civitprompt_LLM.ipynb +1 -1
goonsai_civitprompt_LLM.ipynb CHANGED
@@ -1 +1 @@
- {"cells":[{"cell_type":"code","source":["#@title Install Dependencies and Set Up GPU\n","!pip install llama-cpp-python huggingface_hub --quiet\n","print(\"Dependencies installed.\")\n","\n","#@title Select a Model { run: \"auto\" }\n","model_name = \"qwen2.5-1.5B-civitai-nsfw-v1\" #@param [\"gemma3-1B-goonsai-nsfw-100k\", \"qwen2.5-1.5B-civitai-nsfw-v1\", \"qwen2.5-3B-goonsai-nsfw-100k\", \"qwen3-1.7B-civitai-nsfw-v1\"]\n","\n","# Download the selected model\n","from huggingface_hub import hf_hub_download\n","model_filename = f\"{model_name}/{model_name}-BF16.gguf\"\n","model_path = hf_hub_download(\n"," repo_id=\"goonsai-com/civitaiprompts\",\n"," filename=model_filename,\n"," local_dir=\"./models\"\n",")\n","print(f\"Downloaded {model_name} to: {model_path}\")\n","\n","#@title Load and Run Model on T4 GPU\n","from llama_cpp import Llama\n","\n","# Load the model\n","llm = Llama(\n"," model_path=model_path,\n"," n_ctx=2048, # Context length\n"," n_batch=512, # Batch size\n"," n_gpu_layers=-1, # Offload all layers to T4 GPU\n"," verbose=False\n",")\n","\n"],"metadata":{"id":"J7itQqSrK1TG"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["\n","#@markdown Define a simple prompt\n","input_prompt = \"a woman in a red bikini on the beach\" #@param {type:\"string\"}\n","#@markdown ...using settings:\n","system_message = \"make a prompt from this\" #@param {type:\"string\"}\n","full_prompt = f\"{system_message}:\\n\\n{input_prompt}\"\n","temperature = 0.7 #@param {type:\"slider\", min:0.1, max:1.5, step:0.01}\n","top_p = 0.9 #@param {type:\"slider\", min:0.7, max:0.9, step:0.01}\n","\n","# Generate a detailed prompt\n","output = llm(\n"," prompt=full_prompt, # Use full_prompt with system message\n"," max_tokens=512, # Adjust based on desired output length\n"," temperature=temperature, # Controls randomness\n"," top_p=top_p, # Nucleus sampling\n"," stop=[\"\\n\"] # Stop at newline for cleaner output\n",")\n","\n","# Print the generated prompt\n","print(\"Generated Prompt:\\n---------\\n\")\n","print(output[\"choices\"][0][\"text\"].replace(',', ',\\n'))\n","\n","\n","\n"],"metadata":{"id":"aq10GOl6KoxJ"},"execution_count":null,"outputs":[]}],"metadata":{"accelerator":"GPU","colab":{"gpuType":"T4","provenance":[],"authorship_tag":"ABX9TyMpt1C32fzmsJACKhoEMnIO"},"kernelspec":{"display_name":"Python 3","name":"python3"},"language_info":{"name":"python"}},"nbformat":4,"nbformat_minor":0}
 
+ {"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"provenance":[{"file_id":"https://github.com/5aharsh/collama/blob/main/Ollama_Setup.ipynb","timestamp":1754040362502}],"gpuType":"T4"},"kernelspec":{"name":"python3","display_name":"Python 3"},"language_info":{"name":"python"},"accelerator":"GPU"},"cells":[{"cell_type":"markdown","source":["# Run Ollama in Colab\n","---\n","\n","[![5aharsh/collama](https://raw.githubusercontent.com/5aharsh/collama/main/assets/banner.png)](https://github.com/5aharsh/collama)\n","\n","This is an example notebook which demonstrates how to run Ollama inside a Colab instance. With this you can run pretty much any small to medium sized models offerred by Ollama for free.\n","\n","For the list of available models check [models being offerred by Ollama](https://ollama.com/library).\n","\n","\n","## Before you proceed\n","---\n","\n","Since by default the runtime type of Colab instance is CPU based, in order to use LLM models make sure to change your runtime type to T4 GPU (or better if you're a paid Colab user). This can be done by going to **Runtime > Change runtime type**.\n","\n","While running your script be mindful of the resources you're using. This can be tracked at **Runtime > View resources**.\n","\n","## Running the notebook\n","---\n","\n","After configuring the runtime just run it with **Runtime > Run all**. And you can start tinkering around. This example uses [Llama 3.2](https://ollama.com/library/llama3.2) to generate a response from a prompted question using [LangChain Ollama Integration](https://python.langchain.com/docs/integrations/chat/ollama/)."],"metadata":{"id":"zyGk-87qnbWE"}},{"cell_type":"markdown","source":["## Installing Dependencies\n","---\n","\n","1. `pciutils` is required by Ollama to detect the GPU type.\n","2. Installation of Ollama in the runtime instance will be taken care by `curl -fsSL https://ollama.com/install.sh | sh`\n","\n","\n"],"metadata":{"id":"B1S1YL6EnYBB"}},{"cell_type":"code","source":["!sudo apt update\n","!sudo apt install -y pciutils\n","!curl -fsSL https://ollama.com/install.sh | sh"],"metadata":{"id":"YlVK9iG4AD5L"},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":["## Running Ollama\n","---\n","\n","In order to use Ollama it needs to run as a service in background parallel to your scripts. Becasue Jupyter Notebooks is built to run code blocks in sequence this make it difficult to run two blocks at the same time. 
As a workaround we will create a service using subprocess in Python so it doesn't block any cell from running.\n","\n","Service can be started by command `ollama serve`.\n","\n","`time.sleep(5)` adds some delay to get the Ollama service up before downloading the model."],"metadata":{"id":"fGEJwjTPoKWH"}},{"cell_type":"code","source":["import threading\n","import subprocess\n","import time\n","\n","def run_ollama_serve():\n"," subprocess.Popen([\"ollama\", \"serve\"])\n","\n","thread = threading.Thread(target=run_ollama_serve)\n","thread.start()\n","time.sleep(5)"],"metadata":{"id":"Jh5CBAFxBYAC"},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":["## Pulling Model\n","---\n","\n","Download the LLM model using `ollama pull llama3.2`.\n","\n","For other models check https://ollama.com/library"],"metadata":{"id":"WcBLqZfyoHg4"}},{"cell_type":"code","source":["\n","model_url = 'goonsai/qwen2.5-3B-goonsai-nsfw-100k' # @param {type:'string'}\n","!ollama pull {model_url}\n","\n"],"metadata":{"id":"o2ghppmRDFny"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["!pip install langchain-ollama"],"metadata":{"id":"MbrT39oil6tK"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["from langchain_core.prompts import ChatPromptTemplate\n","from langchain_ollama.llms import OllamaLLM\n","from IPython.display import Markdown\n","\n","\n","prompt_input = 'egyptian_mythology' # @param {type:'string'}\n","template = \"\"\"Question: {question}\n","\n","Answer: Let's think step by step.\"\"\"\n","\n","prompt = ChatPromptTemplate.from_template(template)\n","\n","model = OllamaLLM(model=\"goonsai/qwen2.5-3B-goonsai-nsfw-100k\")\n","\n","chain = prompt | model\n","\n","display(chain.invoke({'question': f'{prompt_input}'}))"],"metadata":{"id":"mUrk_3pL9LX7"},"execution_count":null,"outputs":[]}]}
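The added notebook spreads the Ollama workflow across several cells. Condensed into one plain-Python sketch (the same commands and calls as in the JSON above, with the apt/curl installation step assumed to have already run), it looks like this:

# Flow of the added notebook: run Ollama as a background service, pull the model,
# then query it through the LangChain Ollama integration.
# Assumes Ollama is installed (curl -fsSL https://ollama.com/install.sh | sh)
# and that langchain-ollama has been pip-installed.
import subprocess
import time

subprocess.Popen(["ollama", "serve"])  # Popen does not block, so the thread wrapper in the notebook is optional
time.sleep(5)                          # give the service a moment to come up before pulling

subprocess.run(["ollama", "pull", "goonsai/qwen2.5-3B-goonsai-nsfw-100k"], check=True)

from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama.llms import OllamaLLM

template = """Question: {question}

Answer: Let's think step by step."""
prompt = ChatPromptTemplate.from_template(template)
model = OllamaLLM(model="goonsai/qwen2.5-3B-goonsai-nsfw-100k")
chain = prompt | model

print(chain.invoke({"question": "egyptian_mythology"}))

Since subprocess.Popen returns immediately, wrapping it in a thread (as the notebook does) is harmless but not required; the time.sleep(5) is what actually gives the server time to start before ollama pull runs.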