GPT-OSS PlantUML Generation Model V1

Model Description

GPT-OSS PlantUML Generation Model V1 is a fine-tuned language model specialised in generating PlantUML diagrams from natural language descriptions. The model excels at creating complex conceptual diagrams that map philosophical, mathematical, and scientific concepts across different domains and historical periods.

Model Details

  • Base Model: GPT-OSS architecture
  • Model Type: Causal Language Model
  • Language(s): English
  • License: Apache 2.0
  • Fine-tuned from: openai/gpt-oss-20b (abliterated by huihui.ai)
  • Parameters: 20.9B
  • Precision: BF16

Training Details

Training Data

The model was fine-tuned on the PumlGenV1 dataset of natural language descriptions paired with corresponding PlantUML diagram code.
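
The dataset's exact schema is not reproduced here; as a purely illustrative sketch, each training record presumably pairs a natural-language description with the corresponding PlantUML source, along these hypothetical lines:

Python

# Hypothetical record layout for a description/PlantUML pair.
# The field names "description" and "plantuml" are illustrative,
# not the dataset's documented schema.
example_record = {
    "description": "Show thesis, antithesis, and synthesis in Hegelian dialectics",
    "plantuml": (
        "@startuml\n"
        "rectangle Thesis\n"
        "rectangle Antithesis\n"
        "rectangle Synthesis\n"
        "Thesis --> Synthesis\n"
        "Antithesis --> Synthesis\n"
        "@enduml\n"
    ),
}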

Training Configuration

  • Optimiser: AdamW 8-bit
  • Adapter Method: LoRA (Low-Rank Adaptation); see the configuration sketch after this list
    • LoRA Rank: 1000
    • LoRA Alpha: 2000
  • Training Epochs: 3
  • Batch Size: 1
  • Gradient Accumulation Steps: 16
  • Effective Batch Size: 16
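
The full training script is not published with the model card; a minimal sketch of how the LoRA hyperparameters above could be expressed with the Hugging Face peft library, assuming the attention projection layers as target modules, is:

Python

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base checkpoint and target modules are assumptions, not confirmed by the card.
base = AutoModelForCausalLM.from_pretrained("openai/gpt-oss-20b")

lora_config = LoraConfig(
    r=1000,           # LoRA rank, as listed above
    lora_alpha=2000,  # LoRA alpha, as listed above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()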

Training Infrastructure

  • Fine-tuning approach: Parameter-efficient fine-tuning with LoRA
  • Memory optimisation: 8-bit AdamW optimiser (a TrainingArguments sketch follows this list)
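
As with the LoRA settings, the exact trainer setup is not published; the optimiser and batching configuration above corresponds roughly to the following transformers TrainingArguments sketch (the 8-bit AdamW option requires the bitsandbytes package, and bf16 is an assumption based on the BF16 checkpoint):

Python

from transformers import TrainingArguments

# A hedged reconstruction of the batching/optimiser settings, not the actual script.
training_args = TrainingArguments(
    output_dir="gpt-oss-pumlgen-v1",   # hypothetical output path
    num_train_epochs=3,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,    # effective batch size of 16
    optim="adamw_bnb_8bit",            # 8-bit AdamW via bitsandbytes
    bf16=True,
)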

Intended Use

Primary Use Cases

  • Academic Research: Visualising complex philosophical and scientific concepts
  • Educational Content: Creating diagrams for teaching abstract ideas
  • Documentation: Generating visual representations of conceptual frameworks
  • Knowledge Mapping: Illustrating relationships between ideas across disciplines

Example Usage

Input Prompt:

Map the evolution of the concept of 'nothing' from Parmenides through Buddhist śūnyatā to quantum vacuum fluctuations, showing philosophical, mathematical, and physical interpretations

Expected Output:

[Rendered PlantUML diagram: the concept of 'nothing' traced from Parmenides through Buddhist śūnyatā to quantum vacuum fluctuations, with philosophical, mathematical, and physical interpretations]

Usage Examples

Basic Usage

Python

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("chrisrutherford/gpt-oss-pumlGenV1")
# Load in the checkpoint's dtype and spread the 20.9B parameters across devices (needs accelerate).
model = AutoModelForCausalLM.from_pretrained(
    "chrisrutherford/gpt-oss-pumlGenV1",
    torch_dtype="auto",
    device_map="auto",
)

prompt = "Map the evolution of the concept of 'nothing' from Parmenides through Buddhist śūnyatā to quantum vacuum fluctuations, showing philosophical, mathematical, and physical interpretations"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1000)

# Decode only the newly generated tokens, i.e. the PlantUML source.
puml_code = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(puml_code)
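
To turn the generated source into an image, one option is to write it to a .puml file and render it with a local PlantUML installation (this assumes the plantuml command and a Java runtime are available on the system; they are not part of the model):

Python

import subprocess
from pathlib import Path

# Save the generated PlantUML source.
Path("diagram.puml").write_text(puml_code, encoding="utf-8")

# Render to PNG with a local PlantUML install (assumes `plantuml` is on PATH).
subprocess.run(["plantuml", "-tpng", "diagram.puml"], check=True)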