Tags: Text Generation · Transformers · PyTorch · code · gpt2 · swift · mobile · generation · text-generation-inference
How to use from vLLM

Install from pip and serve the model:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "mvasiliniuc/iva-codeint-swift-small"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "mvasiliniuc/iva-codeint-swift-small",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
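
Once the server is up, the endpoint can also be called from Python through any OpenAI-compatible client. A minimal sketch, assuming the default localhost port and the openai package; the prompt is illustrative, and vLLM ignores the API key unless one is configured:

from openai import OpenAI

# Point the client at the local vLLM server; the key is a placeholder
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="mvasiliniuc/iva-codeint-swift-small",
    prompt="func triggerNSNotification",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)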
Use Docker
docker model run hf.co/mvasiliniuc/iva-codeint-swift-small

iva-codeint-swift-small is a GPT-2 model (small version, 239.4M parameters) trained from scratch on the text-to-code task, tailored to the Swift language as used in native mobile (iOS) development.

Usage

from transformers import pipeline

# Load the model into a text-generation pipeline and complete a Swift signature
pipe = pipeline("text-generation", model="mvasiliniuc/iva-codeint-swift-small")
outputs = pipe("func triggerNSNotification")
print(outputs[0]["generated_text"])
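
Generation can be tuned through keyword arguments that the pipeline forwards to generate. A minimal sketch reusing the pipe object from above; the values are illustrative, not taken from the model card:

# Illustrative settings: bounded length, low temperature for more focused code
outputs = pipe(
    "func triggerNSNotification",
    max_new_tokens=64,
    do_sample=True,
    temperature=0.2,
)
print(outputs[0]["generated_text"])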

Inference

import requests
import pprint

API_URL = "https://api-inference.huggingface.co/models/mvasiliniuc/iva-codeint-swift-small"
headers = {"Authorization": "Bearer <key>"}  # replace <key> with your HF API token

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": """
/*
A function that gets the current device operating system.
*/
"""
})
pprint.pprint(output, compact=True)
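
The same completion can also be produced locally with the plain transformers generation API instead of the hosted endpoint. A minimal sketch; the generation length is an illustrative choice:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mvasiliniuc/iva-codeint-swift-small")
model = AutoModelForCausalLM.from_pretrained("mvasiliniuc/iva-codeint-swift-small")

prompt = """
/*
A function that gets the current device operating system.
*/
"""
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)  # illustrative budget
print(tokenizer.decode(outputs[0], skip_special_tokens=True))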

Training

Config                         Value
seq length                     1024
weight decay                   0.1
learning rate                  0.0005
max eval steps                 -1
shuffle buffer                 10000
max train steps                150000
mixed precision                fp16
num warmup steps               2000
train batch size               5
valid batch size               5
lr scheduler type              cosine
save checkpoint steps          15000
gradient checkpointing         false
gradient accumulation steps    1
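
The training script itself is not part of this card. As a rough illustration only, the table maps onto the standard transformers TrainingArguments as sketched below; this mapping is an assumption, not the author's published setup, and seq length and shuffle buffer belong to data preparation rather than to TrainingArguments:

from transformers import TrainingArguments

# Hypothetical mapping of the reported hyperparameters; the actual
# training loop used for this model is not published in the card.
args = TrainingArguments(
    output_dir="iva-codeint-swift-small",
    learning_rate=5e-4,              # learning rate 0.0005
    weight_decay=0.1,
    lr_scheduler_type="cosine",
    warmup_steps=2_000,
    max_steps=150_000,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    gradient_accumulation_steps=1,
    gradient_checkpointing=False,
    fp16=True,                       # mixed precision
    save_steps=15_000,               # save checkpoint steps
)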

Resources

Resources used for research:
