license: apache-2.0
tags:
  - function-calling

Fireworks Function Calling (FireFunction) Model V2


FireFunction is a state-of-the-art function calling model with a commercially viable license. Key info and highlights:

🐾 Successor to the FireFunction v1 model

📏 Significant quality improvements over FireFunction v1 across a broad range of metrics

🔆 Support for parallel function calling (unlike FireFunction v1) and good instruction following

💡 Hosted on the Fireworks platform
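
Because the model is hosted on the Fireworks platform, it can also be used without downloading any weights, through Fireworks' OpenAI-compatible chat completions API. The following is a minimal sketch under assumptions: the base URL, the hosted model identifier and the FIREWORKS_API_KEY environment variable are typical values and should be verified against the Fireworks documentation.

import os
import openai

# Fireworks exposes an OpenAI-compatible endpoint; the base URL and model id below
# are assumptions -- check the Fireworks docs for the exact values.
client = openai.OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],  # hypothetical environment variable holding your API key
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": "Get the current stock price",
        "parameters": {
            "type": "object",
            "properties": {"symbol": {"type": "string", "description": "The stock symbol, e.g. AAPL"}},
            "required": ["symbol"],
        },
    },
}]

response = client.chat.completions.create(
    model="accounts/fireworks/models/firefunction-v2",  # assumed hosted model identifier
    messages=[{"role": "user", "content": "What is the current stock price of AAPL?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)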

Resources

Intended Use and Limitations

Supported use cases

The model was tuned to perform well on a range of use cases, including:

  • general instruction following
  • multi-turn chat mixing vanilla messages with function calls
  • single- and parallel function calling
  • up to 20 function specs supported at once
  • structured information extraction
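
For structured information extraction in particular, the function-calling interface can act as a schema-constrained extractor: describe the fields you want as the parameters of a single function and ask the model to call it. The sketch below follows the same pattern as the Example Usage section further down; the extract_contact_info spec and the sample text are invented for illustration.

from transformers import AutoModelForCausalLM, AutoTokenizer
import json

model = AutoModelForCausalLM.from_pretrained("fireworks-ai/firefunction-v2", device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("fireworks-ai/firefunction-v2")

# A single function whose parameters are the fields we want extracted.
# extract_contact_info is a hypothetical spec used only for this illustration.
extraction_spec = [{
    "name": "extract_contact_info",
    "description": "Record the contact details mentioned in the text",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "The person's full name"},
            "company": {"type": "string", "description": "The company the person works for"},
            "email": {"type": "string", "description": "The person's email address"}
        },
        "required": ["name"]
    }
}]

messages = [
    {"role": "functions", "content": json.dumps(extraction_spec, indent=4)},
    {"role": "system", "content": "Extract the requested fields by calling the provided function."},
    {"role": "user", "content": "Jane Doe from Acme Corp wrote to us from jane.doe@acme.example about the contract renewal."}
]

inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.batch_decode(outputs)[0])  # expect a call to extract_contact_info with the filled-in fields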

Out-of-Scope Use

The model was not optimized for the following use cases:

  • 100+ function specs
  • nested function calling

Example Usage

See the documentation for more detail.

from transformers import AutoModelForCausalLM, AutoTokenizer
import json

# Load the model and tokenizer. device_map="auto" spreads the weights over the
# available GPU(s); torch_dtype="auto" loads them in the checkpoint's native precision.
model = AutoModelForCausalLM.from_pretrained("fireworks-ai/firefunction-v2", device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("fireworks-ai/firefunction-v2")

# Each function spec declares a name, a description and a JSON-Schema description
# of its parameters.
function_spec = [
    {
        "name": "get_stock_price",
        "description": "Get the current stock price",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {
                    "type": "string",
                    "description": "The stock symbol, e.g. AAPL, GOOG"
                }
            },
            "required": [
                "symbol"
            ]
        }
    },
    {
        "name": "check_word_anagram",
        "description": "Check if two words are anagrams of each other",
        "parameters": {
            "type": "object",
            "properties": {
                "word1": {
                    "type": "string",
                    "description": "The first word"
                },
                "word2": {
                    "type": "string",
                    "description": "The second word"
                }
            },
            "required": [
                "word1",
                "word2"
            ]
        }
    }
]
# Serialize the specs; they are passed to the model through the dedicated 'functions' role.
functions = json.dumps(function_spec, indent=4)

messages = [
    {'role': 'functions', 'content': functions},
    {'role': 'system', 'content': 'You are a helpful assistant with access to functions. Use them if required.'},
    {'role': 'user', 'content': 'Hi, can you tell me the current stock price of google and netflix?'}
]

# Render the conversation with the model's chat template and move it to the model's device.
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

# Generate the assistant's reply (a function call, or a plain-text answer if no function is needed).
generated_ids = model.generate(model_inputs, max_new_tokens=128)
# Note: batch_decode returns the rendered prompt together with the newly generated tokens.
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
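
decoded[0] contains the rendered prompt followed by the newly generated assistant turn. When the model decides to call functions, FireFunction models have emitted a functools marker followed by a JSON array of {name, arguments} objects; the helper below is a sketch that assumes this format (parse_function_calls is not part of any library, and the marker should be adjusted if the actual output differs).

import json

def parse_function_calls(decoded_text):
    """Pull function calls out of a decoded completion.

    ASSUMPTION: the model emits calls as a 'functools' marker followed by a JSON
    array such as functools[{"name": "get_stock_price", "arguments": {"symbol": "GOOG"}}].
    Adjust the marker if the actual output format differs.
    """
    marker = "functools"
    # The decoded text still contains the prompt, so look for the last marker occurrence.
    start = decoded_text.rfind(marker)
    if start == -1:
        return []  # the model answered in plain text instead of calling a function
    payload = decoded_text[start + len(marker):]
    end = payload.rfind("]")  # trim trailing special tokens after the JSON array
    if end == -1:
        return []
    return json.loads(payload[:end + 1])

# Continuing from the example above:
for call in parse_function_calls(decoded[0]):
    print(call["name"], call["arguments"])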

Demo App

Check out our easy-to-extend demo chat app with function-calling capabilities, built on the FireFunction model.