Function calling
A growing number of chat models, such as those from OpenAI and Google (Gemini), have a function-calling API that lets you describe functions and their arguments and have the model return a JSON object with a function to invoke and the inputs to that function. Function calling is extremely useful for building tool-using chains and agents, and for getting structured outputs from models more generally.
LangChain comes with a number of utilities to make function-calling easy. Namely, it comes with:
- simple syntax for binding functions to models
- converters for formatting various types of objects to the expected function schemas
- output parsers for extracting the function invocations from API responses
- chains for getting structured outputs from a model, built on top of function calling
We'll focus here on the first two points. For a detailed guide on output parsing, check out the OpenAI Tools output parsers; for the structured output chains, check out the Structured output guide.
Before getting started, make sure you have langchain-core installed.
%pip install -qU langchain-core langchain-openai
import getpass
import os
Binding functions
A number of models implement helper methods that will take care of formatting and binding different function-like objects to the model. Let's take a look at how we might take the following Pydantic function schema and get different models to invoke it:
from langchain_core.pydantic_v1 import BaseModel, Field
# Note that the docstrings here are crucial, as they will be passed along
# to the model along with the class name.
class Multiply(BaseModel):
    """Multiply two integers together."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Google
- TogetherAI
Install dependencies
pip install -qU langchain-openai
Set environment variables
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass()
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
Install dependencies
pip install -qU langchain-anthropic
Set environment variables
import getpass
import os
os.environ["ANTHROPIC_API_KEY"] = getpass.getpass()
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-3-sonnet-20240229")
Install dependencies
pip install -qU langchain-fireworks
Set environment variables
import getpass
import os
os.environ["FIREWORKS_API_KEY"] = getpass.getpass()
from langchain_fireworks import ChatFireworks
llm = ChatFireworks(model="accounts/fireworks/models/firefunction-v1", temperature=0)
Install dependencies
pip install -qU langchain-mistralai
Set environment variables
import getpass
import os
os.environ["MISTRAL_API_KEY"] = getpass.getpass()
from langchain_mistralai import ChatMistralAI
llm = ChatMistralAI(model="mistral-large-latest")
Install dependencies
pip install -qU langchain-google-genai
Set environment variables
import getpass
import os
os.environ["GOOGLE_API_KEY"] = getpass.getpass()
from langchain_google_genai import ChatGoogleGenerativeAI
llm = ChatGoogleGenerativeAI(model="gemini-pro")
Install dependencies
pip install -qU langchain-together
Set environment variables
import getpass
import os
os.environ["TOGETHER_API_KEY"] = getpass.getpass()
from langchain_together import ChatTogether

llm = ChatTogether(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
)
We can use the bind_tools() method to handle converting Multiply to a "function" and binding it to the model (i.e., passing it in each time the model is invoked).
llm_with_tools = llm.bind_tools([Multiply])
llm_with_tools.invoke("what's 3 * 12")
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Q8ZQ97Qrj5zalugSkYMGV1Uo', 'function': {'arguments': '{"a":3,"b":12}', 'name': 'Multiply'}, 'type': 'function'}]})
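The raw tool call lives in the message's additional_kwargs. Before reaching for the dedicated parsers below, here is a minimal sketch of pulling it out by hand, based on the message shape shown above:

import json

msg = llm_with_tools.invoke("what's 3 * 12")
# The OpenAI-style payload: a list of tool calls, each with a JSON-string "arguments" field.
call = msg.additional_kwargs["tool_calls"][0]["function"]
print(call["name"])  # -> 'Multiply'
print(json.loads(call["arguments"]))  # -> {'a': 3, 'b': 12}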
We can add a tool parser to extract the tool calls from the generated message to JSON:
from langchain_core.output_parsers.openai_tools import JsonOutputToolsParser
tool_chain = llm_with_tools | JsonOutputToolsParser()
tool_chain.invoke("what's 3 * 12")
[{'type': 'Multiply', 'args': {'a': 3, 'b': 12}}]
Or back to the original Pydantic class:
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
tool_chain = llm_with_tools | PydanticToolsParser(tools=[Multiply])
tool_chain.invoke("what's 3 * 12")
[Multiply(a=3, b=12)]
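The parser only recovers the arguments; nothing has executed the multiplication yet. As a small sketch, we can close the loop ourselves using the fields of the parsed Multiply object:

calls = tool_chain.invoke("what's 3 * 12")
# Each parsed object is a Multiply instance with integer fields a and b.
print(calls[0].a * calls[0].b)  # -> 36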
If the model isn't reliably using the tool, we can force tool usage by specifying tool_choice="any" or by specifying the name of the specific tool we want used:
llm_with_tools = llm.bind_tools([Multiply], tool_choice="Multiply")
llm_with_tools.invoke("what's 3 * 12")
AIMessage(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_qIP2bJugb67LGvc6Zhwkvfqc', 'type': 'function', 'function': {'name': 'Multiply', 'arguments': '{"a": 3, "b": 12}'}}]})
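The tool_choice="any" variant, where the provider supports it, forces the model to call some tool without naming one. This is a sketch; the exact tool_choice values accepted vary by provider:

llm_forced_any = llm.bind_tools([Multiply], tool_choice="any")
llm_forced_any.invoke("what's 3 * 12")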
Setting tool_choice to the tool's name forces the model to call that tool (and to call it exactly once), even when the prompt doesn't ask for a multiplication:
llm_with_multiply = llm.bind_tools([Multiply], tool_choice="Multiply")
llm_with_multiply.invoke(
"make up some numbers if you really want but I'm not forcing you"
)
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_f3DApOzb60iYjTfOhVFhDRMI', 'function': {'arguments': '{"a":5,"b":10}', 'name': 'Multiply'}, 'type': 'function'}]})
For more see the ChatOpenAI API reference.
Defining function schemas
If you need to access function schemas directly, LangChain has a built-in converter that can turn Python functions, Pydantic classes, and LangChain Tools into OpenAI-format JSON schemas:
Python function
import json

from langchain_core.utils.function_calling import convert_to_openai_tool

def multiply(a: int, b: int) -> int:
    """Multiply two integers together.

    Args:
        a: First integer
        b: Second integer
    """
    return a * b

print(json.dumps(convert_to_openai_tool(multiply), indent=2))
{
"type": "function",
"function": {
"name": "multiply",
"description": "Multiply two integers together.",
"parameters": {
"type": "object",
"properties": {
"a": {
"type": "integer",
"description": "First integer"
},
"b": {
"type": "integer",
"description": "Second integer"
}
},
"required": [
"a",
"b"
]
}
}
}
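Because this is exactly the payload the API expects, the converted schema can also be bound straight to the model with the generic bind() method instead of bind_tools(); a sketch, assuming an OpenAI-style tools kwarg:

llm_with_raw_schema = llm.bind(tools=[convert_to_openai_tool(multiply)])
llm_with_raw_schema.invoke("what's 3 * 12")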
Pydantic class
from langchain_core.pydantic_v1 import BaseModel, Field

class multiply(BaseModel):
    """Multiply two integers together."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")
print(json.dumps(convert_to_openai_tool(multiply), indent=2))
{
"type": "function",
"function": {
"name": "multiply",
"description": "Multiply two integers together.",
"parameters": {
"type": "object",
"properties": {
"a": {
"description": "First integer",
"type": "integer"
},
"b": {
"description": "Second integer",
"type": "integer"
}
},
"required": [
"a",
"b"
]
}
}
}
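If you only need the inner function object, without the {"type": "function"} wrapper, langchain_core also provides a convert_to_openai_function helper; a quick sketch:

from langchain_core.utils.function_calling import convert_to_openai_function

# Returns just the name/description/parameters object.
print(convert_to_openai_function(multiply)["name"])  # -> 'multiply'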
LangChain Tool
from typing import Any, Type

from langchain_core.tools import BaseTool

class MultiplySchema(BaseModel):
    """Multiply tool schema."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")

class Multiply(BaseTool):
    args_schema: Type[BaseModel] = MultiplySchema
    name: str = "multiply"
    description: str = "Multiply two integers together."

    def _run(self, a: int, b: int, **kwargs: Any) -> Any:
        return a * b

# Note: we're passing in a Multiply object, not the class itself.
print(json.dumps(convert_to_openai_tool(Multiply()), indent=2))
{
"type": "function",
"function": {
"name": "multiply",
"description": "Multiply two integers together.",
"parameters": {
"type": "object",
"properties": {
"a": {
"description": "First integer",
"type": "integer"
},
"b": {
"description": "Second integer",
"type": "integer"
}
},
"required": [
"a",
"b"
]
}
}
}
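Since LangChain tools are runnables themselves, you can also execute one directly with the same arguments the model produced; a minimal sketch:

tool = Multiply()
print(tool.invoke({"a": 3, "b": 12}))  # -> 36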
Next steps
- Output parsing: See OpenAI Tools output parsers and OpenAI Functions output parsers to learn about extracting function-calling API responses into various formats.
- Structured output chains: Some models have constructors that handle creating a structured output chain for you (see the sketch after this list).
- Tool use: See how to construct chains and agents that actually call the invoked tools in these guides.
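As an example of the structured output constructors, many chat model integrations expose a with_structured_output() method that wires the tool binding and output parsing together. A sketch, assuming an integration that implements it and reusing the Pydantic Multiply schema from the top of this guide:

# Hypothetical usage; availability depends on the model integration.
structured_llm = llm.with_structured_output(Multiply)
structured_llm.invoke("what's 3 * 12")
# -> Multiply(a=3, b=12)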