OpenAI calls agent skills “function calling.” The concept is identical to Claude’s tool_use — the model decides it needs to call a function, you execute it, you return the result, the model responds. The syntax is just different.
If you’ve already read the Claude tools guide, this will feel very familiar. If not, start with what agent skills are first.
Setup
npm install openai
export OPENAI_API_KEY="your-key-here"
Step 1 — Define your function (tool)
OpenAI uses tools with type: "function". The structure wraps the function definition in an extra layer compared to Claude:
const tools = [
{
type: "function",
function: {
name: "get_weather",
description:
"Get current weather for a city. Use this when the user asks about " +
"weather, temperature, rain, or what to wear.",
parameters: {
type: "object",
properties: {
city: {
type: "string",
description: "The city name, e.g. 'Mumbai' or 'New York'"
}
},
required: ["city"]
}
}
}
];
Note: Claude uses input_schema, OpenAI uses parameters — same JSON Schema format, different key name.
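To make the mapping concrete, here's a small hypothetical helper (not part of either SDK) that converts a Claude-style tool definition into OpenAI's shape. The only changes are the `type: "function"` wrapper and the schema key rename:

```javascript
// Hypothetical converter: Claude tool definition -> OpenAI tool definition.
// Same JSON Schema inside; only the wrapper and the key name change.
function toOpenAITool(claudeTool) {
  const { input_schema, ...rest } = claudeTool;
  return {
    type: "function",
    function: { ...rest, parameters: input_schema }
  };
}

const claudeTool = {
  name: "get_weather",
  description: "Get current weather for a city.",
  input_schema: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"]
  }
};

const openaiTool = toOpenAITool(claudeTool);
// openaiTool.function.parameters holds the exact same schema object
```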
Step 2 — Send the message
import OpenAI from "openai";
const client = new OpenAI();
const response = await client.chat.completions.create({
model: "gpt-4o",
tools: tools,
messages: [
{ role: "user", content: "What's the weather in Mumbai?" }
]
});
const message = response.choices[0].message;
console.log(response.choices[0].finish_reason); // "tool_calls" if a function was requested (finish_reason is on the choice, not the message)
console.log(message.tool_calls); // array of function calls
If gpt-4o wants to call get_weather, the choice object (response.choices[0]) looks like:
{
"finish_reason": "tool_calls",
"message": {
"role": "assistant",
"content": null,
"tool_calls": [
{
"id": "call_abc123",
"type": "function",
"function": {
"name": "get_weather",
"arguments": "{\"city\": \"Mumbai\"}"
}
}
]
}
}
One important difference from Claude: arguments is a JSON string, not an object. You need to JSON.parse() it before use.
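Because that string comes from the model, it can occasionally be malformed. A defensive parse (a sketch with a hypothetical helper, not an official SDK function) turns a bad payload into a reportable tool error instead of a crash:

```javascript
// Hypothetical helper: parse the tool-call arguments string defensively,
// so malformed JSON from the model becomes an error value, not an exception.
function parseToolArguments(toolCall) {
  try {
    return { ok: true, args: JSON.parse(toolCall.function.arguments) };
  } catch (err) {
    return { ok: false, error: `Invalid arguments: ${err.message}` };
  }
}

const parsed = parseToolArguments({
  function: { name: "get_weather", arguments: '{"city": "Mumbai"}' }
});
// parsed.args is now a real object: { city: "Mumbai" }
```

If `ok` is false, you can return the error string as the tool result so the model gets a chance to retry with corrected arguments.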
Step 3 — Execute the function
async function get_weather({ city }) {
const geo = await fetch(
`https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(city)}&count=1`
).then(r => r.json());
if (!geo.results?.length) return { error: "City not found" };
const { latitude, longitude, name, country } = geo.results[0];
const weather = await fetch(
`https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current_weather=true&hourly=relativehumidity_2m`
).then(r => r.json());
const codes = {
0: "Clear sky", 1: "Mainly clear", 2: "Partly cloudy", 3: "Overcast",
61: "Light rain", 63: "Moderate rain", 65: "Heavy rain", 95: "Thunderstorm"
};
return {
city: `${name}, ${country}`,
temperature: `${weather.current_weather.temperature}°C`,
condition: codes[weather.current_weather.weathercode] ?? "Unknown",
humidity: `${weather.hourly.relativehumidity_2m[0]}%`
};
}
// Parse arguments (it's a string, not an object!)
const toolCall = message.tool_calls[0];
const args = JSON.parse(toolCall.function.arguments);
const result = await get_weather(args);
Step 4 — Return the result and get the final answer
Unlike Claude, where you return results with type: "tool_result", OpenAI uses role: "tool":
const finalResponse = await client.chat.completions.create({
model: "gpt-4o",
tools: tools,
messages: [
// Original user message
{ role: "user", content: "What's the weather in Mumbai?" },
// Model's tool_call message (include exactly as returned)
message,
// Your function result
{
role: "tool",
tool_call_id: toolCall.id, // must match the id from the tool_call
content: JSON.stringify(result) // return as a string
}
]
});
console.log(finalResponse.choices[0].message.content);
// "Mumbai is currently partly cloudy at 31°C with 78% humidity..."
Full working example
// weather-agent-openai.js
import OpenAI from "openai";
const client = new OpenAI();
const tools = [
{
type: "function",
function: {
name: "get_weather",
description:
"Get current weather for a city. Use when the user asks about " +
"weather, temperature, rain, or what to wear.",
parameters: {
type: "object",
properties: {
city: { type: "string", description: "City name" }
},
required: ["city"]
}
}
}
];
async function get_weather({ city }) {
const geo = await fetch(
`https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(city)}&count=1`
).then(r => r.json());
if (!geo.results?.length) return { error: "City not found" };
const { latitude, longitude, name, country } = geo.results[0];
const weather = await fetch(
`https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current_weather=true&hourly=relativehumidity_2m`
).then(r => r.json());
const codes = {
0: "Clear sky", 1: "Mainly clear", 2: "Partly cloudy", 3: "Overcast",
61: "Light rain", 63: "Moderate rain", 65: "Heavy rain", 95: "Thunderstorm"
};
return {
city: `${name}, ${country}`,
temperature: `${weather.current_weather.temperature}°C`,
condition: codes[weather.current_weather.weathercode] ?? "Unknown",
humidity: `${weather.hourly.relativehumidity_2m[0]}%`
};
}
const toolFunctions = { get_weather };
async function chat(userMessage) {
const messages = [{ role: "user", content: userMessage }];
let response = await client.chat.completions.create({
model: "gpt-4o",
tools,
messages
});
let assistantMessage = response.choices[0].message;
// Loop until no more tool calls — finish_reason lives on the choice, not the message
while (response.choices[0].finish_reason === "tool_calls") {
// Add the assistant's tool_call message to history
messages.push(assistantMessage);
// Execute each tool call (there could be multiple in parallel)
for (const toolCall of assistantMessage.tool_calls) {
const fn = toolFunctions[toolCall.function.name];
const args = JSON.parse(toolCall.function.arguments);
const result = fn ? await fn(args) : { error: "Unknown function" };
messages.push({
role: "tool",
tool_call_id: toolCall.id,
content: JSON.stringify(result)
});
}
// Call the model again with the results
response = await client.chat.completions.create({
model: "gpt-4o",
tools,
messages
});
assistantMessage = response.choices[0].message;
}
return assistantMessage.content;
}
const answer = await chat("Is it going to rain in Ahmedabad today?");
console.log(answer);
Run:
node weather-agent-openai.js
Claude vs OpenAI — side-by-side
| | Claude API | OpenAI API |
|---|---|---|
| Tool definition key | tools: [{ name, description, input_schema }] | tools: [{ type: "function", function: { name, description, parameters } }] |
| Schema field | input_schema | parameters |
| Stop signal | stop_reason: "tool_use" | finish_reason: "tool_calls" |
| Tool calls in response | content: [{ type: "tool_use", id, name, input }] | message.tool_calls: [{ id, type, function: { name, arguments } }] |
| Arguments format | Object (parsed already) | JSON string (you must parse) |
| Result message role | "user" with type: "tool_result" | "tool" with tool_call_id |
| Multiple tool calls | Supported (multiple tool_use blocks in one response) | Supported (multiple entries in tool_calls) |
Parallel tool calls
OpenAI can request multiple tool calls in a single response — useful for fetching data from multiple sources at once. Claude supports parallel tool use as well (multiple tool_use blocks in one response), so your agent loop should handle more than one call per turn with either provider.
// User: "Compare weather in Mumbai and Delhi"
// gpt-4o might return:
assistantMessage.tool_calls = [
{ id: "call_1", function: { name: "get_weather", arguments: '{"city":"Mumbai"}' } },
{ id: "call_2", function: { name: "get_weather", arguments: '{"city":"Delhi"}' } }
];
The for loop in the full example above handles this automatically — it executes all tool calls and adds all results before calling the model again.
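Since the calls are independent of each other, you can also execute them concurrently with Promise.all instead of awaiting each one in sequence. A sketch, assuming the same toolFunctions map as the full example:

```javascript
// Execute all tool calls from one assistant message concurrently.
// toolFunctions maps function names to implementations, as in the full example.
async function runToolCalls(toolCalls, toolFunctions) {
  return Promise.all(
    toolCalls.map(async (toolCall) => {
      const fn = toolFunctions[toolCall.function.name];
      const args = JSON.parse(toolCall.function.arguments);
      const result = fn ? await fn(args) : { error: "Unknown function" };
      // Each result becomes one role:"tool" message, matched by tool_call_id
      return {
        role: "tool",
        tool_call_id: toolCall.id,
        content: JSON.stringify(result)
      };
    })
  );
}
```

In the agent loop you would then write `messages.push(...await runToolCalls(assistantMessage.tool_calls, toolFunctions))` before calling the model again. Promise.all preserves order, so results line up with the original tool_calls array.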
Controlling when the model uses tools
By default, the model decides whether to call a tool. You can override this:
// Force the model to call a specific tool
tool_choice: { type: "function", function: { name: "get_weather" } }
// Never call any tool (text only)
tool_choice: "none"
// Let the model decide (default)
tool_choice: "auto"
// Require the model to call at least one tool, of its choosing
tool_choice: "required"
Claude has equivalent behaviour with tool_choice — same concept, slightly different syntax.
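For reference, Claude's tool_choice takes an object rather than a string (Anthropic Messages API syntax as I understand it — check the current docs before relying on it):

```javascript
// Claude equivalents (object form instead of OpenAI's strings)
tool_choice: { type: "auto" }                       // model decides (default)
tool_choice: { type: "any" }                        // must call some tool (≈ "required")
tool_choice: { type: "tool", name: "get_weather" }  // force a specific tool
```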
Common mistakes
Forgetting to parse arguments
// Wrong
const args = toolCall.function.arguments; // this is a string
// Right
const args = JSON.parse(toolCall.function.arguments); // now it's an object
Not including the assistant message in history
When sending tool results back, you must include the assistant’s tool_calls message in the history — otherwise the model doesn’t know what result you’re responding to.
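One way to catch this locally is a small sanity check (a hypothetical helper, not part of the SDK) that verifies every tool result answers a tool_call an earlier assistant message actually made:

```javascript
// Hypothetical sanity check: returns true if every role:"tool" message
// references a tool_call id from a preceding assistant message — the
// ordering the API requires.
function validateToolResults(messages) {
  const seenIds = new Set();
  for (const msg of messages) {
    if (msg.role === "assistant" && Array.isArray(msg.tool_calls)) {
      for (const call of msg.tool_calls) seenIds.add(call.id);
    }
    if (msg.role === "tool" && !seenIds.has(msg.tool_call_id)) {
      return false; // orphaned result — the API will reject this history
    }
  }
  return true;
}

const good = validateToolResults([
  { role: "user", content: "Weather in Mumbai?" },
  { role: "assistant", tool_calls: [{ id: "call_1", type: "function",
      function: { name: "get_weather", arguments: '{"city":"Mumbai"}' } }] },
  { role: "tool", tool_call_id: "call_1", content: '{"temperature":"31°C"}' }
]);

const bad = validateToolResults([
  { role: "user", content: "Weather in Mumbai?" },
  { role: "tool", tool_call_id: "call_1", content: '{"temperature":"31°C"}' }
]);
```

Running a check like this before each API call turns a confusing 400 error into an obvious local failure.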
Not looping until finish_reason is "stop"
The model may request tools across several rounds, not just one. Keep calling the API in a while loop until response.choices[0].finish_reason is "stop" rather than "tool_calls".
What’s next
Understand the concept: What Are Agent Skills? AI Tools Explained Simply
Handle failures gracefully: Handling Errors in Agent Skills: Retries and Fallbacks
Test your tools before deploying: Testing and Debugging Agent Skills Before You Deploy
Add Gemini to the set: Agent Skills with Google Gemini: Function Calling Guide
Unified SDK for all providers: Vercel AI SDK Tools: One API for Claude and OpenAI Skills
Claude version: Agent Skills with the Claude API
Related Reading
Agent Skills with Google Gemini: Function Calling Guide
Complete guide to Gemini function calling — define tools, handle function_call responses, return results, and compare syntax with Claude and OpenAI. Node.js.
Vercel AI SDK Tools: One API for Claude and OpenAI Skills
Vercel AI SDK's unified tool interface works with Claude, OpenAI, and Gemini. Write your skill once and switch AI providers without rewriting the agent loop.
Build a GitHub Issue Creator Skill for Your AI Agent
Create a production-ready agent skill that creates GitHub issues from natural language, with label assignment, duplicate detection, and dry-run mode.