
Chaining Agent Skills: Research, Summarize, and Save

By Vishnu Damwala

I typed one message: “Research TypeScript 5.5 release notes, summarize the key features, and save the summary to my notes folder.”

Three separate skills ran. The agent searched the web, took those results and summarized them, then saved the summary to a file. It figured out the order itself. I didn’t tell it which tools to use or in what sequence.

This is skill chaining — and it’s the point where an AI agent stops feeling like a chatbot and starts feeling like a capable assistant.


What chaining means

Chaining is not something you configure. It emerges naturally when:

  1. You define multiple skills with clear descriptions
  2. A user request requires information or output from one skill to perform another
  3. The model reasons about the sequence based on those descriptions

Your job is not to tell the model “first search, then summarize, then save.” Your job is to make each skill’s description clear enough that the model figures this out on its own.

The research chain we’re building has three skills:

  • web_search(query) — fetch real-time information from the web
  • write_file(path, content) — save text to a file (from the file system skills post)
  • summarize_text(text, focus?) — distill a long text to the key points

We’ll use DuckDuckGo’s HTML endpoint: it’s free, requires no API key, and returns results as HTML that’s simple enough to parse with a regex.

// web-search.js
export async function web_search({ query, maxResults = 5 }) {
  if (!query) return { error: "Query is required." };

  try {
    // DuckDuckGo HTML endpoint (lightweight, regex-parseable markup)
    const response = await fetch(
      `https://html.duckduckgo.com/html/?q=${encodeURIComponent(query)}&kl=us-en`,
      {
        headers: {
          "User-Agent": "Mozilla/5.0 (compatible; AgentBot/1.0)"
        }
      }
    );

    if (!response.ok) {
      return { error: `Search failed: HTTP ${response.status}` };
    }

    const html = await response.text();

    // Extract results with a simple regex (works with DuckDuckGo Lite HTML structure)
    const titleRegex = /<a class="result__a"[^>]*href="([^"]*)"[^>]*>([^<]+)<\/a>/g;
    // Snippets may contain <b> tags around matched terms, so match lazily
    // across tags and strip them afterwards
    const snippetRegex = /<a class="result__snippet"[^>]*>([\s\S]*?)<\/a>/g;

    const titles = [...html.matchAll(titleRegex)].map(m => ({ url: m[1], title: m[2].trim() }));
    const snippets = [...html.matchAll(snippetRegex)].map(m => m[1].replace(/<[^>]+>/g, "").trim());

    const results = titles.slice(0, maxResults).map((item, i) => ({
      title: item.title,
      url: item.url,
      snippet: snippets[i] ?? ""
    }));

    if (!results.length) {
      return { error: "No results found. Try a different search query." };
    }

    return { query, results, count: results.length };
  } catch (err) {
    return { error: `Search error: ${err.message}` };
  }
}

Note: DuckDuckGo’s HTML structure changes occasionally. For production use, consider using SerpAPI, Bing Search API, or Brave Search API — all have free tiers. The pattern is identical.
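If you do swap in one of those providers, only the fetch and the response mapping change. Here's a sketch against the Brave Search API — the endpoint URL and `X-Subscription-Token` header follow Brave's documentation, but treat the exact response shape as an assumption to verify; `braveToResults` and `brave_search` are names I'm introducing, not from the post:

```javascript
// Hypothetical adapter: map a Brave Search API JSON response into the same
// { title, url, snippet } shape web_search returns, so the rest of the
// chain doesn't care which provider produced the results.
function braveToResults(json, maxResults = 5) {
  const items = json?.web?.results ?? [];
  return items.slice(0, maxResults).map(r => ({
    title: r.title ?? "",
    url: r.url ?? "",
    snippet: r.description ?? ""
  }));
}

// Sketch of the fetch itself (needs a BRAVE_API_KEY env var; response
// shape assumed from Brave's docs: { web: { results: [...] } })
async function brave_search({ query, maxResults = 5 }) {
  const response = await fetch(
    `https://api.search.brave.com/res/v1/web/search?q=${encodeURIComponent(query)}`,
    { headers: { "X-Subscription-Token": process.env.BRAVE_API_KEY } }
  );
  if (!response.ok) return { error: `Search failed: HTTP ${response.status}` };
  const results = braveToResults(await response.json(), maxResults);
  return results.length
    ? { query, results, count: results.length }
    : { error: "No results found. Try a different search query." };
}
```

Because the output shape is identical, the tool definition and the rest of the chain stay untouched.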


Building summarize_text

This skill calls the Claude API internally: one model distills data that the orchestrating model then consumes. That sounds circular, but it’s a valid and useful pattern.

// summarize.js
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

export async function summarize_text({ text, focus, maxLength = 300 }) {
  if (!text) return { error: "Text is required." };
  if (text.length < 100) return { summary: text, note: "Text was short enough — returned as-is." };

  const prompt = focus
    ? `Summarize the following text, focusing specifically on: ${focus}\n\nText:\n${text}`
    : `Summarize the following text in clear, concise bullet points:\n\n${text}`;

  try {
    const response = await client.messages.create({
      model: "claude-haiku-4-5-20251001",  // Use Haiku — cheaper for summarization tasks
      max_tokens: maxLength,
      messages: [{ role: "user", content: prompt }]
    });

    return {
      summary: response.content[0].text,
      originalLength: text.length,
      focus: focus ?? "general"
    };
  } catch (err) {
    return { error: `Summarization failed: ${err.message}` };
  }
}

Using Haiku here instead of Sonnet keeps costs down; summarization doesn’t require the strongest model.
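One refinement worth considering (this is an addition, not part of the post's code): cap the input you send to the summarizer. Search snippets are short, but once you chain summarize_text after reading whole files, an unbounded `text` can blow past the context window and your budget. A minimal sketch, with `truncateForSummary` as a hypothetical helper name:

```javascript
// Hypothetical guard: cap input size before it reaches the Claude API.
// Returns a `truncated` flag so the caller can tell the model it only
// summarized a prefix of the original text.
function truncateForSummary(text, maxChars = 12000) {
  if (text.length <= maxChars) return { text, truncated: false };
  // Cut at the last sentence boundary before the limit so the model
  // doesn't see a half-finished sentence
  const slice = text.slice(0, maxChars);
  const lastStop = slice.lastIndexOf(". ");
  return {
    text: lastStop > 0 ? slice.slice(0, lastStop + 1) : slice,
    truncated: true
  };
}
```

You'd call this at the top of summarize_text before building the prompt, and surface `truncated: true` in the returned result so the model knows the summary covers only part of the input.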


Tool definitions

// tools.js
export const researchTools = [
  {
    name: "web_search",
    description:
      "Search the web for current information on a topic. " +
      "Use this when the user asks about recent events, release notes, documentation, " +
      "or anything that may have changed after your training cutoff.",
    input_schema: {
      type: "object",
      properties: {
        query: { type: "string", description: "The search query" },
        maxResults: { type: "number", description: "Max results to return (default 5)" }
      },
      required: ["query"]
    }
  },
  {
    name: "summarize_text",
    description:
      "Summarize a long piece of text into key points. " +
      "Use after retrieving content from web search or reading a file, " +
      "when the content is too long to use directly in a response.",
    input_schema: {
      type: "object",
      properties: {
        text: { type: "string", description: "The text to summarize" },
        focus: { type: "string", description: "Optional: what aspect to focus on" },
        maxLength: { type: "number", description: "Max response length in tokens (default 300)" }
      },
      required: ["text"]
    }
  },
  {
    name: "write_file",
    description:
      "Save text content to a file. Use to persist research summaries, notes, or reports. " +
      "Use mode 'create' for new files, 'append' to add to existing ones.",
    input_schema: {
      type: "object",
      properties: {
        path: { type: "string", description: "File path relative to notes directory" },
        content: { type: "string", description: "Text content to save" },
        mode: { type: "string", enum: ["create", "append", "overwrite"], description: "Write mode (default: create)" }
      },
      required: ["path", "content"]
    }
  }
];
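One failure mode worth guarding against here: a tool definition whose name doesn't match any implemented function. A typo only surfaces mid-chain as "Unknown tool: ...". A minimal startup check — `checkToolCoverage` is a name I'm introducing, not from the post — might look like:

```javascript
// Hypothetical sanity check: verify every declared tool has a matching
// implementation and vice versa, so name mismatches fail at startup
// instead of in the middle of a chain.
function checkToolCoverage(toolDefs, toolFunctions) {
  const declared = toolDefs.map(t => t.name);
  const implemented = Object.keys(toolFunctions);
  const missing = declared.filter(n => !implemented.includes(n));
  const extra = implemented.filter(n => !declared.includes(n));
  if (missing.length || extra.length) {
    throw new Error(
      `Tool mismatch. Missing implementations: [${missing}]. Undeclared functions: [${extra}]`
    );
  }
}
```

You'd call it once at startup, e.g. `checkToolCoverage(researchTools, { web_search, summarize_text, write_file })`.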

The full agent loop, annotated

Here’s exactly what happens step-by-step when the chain runs:

// researcher.js
import Anthropic from "@anthropic-ai/sdk";
import { researchTools } from "./tools.js";
import { web_search } from "./web-search.js";
import { summarize_text } from "./summarize.js";
import { write_file } from "./file-skills.js";

const client = new Anthropic();

const toolFunctions = { web_search, summarize_text, write_file };

async function research(topic) {
  const userMessage = `Research "${topic}", summarize the key points, and save the summary to notes/${topic.replace(/\s+/g, "-").toLowerCase()}.md`;

  console.log(`\nResearching: ${topic}\n`);

  const messages = [{ role: "user", content: userMessage }];

  let response = await client.messages.create({
    model: "claude-sonnet-4-6",
    max_tokens: 2048,
    tools: researchTools,
    messages
  });

  let step = 1;

  // The loop runs until stop_reason is "end_turn" (no more tool calls)
  while (response.stop_reason === "tool_use") {
    // There may be multiple tool_use blocks in one response
    const toolBlocks = response.content.filter(b => b.type === "tool_use");

    const toolResults = [];

    for (const toolBlock of toolBlocks) {
      console.log(`Step ${step}: ${toolBlock.name}(${JSON.stringify(toolBlock.input).slice(0, 80)}...)`);

      const fn = toolFunctions[toolBlock.name];
      const result = fn ? await fn(toolBlock.input) : { error: `Unknown tool: ${toolBlock.name}` };

      console.log(`  → ${JSON.stringify(result).slice(0, 120)}`);

      toolResults.push({
        type: "tool_result",
        tool_use_id: toolBlock.id,
        content: JSON.stringify(result)
      });

      step++;
    }

    messages.push(
      { role: "assistant", content: response.content },
      { role: "user", content: toolResults }
    );

    response = await client.messages.create({
      model: "claude-sonnet-4-6",
      max_tokens: 2048,
      tools: researchTools,
      messages
    });
  }

  const finalText = response.content.find(b => b.type === "text")?.text ?? "";
  console.log(`\nFinal response:\n${finalText}`);
  return finalText;
}

// Run it
const topic = process.argv[2] ?? "TypeScript 5.5 features";
research(topic);

Run it:

node researcher.js "TypeScript 5.5 features"
node researcher.js "Node.js 22 release"
node researcher.js "OpenAI GPT-4o mini"

The console output shows each step:

Researching: TypeScript 5.5 features

Step 1: web_search({"query":"TypeScript 5.5 features release notes"}...)
  → {"query":"TypeScript 5.5 features release notes","results":[{"title":"Announcing TypeScript 5.5"...

Step 2: summarize_text({"text":"TypeScript 5.5 introduces inferred type predicates...","focus":"key new features"}...)
  → {"summary":"Key features in TypeScript 5.5:\n- Inferred type predicates..."}

Step 3: write_file({"path":"notes/typescript-5.5-features.md","content":"# TypeScript 5.5 Features\n\n..."}...)
  → {"written":true,"path":"notes/typescript-5.5-features.md"}

Final response:
I've researched TypeScript 5.5 and saved a summary to notes/typescript-5.5-features.md.
Key highlights: inferred type predicates, control flow narrowing improvements...

Parallel vs sequential chaining

Sometimes the model can fire two skills at once instead of waiting:

Sequential (forced): When the output of skill A is the input to skill B, they must run in order. The model handles this automatically — it calls web_search, waits for the result, then calls summarize_text with that result.

Parallel: When two skills are independent, Claude may call them both in the same response. You’ll see multiple tool_use blocks in response.content. The for loop in the example above handles this — it processes all tool blocks before adding a single response to the message history.

This is why you should always loop over toolBlocks = response.content.filter(b => b.type === "tool_use") rather than just taking the first one.
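Here's that parallel case isolated with a mocked response object — the `toolu_*` ids and the `buildToolResults` helper are hypothetical, but the shape mirrors what the Messages API returns:

```javascript
// Mocked assistant turn: two independent tool_use blocks arrive together.
// The agent must answer BOTH in a single user message, with one
// tool_result per tool_use_id.
const mockResponse = {
  stop_reason: "tool_use",
  content: [
    { type: "text", text: "I'll search both topics." },
    { type: "tool_use", id: "toolu_01", name: "web_search", input: { query: "TypeScript 5.5" } },
    { type: "tool_use", id: "toolu_02", name: "web_search", input: { query: "Node.js 22" } }
  ]
};

// Collect every tool_use block and pair each result with its id
function buildToolResults(response, runTool) {
  return response.content
    .filter(b => b.type === "tool_use")
    .map(b => ({
      type: "tool_result",
      tool_use_id: b.id,
      content: JSON.stringify(runTool(b.name, b.input))
    }));
}
```

If you answer only the first block, the API rejects the next request because a `tool_use` id is left without a matching `tool_result`.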


Error propagation in chains

What happens when step 2 fails?

If web_search returns { error: "No results found" } and you pass that directly to summarize_text, the model receives an error string as input to the summarizer. It should recognize this and stop rather than summarizing the error — but only if your descriptions say so.

Add a guard to the summarize_text description:

"Do NOT summarize error messages or empty content. If the input is an error, report it to the user instead."

For production chains, also add early-exit logic in the loop:

for (const toolBlock of toolBlocks) {
  const result = fn ? await fn(toolBlock.input) : { error: `Unknown tool: ${toolBlock.name}` };

  // If a critical step fails, attach a hint to the error result and let the model decide what to do next
  if (result.error && toolBlock.name === "web_search") {
    toolResults.push({
      type: "tool_result",
      tool_use_id: toolBlock.id,
      content: JSON.stringify({ ...result, hint: "Search failed. Consider a different query or inform the user." })
    });
  } else {
    toolResults.push({
      type: "tool_result",
      tool_use_id: toolBlock.id,
      content: JSON.stringify(result)
    });
  }
}

The hint field in the result gives the model a nudge on how to handle the failure.
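As the chain grows, that if/else branch per critical tool gets unwieldy. One way to generalize it — `failureHints` and `annotateResult` are hypothetical names for this sketch, not from the post — is a per-tool hint table:

```javascript
// Hypothetical generalization: one hint per critical tool, so adding a
// new skill means adding a table entry instead of another branch.
const failureHints = {
  web_search: "Search failed. Consider a different query or inform the user.",
  write_file: "Save failed. Report the error; do not claim the file was written."
};

// Attach a hint only when the result is an error AND a hint is configured
function annotateResult(toolName, result) {
  if (result && result.error && failureHints[toolName]) {
    return { ...result, hint: failureHints[toolName] };
  }
  return result;
}
```

In the loop, the push then collapses to a single line: `content: JSON.stringify(annotateResult(toolBlock.name, result))`.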


What’s next

Build a real-world GitHub issue creator: Build a GitHub Issue Creator Skill for Your AI Agent

Unify Claude and OpenAI with one tool API: Vercel AI SDK Tools: One API for Claude and OpenAI Skills

Handle errors when a step in the chain fails: Handling Errors in Agent Skills: Retries and Fallbacks