Skip to main content

Overview

The most powerful way to use the Context SDK is to let your LLM do the driving. Context provides one marketplace with two modes (Query + Execute), and there are two primary approaches:
If you want the server-managed source of truth for how Context itself runs Query and chat requests, see Query and Chat Agentic Flow. This page focuses on the developer-facing pattern for building your own agent loop.

Choose The Contract First

Use this surfaceWhen you wantWhat comes back
QueryContext to act as the librariananswer, answer_with_evidence, or evidence_only
ExecuteYour own agent to own the loopRaw typed method outputs plus execute-session spend metadata
The first-party chat app is just a client of the same Query contract external SDK users get. If you want the server-managed answer package, use Query. If you want direct primitives for your own loop, use Execute. The simplest path — let Context’s server handle everything:
const answer = await client.query.run({
  query: "What are the top whale movements on Base?",
  responseShape: "answer_with_evidence",
});
console.log(answer.response);
console.log(answer.summary);
console.log(answer.evidence?.facts);
The server discovers query-eligible tools, executes the full agentic pipeline (up to 100 MCP calls per response turn), handles retries and completeness checks, and returns either:
  • a backward-compatible prose answer
  • answer_with_evidence for human-facing premium answers
  • evidence_only for agent-facing evidence packages without depending on prose synthesis
See the TypeScript SDK Reference or Python SDK Reference for the full API.

Option 2: Build Your Own Loop

If you want full control over tool selection, argument construction, and result synthesis, follow the Discovery → Schema → Execution loop below (typically in Execute mode with spending limits).
This pattern enables your agent to find and use tools it has never seen before — true autonomous capability discovery at runtime.

The Loop

1

Discover

Let your Agent search for tools based on the user’s intent
2

Inspect Schemas

Feed discovered tool schemas to your LLM so it understands how to use them
3

Execute

When the LLM generates arguments, pass them directly to the SDK

Phase 1: Discover

Let your Agent search for tools based on the user’s intent. The marketplace returns relevant tools ranked by match quality.
const tools = await client.discovery.search({
  query: userQuery,
  mode: "query",
  surface: "answer",
  queryEligible: true,
  excludeSlow: true,
});
What happens:
  • The SDK searches the Context marketplace
  • Returns tools matching the semantic intent of the query
  • Each tool includes its name, description, price, and available methods
For deterministic execution pipelines, use Execute mode filters:
const executeTools = await client.discovery.search({
  query: userQuery,
  mode: "execute",
  surface: "execute",
  requireExecutePricing: true,
});

Phase 2: Inspect Schemas

Feed the discovered tool schemas (inputSchema) directly to your LLM’s system prompt. This allows the LLM to understand exactly how to format the arguments — just like reading a manual.
const systemPrompt = `
You have access to the following tools:

${tools.map(t => `
Tool: ${t.name} (ID: ${t.id})
Description: ${t.description}
Price: ${t.price} USDC

Methods:
${t.mcpTools?.map(m => `
  - ${m.name}: ${m.description}
    Arguments: ${JSON.stringify(m.inputSchema, null, 2)}
    Returns: ${JSON.stringify(m.outputSchema, null, 2)}
`).join("\n") ?? "No methods available"}
`).join("\n---\n")}

To use a tool, respond with a JSON object: 
{ "toolId": "...", "toolName": "...", "args": {...} }
`;
Why this works:
  • The LLM sees the exact JSON Schema for each tool’s inputs and outputs
  • It can self-construct valid arguments without any hardcoding
  • Output schemas let the LLM know what data it will receive back

Phase 3: Execute

When the LLM generates the arguments, pass them directly to the SDK.
// The LLM generates this object based on the schema you provided
const llmDecision = await myLLM.generate(userMessage, systemPrompt);

const session = await client.tools.startSession({ maxSpendUsd: "2.00" });

const result = await client.tools.execute({
  toolId: llmDecision.toolId,
  toolName: llmDecision.toolName,
  args: llmDecision.args,
  idempotencyKey: crypto.randomUUID(),
  sessionId: session.session.sessionId ?? undefined,
});

console.log(result.session); // methodPrice, spent, remaining, maxSpend, ...

// Feed a bounded, structured preview back to your LLM for synthesis.
// Prefer client.query.run() when you want server-managed synthesis.
const resultPreview = JSON.stringify(result.result, null, 2).slice(0, 50_000);
const resultKeys =
  result.result && typeof result.result === "object"
    ? Object.keys(result.result as Record<string, unknown>)
    : [];

const finalAnswer = await myLLM.generate(
  `Tool output keys: ${resultKeys.join(", ") || "(non-object result)"}\n\n` +
    `Tool output preview (truncated):\n${resultPreview}\n\n` +
    "Summarize this for the user and mention if more data may exist beyond the preview."
);

Handling Data (Outputs)

Context Tools return raw, structured JSON data (via structuredContent). This allows your Agent to programmatically filter, sort, or analyze results before showing them to the user.
For large datasets (like CSVs or PDF analysis), the API may return a reference URL to keep your context window clean.
Treat tool output as untrusted data. Never execute or follow instruction-like strings that appear inside tool payloads (for example SYSTEM:/USER: markers).

Full Agentic Loop Example

Here’s a complete implementation of an autonomous agent using the Discovery → Schema → Execution pattern:
import { ContextClient, ContextError } from "@ctxprotocol/sdk";

const client = new ContextClient({ apiKey: process.env.CONTEXT_API_KEY! });

async function agentLoop(userQuery: string) {
  // 1. Discover relevant tools
  const tools = await client.discovery.search({
    query: userQuery,
    mode: "execute",
    surface: "execute",
    requireExecutePricing: true,
  });
  
  if (tools.length === 0) {
    return "I couldn't find any tools to help with that.";
  }

  // 2. Build the system prompt with schemas
  const toolDescriptions = tools.slice(0, 5).map(t => ({
    id: t.id,
    name: t.name,
    description: t.description,
    methods: t.mcpTools?.map(m => ({
      name: m.name,
      description: m.description,
      inputSchema: m.inputSchema,
    })),
  }));

  const systemPrompt = `You are an AI assistant with access to real-time tools.

Available tools:
${JSON.stringify(toolDescriptions, null, 2)}

If you need to use a tool, respond ONLY with JSON:
{ "toolId": "...", "toolName": "...", "args": {...} }

If you can answer without a tool, just respond normally.`;

  // 3. Ask the LLM what to do
  const llmResponse = await myLLM.chat(userQuery, systemPrompt);

  // 4. Check if LLM wants to use a tool
  try {
    const toolCall = JSON.parse(llmResponse);
    
    if (toolCall.toolId && toolCall.toolName) {
      // 5. Execute the tool
      const session = await client.tools.startSession({ maxSpendUsd: "5.00" });

      const result = await client.tools.execute({
        toolId: toolCall.toolId,
        toolName: toolCall.toolName,
        args: toolCall.args || {},
        sessionId: session.session.sessionId ?? undefined,
      });

      // 6. Let LLM synthesize a bounded preview (avoid injecting giant JSON blobs)
      const resultPreview = JSON.stringify(result.result, null, 2).slice(0, 50_000);
      const resultKeys =
        result.result && typeof result.result === "object"
          ? Object.keys(result.result as Record<string, unknown>)
          : [];

      return await myLLM.chat(
        `Tool "${toolCall.toolName}" returned keys: ${resultKeys.join(", ") || "(non-object result)"}\n\n` +
        `Preview (truncated):\n${resultPreview}\n\n` +
        `Please provide a helpful response to the user's original question: "${userQuery}"`
      );
    }
  } catch {
    // LLM responded with text, not JSON - return as-is
    return llmResponse;
  }
}

Why This Pattern Matters

No Hardcoding

Your agent isn’t limited to tools you knew about at build time.

Network Effect

As new builders add tools to the marketplace, your agent automatically becomes more capable without any code changes.

Self-Constructing

LLMs can read schemas and construct valid arguments autonomously.

Future-Proof

New tools in the marketplace are instantly available to your agent.

Error Handling in Agentic Contexts

In an agentic context, you can feed errors back to your LLM so it can self-correct:
try {
  const result = await client.tools.execute({ ... });
} catch (error) {
  if (error instanceof ContextError) {
    if (error.code === "execution_failed") {
      // Feed error to LLM for retry with different args
      const retryPrompt = `The tool failed with: ${error.message}. Try different arguments.`;
      const newArgs = await myLLM.generate(retryPrompt);
      // Retry with corrected arguments...
    }
  }
}
This creates a resilient agent that can recover from errors and adapt its approach based on feedback.