The most powerful way to use the Context SDK is to let your LLM do the driving. Instead of hardcoding tool calls, follow the Discovery → Schema → Execution loop.
This pattern enables your agent to find and use tools it has never seen before — true autonomous capability discovery at runtime.
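At the SDK level the loop is just two calls, `client.discovery.search()` and `client.tools.execute()`, with your LLM reading the returned schemas in between. Here is a minimal sketch of that shape — the `pickToolCall` helper stands in for your own LLM call and is not part of the SDK, and the query string is only an example:

```typescript
import { ContextClient } from "@ctxprotocol/sdk";

// Stand-in for your own LLM logic: given discovered tools, return a tool call
declare function pickToolCall(
  tools: unknown[]
): Promise<{ toolId: string; toolName: string; args: Record<string, unknown> }>;

const client = new ContextClient({ apiKey: process.env.CONTEXT_API_KEY! });

// 1. Discovery: find candidate tools for the user's request
const tools = await client.discovery.search("current ETH gas prices");

// 2. Schema: hand each tool's methods (name, inputSchema, outputSchema) to your LLM
//    and let it choose a tool and construct arguments
const call = await pickToolCall(tools);

// 3. Execution: run whatever the LLM chose
const result = await client.tools.execute({
  toolId: call.toolId,
  toolName: call.toolName,
  args: call.args,
});
```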
Feed the discovered tool schemas (inputSchema) directly to your LLM’s system prompt. This allows the LLM to understand exactly how to format the arguments — just like reading a manual.
```typescript
const systemPrompt = `You have access to the following tools:
${tools.map(t => `Tool: ${t.name} (ID: ${t.id})
Description: ${t.description}
Price: ${t.price} USDC
Methods:
${t.mcpTools?.map(m => `  - ${m.name}: ${m.description}
    Arguments: ${JSON.stringify(m.inputSchema, null, 2)}
    Returns: ${JSON.stringify(m.outputSchema, null, 2)}`).join("\n") ?? "No methods available"}`).join("\n---\n")}

To use a tool, respond with a JSON object: { "toolId": "...", "toolName": "...", "args": {...} }`;
```
Why this works:
- The LLM sees the exact JSON Schema for each tool's inputs and outputs
- It can construct valid arguments on its own, without any hardcoding (see the sketch below)
- Output schemas let the LLM know what data it will receive back
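To make that concrete, here is a hypothetical input schema as it would appear in the prompt, alongside the tool-call JSON a schema-aware LLM can produce from it. The tool ID, method name, and schema are illustrative, not real Context listings:

```typescript
// A hypothetical inputSchema, exactly as the LLM would see it in the system prompt
const inputSchema = {
  type: "object",
  properties: {
    symbol: { type: "string", description: "Token symbol, e.g. ETH" },
    currency: { type: "string", enum: ["USD", "EUR"], default: "USD" },
  },
  required: ["symbol"],
};

// The JSON a schema-aware LLM can emit for "what is ETH worth in euros?"
const llmToolCall = {
  toolId: "tool_abc123",        // illustrative ID from discovery
  toolName: "get_token_price",  // illustrative method name
  args: { symbol: "ETH", currency: "EUR" },
};
```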
When the LLM generates the arguments, pass them directly to the SDK.
```typescript
// The LLM generates this object based on the schema you provided
const llmDecision = await myLLM.generate(userMessage, systemPrompt);

const result = await client.tools.execute({
  toolId: llmDecision.toolId,
  toolName: llmDecision.toolName,
  args: llmDecision.args,
});

// Feed the result back to your LLM for synthesis
const finalAnswer = await myLLM.generate(
  `The tool returned: ${JSON.stringify(result.result)}. Summarize this for the user.`
);
```
Context Tools return raw, structured JSON data (via structuredContent). This allows your Agent to programmatically filter, sort, or analyze results before showing them to the user.
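For example, if a tool happens to return an array of records, you can trim it in code before handing anything back to the LLM. This is only a sketch: the tool identifiers and the record shape (`name`, `volumeUsd`) are assumptions for illustration, and the real shape is whatever the tool's outputSchema declares:

```typescript
const result = await client.tools.execute({
  toolId: "tool_abc123",      // illustrative ID
  toolName: "list_markets",   // illustrative method name
  args: {},
});

// Assuming this tool's structured output is an array of { name, volumeUsd } records,
// filter and sort programmatically instead of asking the LLM to do it
const records = result.result as { name: string; volumeUsd: number }[];
const topMarkets = records
  .filter(r => r.volumeUsd > 1_000_000)
  .sort((a, b) => b.volumeUsd - a.volumeUsd)
  .slice(0, 10);
```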
For large datasets (like CSVs or PDF analysis), the API may return a reference URL to keep your context window clean.
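How such a reference is surfaced depends on the tool; the `referenceUrl` field below is hypothetical, so check the tool's outputSchema for the actual key. The pattern is simply: if you receive a URL instead of inline data, fetch and process it out-of-band and pass only a summary to the LLM. Continuing from the `result` of an execute call like the one above:

```typescript
// Hypothetical payload shape: inline rows for small results, a URL for large ones
const payload = result.result as { rows?: unknown[]; referenceUrl?: string };

if (payload.referenceUrl) {
  // Large dataset: fetch it outside the prompt and summarize in code
  const response = await fetch(payload.referenceUrl);
  const dataset = await response.json();
  console.log(`Fetched ${Array.isArray(dataset) ? dataset.length : "some"} records by reference`);
} else {
  // Small dataset: safe to include directly in the next LLM prompt
  console.log(payload.rows);
}
```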
Here’s a complete implementation of an autonomous agent using the Discovery → Schema → Execution pattern:
```typescript
import { ContextClient, ContextError } from "@ctxprotocol/sdk";

const client = new ContextClient({ apiKey: process.env.CONTEXT_API_KEY! });

async function agentLoop(userQuery: string) {
  // 1. Discover relevant tools
  const tools = await client.discovery.search(userQuery);
  if (tools.length === 0) {
    return "I couldn't find any tools to help with that.";
  }

  // 2. Build the system prompt with schemas (cap at 5 tools to keep the prompt small)
  const toolDescriptions = tools.slice(0, 5).map(t => ({
    id: t.id,
    name: t.name,
    description: t.description,
    methods: t.mcpTools?.map(m => ({
      name: m.name,
      description: m.description,
      inputSchema: m.inputSchema,
    })),
  }));

  const systemPrompt = `You are an AI assistant with access to real-time tools.

Available tools:
${JSON.stringify(toolDescriptions, null, 2)}

If you need to use a tool, respond ONLY with JSON:
{ "toolId": "...", "toolName": "...", "args": {...} }

If you can answer without a tool, just respond normally.`;

  // 3. Ask the LLM what to do (myLLM is a placeholder for your own model client)
  const llmResponse = await myLLM.chat(userQuery, systemPrompt);

  // 4. Check whether the LLM wants to use a tool; plain text means a direct answer
  let toolCall: { toolId?: string; toolName?: string; args?: Record<string, unknown> };
  try {
    toolCall = JSON.parse(llmResponse);
  } catch {
    // Not JSON - the LLM answered the user directly
    return llmResponse;
  }

  if (!toolCall.toolId || !toolCall.toolName) {
    return llmResponse;
  }

  // 5. Execute the tool
  const result = await client.tools.execute({
    toolId: toolCall.toolId,
    toolName: toolCall.toolName,
    args: toolCall.args ?? {},
  });

  // 6. Let the LLM synthesize the result
  return await myLLM.chat(
    `Tool "${toolCall.toolName}" returned: ${JSON.stringify(result.result)}

Please provide a helpful response to the user's original question: "${userQuery}"`
  );
}
```
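Invoking the agent is then a single call; the query string here is only an example:

```typescript
const answer = await agentLoop("What is the current price of ETH in USD?");
console.log(answer);
```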