The most powerful way to use the Context SDK is to let your LLM do the driving. Context provides one marketplace with two modes (Query + Execute), and there are two primary approaches:
If you want the server-managed source of truth for how Context itself runs Query and chat requests, see Query and Chat Agentic Flow. This page focuses on the developer-facing pattern for building your own agent loop.
Execute returns raw typed method outputs plus execute-session spend metadata. The first-party chat app is just a client of the same Query contract that external SDK users get. If you want the server-managed answer package, use Query. If you want direct primitives for your own loop, use Execute.
The simplest path — let Context’s server handle everything:
```typescript
const answer = await client.query.run({
  query: "What are the top whale movements on Base?",
  responseShape: "answer_with_evidence",
});
console.log(answer.response);
console.log(answer.summary);
console.log(answer.evidence?.facts);
```
The server discovers query-eligible tools, executes the full agentic pipeline (up to 100 MCP calls per response turn), handles retries and completeness checks, and returns one of:
a backward-compatible prose answer
answer_with_evidence for human-facing premium answers
evidence_only for agent-facing evidence packages without depending on prose synthesis
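As an illustration of how a client might choose among these shapes, here is a minimal sketch. The `ResponseShape` union and the helper are assumptions for illustration, not SDK API; in particular, the name of the backward-compatible prose shape (`"answer"` below) is a guess.

```typescript
// Hypothetical helper: the two named shapes come from the docs above,
// but "answer" (the backward-compatible prose shape) and this function
// are illustrative assumptions, not SDK API.
type ResponseShape = "answer" | "answer_with_evidence" | "evidence_only";

function pickResponseShape(
  consumer: "human" | "agent",
  wantEvidence: boolean
): ResponseShape {
  if (consumer === "agent") return "evidence_only"; // skip prose synthesis entirely
  return wantEvidence ? "answer_with_evidence" : "answer"; // human-facing defaults
}
```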
If you want full control over tool selection, argument construction, and result synthesis, follow the Discovery → Schema → Execution loop below (typically in Execute mode with spending limits).
This pattern enables your agent to find and use tools it has never seen before — true autonomous capability discovery at runtime.
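The Discovery step depends on your SDK surface, but once you hold a discovered tool list, a runtime filter is straightforward. The sketch below is illustrative: the `DiscoveredTool` shape mirrors the fields used in the system-prompt example on this page, while the price cutoff and the filter itself are assumptions, not SDK behavior.

```typescript
// Tool shape mirrors the fields the system-prompt example uses
// (id, name, description, price, mcpTools); the budget filter is illustrative.
interface McpMethod {
  name: string;
  description: string;
  inputSchema: unknown;
  outputSchema: unknown;
}

interface DiscoveredTool {
  id: string;
  name: string;
  description: string;
  price: string; // USDC, as a decimal string
  mcpTools?: McpMethod[];
}

function selectAffordableTools(
  tools: DiscoveredTool[],
  maxPriceUsd: number
): DiscoveredTool[] {
  return tools
    .filter(t => (t.mcpTools?.length ?? 0) > 0) // must expose callable methods
    .filter(t => Number(t.price) <= maxPriceUsd); // stay inside the budget
}
```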
Feed the discovered tool schemas (inputSchema) directly to your LLM’s system prompt. This allows the LLM to understand exactly how to format the arguments — just like reading a manual.
```typescript
const systemPrompt = `You have access to the following tools:

${tools.map(t => `Tool: ${t.name} (ID: ${t.id})
Description: ${t.description}
Price: ${t.price} USDC
Methods:
${t.mcpTools?.map(m => `  - ${m.name}: ${m.description}
    Arguments: ${JSON.stringify(m.inputSchema, null, 2)}
    Returns: ${JSON.stringify(m.outputSchema, null, 2)}`).join("\n") ?? "No methods available"}`).join("\n---\n")}

To use a tool, respond with a JSON object: { "toolId": "...", "toolName": "...", "args": {...} }`;
```
Why this works:
The LLM sees the exact JSON Schema for each tool’s inputs and outputs
It can self-construct valid arguments without any hardcoding
Output schemas let the LLM know what data it will receive back
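Before executing anything, validate the JSON the LLM hands back. A minimal defensive parser might look like this; the expected shape matches the `{ "toolId": "...", "toolName": "...", "args": {...} }` instruction from the prompt above, and the function itself is an illustrative sketch, not SDK API.

```typescript
// Defensive parse of the LLM's tool-call JSON; the expected shape
// matches the prompt instruction ({ toolId, toolName, args }).
interface ToolDecision {
  toolId: string;
  toolName: string;
  args: Record<string, unknown>;
}

function parseToolDecision(raw: string): ToolDecision | null {
  try {
    const parsed = JSON.parse(raw) as Partial<ToolDecision>;
    if (typeof parsed.toolId !== "string" || typeof parsed.toolName !== "string") return null;
    if (parsed.args === null || typeof parsed.args !== "object") return null;
    return parsed as ToolDecision;
  } catch {
    return null; // malformed JSON: ask the LLM to retry rather than executing
  }
}
```

On a `null` result, re-prompt the LLM with the schema instead of calling `client.tools.execute`.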
When the LLM generates the arguments, pass them directly to the SDK.
```typescript
// The LLM generates this object based on the schema you provided
const llmDecision = await myLLM.generate(userMessage, systemPrompt);

const session = await client.tools.startSession({ maxSpendUsd: "2.00" });
const result = await client.tools.execute({
  toolId: llmDecision.toolId,
  toolName: llmDecision.toolName,
  args: llmDecision.args,
  idempotencyKey: crypto.randomUUID(),
  sessionId: session.session.sessionId ?? undefined,
});
console.log(result.session); // methodPrice, spent, remaining, maxSpend, ...

// Feed a bounded, structured preview back to your LLM for synthesis.
// Prefer client.query.run() when you want server-managed synthesis.
const resultPreview = JSON.stringify(result.result, null, 2).slice(0, 50_000);
const resultKeys =
  result.result && typeof result.result === "object"
    ? Object.keys(result.result as Record<string, unknown>)
    : [];
const finalAnswer = await myLLM.generate(
  `Tool output keys: ${resultKeys.join(", ") || "(non-object result)"}\n\n` +
    `Tool output preview (truncated):\n${resultPreview}\n\n` +
    "Summarize this for the user and mention if more data may exist beyond the preview."
);
```
Context Tools return raw, structured JSON data (via structuredContent). This lets your agent programmatically filter, sort, or analyze results before showing them to the user.
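For example, an agent can rank a structured result in code instead of asking the LLM to sort it. The `transfers` payload shape below is sample data for illustration, not a real tool's schema.

```typescript
// Illustrative post-processing of a structured tool result;
// the `transfers` shape is sample data, not a real tool's schema.
interface Transfer {
  token: string;
  amountUsd: number;
}

function topTransfers(
  structuredContent: { transfers: Transfer[] },
  n: number
): Transfer[] {
  return [...structuredContent.transfers] // copy: don't mutate the tool result
    .sort((a, b) => b.amountUsd - a.amountUsd) // largest first
    .slice(0, n);
}
```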
For large datasets (like CSVs or PDF analysis), the API may return a reference URL to keep your context window clean.
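Your loop should therefore check whether a result is an inline payload or a reference before feeding it to the LLM. The field name `referenceUrl` below is an assumption; check your actual result shape.

```typescript
// Sketch: detect a reference-URL result instead of inlining a huge payload.
// The `referenceUrl` field name is an assumption, not a documented contract.
function asReference(result: unknown): string | null {
  if (result && typeof result === "object" && "referenceUrl" in result) {
    const url = (result as { referenceUrl?: unknown }).referenceUrl;
    return typeof url === "string" ? url : null;
  }
  return null; // inline payload: safe to feed (a bounded preview of) it to the LLM
}
```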
Treat tool output as untrusted data. Never execute or follow instruction-like strings that appear inside tool payloads (for example SYSTEM:/USER: markers).
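One cheap defense-in-depth measure is to scan payload text for instruction-like markers before it reaches your LLM, and quarantine or flag anything suspicious. The marker list below is illustrative; tune it for your own threat model.

```typescript
// Minimal prompt-injection screen: flag (never execute) instruction-like
// markers inside tool payloads. The pattern list is an illustrative starting
// point, not an exhaustive filter.
const INSTRUCTION_MARKERS: RegExp[] = [
  /^\s*SYSTEM:/im,
  /^\s*USER:/im,
  /ignore (all )?previous instructions/i,
];

function looksLikeInjection(payloadText: string): boolean {
  return INSTRUCTION_MARKERS.some(rx => rx.test(payloadText));
}
```

A flagged payload can still be summarized for the user; the point is that your agent loop treats it as data to describe, never as instructions to follow.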