Looking for Python? Check out the Python SDK Reference.

Installation

npm install @ctxprotocol/sdk

Requirements

  • Node.js 18+ (for native fetch)
  • TypeScript 5+ (recommended)

Prerequisites

Before using the API, complete setup at ctxprotocol.com:
1. Sign in: creates your embedded wallet.
2. Set spending cap: approve USDC spending on the ContextRouter (one-time setup).
3. Fund wallet: add USDC for tool execution fees.
4. Generate API key: in the Settings page.

Quick Start

import { ContextClient } from "@ctxprotocol/sdk";

const client = new ContextClient({
  apiKey: "sk_live_...",
});

const answer = await client.query.run({
  query: "What are the top whale movements on Base?",
  responseShape: "answer_with_evidence",
});

console.log(answer.response);
console.log(answer.summary);
console.log(answer.evidence?.facts);
Want per-call pricing and spending limits? The SDK also supports Execute mode for direct method calls inside session budgets. See Two SDK Modes below.

Two SDK Modes

The SDK offers two payment models:
| Mode    | Method                 | Payment Model                  | Use Case                                                      |
|---------|------------------------|--------------------------------|---------------------------------------------------------------|
| Query   | client.query.run()     | Pay-per-response               | Complex questions, multi-tool synthesis, curated intelligence |
| Execute | client.tools.execute() | Per call (with spending limit) | Deterministic pipelines, raw outputs, explicit cost control   |
You have access to both modes — pick the one that fits your use case.
  • Use Query (client.query.run()) when you want a managed librarian contract — Context handles discovery/orchestration (up to 100 MCP calls per response turn) and can return plain answer, answer_with_evidence, or evidence_only. Pay-per-response (~$0.10).
  • Use Execute (client.tools.execute()) when your app/agent is the librarian and you want per-call pricing with spending limits (~$0.001/call).
Most developers start with Query and add Execute later for specific pipelines that need raw data or explicit cost control. You can use both in the same application.

Execute Quick Start

const executeTools = await client.discovery.search({
  query: "gas prices",
  mode: "execute",
  surface: "execute",
  requireExecutePricing: true,
});

const method = executeTools[0]?.mcpTools?.[0];
if (!method) throw new Error("No execute method available");

const session = await client.tools.startSession({ maxSpendUsd: "1.00" });
const result = await client.tools.execute({
  toolId: executeTools[0].id,
  toolName: method.name,
  args: { chainId: 1 },
  sessionId: session.session.sessionId ?? undefined,
});
console.log(result.result);
console.log(result.session); // methodPrice, spent, remaining, maxSpend, status...
Full working example: See examples/client/src/execute.ts for a complete Execute-mode client with multi-call session management and spend tracking.
Mixed listings are first-class: one listing can expose methods to both modes. Methods without explicit execute pricing remain discoverable for Query but are excluded from Execute discovery when requireExecutePricing=true.
Compatibility: payload fields like price and pricePerQuery are kept for backward compatibility. In Query mode, they represent listing-level price per response turn. A future major release can add response-named aliases (for example, pricePerResponse) before deprecating legacy names.
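As a client-side sketch of the filtering that requireExecutePricing=true performs during discovery (the DiscoveredMethod interface here is a minimal stand-in for the SDK's McpTool type, not an SDK export):

```typescript
// Minimal local shape of the derived discovery fields on a method.
interface DiscoveredMethod {
  name: string;
  executeEligible?: boolean;
  executePriceUsd?: string | null;
}

// Keep only methods that are execute-eligible AND carry explicit
// execute pricing; methods without it stay Query-discoverable only.
function executeEligibleMethods(methods: DiscoveredMethod[]): DiscoveredMethod[] {
  return methods.filter(
    (m) => m.executeEligible === true && m.executePriceUsd != null
  );
}
```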

Configuration

Client Options

| Option  | Type   | Required | Default                 | Description                    |
|---------|--------|----------|-------------------------|--------------------------------|
| apiKey  | string | Yes      |                         | Your Context Protocol API key  |
| baseUrl | string | No       | https://ctxprotocol.com | API base URL (for development) |
// Production
const client = new ContextClient({
  apiKey: process.env.CONTEXT_API_KEY!,
});

// Local development
const client = new ContextClient({
  apiKey: "sk_test_...",
  baseUrl: "http://localhost:3000",
});

API Reference

Discovery

client.discovery.search(query, limit?)

client.discovery.search(options)

Search for tools matching a query string, or pass an options object for mode-aware filtering. Parameters (string signature):
| Parameter | Type   | Required | Description               |
|-----------|--------|----------|---------------------------|
| query     | string | Yes      | Search query              |
| limit     | number | No       | Maximum results to return |
Parameters (options signature):
| Option                | Type                                             | Required | Description                                      |
|-----------------------|--------------------------------------------------|----------|--------------------------------------------------|
| query                 | string                                           | No       | Search query (empty for featured-style searches) |
| limit                 | number                                           | No       | Maximum results to return                        |
| mode                  | "query" \| "execute"                             | No       | Discovery mode with billing semantics            |
| surface               | "answer" \| "execute" \| "both"                  | No       | Method mode filter                               |
| queryEligible         | boolean                                          | No       | Require methods that are query-safe              |
| requireExecutePricing | boolean                                          | No       | Require explicit method execute pricing          |
| excludeLatencyClasses | ("instant" \| "fast" \| "slow" \| "streaming")[] | No       | Exclude by latency class                         |
| excludeSlow           | boolean                                          | No       | Convenience filter for query mode                |
Returns: Promise<Tool[]>
const tools = await client.discovery.search("ethereum gas", 10);

const executeTools = await client.discovery.search({
  query: "ethereum gas",
  mode: "execute",
  surface: "execute",
  requireExecutePricing: true,
});

client.discovery.getFeatured(limit?, options?)

Get featured/popular tools. Parameters:
| Parameter | Type                                    | Required | Description               |
|-----------|-----------------------------------------|----------|---------------------------|
| limit     | number                                  | No       | Maximum results to return |
| options   | Omit<SearchOptions, "query" \| "limit"> | No       | Optional mode filters     |
Returns: Promise<Tool[]>
const featured = await client.discovery.getFeatured(5);
const featuredExecute = await client.discovery.getFeatured(5, {
  mode: "execute",
  requireExecutePricing: true,
});

Tools (Execute Mode)

client.tools.execute(options)

Execute a single tool method. Execute calls can run inside a session budget (maxSpendUsd) with automatic payment after delivery. Parameters:
| Option         | Type      | Required | Description                                      |
|----------------|-----------|----------|--------------------------------------------------|
| toolId         | string    | Yes      | UUID of the tool                                 |
| toolName       | string    | Yes      | Name of the method to call                       |
| args           | object    | No       | Arguments matching the tool's inputSchema        |
| idempotencyKey | string    | No       | Optional idempotency key (UUID recommended)      |
| mode           | "execute" | No       | Explicit mode label (defaults to "execute")      |
| sessionId      | string    | No       | Execute session ID to accrue spend against       |
| maxSpendUsd    | string    | No       | Optional inline session budget (if no sessionId) |
| closeSession   | boolean   | No       | Request session closure after this call settles  |
Returns: Promise<ExecutionResult>
const session = await client.tools.startSession({ maxSpendUsd: "2.50" });

const result = await client.tools.execute({
  toolId: "uuid-of-tool",
  toolName: "get_gas_prices",
  args: { chainId: 1 },
  idempotencyKey: crypto.randomUUID(),
  sessionId: session.session.sessionId ?? undefined,
});

console.log(result.method.executePriceUsd); // explicit method price
console.log(result.session); // { methodPrice, spent, remaining, maxSpend, ... }

client.tools.startSession({ maxSpendUsd })

Start an execute session budget envelope.
const started = await client.tools.startSession({ maxSpendUsd: "5.00" });
console.log(started.session.sessionId);
console.log(started.session.maxSpend);

client.tools.getSession(sessionId)

Fetch current execute session status/spend.
const status = await client.tools.getSession("sess_123");
console.log(status.session.status); // open | closed | expired
console.log(status.session.spent);

client.tools.closeSession(sessionId)

Close an execute session and trigger final flush behavior.
const closed = await client.tools.closeSession("sess_123");
console.log(closed.session.status); // closed
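Putting the session endpoints together, one possible batch pattern looks like the following sketch. The stubbed client type only models the startSession/execute/closeSession methods shown above; runBatchInSession is illustrative, not an SDK export:

```typescript
interface ToolCall {
  toolId: string;
  toolName: string;
  args?: Record<string, unknown>;
}

// Run a batch of execute calls inside one session budget, always
// closing the session afterward so the final flush is triggered.
async function runBatchInSession(
  client: {
    tools: {
      startSession(o: { maxSpendUsd: string }): Promise<{ session: { sessionId: string | null } }>;
      execute(o: ToolCall & { sessionId?: string }): Promise<{ result: unknown }>;
      closeSession(id: string): Promise<unknown>;
    };
  },
  calls: ToolCall[],
  maxSpendUsd: string
): Promise<unknown[]> {
  const started = await client.tools.startSession({ maxSpendUsd });
  const sessionId = started.session.sessionId ?? undefined;
  const results: unknown[] = [];
  try {
    for (const call of calls) {
      const res = await client.tools.execute({ ...call, sessionId });
      results.push(res.result);
    }
  } finally {
    // Close even on failure so partial spend settles promptly.
    if (sessionId) await client.tools.closeSession(sessionId);
  }
  return results;
}
```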

Query (Pay-Per-Response)

The Query API is Context’s response marketplace — instead of buying raw API calls, you’re buying curated intelligence. Ask a question, pay once, and get a managed answer contract backed by multi-tool data aggregation, error recovery, and completeness checks.

client.query.run(options)

Run an agentic query. The server applies discovery-first orchestration (discover/probe -> plan-from-evidence -> execute -> bounded fallback) with up to 100 MCP calls per response turn as a runtime safety cap, then returns the selected Query response contract (answer, answer_with_evidence, or evidence_only). The active runtime now has one real completeness-oriented deep lane plus the lower-latency fast lane. deep stays metadata-first before planning, and fast remains one-shot biased. Query billing is pay-per-response with automatic payment after delivery. Parameters:
| Option                | Type                                                  | Required | Description                                                                                                           |
|-----------------------|-------------------------------------------------------|----------|-----------------------------------------------------------------------------------------------------------------------|
| query                 | string                                                | Yes      | Natural-language question                                                                                             |
| tools                 | string[]                                              | No       | Tool IDs to use (auto-discover if omitted)                                                                            |
| answerModelId         | string                                                | No       | Final synthesis model ID (e.g. kimi-model-thinking, glm-model)                                                        |
| responseShape         | "answer" \| "answer_with_evidence" \| "evidence_only" | No       | Structured response mode for Query answers                                                                            |
| includeData           | boolean                                               | No       | Include execution data inline in the response                                                                         |
| includeDataUrl        | boolean                                               | No       | Persist execution data to blob and return a URL                                                                       |
| includeDeveloperTrace | boolean                                               | No       | Include optional developer trace + orchestration diagnostics                                                          |
| queryDepth            | "fast" \| "auto" \| "deep"                            | No       | Query orchestration depth (fast: lower latency, auto: server-routed, deep: completeness-oriented)                     |
| debugScoutDeepMode    | "deep"                                                | No       | Development/testing-only internal deep-lane override; legacy deep-light / deep-heavy aliases are temporarily accepted |
| idempotencyKey        | string                                                | No       | Optional idempotency key (UUID recommended)                                                                           |
Can also accept a plain string: client.query.run("your question").
Returns: Promise<QueryResult>
const answer = await client.query.run("What are the top whale movements on Base?");
console.log(answer.response);     // response text or summary
console.log(answer.toolsUsed);    // [{ id, name, skillCalls }]
console.log(answer.cost);         // { modelCostUsd, toolCostUsd, totalCostUsd }
console.log(answer.orchestrationMetrics); // Optional first-pass / rediscovery metrics
const answer = await client.query.run({
  query: "Analyze whale activity on Base",
  answerModelId: "glm-model",
  responseShape: "answer_with_evidence",
  includeData: true,
  includeDataUrl: true,
  includeDeveloperTrace: true,
  queryDepth: "auto",
  idempotencyKey: crypto.randomUUID(),
});

console.log(answer.responseShape); // "answer_with_evidence"
console.log(answer.summary); // short machine-friendly summary
console.log(answer.evidence?.facts); // canonical evidence facts
console.log(answer.artifacts?.dataUrl); // artifact refs used by the answer package
console.log(answer.freshness?.asOf); // freshness metadata
console.log(answer.confidence?.level); // high | medium | low
console.log(answer.developerTrace?.summary); // retries/fallbacks/loops summary
console.log(answer.developerTrace?.diagnostics?.selection); // runtime lane + scout probe diagnostics
answerModelId lets headless users choose the final synthesis model explicitly. If omitted, the API uses its managed default answer model. If responseShape is evidence_only, synthesis is skipped and no answer model runs for that request. Current platform IDs: kimi-model-thinking, glm-model, gemini-flash-model, claude-sonnet-model, claude-opus-model.
queryDepth is available in both run() and stream():
  • fast: lower-latency path for simple lookups.
  • auto: server routes to either fast or deep using query intent and selected tool metadata quality.
  • deep: completeness-oriented path (default when omitted).
includeDeveloperTrace and orchestrationMetrics are optional diagnostics. debugScoutDeepMode remains test-only and is ignored by normal production usage. Inside deep, the runtime currently uses one real metadata-first deep path. Legacy deep-light and deep-heavy debug values are normalized to deep when accepted for backwards compatibility. Selection diagnostics can show initial vs final lane decisions, Scout probe adequacy, bounded pre-plan probe call counts, and whether pre-plan evidence changed the initial plan.

Structured Response Shapes

Query is Context’s managed librarian contract. You can choose how much structure you want back:
| responseShape        | Best for                               | Behavior                                                                                                |
|----------------------|----------------------------------------|---------------------------------------------------------------------------------------------------------|
| answer               | Backward compatibility                 | Natural-language answer only                                                                            |
| answer_with_evidence | First-party chat, human-facing apps    | Prose answer plus structured evidence, artifacts, freshness, confidence, and usage metadata             |
| evidence_only        | External agents, downstream automation | Machine-friendly summary plus the same structured evidence package without depending on prose synthesis |
The first-party chat app defaults to answer_with_evidence, but it is using the same Query contract you get in the SDK.

Query Envelope Fields

When responseShape is answer_with_evidence or evidence_only, the result may include:
| Field      | What it contains                                                                      |
|------------|---------------------------------------------------------------------------------------|
| summary    | Short machine-friendly summary of the answer                                          |
| evidence   | Canonical facts, source refs, assumptions, known unknowns, and retrieval reason codes |
| artifacts  | dataUrl, canonical dataset metadata, and stage-artifact kinds                         |
| view       | Optional UI/render hint such as table, leaderboard, heatmap, or timeseries            |
| freshness  | asOf, source timestamps, and freshness note                                           |
| confidence | Confidence level, reason, fact counts, and gap signals                                |
| usage      | Duration, cost, tools used, outcome type, and optional orchestration metrics          |
const result = await client.query.run({
  query: "Which exchanges are seeing the largest BTC inflows and outflows over the last 24 hours?",
  responseShape: "evidence_only",
});

console.log(result.response); // machine-friendly summary for evidence_only
console.log(result.summary);
console.log(result.evidence?.sourceRefs);
console.log(result.usage?.toolsUsed);

High-Fidelity Rehydration (Retrieval-First Synthesis)

When retrieval-first rollout is enabled in the deployment, the query runtime can switch synthesis context assembly from baseline truncation to retrieval-first slices for full-data or truncation-sensitive requests.
  • Stage artifacts are emitted in request-scoped internal storage (selection, planning, execution, completeness, synthesis).
  • Retrieval primitives (path lookup, array windows/sampling, keyword slices, top-K relevance) are used to build a bounded context pack from canonical execution data.
  • Final synthesis still passes through the existing synthesis safety contract.
  • includeData and includeDataUrl continue to reference the same canonical execution dataset used by retrieval-first assembly.

client.query.stream(options)

Same as run() but streams events in real-time via SSE. Supports the same options as run() (tools, answerModelId, responseShape, includeData, includeDataUrl, includeDeveloperTrace, queryDepth, debugScoutDeepMode, idempotencyKey). Returns: AsyncGenerator<QueryStreamEvent>
for await (const event of client.query.stream({
  query: "What are the top whale movements?",
  queryDepth: "fast",
})) {
  switch (event.type) {
    case "tool-status":
      console.log(`Tool ${event.tool.name}: ${event.status}`);
      break;
    case "text-delta":
      process.stdout.write(event.delta);
      break;
    case "done":
      console.log("\nTotal cost:", event.result.cost.totalCostUsd);
      break;
  }
}
Use the same idempotencyKey when retrying the same logical request after network/timeout failures.
If you stream with responseShape: "evidence_only", expect the structured result on the final done event and few or no text-delta events.

Types

Import Types

import {
  // Auth utilities for tool contributors
  verifyContextRequest,
  isProtectedMcpMethod,
  isOpenMcpMethod,
} from "@ctxprotocol/sdk";

import type {
  // Client types
  ContextClientOptions,
  Tool,
  McpTool,
  McpToolMeta,
  McpToolRateLimitHints,
  ExecuteOptions,
  ExecutionResult,
  QueryOptions,
  QueryResult,
  QueryDeveloperTrace,
  QueryOrchestrationMetrics,
  QueryResponseEnvelope,
  QueryClarificationPayload,
  QueryCapabilityMissPayload,
  QueryAssumptionMetadata,
  ContextErrorCode,
  // Auth types (for MCP server contributors)
  VerifyRequestOptions,
  // Context types (for MCP server contributors receiving injected data)
  ContextRequirementType,
  HyperliquidContext,
  PolymarketContext,
  WalletContext,
  UserContext,
} from "@ctxprotocol/sdk";

Tool

interface Tool {
  id: string;
  name: string;
  description: string;
  price: string; // Listing-level response price metadata (legacy field name)
  category?: string;
  isVerified?: boolean;
  mcpTools?: McpTool[];
}

McpTool

interface McpTool {
  name: string;
  description: string;
  inputSchema?: Record<string, unknown>;   // JSON Schema for arguments
  outputSchema?: Record<string, unknown>;  // JSON Schema for response
  _meta?: McpToolMeta;                     // mode/eligibility/pricing/context metadata
  executeEligible?: boolean;               // derived discovery field
  executePriceUsd?: string | null;         // explicit execute price visibility
}

interface McpToolRateLimitHints {
  maxRequestsPerMinute?: number;
  maxConcurrency?: number;
  cooldownMs?: number;
  supportsBulk?: boolean;
  recommendedBatchTools?: string[];
  notes?: string;
}

interface McpToolMeta {
  surface?: "answer" | "execute" | "both";
  queryEligible?: boolean;
  latencyClass?: "instant" | "fast" | "slow" | "streaming";
  pricing?: {
    executeUsd?: string; // required for execute eligibility
    queryUsd?: string;   // optional metadata only in this rollout
  };
  executeEligible?: boolean;
  executePriceUsd?: string;
  contextRequirements?: ContextRequirementType[];
  rateLimit?: McpToolRateLimitHints;
  rateLimitHints?: McpToolRateLimitHints;
}
For argument guidance, use standard JSON Schema fields directly inside inputSchema properties. Put fallback values in default and sample invocations in examples. Do not rely on custom _meta.inputExamples.
const TOOLS = [{
  name: "get_price_history",
  inputSchema: {
    type: "object",
    properties: {
      symbol: { type: "string", default: "BTC", examples: ["BTC", "ETH", "SOL"] },
      interval: { type: "string", enum: ["1h", "4h", "1d"], default: "1h", examples: ["1h", "4h"] },
      limit: { type: "number", default: 100, examples: [50, 100, 200] },
    },
    required: [],
  },
}];

ExecutionResult (Execute Mode)

interface ExecutionResult<T = unknown> {
  mode: "execute";
  result: T;
  tool: { id: string; name: string };
  method: { name: string; executePriceUsd: string };
  session: ExecuteSessionSpend;
  durationMs: number;
}

ExecuteSessionSpend

interface ExecuteSessionSpend {
  mode: "execute";
  sessionId: string | null;
  methodPrice: string;
  spent: string;
  remaining: string | null;
  maxSpend: string | null;
  status?: "open" | "closed" | "expired";
  expiresAt?: string;
  closeRequested?: boolean;
  pendingAccruedCount?: number;
  pendingAccruedUsd?: string;
}
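A client-side budget guard over these string-decimal fields might look like this sketch. Number parsing is adequate for a sketch at these magnitudes; production code may prefer a fixed-point decimal library:

```typescript
// Check whether the session envelope can cover the next call.
// `remaining` is null when the call runs without a budget envelope.
function canAffordNextCall(
  session: { remaining: string | null },
  nextPriceUsd: string
): boolean {
  if (session.remaining === null) return true; // no envelope: nothing to enforce
  return Number(session.remaining) >= Number(nextPriceUsd);
}
```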

QueryResult (Pay-Per-Response)

interface QueryResult {
  response: string;                     // prose answer or machine-friendly summary
  toolsUsed: QueryToolUsage[];          // [{ id, name, skillCalls }]
  cost: QueryCost;                      // { modelCostUsd, toolCostUsd, totalCostUsd }
  durationMs: number;
  data?: unknown;                       // Optional execution data (includeData=true)
  dataUrl?: string;                     // Optional blob URL (includeDataUrl=true)
  developerTrace?: QueryDeveloperTrace; // Optional runtime trace + diagnostics
  orchestrationMetrics?: QueryOrchestrationMetrics; // Optional first-pass metrics
  responseShape?: "answer" | "answer_with_evidence" | "evidence_only";
  summary?: string;
  evidence?: QueryResponseEnvelope["evidence"];
  artifacts?: QueryResponseEnvelope["artifacts"];
  view?: QueryResponseEnvelope["view"];
  freshness?: QueryResponseEnvelope["freshness"];
  confidence?: QueryResponseEnvelope["confidence"];
  usage?: QueryResponseEnvelope["usage"];
  outcomeType: "answer" | "clarification_required" | "capability_miss";
  clarification?: QueryClarificationPayload;
  capabilityMiss?: QueryCapabilityMissPayload;
  assumptionMade?: QueryAssumptionMetadata;
}
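Before treating response as a final answer, a caller can branch on outcomeType. A minimal sketch (nextAction and its return labels are illustrative, not SDK exports):

```typescript
type Outcome = "answer" | "clarification_required" | "capability_miss";

// Map the three Query outcome types to an application-level action.
function nextAction(result: { outcomeType: Outcome }): "use_answer" | "ask_user" | "fallback" {
  switch (result.outcomeType) {
    case "answer":
      return "use_answer";
    case "clarification_required":
      return "ask_user"; // surface result.clarification to the user
    case "capability_miss":
      return "fallback"; // surface result.capabilityMiss, try another pipeline
    default:
      throw new Error("unknown outcome type");
  }
}
```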

Context Requirement Types

For MCP server contributors building tools that need user context (e.g., wallet data, portfolio positions):
Why Context Injection Matters:
  • No Auth Required: Public blockchain/user data is fetched by the platform, so you don’t need to handle API keys or user login.
  • Security: Your MCP server never handles private keys or sensitive credentials.
  • Simplicity: You receive structured, type-safe data directly in your tool arguments.
import type { ContextRequirementType } from "@ctxprotocol/sdk";

/** Context types supported by the marketplace */
type ContextRequirementType = "polymarket" | "hyperliquid" | "wallet";

// Usage: Declare context requirements in _meta at the tool level (MCP spec)
const TOOLS = [{
  name: "analyze_my_positions",
  description: "Analyze your positions with personalized insights",

  // ⭐ REQUIRED: Context requirements in _meta (MCP spec for arbitrary metadata)
  // The Context platform reads this to inject user data + pacing hints
  _meta: {
    contextRequirements: ["hyperliquid"] as ContextRequirementType[],
    rateLimit: {
      maxRequestsPerMinute: 30,
      cooldownMs: 2000,
      maxConcurrency: 1,
      supportsBulk: true,
      recommendedBatchTools: ["get_portfolio_snapshot"],
      notes: "Hobby tier: use snapshot methods before per-asset loops.",
    },
  },

  inputSchema: {
    type: "object",
    properties: {
      portfolio: { 
        type: "object",
        description: "Portfolio context (injected by platform)",
      },
    },
    required: ["portfolio"],
  },
  outputSchema: { /* ... */ },
}];
Why _meta at the tool level? The _meta field is part of the MCP specification for arbitrary tool metadata. The Context platform reads _meta.contextRequirements for context injection and _meta.rateLimit / _meta.rateLimitHints for planner/runtime pacing behavior. This is preserved through MCP transport because it’s a standard field.
Reference implementation: Coinglass contributor server.
For when/how to set these fields, see Tool Metadata.

Injected Context Types

HyperliquidContext

interface HyperliquidContext {
  walletAddress: string;
  perpPositions: HyperliquidPerpPosition[];
  spotBalances: HyperliquidSpotBalance[];
  openOrders: HyperliquidOrder[];
  accountSummary: HyperliquidAccountSummary;
  fetchedAt: string;
}

PolymarketContext

interface PolymarketContext {
  walletAddress: string;
  positions: PolymarketPosition[];
  openOrders: PolymarketOrder[];
  totalValue?: number;
  fetchedAt: string;
}

WalletContext

interface WalletContext {
  address: string;
  chainId: number;
  balances: TokenBalance[];
  fetchedAt: string;
}
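A contributor handler receiving injected wallet context might summarize it as in this sketch. Note the TokenBalance fields used here (symbol, amount) are assumed for illustration; the SDK's actual TokenBalance shape may differ:

```typescript
// Local stand-in for the injected WalletContext, with an assumed
// TokenBalance shape ({ symbol, amount }) for demonstration only.
interface InjectedWallet {
  address: string;
  chainId: number;
  balances: { symbol: string; amount: number }[];
}

// Produce a one-line summary of non-zero holdings.
function summarizeWallet(wallet: InjectedWallet): string {
  const held = wallet.balances
    .filter((b) => b.amount > 0)
    .map((b) => b.symbol);
  return `${wallet.address} on chain ${wallet.chainId} holds: ${held.join(", ") || "nothing"}`;
}
```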

Contributor Search Helpers

If you are building a contributor for a search-hard venue, the SDK ships an optional helper surface at @ctxprotocol/sdk/contrib/search. Use it only when the venue’s upstream search is weak enough that deterministic retrieval plus a bounded model judge materially improves candidate resolution. Do not use it for venues that already expose reliable direct search.
import {
  buildContributorSearchValidationArtifact,
  createSearchIntent,
  extractContributorSearchesFromDeveloperTrace,
  mergeContributorSearchConfig,
  resolveContributorSearch,
} from "@ctxprotocol/sdk/contrib/search";
What this module is for:
  • contributor-side intent shaping, candidate normalization, shortlist construction, and validated resolution
  • provider-agnostic judge injection with stable override knobs for provider, model, timeout, budget, and disabled
  • machine-readable artifact generation via buildContributorSearchValidationArtifact(...)
  • runtime trace inspection via extractContributorSearchesFromDeveloperTrace(trace) or result.developerTrace?.diagnostics?.contributorSearches
Operational rules:
  • extra judge spend is contributor-owned in this rollout, so recover it through your own listing response price and/or execute pricing
  • keep deterministic validation around every judge result; malformed, timed-out, over-budget, or contradictory judgments must degrade honestly
  • save replayable validation artifacts alongside your contributor examples. Current reference directories live under examples/server/polymarket-contributor/validation/ and examples/server/kalshi-contributor/validation/

Error Handling

The SDK throws ContextError with specific error codes:
import { ContextError } from "@ctxprotocol/sdk";

try {
  const result = await client.tools.execute({ ... });
} catch (error) {
  if (error instanceof ContextError) {
    switch (error.code) {
      case "no_wallet":
        // User needs to set up wallet
        console.log("Setup required:", error.helpUrl);
        break;
      case "insufficient_allowance":
        // User needs to set a spending cap
        console.log("Set spending cap:", error.helpUrl);
        break;
      case "payment_failed":
        // Insufficient USDC balance
        break;
      case "execution_failed":
        // Tool execution error
        break;
    }
  }
}

Error Codes

| Code                   | Description          | Handling                  |
|------------------------|----------------------|---------------------------|
| unauthorized           | Invalid API key      | Check configuration       |
| no_wallet              | Wallet not set up    | Direct user to helpUrl    |
| insufficient_allowance | Spending cap not set | Direct user to helpUrl    |
| payment_failed         | USDC payment failed  | Check balance             |
| execution_failed       | Tool error           | Retry with different args |

Securing Your Tool (MCP Contributors)

If you’re building an MCP server, verify incoming requests are legitimate.
Free vs Paid Security Requirements:
| Tool Type           | Security Middleware | Rationale                                      |
|---------------------|---------------------|------------------------------------------------|
| Free Tools ($0.00)  | Optional            | Great for distribution and adoption            |
| Paid Tools ($0.01+) | Mandatory           | We cannot route payments to insecure endpoints |

Quick Implementation

import express from "express";
import { createContextMiddleware } from "@ctxprotocol/sdk";

const app = express();
app.use(express.json());

// 1 line of code to secure your endpoint
app.use("/mcp", createContextMiddleware());

app.post("/mcp", (req, res) => {
  // req.context contains verified JWT payload (on protected methods)
  // Handle MCP request...
});

MCP Security Model

Critical for tool contributors: Not all MCP methods require authentication. The middleware selectively protects only execution methods.
| MCP Method     | Auth Required | Why                                        |
|----------------|---------------|--------------------------------------------|
| initialize     | ❌ No         | Session setup                              |
| tools/list     | ❌ No         | Discovery - agents need to see your schemas |
| resources/list | ❌ No         | Discovery                                  |
| prompts/list   | ❌ No         | Discovery                                  |
| tools/call     | ✅ Yes        | Execution - costs money, runs your code    |
What this means in practice:
  • https://your-mcp.com/mcp + initialize → Works without auth
  • https://your-mcp.com/mcp + tools/list → Works without auth
  • https://your-mcp.com/mcp + tools/call → Requires Context Protocol JWT
This matches standard API patterns (OpenAPI schemas are public, GraphQL introspection is open).

Manual Verification

For more control, use the lower-level utilities:
import { 
  verifyContextRequest, 
  isProtectedMcpMethod, 
  ContextError 
} from "@ctxprotocol/sdk";

// Check if a method requires auth
if (isProtectedMcpMethod(body.method)) {
  const payload = await verifyContextRequest({
    authorizationHeader: req.headers.authorization,
    audience: "https://your-tool.com/mcp", // optional
  });
  // payload contains verified JWT claims
}

Verification Options

| Option              | Type   | Required | Description                                       |
|---------------------|--------|----------|---------------------------------------------------|
| authorizationHeader | string | Yes      | Full Authorization header (e.g., "Bearer eyJ...") |
| audience            | string | No       | Expected audience claim for stricter validation   |

Payment Flow

Context supports two settlement timings:
  1. Query mode (client.query.*) uses deferred settlement after the response is delivered.
  2. Execute mode (client.tools.execute) accrues per-call method spend into execute sessions with automatic batch payment.
In both modes, spending caps are enforced via ContextRouter allowance checks. Of every payment, 90% goes to the tool developer and 10% goes to the protocol.
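The 90/10 split can be sketched in integer cents to avoid floating-point drift on USD amounts (splitPayment is illustrative, not an SDK export; actual settlement happens on-chain):

```typescript
// Split a payment 90/10 between developer and protocol, in cents.
// Integer math keeps the two shares summing exactly to the total.
function splitPayment(totalUsdCents: number): {
  developerCents: number;
  protocolCents: number;
} {
  const developerCents = Math.floor((totalUsdCents * 9) / 10);
  return { developerCents, protocolCents: totalUsdCents - developerCents };
}
```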