Installation
pip install ctxprotocol

With optional FastAPI support:

pip install ctxprotocol[fastapi]
Requirements
- Python 3.10+
- httpx (async HTTP)
- pydantic (type validation)
- pyjwt[crypto] (JWT verification)
Prerequisites
Before using the API, complete setup at ctxprotocol.com:
1. Sign in: creates your embedded wallet
2. Set spending cap: approve USDC spending on the ContextRouter (one-time setup)
3. Fund wallet: add USDC for tool execution fees
4. Generate API key: from the Settings page
Two SDK Modes
The SDK offers two payment models to serve different use cases:
| Mode | Method | Payment Model | Use Case |
|---|---|---|---|
| Execute | client.tools.execute() | Pay-per-request | Simple data fetches, predictable costs, building custom pipelines |
| Query | client.query.run() | Pay-per-response | Complex questions, multi-tool synthesis, curated intelligence |
Which should I use? Use Query when you want a curated answer to a complex question — the server handles tool discovery, multi-tool orchestration (up to 100 MCP calls per tool), self-healing retries, and AI synthesis for one flat fee. Use Execute when you want raw data from a specific tool with full control over the pipeline.
Quick Start
import asyncio
from ctxprotocol import ContextClient

async def main():
    async with ContextClient(api_key="sk_live_...") as client:
        # Pay-per-response: Ask a question, get a curated answer
        answer = await client.query.run("What are the top whale movements on Base?")
        print(answer.response)

        # Pay-per-request: Execute a specific tool for raw data
        tools = await client.discovery.search("gas prices")
        result = await client.tools.execute(
            tool_id=tools[0].id,
            tool_name=tools[0].mcp_tools[0].name,
            args={"chainId": 1},
        )
        print(result.result)

asyncio.run(main())
Configuration
Client Options
| Option | Type | Required | Default | Description |
|---|---|---|---|---|
| api_key | str | Yes | — | Your Context Protocol API key |
| base_url | str | No | https://ctxprotocol.com | API base URL (for development) |
import os
from ctxprotocol import ContextClient
# Production
client = ContextClient(api_key=os.environ["CONTEXT_API_KEY"])
# Local development
client = ContextClient(
    api_key="sk_test_...",
    base_url="http://localhost:3000",
)
Always use the async with context manager, or call await client.close() when you are done, to properly release resources.
The Python SDK automatically retries transient failures (HTTP 5xx, transport errors, and timeouts) with exponential backoff.
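If the client must outlive a single async with block, a manual-cleanup sketch like the one below keeps shutdown explicit. It only uses the documented close() and discovery APIs; the environment variable name matches the configuration example above.

import asyncio
import os

from ctxprotocol import ContextClient

async def main() -> None:
    # Construct the client once and reuse it across requests.
    client = ContextClient(api_key=os.environ["CONTEXT_API_KEY"])
    try:
        tools = await client.discovery.search("gas prices", limit=3)
        print([t.name for t in tools])
    finally:
        # Release the underlying HTTP resources when the app shuts down.
        await client.close()

asyncio.run(main())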
API Reference
Discovery
client.discovery.search(query, limit?)
Search for tools matching a query string.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| query | str | Yes | Search query |
| limit | int | No | Maximum results to return (1-50) |
Returns: list[Tool]
tools = await client.discovery.search("ethereum gas", limit=10)
client.discovery.get_featured(limit?)
Get featured/popular tools.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| limit | int | No | Maximum results to return |
Returns: list[Tool]
featured = await client.discovery.get_featured(limit=5)
Execute (Pay-Per-Request)

client.tools.execute(tool_id, tool_name, args?, idempotency_key?)

Execute a single tool method. One call, one payment, raw result.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| tool_id | str | Yes | UUID of the tool |
| tool_name | str | Yes | Name of the method to call |
| args | dict | No | Arguments matching the tool's inputSchema |
| idempotency_key | str | No | Optional idempotency key (UUID recommended) |
Returns: ExecutionResult
result = await client.tools.execute(
    tool_id="uuid-of-tool",
    tool_name="get_gas_prices",
    args={"chainId": 1},
    idempotency_key="2bb4bdcb-8609-43f6-af13-75279186de70",
)
Query (Pay-Per-Response)
The Query API is Context’s response marketplace — instead of buying raw API calls, you’re buying curated intelligence. Ask a question, pay once, and get an AI-synthesized answer backed by multi-tool data aggregation, error recovery, and completeness checks.
client.query.run(query, ...)

Run an agentic query. The server discovers tools, executes the full pipeline (up to 100 MCP calls per tool), applies model-aware mediator/data budgeting, and returns an AI-synthesized answer. Payment is settled after successful execution.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| query | str | Yes | Natural-language question |
| tools | list[str] | No | Tool IDs to use (auto-discover if omitted) |
| model_id | str | No | Model ID to use for planning/synthesis (e.g. kimi-model-thinking, glm-model) |
| include_data | bool | No | Include execution data inline in the response |
| include_data_url | bool | No | Persist execution data to blob storage and return a URL |
| idempotency_key | str | No | Optional idempotency key (UUID recommended) |
Returns: QueryResult
answer = await client.query.run("What are the top whale movements on Base?")
print(answer.response) # AI-synthesized text
print(answer.tools_used) # [QueryToolUsage(id, name, skill_calls)]
print(answer.cost) # QueryCost(model_cost_usd, tool_cost_usd, total_cost_usd)
answer = await client.query.run(
    query="Analyze whale activity on Base",
    model_id="glm-model",
    include_data=True,
    include_data_url=True,
    idempotency_key="6e7f1389-f72f-41d9-bf26-0608a4d8be87",
)
model_id lets headless users choose the orchestration/synthesis model explicitly. If omitted, the API uses its default model. Current platform IDs: kimi-model-thinking, glm-model, gemini-flash-model, claude-sonnet-model, claude-opus-model.
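For example, pinning the model to one of the platform IDs above looks like this (the question string is only a placeholder):

# Pin the orchestration/synthesis model explicitly.
answer = await client.query.run(
    query="Summarize notable DEX activity on Base today",
    model_id="claude-sonnet-model",
)
print(answer.cost.model_cost_usd, answer.cost.total_cost_usd)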
client.query.stream(query, ...)

Same as run() but streams events in real time via SSE.
Returns: AsyncGenerator of stream events
async for event in client.query.stream("What are the top whale movements?"):
    if event.type == "tool-status":
        print(f"Tool {event.tool.name}: {event.status}")
    elif event.type == "text-delta":
        print(event.delta, end="")
    elif event.type == "done":
        print(f"\nTotal cost: {event.result.cost.total_cost_usd}")
Use the same idempotency_key when retrying the same logical request after network or timeout errors.
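A retry sketch is shown below. The broad exception handling is an assumption about how transport errors surface in your application; the essential part is reusing one key across attempts.

import uuid

# One key per logical request; reuse it on every retry so the request
# cannot be double-charged after a timeout.
key = str(uuid.uuid4())

for attempt in range(3):
    try:
        answer = await client.query.run(
            query="What are the top whale movements on Base?",
            idempotency_key=key,
        )
        break
    except Exception:
        # Placeholder for network/timeout errors; re-raise on the last attempt.
        if attempt == 2:
            raise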
Types
Import Types
from ctxprotocol import (
    # Auth utilities for tool contributors
    verify_context_request,
    is_protected_mcp_method,
    is_open_mcp_method,
    # Client types
    ContextClientOptions,
    Tool,
    McpTool,
    ExecuteOptions,
    ExecutionResult,
    ContextErrorCode,
    # Auth types (for MCP server contributors)
    VerifyRequestOptions,
    # Context types (for MCP server contributors receiving injected data)
    ContextRequirementType,
    HyperliquidContext,
    PolymarketContext,
    WalletContext,
    UserContext,
)
class Tool(BaseModel):
    id: str
    name: str
    description: str
    price: str
    category: str | None
    is_verified: bool | None
    mcp_tools: list[McpTool] | None
class McpTool(BaseModel):
    name: str
    description: str
    input_schema: dict[str, Any] | None   # JSON Schema for arguments
    output_schema: dict[str, Any] | None  # JSON Schema for response
    meta: dict[str, Any] | None           # alias: "_meta" (context + pacing metadata)
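For example, a sketch that inspects a discovered tool's methods and their argument schemas before calling execute (assuming the search returns at least one tool):

tools = await client.discovery.search("gas prices", limit=1)
tool = tools[0]

# A Tool can expose several MCP methods; check each method's schema before calling it.
for method in tool.mcp_tools or []:
    print(method.name, "-", method.description)
    if method.input_schema:
        print("  required args:", method.input_schema.get("required", []))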
ExecutionResult (Pay-Per-Request)
class ExecutionResult(BaseModel):
    result: Any
    tool: ToolInfo      # { id: str, name: str }
    duration_ms: int
QueryResult (Pay-Per-Response)
class QueryResult(BaseModel):
    response: str                     # AI-synthesized answer
    tools_used: list[QueryToolUsage]  # [{ id, name, skill_calls }]
    cost: QueryCost                   # { model_cost_usd, tool_cost_usd, total_cost_usd }
    duration_ms: int
    data: Any | None                  # Optional execution data (include_data=True)
    data_url: str | None              # Optional blob URL (include_data_url=True)
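A hypothetical follow-up using httpx (already a dependency) to download persisted execution data; the payload format is whatever the blob store returns, and JSON is only an assumption here.

import httpx

answer = await client.query.run(
    query="Analyze whale activity on Base",
    include_data_url=True,
)

if answer.data_url:
    # Fetch the persisted execution data from the returned blob URL.
    async with httpx.AsyncClient() as http:
        resp = await http.get(answer.data_url)
        resp.raise_for_status()
        print(resp.json())  # assumes the blob is JSON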
Context Requirement Types
For MCP server contributors building tools that need user context:
Why Context Injection Matters:
- No Auth Required: Public blockchain/user data is fetched by the platform
- Security: Your MCP server never handles private keys
- Simplicity: You receive structured, type-safe data
from typing import Literal

from ctxprotocol import CONTEXT_REQUIREMENTS_KEY

# Context types supported by the marketplace
ContextRequirementType = Literal["polymarket", "hyperliquid", "wallet"]

# Usage: Declare context requirements in _meta at the tool level
TOOLS = [{
    "name": "analyze_my_positions",
    "description": "Analyze your positions with personalized insights",
    "_meta": {
        "contextRequirements": ["hyperliquid"],
        "rateLimit": {
            "maxRequestsPerMinute": 30,
            "cooldownMs": 2000,
            "maxConcurrency": 1,
            "supportsBulk": True,
            "recommendedBatchTools": ["get_portfolio_snapshot"],
            "notes": "Hobby tier: prefer snapshot endpoints over loops.",
        },
    },
    "inputSchema": {
        "type": "object",
        "properties": {
            "portfolio": {
                "type": "object",
                "description": "Portfolio context (injected by platform)",
            },
        },
        "required": ["portfolio"],
    },
}]
Injected Context Types
HyperliquidContext
class HyperliquidContext(BaseModel):
    wallet_address: str
    perp_positions: list[HyperliquidPerpPosition]
    spot_balances: list[HyperliquidSpotBalance]
    open_orders: list[HyperliquidOrder]
    account_summary: HyperliquidAccountSummary
    fetched_at: str

PolymarketContext

class PolymarketContext(BaseModel):
    wallet_address: str
    positions: list[PolymarketPosition]
    open_orders: list[PolymarketOrder]
    total_value: float | None
    fetched_at: str

WalletContext

class WalletContext(BaseModel):
    address: str
    chain_id: int
    native_balance: str | None
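On the receiving side, a minimal sketch of a tool handler that validates the injected "portfolio" argument into the typed model. The handler name mirrors the declaration example above; model_validate assumes Pydantic v2, and it assumes the injected payload matches the model's field names or aliases.

from ctxprotocol import HyperliquidContext

def analyze_my_positions(args: dict) -> dict:
    # The platform injects the user's Hyperliquid state into the "portfolio"
    # argument declared in inputSchema; validate it into the typed model.
    ctx = HyperliquidContext.model_validate(args["portfolio"])
    return {
        "wallet": ctx.wallet_address,
        "open_perp_positions": len(ctx.perp_positions),
        "open_orders": len(ctx.open_orders),
        "fetched_at": ctx.fetched_at,
    }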
Error Handling
The SDK raises ContextError with specific error codes:
from ctxprotocol import ContextClient, ContextError

try:
    result = await client.tools.execute(...)
except ContextError as e:
    match e.code:
        case "no_wallet":
            # User needs to set up wallet
            print(f"Setup required: {e.help_url}")
        case "insufficient_allowance":
            # User needs to set a spending cap
            print(f"Set spending cap: {e.help_url}")
        case "payment_failed":
            # Insufficient USDC balance
            pass
        case "execution_failed":
            # Tool execution error
            pass
Error Codes
| Code | Description | Handling |
|---|---|---|
| unauthorized | Invalid API key | Check configuration |
| no_wallet | Wallet not set up | Direct user to help_url |
| insufficient_allowance | Spending cap not set | Direct user to help_url |
| payment_failed | USDC payment failed | Check balance |
| execution_failed | Tool error | Retry with different args |
Verifying Requests (For MCP Server Contributors)

If you're building an MCP server, verify incoming requests with the ctxprotocol auth utilities.
Free vs Paid Security Requirements:

| Tool Type | Security Middleware | Rationale |
|---|---|---|
| Free Tools ($0.00) | Optional | Great for distribution and adoption |
| Paid Tools ($0.01+) | Mandatory | We cannot route payments to insecure endpoints |
Option 1: FastMCP (Recommended)
FastMCP is the fastest way to build MCP servers. Add authentication with a small middleware that calls verify_context_request from ctxprotocol:
from fastmcp import FastMCP
from fastmcp.server.middleware import Middleware, MiddlewareContext
from fastmcp.server.dependencies import get_http_headers
from fastmcp.exceptions import ToolError
from ctxprotocol import verify_context_request, ContextError

mcp = FastMCP("my-tool")

class ContextProtocolAuth(Middleware):
    """Verify Context Protocol JWT on tool calls only."""

    async def on_call_tool(self, context: MiddlewareContext, call_next):
        headers = get_http_headers()
        try:
            await verify_context_request(
                authorization_header=headers.get("authorization", "")
            )
        except ContextError as e:
            raise ToolError(f"Unauthorized: {e.message}")
        return await call_next(context)

mcp.add_middleware(ContextProtocolAuth())

@mcp.tool
def get_data(query: str) -> dict:
    return {"result": "..."}

if __name__ == "__main__":
    mcp.run(transport="http", port=3000)
FastMCP auto-generates outputSchema from Pydantic return types and includes structuredContent in responses, both of which Context Protocol requires.
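For instance, returning a typed Pydantic model gives FastMCP enough information to derive the schema. The model name and fields below are illustrative, not part of the SDK.

from pydantic import BaseModel

class GasPrices(BaseModel):
    chain_id: int
    fast_gwei: float
    standard_gwei: float

@mcp.tool
def get_gas_prices(chain_id: int) -> GasPrices:
    # A typed return lets FastMCP emit outputSchema and structuredContent;
    # the values here are placeholders.
    return GasPrices(chain_id=chain_id, fast_gwei=12.5, standard_gwei=9.8)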
Option 2: Raw FastAPI
For more control, use FastAPI with our middleware:
from fastapi import FastAPI, Request, Depends, HTTPException
from ctxprotocol import create_context_middleware, ContextError

app = FastAPI()

verify_context = create_context_middleware(audience="https://your-tool.com/mcp")

@app.post("/mcp")
async def handle_mcp(request: Request, context: dict = Depends(verify_context)):
    # context contains the verified JWT payload on protected methods,
    # and is None for open methods like tools/list
    body = await request.json()
    # Handle MCP request...
Manual Verification
For more control, use the lower-level utilities:
from fastapi import HTTPException

from ctxprotocol import verify_context_request, is_protected_mcp_method, ContextError

# Check if a method requires auth
if is_protected_mcp_method(body["method"]):
    try:
        payload = await verify_context_request(
            authorization_header=request.headers.get("authorization"),
            audience="https://your-tool.com/mcp",  # optional
        )
        # payload contains verified JWT claims
    except ContextError:
        raise HTTPException(status_code=401, detail="Unauthorized")
Verification Options
| Option | Type | Required | Description |
|---|---|---|---|
| authorization_header | str | Yes | Full Authorization header (e.g., "Bearer eyJ...") |
| audience | str | No | Expected audience claim for stricter validation |
MCP Security Model
Critical for tool contributors: Not all MCP methods require authentication. The middleware selectively protects only execution methods.
| MCP Method | Auth Required | Why |
|---|---|---|
| initialize | ❌ No | Session setup |
| tools/list | ❌ No | Discovery: agents need to see your schemas |
| resources/list | ❌ No | Discovery |
| prompts/list | ❌ No | Discovery |
| tools/call | ✅ Yes | Execution: costs money, runs your code |
What this means in practice:
- ✅ https://your-mcp.com/mcp + initialize → Works without auth
- ✅ https://your-mcp.com/mcp + tools/list → Works without auth
- ❌ https://your-mcp.com/mcp + tools/call → Requires Context Protocol JWT
This matches standard API patterns (OpenAPI schemas are public, GraphQL introspection is open).
Payment Flow
Context supports two settlement timings:
- Query mode (client.query.*) uses deferred settlement after the response is delivered
- Execute mode (client.tools.execute) currently settles before the single MCP call
- In both modes, spending caps are enforced via ContextRouter allowance checks
- Of each payment, 90% goes to the tool developer and 10% goes to the protocol
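As a hypothetical illustration of the split on a $0.05 tool fee:

# Revenue split for a hypothetical $0.05 tool fee under the 90/10 model.
tool_fee_usd = 0.05
developer_share = tool_fee_usd * 0.90  # 0.045 USDC to the tool developer
protocol_share = tool_fee_usd * 0.10   # 0.005 USDC to the protocol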
Links