New to MCP? Start with our 5-Minute Quickstart to build and deploy your first server, then come back here for advanced features.

Overview

Want to earn revenue from your data? Turn the insights people pay $500/year for into per-response revenue you keep (for example, $0.10 per response). Build an MCP server and register it as an MCP Tool on the Context marketplace.

AI-Assisted Builder (TL;DR)

Have an API subscription you want to unbundle? Use Cursor, Claude, or any AI coding agent to build your MCP server automatically.
Prerequisite: You need Context7 MCP configured in your AI coding environment to fetch API documentation automatically.

Two-Step Workflow

1. Build the MCP Server

Use the MCP Builder Template as a system prompt to:
  • Fetch API docs via Context7 and discover all endpoints
  • Design "giga-brained" intelligence tools (not just API passthroughs)
  • Generate complete schemas with outputSchema
  • Implement the full MCP server
Use the mcp-builder-template.md with Context7 for [your-api-library-id].
Fetch the documentation and design tools that answer complex questions.

2. Generate Submission Details

Once your server is built, use the MCP Server Analysis Prompt to:
  • Analyze your MCP server implementation
  • Generate the perfect submission details for the marketplace form
  • Get suggested name, description, category, and pricing
Paste the mcp-server-analysis-prompt.md into your AI chat, then provide your server.ts code or repository URL.

Example Prompt for Cursor/Claude

I want to build an MCP server for the Context Marketplace using the CoinGecko API.

1. Use context7 to fetch the CoinGecko API documentation
2. Follow the mcp-builder-template.md workflow:
   - PHASE 1: Discover all endpoints
   - PHASE 2: Generate discovery questions (STOP for my review)
   - PHASE 3: Design and implement tools after I approve

Focus on "giga-brained" tools that combine multiple endpoints to answer
questions users couldn't easily answer with raw API calls.
The builder template includes checkpoints where the AI will stop and ask for your approval before proceeding. This ensures you get tools that match your vision.

Earnings Model: You earn 90% of every response fee. Set your price (e.g., $0.01/response) and get paid in USDC instantly every time an Agent calls your tool.

Step 1: Build a Standard MCP Server

Use the official @modelcontextprotocol/sdk to build your server, plus @ctxprotocol/sdk to secure your endpoint.

Install Dependencies

pnpm add @modelcontextprotocol/sdk express
pnpm add @ctxprotocol/sdk
pnpm add -D @types/express

Implement Structured Output

Required for Context: You must implement the MCP structured output standard:
  • outputSchema in your tool definitions (JSON Schema describing your response structure)
  • structuredContent in your responses (the machine-readable data matching your schema)
// MCP spec compliant server (see: modelcontextprotocol.io/specification)
const TOOLS = [{
  name: "get_gas_price",
  description: "Get current gas prices for any EVM chain",
  inputSchema: {
    type: "object",
    properties: {
      chainId: { type: "number", description: "EVM chain ID" },
    },
  },
  outputSchema: {  // 👈 Required by Context
    type: "object",
    properties: {
      gasPrice: { type: "number" },
      unit: { type: "string" },
    },
    required: ["gasPrice", "unit"],
  },
}];

// In your tool handler
return {
  content: [{ type: "text", text: JSON.stringify(data) }],  // Backward compat
  structuredContent: data,  // 👈 Required by Context
};

Secure Your Endpoint

Add Context's middleware to verify that requests are legitimate:
import express from "express";
import { createContextMiddleware } from "@ctxprotocol/sdk";

const app = express();
app.use(express.json());

// 1 line of code to secure your endpoint & handle payments
app.use("/mcp", createContextMiddleware());

// ...

// Why this middleware matters:
// 1. Verifies that requests are signed by the Context Platform (preventing free-riding)
// 2. Injects user context (if requested)
// 3. Handles payment verification automatically

Returning Images from Tools

If your tool generates charts, heatmaps, screenshots, or other visual content, you are responsible for hosting the images and returning URLs that the AI can reference.
Do not return base64-encoded images in your tool responses. Large base64 strings bloat the response, slow down processing, and may hit token limits. Instead, host your images and return URLs.
return {
  content: [
    { type: "text", text: "Analysis complete. Here's the chart:" }
  ],
  structuredContent: {
    summary: "Market analysis shows bullish signals...",
    // Host your images and return URLs
    chart_url: "https://your-cdn.com/charts/eth-analysis-12345.png",
    chart_alt: "ETH price chart with support levels marked"
  }
};

Why You Should Host Images

  1. No database bloat - URLs are small strings; base64 images are 100x larger
  2. Faster responses - No large payloads to transfer
  3. Standard web pattern - This is how Slack, Discord, and every major chat platform works
  4. You control caching - Set your own CDN caching policies
  5. You control availability - Your images, your infrastructure, your reliability

Image Hosting Options

Option | Best For
Your existing CDN | If you already have infrastructure
Cloud storage (S3, GCS, Azure Blob) | Pre-signed URLs for generated content
Vercel Blob / Cloudflare R2 | Simple, cheap storage for generated images
Imgix / Cloudinary | Image transformation and optimization
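
For example, with Vercel Blob (one of the options above), a handler could upload a freshly rendered chart and return its public URL. A minimal sketch, assuming the @vercel/blob package and a PNG buffer you have already generated:

import { put } from "@vercel/blob"; // requires a BLOB_READ_WRITE_TOKEN env var

// chartPng: a Buffer produced by your charting library (illustrative)
async function hostChart(chartPng: Buffer, id: string): Promise<string> {
  const blob = await put(`charts/${id}.png`, chartPng, {
    access: "public",        // public URL so the AI and users can view it without auth
    contentType: "image/png",
  });
  return blob.url; // use this as chart_url in structuredContent
}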

Output Schema for Image Tools

{
  "type": "object",
  "properties": {
    "summary": {
      "type": "string",
      "description": "Text summary of the analysis"
    },
    "chart_url": {
      "type": "string",
      "format": "uri",
      "description": "URL to the hosted chart image"
    },
    "chart_alt": {
      "type": "string",
      "description": "Accessible description of the image"
    }
  },
  "required": ["summary", "chart_url"]
}
The AI will include your image URL in its response, and users can click to view. For the best experience, use publicly accessible URLs (no auth required) with reasonable cache headers.
Free vs Paid Security Requirements:
Tool Type | Security Middleware | Rationale
Free Tools ($0.00) | Optional | Great for distribution and adoption - anyone can call your endpoint
Paid Tools ($0.01+) | Mandatory | We cannot route payments to insecure endpoints
If you're building a free tool, you can skip the middleware entirely. However, if you ever want to charge for your tool, you'll need to add it.

MCP Security Model

Understanding what's protected: Not all MCP methods require authentication. Discovery methods are open so agents can find your tools, but execution requires payment verification.
MCP Method | Auth Required | Why
initialize | ❌ No | Session setup
tools/list | ❌ No | Discovery - agents need to see your schemas
resources/list | ❌ No | Discovery
prompts/list | ❌ No | Discovery
tools/call | ✅ Yes | Execution - costs money, runs your code
This means:
  • Anyone can call /mcp with initialize or tools/list to discover your tools
  • Only requests with a valid Context Protocol JWT can call tools/call
  • The middleware handles this automatically - you don't need to implement it yourself

Step 2: Test Your Tool Locally

Before deploying, ensure your server works as expected. You can use the official MCP Inspector or curl to test your tool locally.

Using Curl

# Test your endpoint (assuming it's running on localhost:3000)
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/list",
    "id": 1
  }'
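
You can also exercise a specific tool with tools/call. The example below targets the get_gas_price tool from this guide; calling it without a JWT only works if you haven't mounted the Context middleware on your local server:

# Call a specific tool
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "get_gas_price",
      "arguments": { "chainId": 1 }
    },
    "id": 2
  }'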

Step 3: Deploy Your Server

Your server needs to be publicly accessible. We support both transport methods:
Transport | URL Format | Recommendation
HTTP Streaming | https://your-server.com/mcp | ✅ Recommended
SSE (Server-Sent Events) | https://your-server.com/sse | Supported
Deploy to any platform: Vercel, Railway, Render, AWS, or your own infrastructure. The only requirement is a publicly accessible HTTPS endpoint.
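
One practical note: most of these platforms inject the listening port through an environment variable, so bind to it instead of hard-coding 3000. A minimal sketch (PORT is the conventional variable on Railway, Render, and similar hosts):

const port = Number(process.env.PORT ?? 3000);
app.listen(port, () => {
  console.log(`MCP server listening on port ${port}`);
});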

Step 4: Register in the App

1. Go to /contribute

Navigate to the contribute page in the running Context app

2. Select MCP Tool

Choose "MCP Tool" (the default option)

3. Paste Your Endpoint URL

Enter your publicly accessible endpoint URL

4. Auto-Discovery

We'll auto-discover your skills via listTools()

Step 5: Set a Price

Choose your fee per response:
Price | Use Case
$0.00 | Free tools (great for adoption and visibility)
$0.01+ | Paid tools (earn revenue per response)
This fee is paid once per chat turn. The Agent can call your skills up to 100 times within that single paid turn via callMcpSkill().
Security requirement depends on price:
  • Free tools: Security middleware is optional - your endpoint works without JWT verification
  • Paid tools: Security middleware is mandatory - see Secure Your Endpoint

Step 6: Stake USDC

All tools require a minimum USDC stake, enforced on-chain.
Tool Type | Minimum Stake
Free Tools | $10 USDC
Paid Tools | $10 USDC or 100× response price (whichever is higher)
For example, a tool priced at $0.25/response must stake max($10, 100 × $0.25) = $25 USDC.
Stakes are fully refundable with a 7-day withdrawal delay. This creates accountability and enables slashing for fraud.

Step 7: You're Live! 🎉

Your MCP Tool is now instantly available on the decentralized marketplace. Users can discover it via search, and AI agents can autonomously purchase and use your tool.

Updating Your Tool

When you add new endpoints, modify schemas, or change your tool's functionality:

1. Deploy Changes

Push your updated code to your server/hosting

2. Refresh Skills on Context

  1. Go to ctxprotocol.com/developer/tools → Developer Tools (My Tools)
  2. Find your tool and click "Refresh Skills"
  3. Context re-calls listTools() to discover changes

3. Update Description (if needed)

If you've added significant new tools, update your description.
Don't forget to Refresh Skills! Deploying new code doesn't automatically update the marketplace listing. You must click "Refresh Skills" for Context to re-discover your tools.

Schema Accuracy & Dispute Resolution

Your outputSchema isn't just documentation - it's a contract.
Context uses automated schema validation as part of our crypto-native dispute resolution system:
  1. Users can dispute tool outputs by providing their transaction_hash (proof of payment)
  2. A robot judge auto-adjudicates by validating your actual output against your declared outputSchema
  3. If the output doesn't match the schema, the dispute is resolved against you automatically
  4. Repeated violations (5+ flags) lead to tool deactivation

Example: Schema Compliance

// ❌ BAD: Schema says number, but you return string
outputSchema: { temperature: { type: "number" } }
structuredContent: { temperature: "72" }  // GUILTY - schema mismatch!

// ✅ GOOD: Output matches schema exactly
outputSchema: { temperature: { type: "number" } }
structuredContent: { temperature: 72 }  // Valid
Why this matters: Unlike Web2 "star ratings" that can be gamed by bots, Context disputes require economic proof (you paid for the query). This protects honest developers from spam while ensuring bad actors face consequences.
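
To catch mismatches before they become disputes, you can validate your own structuredContent against your declared outputSchema before returning it. A minimal sketch using Ajv (an assumed extra dependency, not part of the Context SDK), reusing the TOOLS array from the example above:

import Ajv from "ajv";

const ajv = new Ajv();
const validateOutput = ajv.compile(TOOLS[0].outputSchema);

function assertMatchesSchema(data: unknown) {
  if (!validateOutput(data)) {
    // Fail loudly in development instead of shipping a schema-violating response
    throw new Error(`outputSchema violation: ${ajv.errorsText(validateOutput.errors)}`);
  }
}

// In your tools/call handler:
//   const data = await fetchGasData(chainId);
//   assertMatchesSchema(data);
//   return { content: [{ type: "text", text: JSON.stringify(data) }], structuredContent: data };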

Execution Limits & Product Design

Critical: MCP tool execution on the Context platform has a ~60 second timeout. This is intentional - it shapes the marketplace toward high-quality data products.

Where the Timeout Comes From

The timeout is enforced by the platform infrastructure (and in standard MCP setups like Claude Desktop, by the LLM client itself). When your tool is called, the system waits for a response - if it doesn't arrive in ~60 seconds, execution fails.
This isn't an MCP protocol limitation - SSE connections can stay open indefinitely. The timeout exists at the application layer (LLM clients, API gateways, platform infrastructure) and serves as a quality forcing function.

Why the Timeout Is Actually Good

The timeout isn't a bug - it's a feature that forces data brokers to build actual products instead of raw data access.
Raw Access (❌ Bad Product) | Data Broker Product (✅ Good Product)
"Run any SQL on Dune" | "Smart Money Wallet Tracker"
"Query 4 years of NFT data" | "NFT Alpha Signals"
"Scan all whale wallets" | "Whale Alert Feed"
Timeout after 60s ❌ | Instant response ✅
The reframe: The best data businesses don't sell raw database access. They sell curated, pre-computed insights. This is exactly how Bloomberg, Nansen, Arkham, and Messari work.

The Data Broker Architecture

┌────────────────────────────────────────────────────────────────────┐
│                    DATA BROKER'S JOB (offline)                     │
│                                                                    │
│  1. Run heavy queries on your data source (30 min timeout - OK)    │
│  2. Pre-compute valuable insights ("wallets that sold tops")       │
│  3. Store results in your own database                             │
│  4. Update daily/hourly via cron jobs                              │
│                                                                    │
└────────────────────────────────────────────────────────────────────┘
                                  ↓
┌────────────────────────────────────────────────────────────────────┐
│                         MCP TOOL (instant)                         │
│                                                                    │
│  User: "What are the smart money wallets holding?"                 │
│  Tool: SELECT * FROM my_precomputed_smart_money LIMIT 10           │
│  Response: < 1 second ✅                                           │
│                                                                    │
└────────────────────────────────────────────────────────────────────┘

Product Tiers

Perfect for MCP - works great today:
  • Current prices, recent trades, portfolio snapshots
  • "What's in Vitalik's wallet right now?"
  • "Get current gas prices on Ethereum"
Implementation: Direct API calls that return quickly
This is the REAL product - where data brokers add massive value:
  • "Smart money wallets" (pre-computed daily)
  • "Whale alerts" (pre-computed hourly)
  • "NFT trending collections" (pre-computed)
Implementation: Heavy queries run offline via cron, results served instantly via MCP
If your computation can't complete in 60 seconds, you need to pre-compute it. This is by design. The timeout forces you to build a data product, not raw data access:
Instead of… | Build this…
"Scan all whale wallets" (10 min) | "Pre-computed whale alerts" (instant)
"Analyze 4 years of NFT data" (30 min) | "Daily top-mover rankings" (instant)
"Run complex ML model" (5 min) | "Pre-scored predictions updated hourly" (instant)
The pattern: Run your heavy analysis offline (cron jobs, scheduled tasks), store the results in your own database, and serve them instantly through your MCP tool.
// ❌ BAD: Long-running analysis at request time
{ name: "analyze_all_wallets", returns: "timeout after 60s" }

// ✅ GOOD: Pre-computed results served instantly
{ name: "get_smart_money_wallets", returns: "instant response from your DB" }
This is exactly how Bloomberg, Nansen, and Arkham work - the value is in the curation and pre-computation, not raw data access.
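
A minimal sketch of this offline/online split, assuming node-cron for scheduling and a Postgres table of your own (both illustrative choices, not Context requirements):

import cron from "node-cron";
import { Pool } from "pg";

const db = new Pool(); // connection settings come from the standard PG* env vars

// Placeholder for your long-running query or ML job (hypothetical)
async function runHeavyWhaleAnalysis(): Promise<{ address: string; score: number }[]> {
  // e.g. a 10-minute warehouse query - fine here, there is no 60s limit offline
  return [];
}

// OFFLINE: heavy analysis on a schedule (hourly here)
cron.schedule("0 * * * *", async () => {
  const wallets = await runHeavyWhaleAnalysis();
  await db.query("TRUNCATE smart_money_wallets");
  for (const w of wallets) {
    await db.query(
      "INSERT INTO smart_money_wallets (address, score) VALUES ($1, $2)",
      [w.address, w.score]
    );
  }
});

// ONLINE: the MCP tool handler just reads the pre-computed table (instant)
export async function getSmartMoneyWallets() {
  const { rows } = await db.query(
    "SELECT address, score FROM smart_money_wallets ORDER BY score DESC LIMIT 10"
  );
  return rows;
}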

Example: Good vs Bad Tool Design

// ❌ BAD: Raw SQL tool (timeout-prone, no moat)
{
  name: "run_sql",
  description: "Run any SQL against blockchain data"
  // This is a demo, not a product
}

// ✅ GOOD: Pre-computed insight tools
{
  name: "get_smart_money_wallets",
  description: "Get top 100 wallets that historically timed market tops",
  // Data broker pre-computes this daily, serves instantly
}

{
  name: "get_whale_holdings",
  description: "Current holdings of known whale wallets",
  // Pre-computed hourly, instant response
}
The value you create: As a data broker, you should be selling "Nansen-as-a-service" or "Arkham-as-a-service" - not raw SQL access. The timeout forces this quality bar.

Why This Is BETTER for the Marketplace

Raw SQL Model | Data Broker Product Model
Anyone can build (no moat) | Requires expertise (defensible)
Competes on price (race to bottom) | Competes on quality (premium pricing)
Users frustrated by timeouts | Users delighted by instant results
Data broker adds no value | Data broker adds massive value

Complete Server Example

Here's a full working example of an MCP server ready for Context:
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import express from "express";
import { createContextMiddleware } from "@ctxprotocol/sdk";

const app = express();
app.use(express.json());

// Secure endpoint with Context middleware
app.use("/mcp", createContextMiddleware());

// Define tools with outputSchema
const TOOLS = [{
  name: "get_gas_price",
  description: "Get current gas prices",
  inputSchema: {
    type: "object",
    properties: {
      chainId: { type: "number", description: "EVM chain ID" },
    },
  },
  outputSchema: {
    type: "object",
    properties: {
      gasPrice: { type: "number" },
      unit: { type: "string" },
    },
    required: ["gasPrice", "unit"],
  },
}];

// Standard MCP server setup
const server = new Server(
  { name: "my-gas-tool", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: TOOLS,
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  // fetchGasData is your own data-source call (not shown here)
  const data = await fetchGasData(request.params.arguments.chainId);
  
  return {
    content: [{ type: "text", text: JSON.stringify(data) }],
    structuredContent: data,
  };
});

// Connect `server` to the /mcp route via a transport (e.g. SSEServerTransport)
// before handling requests; wiring omitted here for brevity - see the MCP SDK docs
app.listen(3000, () => {
  console.log("MCP server running on port 3000");
});

Advanced: User Actions (Handshakes)

Need your tool to execute transactions or get user signatures? Use the Handshake Architecture:
import { createSignatureRequest, wrapHandshakeResponse } from "@ctxprotocol/sdk";

// In your tool handler
if (needsUserSignature) {
  return wrapHandshakeResponse(
    createSignatureRequest({
      domain: { name: "MyProtocol", version: "1", chainId: 1 },
      types: { Order: [{ name: "amount", type: "uint256" }] },
      primaryType: "Order",
      message: { amount: 1000 },
      meta: { description: "Place order", protocol: "MyProtocol" },
    })
  );
}
The Context app will show an approval card, the user signs, and the signature is returned to your tool.

Handshake Architecture Guide

Full guide: signatures, transactions, OAuth flows

Example Servers

Check out these complete working examples:

TypeScript (Express + MCP SDK)

Python (FastMCP + ctxprotocol)

The Python example uses FastMCP, which auto-generates outputSchema from Pydantic models and includes structuredContent in responses automatically.