
Common Errors

{"error":"Unauthorized"}

This is the most common error for new tool builders.
The createContextMiddleware() helper from @ctxprotocol/sdk verifies that requests come from the Context Platform with a valid JWT. Without that JWT, any call to tools/call returns {"error":"Unauthorized"}.

Why This Happens

  • Missing JWT on tools/call: The middleware requires a JWT from the Context Platform for execution methods.
  • HTTP instead of HTTPS: The Context Platform only connects to HTTPS endpoints; HTTP will silently fail.
  • Not registered on marketplace: Until you register at ctxprotocol.com/contribute, the platform won’t send requests to your server.
  • Wrong endpoint URL: The URL you registered doesn’t match your deployed server.

What You CAN Test Locally (No Auth Required)

These MCP methods work without authentication:
# Initialize session (no auth required)
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0.0"}},"id":1}'

# List tools (no auth required)
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "mcp-session-id: YOUR-SESSION-ID-FROM-INITIALIZE" \
  -d '{"jsonrpc":"2.0","method":"tools/list","id":2}'
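The tools/list call needs the session id returned by initialize. Assuming your server returns it in the mcp-session-id response header (as the example above implies), you can capture it and reuse it in one go:

```shell
# Capture the session id from the initialize response headers
# (-i includes headers in the output), then reuse it for tools/list.
SESSION_ID=$(curl -s -i -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0.0"}},"id":1}' \
  | grep -i '^mcp-session-id:' | cut -d' ' -f2 | tr -d '\r')

curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "mcp-session-id: $SESSION_ID" \
  -d '{"jsonrpc":"2.0","method":"tools/list","id":2}'
```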

Testing tools/call Locally

The tools/call method requires a valid JWT from the Context Platform. Options for testing:
  1. Test tool logic directly — Write test files that call your tool handler functions directly, bypassing the MCP transport
  2. Temporarily bypass middleware — Comment out verifyContextAuth during development:
    // Development: bypass auth for testing
    // app.post("/mcp", verifyContextAuth, async (req, res) => { ... });
    app.post("/mcp", async (req, res) => { ... });
    
  3. Test on deployed server — SSH into your server and test against localhost after deployment
Remember to re-enable the middleware before going live! Without it, anyone can call your tools for free.
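Option 1 can look like the sketch below. The handler name and stubbed data are hypothetical; the point is that your tool logic lives in a plain function a test file can call directly, with no MCP transport or JWT involved.

```typescript
// Hypothetical tool handler extracted from the MCP wiring so it can be
// tested without a transport or JWT.
type PriceResult = { symbol: string; price: number };

async function getPriceHandler(args: { symbol: string }): Promise<PriceResult> {
  // A real implementation would call an exchange API; stubbed here.
  const prices: Record<string, number> = { BTC: 60000, ETH: 3000 };
  const price = prices[args.symbol];
  if (price === undefined) throw new Error(`INVALID_SYMBOL: ${args.symbol}`);
  return { symbol: args.symbol, price };
}

// Direct test, bypassing the MCP transport entirely.
const result = await getPriceHandler({ symbol: "BTC" });
console.log(result.price); // 60000
```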

End-to-End Testing

For full end-to-end testing through the Context Platform:
  1. Deploy to HTTPS — Use Railway, Vercel, or set up Caddy/nginx
  2. Register on marketplace — Go to ctxprotocol.com/contribute
  3. Test through the Context app — Ask the agent to use your tool

Tool Not Discovered

Your server is deployed but Context can’t find your tools.

Checklist

  • Health endpoint returns 200: curl https://your-server.com/health
  • initialize works: Test with curl (see above)
  • tools/list returns your tools: Test with curl after initialize
  • URL is HTTPS (not HTTP)
  • URL ends with /mcp (e.g., https://your-server.com/mcp)
  • Tools have outputSchema defined (required by Context)

New/Updated Tools Not Appearing

You deployed new endpoints but they’re not showing up in the marketplace. This is the most common oversight after updating your MCP server.

Solution

  1. Go to Developer Tools (My Tools) at ctxprotocol.com/developer/tools
  2. Find your tool and click “Refresh Skills”
  3. Context will re-call listTools() to discover your changes

Also Consider

  • Update your description — If you added significant new functionality, use the MCP Server Analysis Prompt to generate an updated description
  • Verify deployment — Make sure your new code is actually deployed (check health endpoint, test tools/list via curl)

Server Won’t Start

Node.js Version

node --version  # Must be 18+

Missing Dependencies

pnpm install  # or npm install

TypeScript Errors

pnpm exec tsc --noEmit  # Check for type errors

Module System Mismatch

Ensure your package.json has:
{
  "type": "module"
}

Railway Deployment Fails

  1. Check Railway logs for specific errors
  2. Ensure package.json has "type": "module"
  3. Set start command to: pnpm start or npm start
  4. Verify tsconfig.json has correct module settings
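For step 4, a tsconfig.json that matches "type": "module" typically uses ESM-aware module settings. A sketch; these field values are common choices, not the only valid ones:

```json
{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "target": "ES2022",
    "outDir": "dist",
    "strict": true
  }
}
```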

Response Schema Validation Fails

If your tool responses don’t match your outputSchema, users can dispute them.

Common Causes

// ❌ Schema says number, response is string
outputSchema: { value: { type: "number" } }
structuredContent: { value: "42" }  // String, not number!

// ✅ Correct: Types match
outputSchema: { value: { type: "number" } }
structuredContent: { value: 42 }  // Number

Solution

  • Ensure all structuredContent fields match your outputSchema types exactly
  • Use TypeScript to catch type mismatches at compile time
  • Test your responses against the schema before deploying
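One way to test responses before deploying is a small pre-deploy check. This sketch is not a full JSON Schema validator (a library such as Ajv is the robust option); it only compares top-level primitive types, and matchesSchema is a name invented for this example.

```typescript
// Minimal pre-deploy check: verify each field in structuredContent has the
// primitive type the outputSchema declares. Top-level fields only.
type PropSchema = { type: string };

function matchesSchema(
  schema: Record<string, PropSchema>,
  data: Record<string, unknown>
): string[] {
  const problems: string[] = [];
  for (const [key, prop] of Object.entries(schema)) {
    const actual = typeof data[key];
    if (actual !== prop.type) {
      problems.push(`${key}: expected ${prop.type}, got ${actual}`);
    }
  }
  return problems;
}

const outputSchema = { value: { type: "number" } };
console.log(matchesSchema(outputSchema, { value: "42" })); // flags the mismatch
console.log(matchesSchema(outputSchema, { value: 42 }));   // []
```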

MCP Security Model Reference

Understanding which methods require authentication:
  • initialize: ❌ No (session setup)
  • tools/list: ❌ No (discovery; agents need to see your schemas)
  • resources/list: ❌ No (discovery)
  • prompts/list: ❌ No (discovery)
  • tools/call: ✅ Yes (execution; costs money and runs your code)
Discovery methods are intentionally open so AI agents can find your tools. Only execution (tools/call) requires payment verification through the Context Platform JWT.
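The table reduces to a simple rule, which the SDK middleware applies for you. This sketch just restates that rule; OPEN_METHODS and requiresAuth are illustrative names, not SDK exports.

```typescript
// Discovery methods stay open; everything else (notably tools/call)
// needs a Context Platform JWT.
const OPEN_METHODS = new Set([
  "initialize",
  "tools/list",
  "resources/list",
  "prompts/list",
]);

function requiresAuth(method: string): boolean {
  return !OPEN_METHODS.has(method);
}

console.log(requiresAuth("tools/list")); // false
console.log(requiresAuth("tools/call")); // true
```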

Using Developer Mode for Debugging

When your tool is registered on the marketplace but not returning expected results, Developer Mode provides detailed execution logs to help diagnose issues.

Enabling Developer Mode

  1. Go to Settings in the Context app
  2. Scroll to Developer Settings
  3. Enable Developer Mode

What Developer Mode Shows

When enabled, a Developer Logs card appears at the bottom of AI responses. Click to expand and see:
  • Initial Code: The TypeScript code the AI generated to call your tool
  • Execution Trace: All attempts, including errors and retries
  • Final Code: The code after any self-healing fixes (if different from initial)
  • Tool Call History: Every call made to your tool with arguments and results
  • Final Execution Result: The data or error returned

Copying Logs for Debugging

Click “Copy All” to copy the complete debug log. You can then:
  1. Paste the logs into an AI coding assistant (Claude, GPT-4, etc.)
  2. Ask it to analyze why your MCP server isn’t returning expected results
  3. The AI can suggest specific fixes based on the execution trace

Common Issues Found via Developer Logs

Symptom: Wrong or missing arguments in tool calls
Check: Look at the “Tool Call History” section to see what arguments were passed.
Fix: Ensure your inputSchema has:
  • Clear description fields for each parameter
  • default or examples values for better AI understanding
  • Correct type definitions (string, number, boolean, etc.)
inputSchema: {
  type: "object",
  properties: {
    symbol: {
      type: "string",
      description: "Trading symbol (e.g., 'BTC', 'ETH')",
      examples: ["BTC", "ETH", "SOL"]
    },
    timeframe: {
      type: "string",
      description: "Time period for data",
      default: "24h",
      enum: ["1h", "24h", "7d", "30d"]
    }
  },
  required: ["symbol"]
}
Symptom: “Suspicious null values” in the execution trace, data completeness checks failing
Check: Compare your outputSchema with the actual result in “Final Execution Result”.
Fix: Your structuredContent must exactly match your declared outputSchema:
// ❌ Schema/response mismatch
outputSchema: {
  price: { type: "number" },
  change: { type: "number" }
}
// Response returns: { price: "1234.56", change: null }

// ✅ Correct: Types and structure match
outputSchema: {
  price: { type: "number" },
  change: { type: "number", nullable: true }
}
// Response returns: { price: 1234.56, change: -2.5 }
Symptom: AI can’t parse your response, retries multiple times
Check: Look at the raw result in “Tool Call History”; is it structured data or just text?
Fix: Always return structuredContent with your tool results:
return {
  content: [{ type: "text", text: JSON.stringify(data) }],
  // Required for Context marketplace
  structuredContent: data,
  _meta: {
    outputSchema: yourOutputSchema
  }
};
Symptom: AI picks the wrong tool or passes incorrect arguments
Check: Review the initial code; is the AI using your tool correctly?
Fix: Write clear, specific tool descriptions:
// ❌ Vague description
description: "Gets market data"

// ✅ Specific description
description: "Fetches real-time cryptocurrency price, 24h volume, and price change for a given trading symbol. Returns data from top exchanges. Use this when the user asks about current prices, market cap, or trading volume."
Symptom: Generic errors in the execution trace, no useful error messages
Check: Look at the “error” field in failed attempts.
Fix: Return meaningful errors that help diagnose the issue:
// ❌ Generic error
throw new Error("Failed");

// ✅ Helpful error
return {
  content: [{ type: "text", text: "Error: Invalid symbol" }],
  structuredContent: {
    error: "INVALID_SYMBOL",
    message: "Symbol 'XYZ' is not supported. Valid symbols: BTC, ETH, SOL",
    validSymbols: ["BTC", "ETH", "SOL"]
  },
  isError: true
};

Self-Healing and Retries

The Context agent has a self-healing mechanism that automatically retries when:
  1. Runtime errors: Code crashes → AI generates fix → retry
  2. Suspicious nulls: Code runs but returns null where data should exist → AI reflects → retry
  3. Incomplete data: Results don’t fully answer the question → AI fetches more → retry
If you see multiple attempts in the execution trace, it usually means:
  • Your tool returned unexpected data format
  • The AI had to adjust how it processes your response
  • There may be schema or description improvements you can make
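Conceptually, the loop behaves roughly like the sketch below. This is not the platform's actual implementation; every name here is illustrative, and the "suspicious null" heuristic is simplified to top-level null fields.

```typescript
// Sketch of a self-healing retry loop: run generated code, and on a crash
// or suspicious nulls, revise and try again up to a limit.
type Attempt = { ok: boolean; result?: unknown; error?: string };

function looksSuspicious(result: unknown): boolean {
  // Treat top-level nulls as "data should exist but doesn't".
  return result !== null &&
    typeof result === "object" &&
    Object.values(result as object).some((v) => v === null);
}

async function selfHeal(
  run: () => Promise<unknown>,
  revise: (err: string) => void,
  maxAttempts = 3
): Promise<Attempt> {
  let last: Attempt = { ok: false, error: "no attempts" };
  for (let i = 0; i < maxAttempts; i++) {
    try {
      const result = await run();
      if (!looksSuspicious(result)) return { ok: true, result };
      const msg = "suspicious null values";
      last = { ok: false, error: msg };
      revise(msg); // AI reflects, adjusts its code, then the loop retries
    } catch (e) {
      const msg = String(e);
      last = { ok: false, error: msg };
      revise(msg); // runtime error: AI generates a fix, then retries
    }
  }
  return last;
}
```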
Pro Tip: Copy the Developer Logs, paste them into Claude or another AI assistant along with this troubleshooting page, and ask: “Based on these execution logs, why is my MCP tool not returning the expected results? What should I fix in my server?”

Still Stuck?