Common Errors
{"error":"Unauthorized"}
This is the most common error for new tool builders.
Why This Happens
| Cause | Explanation |
|---|---|
| Missing JWT on `tools/call` | The middleware requires a JWT from the Context Platform for execution methods. |
| HTTP instead of HTTPS | Context Platform only connects to HTTPS endpoints. HTTP will silently fail. |
| Not registered on marketplace | Until you register at ctxprotocol.com/contribute, the platform won’t send requests to your server. |
| Wrong endpoint URL | The URL you registered doesn’t match your deployed server. |
What You CAN Test Locally (No Auth Required)
These MCP methods work without authentication: `initialize`, `tools/list`, `resources/list`, and `prompts/list`.
Testing tools/call Locally
The tools/call method requires a valid JWT from the Context Platform. Options for testing:
- Test tool logic directly — Write test files that call your tool handler functions directly, bypassing the MCP transport
- Temporarily bypass middleware — Comment out `verifyContextAuth` during development
- Test on deployed server — SSH into your server and test against localhost after deployment
End-to-End Testing
For full end-to-end testing through the Context Platform:
- Deploy to HTTPS — Use Railway, Vercel, or set up Caddy/nginx
- Register on marketplace — Go to ctxprotocol.com/contribute
- Test through the Context app — Ask the agent to use your tool
Tool Not Discovered
Your server is deployed but Context can’t find your tools.
Checklist
- Health endpoint returns 200: `curl https://your-server.com/health`
- `initialize` works: test it with a JSON-RPC POST via curl
- `tools/list` returns your tools: test with curl after `initialize`
- URL is HTTPS (not HTTP)
- URL ends with `/mcp` (e.g., `https://your-server.com/mcp`)
- Tools have `outputSchema` defined (required by Context)
New/Updated Tools Not Appearing
You deployed new endpoints but they’re not showing up in the marketplace. This is the most common oversight after updating your MCP server.
Solution
- Go to ctxprotocol.com/developer/tools → Developer Tools (My Tools)
- Find your tool and click “Refresh Skills”
- Context will re-call `listTools()` to discover your changes
Also Consider
- Update your description — If you added significant new functionality, use the MCP Server Analysis Prompt to generate an updated description
- Verify deployment — Make sure your new code is actually deployed (check the health endpoint, test `tools/list` via curl)
Server Won’t Start
Node.js Version
Missing Dependencies
TypeScript Errors
Module System Mismatch
Ensure your `package.json` has:
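Judging by the Railway checklist below, the field in question is the ESM flag:

```json
{
  "type": "module"
}
```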
Railway Deployment Fails
- Check Railway logs for specific errors
- Ensure `package.json` has `"type": "module"`
- Set the start command to `pnpm start` or `npm start`
- Verify `tsconfig.json` has correct module settings
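For ESM on modern Node, a common `tsconfig.json` baseline looks like the following. This is a sketch, not the guide's exact config; adjust to your project:

```json
{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "target": "ES2022",
    "outDir": "dist",
    "strict": true
  }
}
```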
Response Schema Validation Fails
If your tool responses don’t match your `outputSchema`, users can dispute them.
Common Causes
Typically, `structuredContent` fields that are missing, null, or of a different type than the declared `outputSchema`.
Solution
- Ensure all `structuredContent` fields match your `outputSchema` types exactly
- Use TypeScript to catch type mismatches at compile time
- Test your responses against the schema before deploying
MCP Security Model Reference
Understanding which methods require authentication:

| MCP Method | Auth Required | Why |
|---|---|---|
| `initialize` | ❌ No | Session setup |
| `tools/list` | ❌ No | Discovery — agents need to see your schemas |
| `resources/list` | ❌ No | Discovery |
| `prompts/list` | ❌ No | Discovery |
| `tools/call` | ✅ Yes | Execution — costs money, runs your code |
Discovery methods are intentionally open so AI agents can find your tools. Only execution (`tools/call`) requires payment verification through the Context Platform JWT.
Using Developer Mode for Debugging
When your tool is registered on the marketplace but not returning expected results, Developer Mode provides detailed execution logs to help diagnose issues.
Enabling Developer Mode
- Go to Settings in the Context app
- Scroll to Developer Settings
- Enable Developer Mode
What Developer Mode Shows
When enabled, a Developer Logs card appears at the bottom of AI responses. Click to expand and see:
- Initial Code: The TypeScript code the AI generated to call your tool
- Execution Trace: All attempts, including errors and retries
- Final Code: The code after any self-healing fixes (if different from initial)
- Tool Call History: Every call made to your tool with arguments and results
- Final Execution Result: The data or error returned
Copying Logs for Debugging
Click “Copy All” to copy the complete debug log. You can then:
- Paste the logs into an AI coding assistant (Claude, GPT-4, etc.)
- Ask it to analyze why your MCP server isn’t returning expected results
- The AI can suggest specific fixes based on the execution trace
Common Issues Found via Developer Logs
Input Schema Problems
Symptom: Wrong or missing arguments in tool calls
Check: Look at the “Tool Call History” section to see what arguments were passed
Fix: Ensure your `inputSchema` has:
- Clear `description` fields for each parameter
- `default` or `examples` values for better AI understanding
- Correct `type` definitions (string, number, boolean, etc.)
Output Schema Mismatches
Symptom: “Suspicious null values” in execution trace, data completeness checks failing
Check: Compare your `outputSchema` with the actual result in “Final Execution Result”
Fix: Your `structuredContent` must exactly match your declared `outputSchema`.
Missing structuredContent
Symptom: AI can’t parse your response, retries multiple times
Check: Look at the raw result in “Tool Call History” — is it structured data or just text?
Fix: Always return `structuredContent` with your tool results.
Poor Tool Descriptions
Symptom: AI picks the wrong tool or passes incorrect arguments
Check: Review the initial code — is the AI using your tool correctly?
Fix: Write clear, specific tool descriptions.
Error Handling Issues
Symptom: Generic errors in execution trace, no useful error messages
Check: Look at the “error” field in failed attempts
Fix: Return meaningful errors that help diagnose the issue.
Self-Healing and Retries
The Context agent has a self-healing mechanism that automatically retries when:
- Runtime errors: Code crashes → AI generates fix → retry
- Suspicious nulls: Code runs but returns null where data should exist → AI reflects → retry
- Incomplete data: Results don’t fully answer the question → AI fetches more → retry

When you see retries in the Developer Logs, it usually means:
- Your tool returned an unexpected data format
- The AI had to adjust how it processes your response
- There may be schema or description improvements you can make

