Model Context Protocol (MCP) User Guide
Model Context Protocol (MCP) gives your AI agent access to the real world. Instead of just writing code in isolation, the agent can connect to databases, browse the web, and manage your tools.
This guide will show you exactly how to use MCP in OutcomeDev.
1. Accessing MCP Servers
To connect an agent to a tool, you need to configure an MCP Server.
- Navigate to the Task Form on the homepage.
- Look for the Cable/Plug Icon button next to the prompt input.
- Click it to open the MCP Servers Dialog.
2. Connecting a Server
You can choose from pre-configured servers or add your own.
Using a Preset (Recommended)
We have built-in support for popular tools like Context7, Browserbase, Supabase, and Linear.
- In the dialog, click on a preset (e.g., Context7).
- Configure:
- For Remote Servers (like Context7), just click Connect.
- For Local Servers (like Browserbase), you may need to provide API keys (e.g., `BROWSERBASE_API_KEY`).
- Click Save. The server status will change to Connected.
Adding a Custom Server
If you have your own MCP server (or one from the community):
- Click the + Add Custom button.
- Server Type (See detailed breakdown below):
- Remote (SSE): Use this for servers hosted on a URL.
- Local (Stdio): Use this for servers running on your machine.
- Details: Enter the Name and the URL/Command.
- Environment Variables: Add any API keys required by the server.
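Under the hood, a custom server entry reduces to a handful of fields. The sketch below shows what the two configurations might look like; the field names (`type`, `url`, `command`, `env`) are illustrative assumptions, not OutcomeDev's exact schema.

```python
# Illustrative only: field names are assumptions, not OutcomeDev's exact schema.

def validate_server(entry: dict) -> bool:
    """Check that a server entry has the fields its type requires."""
    if entry.get("type") == "remote":
        # Remote (SSE) servers need a reachable HTTP(S) URL.
        return entry.get("url", "").startswith(("http://", "https://"))
    if entry.get("type") == "local":
        # Local (Stdio) servers need a command to launch.
        return bool(entry.get("command"))
    return False

remote_server = {
    "name": "context7",
    "type": "remote",  # Remote (SSE): hosted on a URL
    "url": "https://my-mcp.com/sse",
}

local_server = {
    "name": "postgres",
    "type": "local",   # Local (Stdio): runs inside the sandbox
    "command": "npx -y @modelcontextprotocol/server-postgres",
    "env": {"DATABASE_URL": "postgres://..."},  # secrets go in env vars
}
```

Whatever the real schema looks like, the split is the same: remote entries carry a URL, local entries carry a command plus environment variables.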
Understanding Server Types: Local vs. Remote
To use MCP effectively, you need to understand the connection types:
1. Local (Stdio) MCPs
- What it is: The MCP server process runs directly inside the secure sandbox alongside the AI agent. They communicate via standard input/output streams (stdio).
- When to use: Use this for tools that need direct access to the filesystem, local databases, or run as Node/Python scripts. Examples include the official `@modelcontextprotocol/server-postgres` or `@modelcontextprotocol/server-filesystem`.
- Configuration: You provide a Command (e.g., `npx -y @modelcontextprotocol/server-postgres`). The agent executes this command when the task starts. Ensure required environment variables (like `DATABASE_URL`) are provided in the configuration.
- Pros: Highly secure (no exposed ports required). Perfect for local script execution.
- Cons: Consumes your sandbox compute and memory overhead.
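Over stdio, the agent and server exchange newline-delimited JSON-RPC 2.0 messages on the process's stdin and stdout. A simplified sketch of the opening exchange (real MCP `initialize` messages carry more fields, such as the protocol version and capabilities):

```python
import json

# Simplified sketch of a JSON-RPC initialize request as sent over stdio.
# Real MCP messages include more params (protocolVersion, capabilities, ...).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {"clientInfo": {"name": "outcomedev-agent", "version": "0.1"}},
}

# Stdio transport: one JSON message per line, written to the server's stdin.
wire = json.dumps(request) + "\n"

# The server replies on its stdout the same way; the client parses line by line.
response_line = '{"jsonrpc": "2.0", "id": 1, "result": {"serverInfo": {"name": "postgres"}}}\n'
response = json.loads(response_line)
```

This is why no ports are exposed: the entire conversation happens over the pipes the sandbox already controls.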
2. Remote (SSE) MCPs
- What it is: The MCP server runs on an external server accessible over the internet. The AI Agent connects to it using Server-Sent Events (SSE) over HTTP.
- When to use: Use this for centralized SaaS tools, third-party APIs (like Context7 or Browserbase), or when you want to host an MCP server on your own infrastructure (like Vercel or Render) to save local compute.
- Configuration: You provide a URL (e.g., `https://my-mcp.com/sse`).
- Pros: Offloads compute. Ideal for web-based services.
- Cons: Requires external network requests and proper authentication handling (OAuth or API Keys).
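Server-Sent Events are plain text over HTTP: each event is a block of `field: value` lines terminated by a blank line. A minimal stdlib-only parser sketch (no real network call; the payload shown is a made-up example):

```python
import json

def parse_sse(stream: str):
    """Parse a raw SSE stream into (event_name, data) pairs."""
    events = []
    for block in stream.split("\n\n"):
        event, data_lines = "message", []  # "message" is the SSE default event
        for line in block.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        if data_lines:
            events.append((event, "\n".join(data_lines)))
    return events

# Example stream: one event carrying a JSON-RPC payload.
raw = 'event: message\ndata: {"jsonrpc": "2.0", "id": 1, "result": {}}\n\n'
events = parse_sse(raw)
```

Because the transport is just HTTP, authentication rides on the request itself, which is why remote servers need API keys or OAuth where local ones do not.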
3. Using the Tool in a Task
Once a server is connected, the agent is aware of it. You just need to ask.
Example: Research & Build
Scenario: You want to build a feature using a library you don't know well.
- Connect: Ensure Context7 (Web Search) is connected.
- Prompt: "Research the new features in Next.js 15 using Context7. Then, build a small demo page showcasing the `use` hook."
- Result:
- The agent will first call the `context7.search` tool to read the documentation.
- It will then use that knowledge to write the correct code in your project.
Example: Database Migration
Scenario: You want to write a migration based on your current schema.
- Connect: Add a Postgres MCP server connected to your dev database.
- Prompt: "Inspect the `users` table schema. Create a migration to add a `subscription_status` column."
- Result:
- The agent calls `postgres.describe_table` to see the current structure.
- It writes a migration file that matches the existing schema exactly.
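The migration the agent produces might look something like the sketch below. This is a hypothetical output, not a guaranteed one: the table and column names come from the scenario above, and Postgres SQL is assumed.

```python
# Hypothetical migration the agent might emit for the scenario above.
# Assumes Postgres; "users" and "subscription_status" come from the prompt.
up_sql = (
    "ALTER TABLE users "
    "ADD COLUMN subscription_status TEXT NOT NULL DEFAULT 'inactive';"
)
down_sql = "ALTER TABLE users DROP COLUMN subscription_status;"
```

The point is that the column type and default are informed by the live schema the agent just inspected, rather than guessed.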
Best Practices
1. Don't Overload the Context
Only connect the servers you need for the specific task.
- Bad: Connecting Linear, GitHub, Postgres, and Notion for a simple CSS fix.
- Good: Connecting only Linear to read the bug report.
- Why: Every connected tool adds to the system prompt, consuming tokens and potentially distracting the model.
2. Be Specific
If you have multiple tools that do similar things (e.g., two different search tools), tell the agent which one to use.
- "Use Context7 to find the docs..."
- "Use Browserbase to scrape the pricing page..."
3. Progressive Disclosure
For complex workflows, start small.
- Step 1: "List all tables in the database."
- Step 2: "Read the schema for the `orders` table."
- Step 3: "Write a query to aggregate monthly sales."
This saves tokens compared to dumping the entire database schema into the chat at once.
Troubleshooting
- Server Error: If a local server fails, check the Logs tab in the task view. It often indicates missing environment variables (API keys).
- Agent Ignoring Tool: Explicitly mention the tool name in your prompt (e.g., "Use the Figma tool to...").
- Authentication: For remote servers, ensure you have completed any required OAuth flows.