You know how to use and build MCP servers. Now let's talk about when to use MCP, how to combine servers effectively, and the patterns that separate a good MCP setup from a messy one.
## Decision framework: MCP vs alternatives
Not everything needs MCP. Here is a decision framework:
| Approach | Best for | Example |
|---|---|---|
| MCP server | Reusable AI-to-system integrations that multiple tools or conversations need | A Postgres server your whole team uses with Claude |
| Direct API call | One-off integrations in your own code where you control the flow | A script that calls the GitHub API to create a release |
| Function calling | Tightly scoped tools defined per API request | A chatbot with 3 specific actions (search, order, refund) |
Use MCP when:
- Multiple AI applications or conversations will use the same integration
- You want the AI to discover and use tools dynamically
- You are building for a team, not just yourself
- The integration is with an external system (database, API, file system)
Use direct API calls when:
- You are writing a script or application that calls an API directly
- You need precise control over request/response handling
- The integration is a one-off task
- You do not need AI involvement, you are just automating
Use function calling when:
- You are building a chatbot or AI feature with a fixed set of actions
- Tools are defined at build time, not discovered at runtime
- You want maximum control over what the AI can do
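To make the contrast concrete, here is what the function-calling approach from the table looks like: the tools are a fixed list sent with each request, not discovered from a server. This is a sketch assuming an OpenAI-style `tools` array; the tool names match the chatbot example above (search, order, refund) and the exact schema fields depend on the API you actually call.

```javascript
// Tools are fixed at build time: the model can only ever see these three.
const tools = [
  {
    type: "function",
    function: {
      name: "search_products",
      description: "Search the product catalog by keyword.",
      parameters: {
        type: "object",
        properties: { query: { type: "string" } },
        required: ["query"],
      },
    },
  },
  {
    type: "function",
    function: {
      name: "create_order",
      description: "Create an order for a product the user has chosen.",
      parameters: {
        type: "object",
        properties: {
          productId: { type: "string" },
          quantity: { type: "integer" },
        },
        required: ["productId", "quantity"],
      },
    },
  },
  {
    type: "function",
    function: {
      name: "refund_order",
      description: "Refund an existing order by its ID.",
      parameters: {
        type: "object",
        properties: { orderId: { type: "string" } },
        required: ["orderId"],
      },
    },
  },
];
```

Nothing here is reusable across applications or discoverable at runtime, which is exactly the trade-off: maximum control, minimum sharing.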
## Composing multiple servers
The real power of MCP shows up when you combine multiple servers. Here is a developer workflow that connects three systems:
```json
{
  "mcpServers": {
    "project-files": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects/app"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "${GITHUB_TOKEN}" }
    },
    "database": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": { "DATABASE_URL": "${DEV_DATABASE_URL}" }
    }
  }
}
```

With this setup, a single question can span all three systems.
Claude will use all three servers (GitHub for the issues, Postgres for the schema, the filesystem for the code) to give you a comprehensive answer. Without MCP, you would need to copy-paste from three different tools.
### Composition best practices
- Name servers by function, not technology: use `project-files` instead of `filesystem-server`, `database` instead of `postgres`. This helps the AI understand what each server is for.
- Keep servers focused: each server should do one thing well. Do not try to make a single server that handles files, database, and GitHub.
- Avoid overlapping capabilities: if two servers can both read files, the AI might get confused about which to use. Be explicit about which server handles what.
## Security patterns
MCP servers run with your permissions. A misconfigured server can expose sensitive data, delete files, or leak credentials.
### Pattern 1: Environment variable scoping
Never hardcode secrets in your configuration file:
```json
// BAD - anyone who sees this file has your token
"env": { "GITHUB_TOKEN": "ghp_abc123def456" }

// GOOD - references a variable set in your shell profile
"env": { "GITHUB_TOKEN": "${GITHUB_TOKEN}" }
```

Set the actual values in `~/.zshrc` or `~/.bashrc`, or use a secrets manager.
### Pattern 2: Read-only vs read-write access
Ask yourself: does the AI really need write access?
- Database: Connect with a read-only user unless the AI explicitly needs to write data
- Filesystem: Scope to the smallest directory needed, and consider read-only mode if available
- GitHub: Use a token with minimal scopes (e.g., a fine-grained token with read-only repository permissions instead of full `repo` access)
Most AI interactions are read-heavy. Default to read-only and add write access only when required.
### Pattern 3: Separate dev and prod
Never connect your AI tools to production databases or production APIs during development:
```json
// Development config
"env": { "DATABASE_URL": "postgres://readonly@localhost:5432/myapp_dev" }

// NEVER this for development
"env": { "DATABASE_URL": "postgres://admin@prod-db.example.com/production" }
```

Use development databases, staging APIs, and test environments. Production access should be rare and deliberate.
### Pattern 4: Sandboxing
If possible, run MCP servers in a sandboxed environment:
- Use Docker containers to isolate server processes
- Restrict network access to only the necessary endpoints
- Run servers with the minimum required file system permissions
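As a sketch of the container approach, a server entry can be launched through `docker` instead of `npx`. The image name `mcp/filesystem` and the paths here are illustrative; they depend on the image you have built or pulled:

```json
"project-files": {
  "command": "docker",
  "args": [
    "run", "--rm", "-i",
    "--network", "none",
    "-v", "/Users/you/projects/app:/workspace:ro",
    "mcp/filesystem", "/workspace"
  ]
}
```

`--network none` removes network access entirely, the `:ro` mount flag makes the project directory read-only inside the container, and `-i` keeps stdin open so the stdio transport still works.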
## Error handling patterns
Good error handling makes the difference between an AI that recovers gracefully and one that gets stuck.
### Return meaningful error messages
```javascript
// BAD - the AI cannot help the user with this
return {
  isError: true,
  content: [{ type: "text", text: "Error" }]
};

// GOOD - the AI can explain the problem and suggest a fix
return {
  isError: true,
  content: [{
    type: "text",
    text: `Cannot connect to database: connection refused at localhost:5432. The database server may not be running. Try: "brew services start postgresql" or check if the DATABASE_URL environment variable is correct.`
  }]
};
```

The AI reads error messages and relays them to the user. A detailed error message lets the AI provide actionable help.
### Validate inputs before expensive operations
```javascript
server.tool("run_query", "...", { sql: { type: "string" } },
  async ({ sql }) => {
    // Validate before executing. Match whole words so identifiers
    // like "dropdown" are not falsely blocked.
    if (/\b(drop|delete)\b/i.test(sql)) {
      return {
        isError: true,
        content: [{
          type: "text",
          text: "This server only allows read operations. DROP and DELETE statements are blocked."
        }]
      };
    }
    // Safe to proceed
    const result = await db.query(sql);
    return { content: [{ type: "text", text: JSON.stringify(result) }] };
  }
);
```

### Implement timeouts
External APIs can hang. Always set timeouts:
```javascript
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 10000); // 10 seconds

try {
  const response = await fetch(url, { signal: controller.signal });
  clearTimeout(timeout);
  // process response
} catch (error) {
  if (error.name === 'AbortError') {
    return {
      isError: true,
      content: [{ type: "text", text: "Request timed out after 10 seconds." }]
    };
  }
  throw error;
}
```

## Tool description best practices
The tool description is the single most important thing you write. The AI uses it to decide when to call your tool.
### Be specific about when to use the tool
```javascript
// BAD
"Manages users"

// GOOD
"Look up a user by their email address. Use this when the user asks
to find someone, check if an account exists, or get user details.
Returns the user's name, email, role, and creation date. Returns an
error if no user is found with that email."
```

### Specify what the tool returns
The AI needs to know what data it will get back to decide if this tool answers the user's question.
### Include edge cases
```javascript
"Search for files matching a glob pattern. Returns file paths and sizes.
If no files match, returns an empty array (not an error). Maximum 100
results - use a more specific pattern if you need to narrow results."
```

### Use natural language
Descriptions are read by an AI, not parsed by a machine. Write them the way you would explain the tool to a colleague.
## Anti-patterns to avoid
### Too many tools
If your server exposes 50 tools, the AI has to read 50 descriptions to decide which one to use. This slows down responses and increases the chance of the AI picking the wrong tool.
Fix: Group related operations into fewer, more flexible tools. Instead of `get_user_by_id`, `get_user_by_email`, and `get_user_by_name`, create one `find_user` tool that accepts different search criteria.
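A sketch of the consolidated shape (the criteria fields and the in-memory `users` array are illustrative; in a real server the lookup would hit your data store):

```javascript
// One flexible tool handler instead of three narrow ones. The AI passes
// whichever criterion it has; the handler dispatches internally.
const users = [
  { id: "u1", name: "Ada Lovelace", email: "ada@example.com" },
  { id: "u2", name: "Alan Turing", email: "alan@example.com" },
];

function findUser({ id, email, name }) {
  if (id) return users.find((u) => u.id === id) ?? null;
  if (email) return users.find((u) => u.email === email) ?? null;
  if (name) {
    // Case-insensitive partial match on the display name
    return users.find((u) =>
      u.name.toLowerCase().includes(name.toLowerCase())) ?? null;
  }
  return null; // no criterion supplied
}
```

The matching tool description should then list the accepted criteria explicitly, so the AI knows it can search by ID, email, or name with a single tool.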
### Vague descriptions
```javascript
// The AI will never use this correctly
"Does stuff with data"
```

If the AI cannot tell when to use your tool, it will either never use it or use it at the wrong time.
### Missing error handling
A tool that throws an unhandled exception crashes the server and kills the connection. Always wrap tool handlers in try/catch and return structured errors.
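One way to make this systematic is a small wrapper applied to every handler (a sketch; the name `safeHandler` is ours, and the return shape follows the `isError`/`content` convention shown earlier):

```javascript
// Wraps a tool handler so any thrown exception becomes a structured
// error result instead of crashing the server.
function safeHandler(handler) {
  return async (args) => {
    try {
      return await handler(args);
    } catch (error) {
      return {
        isError: true,
        content: [{ type: "text", text: `Tool failed: ${error.message}` }],
      };
    }
  };
}
```

Registration then looks like `server.tool("run_query", "...", schema, safeHandler(async ({ sql }) => { ... }))`, and no single buggy handler can take down the connection.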
### Exposing sensitive operations without safeguards
A `delete_all_records` tool with no confirmation step is a disaster waiting to happen. Add guardrails: require specific confirmation strings, implement dry-run modes, or simply do not expose destructive operations.
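If you must expose a destructive operation, the two guardrails can be combined. A sketch (the handler name, confirmation string format, and the `db.clear` call are all illustrative):

```javascript
// A destructive handler that defaults to a dry run and refuses to act
// unless the caller echoes an exact confirmation string.
function deleteRecords({ table, confirm, dryRun = true }, db) {
  const expected = `DELETE ALL FROM ${table}`;
  if (dryRun) {
    return { content: [{ type: "text",
      text: `Dry run: would delete all records in "${table}". ` +
            `Re-run with dryRun: false and confirm: "${expected}".` }] };
  }
  if (confirm !== expected) {
    return { isError: true, content: [{ type: "text",
      text: `Refusing to delete: pass confirm: "${expected}" to proceed.` }] };
  }
  const deleted = db.clear(table);
  return { content: [{ type: "text",
    text: `Deleted ${deleted} records from "${table}".` }] };
}
```

Because the default is a dry run, the AI has to take two deliberate steps (and repeat the exact confirmation string back) before anything is destroyed.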