AI coding assistants are everywhere now, but there's a critical gap: how do they access your actual development context? Your database schema, API routes, recent commits, production errors—all the things you need to write real code. This is the problem Model Context Protocol (MCP) was designed to solve.
Model Context Protocol is an open standard introduced by Anthropic that enables AI assistants to securely connect with external data sources and tools. According to Anthropic's announcement, "Instead of maintaining separate connectors for each data source, developers can now build against a standard protocol."
The protocol has gained significant traction in 2025. According to Wikipedia's MCP entry, "In March 2025, OpenAI officially adopted the MCP, following a decision to integrate the standard across its products, including the ChatGPT desktop app."
Before MCP, each AI tool had to build custom integrations for every service. Want GitHub access? Custom integration. Need database queries? Another custom integration. MCP changes this by providing a universal protocol that works across AI assistants and development tools.
MCP follows a client-server model:

- **Hosts** - AI applications (like Claude Desktop or VS Code) that want access to context
- **Clients** - Connectors inside the host that maintain a connection to each server
- **Servers** - Lightweight programs that expose tools, resources, and prompts from a data source
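Under the hood, the protocol is JSON-RPC 2.0, so the conversation between client and server is just structured messages. Here's a rough sketch of the two core requests; the tool name and arguments are illustrative, not from a real server:

```javascript
// A client first discovers what a server offers...
const listToolsRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list"
};

// ...then invokes a tool by name with JSON arguments
const callToolRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "search_issues", // illustrative tool name
    arguments: { query: "database errors from last hour" }
  }
};

console.log(callToolRequest.method); // "tools/call"
```

Every MCP server speaks this same message shape, which is what lets one AI assistant drive many different servers without custom glue code.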
Key benefits:

- **One protocol, many tools** - Build a server once and any MCP-aware assistant can use it
- **Local-first** - Servers can run on your own machine, close to your data
- **Composable** - Mix and match servers for errors, docs, deployments, and more
This portfolio uses several MCP servers in the development workflow. Here's the actual configuration from my `.github/copilot-instructions.md`:
### Core MCPs
- **Memory** - Maintains project context across conversations
- **Sequential Thinking** - Complex problem-solving and planning
- **Context7** - Documentation lookup for libraries
### Integration MCPs
- **Sentry** - Production error monitoring
- **Vercel** - Deployment management and build logs
- **GitHub** - Repository operations and PR management

### 1. Production Debugging with Sentry MCP
Instead of copying error traces into chat, the AI can directly query production errors:

```javascript
// AI can analyze real production errors
const issues = await mcp_sentry_search_issues({
  naturalLanguageQuery: "database errors from last hour"
});

// Then get detailed stack traces
const details = await mcp_sentry_get_issue_details({
  issueId: "PROJECT-123"
});
```

This means debugging sessions start with actual production context, not guesswork.
### 2. Documentation Lookup with Context7
When working with a library, the AI can fetch current docs:

```javascript
// Get up-to-date Next.js documentation
const docs = await mcp_context7_get_library_docs({
  context7CompatibleLibraryID: "/vercel/next.js",
  topic: "server actions"
});
```

No more "trained on data from 2023" problems. The AI has access to current documentation.
### 3. Deployment Management with Vercel MCP
The AI can check build status and logs:

```javascript
// Check deployment status
const deployment = await mcp_vercel_get_deployment({
  idOrUrl: "my-app-abc123.vercel.app",
  teamId: "team_xyz"
});

// Get build logs if something fails
const logs = await mcp_vercel_get_deployment_build_logs({
  idOrUrl: "my-app-abc123.vercel.app",
  teamId: "team_xyz"
});
```

MCP servers are surprisingly simple. Here's a minimal example:
```javascript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const server = new Server(
  { name: "my-custom-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Connect over stdio so any MCP client can launch this as a subprocess
await server.connect(new StdioServerTransport());
```
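What makes a server useful is the handlers it registers. Here's a sketch of the logic behind a hypothetical `get_time` tool; the handlers are shown as plain functions so the request and response shapes are easy to see, rather than as a definitive SDK implementation:

```javascript
// Tool list a server would advertise in response to tools/list
// (get_time is a hypothetical example tool)
const tools = [
  {
    name: "get_time",
    description: "Returns the current server time as an ISO string",
    inputSchema: { type: "object", properties: {} }
  }
];

// Handler for tools/call: run the named tool and return MCP-style content
async function handleCallTool(request) {
  if (request.params.name === "get_time") {
    return { content: [{ type: "text", text: new Date().toISOString() }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
}
```

In the real SDK you'd wire these up with `server.setRequestHandler(...)` against the list-tools and call-tool request types instead of calling them directly.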
After using MCP extensively on this portfolio, here are some patterns that work well:
Use the Memory MCP to store project-specific knowledge:
```javascript
// Store architecture decisions
await mcp_memory_create_entities({
  entities: [
    {
      name: "Blog System",
      entityType: "Architecture",
      observations: [
        "Uses MDX with next-mdx-remote",
        "Syntax highlighting via Shiki",
        "View counts stored in Redis"
      ]
    }
  ]
});
```

This means the AI remembers your project's patterns without you repeating them.
For architectural decisions or debugging, use Sequential Thinking to break down problems:

```javascript
// AI uses this internally for multi-step reasoning
await mcp_thinking_sequentialthinking({
  thought: "First, analyze the current rate limiting implementation...",
  thoughtNumber: 1,
  totalThoughts: 5,
  nextThoughtNeeded: true
});
```

This gives better results than asking for immediate solutions.
Never paste credentials or sensitive files into chat. MCP filesystem servers run locally, so the AI requests files through a process on your machine instead of you uploading your project:

```javascript
// AI reads local files via a server running on your machine
const config = await mcp_filesystem_read_file({
  path: "./.env.local"
});
```

The server itself never uploads anything, but whatever the AI reads does enter the model's context, so scope the server away from real secrets.
Here's what a typical workflow looks like:

Before MCP:

- Copy the error trace out of Sentry and paste it into chat
- Describe your schema and recent changes by hand
- Paste snippets of documentation and hope they're current

With MCP:

- The AI queries Sentry for the actual error and stack trace
- It reads the relevant files directly from your repo
- It fetches current docs via Context7 before suggesting a fix
The difference is dramatic. The AI works with your actual project state, not a text summary you typed.
MCP is designed with security in mind, but you still need to be careful:
From the MCP security guidelines:
"MCP servers run with the same permissions as your user account. Only install servers from trusted sources."
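One concrete way to honor that guideline: the official filesystem server only serves the directories you pass it at launch, so you can scope access explicitly (the paths below are illustrative):

```bash
# Expose only the project source and docs - not your home directory,
# SSH keys, or .env files
npx -y @modelcontextprotocol/server-filesystem ./src ./docs
```

Treat each server's allowed scope the same way you'd treat an API token: grant the minimum it needs.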
To start using MCP with GitHub Copilot in VS Code:
1. Install MCP servers:

```bash
# Core servers
npm install -g @modelcontextprotocol/server-memory
npm install -g @modelcontextprotocol/server-filesystem
npm install -g @upstash/context7-mcp

# Integration servers (HTTP-based)
# Configure in VS Code settings
```

2. Configure VS Code settings:
Add to `.vscode/settings.json` or user settings:

```json
{
  "github.copilot.chat.mcp.enabled": true,
  "github.copilot.chat.mcp.servers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    },
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp@latest"]
    }
  }
}
```
3. Start using MCP-aware prompts:

- "Use the Sentry MCP to find production errors from the last 24 hours"
- "Use Context7 to look up the latest Next.js App Router patterns"
- "Use Memory to recall our authentication architecture"

The protocol is still evolving, but the direction is clear.
Model Context Protocol represents a shift in how we think about AI-assisted development. Instead of AI as a chatbot that gives generic advice, it becomes a teammate with access to your actual development context.
The key insight: context is everything. An AI that can query your production errors, read your actual codebase, and fetch current documentation is infinitely more useful than one working from a text description.
If you're building AI tools or just trying to improve your development workflow, MCP is worth exploring. The protocol is open, the SDKs are straightforward, and the ecosystem is growing fast.
This post was written with assistance from GitHub Copilot using MCP servers for Memory, Sequential Thinking, and Context7 documentation lookup. The irony is not lost on me.