Building controlled context with Markdown reader MCP

I was deep in a coding session last week when it hit me: my AI assistant had no clue about the deployment process I'd carefully documented two months ago. Or the coding standards my team had hammered out. Or the architectural decisions that explained why half the codebase looked the way it did.

There I was, copying and pasting the same explanations again.

Sure, I'd been getting better at updating README.md files and CLAUDE.md in each project - something I've been learning to do religiously over the past few months. But then you're stuck in this endless cycle of copying central guidance across repositories, watching it slowly drift out of sync as projects evolve at different speeds. Each repository becomes its own little island of context, and your AI assistant? It starts every conversation like it's meeting your codebase for the first time.

What if there was a way to give AI assistants controlled access to exactly the knowledge they need, when they need it? The Model Context Protocol (MCP) from Anthropic makes this possible, and honestly, it seemed like the perfect excuse to learn both Go and how MCP servers actually work. The result: markdown-reader-mcp - a simple playground project that does exactly what its name suggests.

The problem I keep running into

I sometimes find myself explaining the same architectural decision three times in a single day. You know this feeling if you've ever worked with AI on anything more complex than a script. You've documented everything meticulously, but your AI assistant starts each conversation like it's never seen your project before.

The traditional approaches I've tried fall short in predictable ways. Upload files one by one? Tedious, and you'll inevitably forget something crucial. Hope the AI can infer context from conversation history? That works until it doesn't, and suddenly you're spending half your time providing background instead of solving the actual problem.

A deliberately simple solution

The markdown-reader MCP server does exactly two things - no more, no less:

  • Find markdown files across your configured directories with optional filtering
  • Read markdown files by name, regardless of where they live in your setup
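In MCP terms, those two operations surface as tools that a client discovers through a `tools/list` request. As a rough sketch of what that listing could look like — the tool names and schema fields here are illustrative, not the project's actual API:

```json
{
  "tools": [
    {
      "name": "find_markdown_files",
      "description": "List markdown files under the configured directories",
      "inputSchema": {
        "type": "object",
        "properties": {
          "filter": {
            "type": "string",
            "description": "Optional substring to match against file paths"
          }
        }
      }
    },
    {
      "name": "read_markdown_file",
      "description": "Return the contents of a markdown file by name",
      "inputSchema": {
        "type": "object",
        "properties": {
          "name": {
            "type": "string",
            "description": "File name, e.g. tone-of-voice.md"
          }
        },
        "required": ["name"]
      }
    }
  ]
}
```

The assistant picks a tool, passes arguments matching the schema, and gets file contents back as plain text.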

Why focus exclusively on markdown files? Because that's where the valuable context actually lives. Your README files, architecture decision records, deployment guides, and those careful notes you wrote after the last incident - they're all in markdown. The code itself rarely tells the full story.

The server is intentionally simple and safe. It automatically skips directories like .git and node_modules because, let's be honest, that's not where your documentation lives anyway. It's read-only by design, so your AI assistant can learn from your knowledge without ever changing it.

A few filesystem and Markdown MCP servers have sprung up recently, but they do far more than I want, and they're perhaps just one configuration change away from being more permissive than I'm comfortable with. I wanted something that erred on the side of caution.

The configuration is centralized in a single file that I can manage through my personal dotfiles - this lets me wire up exactly which markdown directories I want it to access, nothing more. It also supports SSE transport, which means I can call it from within a container or from another machine entirely.
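To make that concrete, a centralized config might look something like this — the field names here are hypothetical, not the project's actual schema:

```json
{
  "directories": [
    "~/notes",
    "~/work/project-docs"
  ],
  "transport": "sse",
  "port": 8080
}
```

Only paths listed under `directories` are ever visible to the server, and switching the transport is a one-line change rather than a redeploy.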

No complex indexing. No fancy search algorithms. Just straightforward file access that works consistently.

What this actually looks like

I keep a file called tone-of-voice.md that captures how I want to write. Instead of explaining my writing style every time I ask an AI to help with content, it can just read that file. Same principle applies to coding-standards.md for development work, or deployment-runbook.md when I need help with infrastructure.

With this approach, my AI assistant can discover and reference:

  • Project documentation that explains your architecture and the decisions behind it
  • Personal prompts you've refined for code reviews, writing, or project planning
  • Team guidelines that keep everyone aligned on standards and practices
  • Learning notes from courses, conferences, or those hard-won lessons from production incidents

The result? Conversations that start with context instead of lengthy explanations.

Learning while building

This project was as much about learning Go and MCP architecture as it was about solving the context problem. I'd been curious about both for months, and sometimes the best way to understand something is to build with it.

The markdown-reader MCP is deliberately simple - partly by design, partly because I was figuring things out as I went. It does one thing reasonably well rather than trying to solve every possible context problem. Sometimes "I don't know how to build the complex version yet" leads to better design decisions anyway.

The bigger picture

The basic idea is straightforward: instead of treating AI as a blank slate that needs constant feeding, give it structured access to knowledge you've already captured. Whether this actually improves productivity in practice is still an open question - I've found it useful enough in my own workflow to keep around, but your mileage may vary.

This tool is really just a small experiment in a much larger space: AI interactions that build on what we already know instead of starting from scratch each time. The landscape here is evolving quickly, and I suspect we'll see much more sophisticated approaches emerge.

If you want to experiment with this idea, the markdown-reader MCP is available on GitHub. Just don't expect it to transform your entire workflow - it's more of a useful little utility that scratches a specific itch. Sometimes that's exactly what you need.