WTF is MCP?

Exploring the missing link between LLMs and the real world, and why building your own MCP server is a cheat code for AI development.


TL;DR: LLMs are smart but helpless. MCP (Model Context Protocol) teaches them how to actually do things, like send invoices, update databases, or book flights, by giving them a clean, standardized way to call real-world tools. In this post, I'll explain WTF MCP is, why it matters, and how building your own MCP server changes the game.


Text Alone Won't Cut It

Large Language Models (LLMs) are great at predicting text.

They can finish your sentences, summarize a Wikipedia page, or argue about pineapple on pizza. But if you ask them to send an email, update a database, or charge a credit card?

Sorry, I'm just a text generator.

That's the limit. ☹️

On their own, LLMs are brains without hands: no way to interact with the real world.

And that's where MCP steps in.


๐Ÿ› ๏ธ What is MCP?

MCP is a standardized layer that connects LLMs to real-world tools and services.

MCP hands your AI a passport, a map, and a megaphone so it can stop mumbling about ideas and actually ship something in the messy real world.

Your AI's useless TED talk era is over.

Instead of making the AI learn a different "language" for every API or tool, MCP defines one shared way to talk, listen, and act.

โœ”๏ธ Clean โœ”๏ธ Modular โœ”๏ธ Scalable

If you've ever dreamed of building your own Jarvis (minus the homicidal Ultron detour)... MCP is the roadmap.


📜 The Evolution From Text to Tools

Let's time-travel real quick:

  1. Stage 1: LLMs Predict Text
     • Smart, but passive. No actions.
  2. Stage 2: LLMs Call Tools (Chaos Edition)
     • Ad hoc integrations. Every tool speaks a different API dialect. Total engineering nightmare.
  3. Stage 3: MCP Standardizes the Chaos
     • One protocol to rule them all. LLMs now act predictably, safely, and powerfully.

๐Ÿ—๏ธ How MCP Works

An MCP ecosystem has four parts:

  • Client: User app (e.g., Chat UI, CLI, Agent Interface)
  • Protocol: The shared "language" of tools
  • MCP Server: Middleman that validates and routes tool calls
  • Service: The actual database, API, or action handler

Flow:

User → LLM → MCP Server → Tool/Service → Response → User
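
Under the hood, those arrows are JSON-RPC 2.0 messages. Here's a sketch of the LLM → MCP Server hop for the hypothetical sendInvoice tool from earlier (IDs and arguments are illustrative):

```ts
// Request: the client asks the MCP server to run a tool.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "sendInvoice", // hypothetical tool
    arguments: { customerId: "cus_123", amountCents: 4999 },
  },
};

// Response: content blocks the LLM can read and reason about.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: "Invoice INV-042 sent to cus_123" }],
  },
};
```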

It's clean, pluggable, and future-proof.

Without MCP, connecting 10 tools to one LLM is like building 10 custom bridges. With MCP, it's one highway everyone shares.


🧪 Why Building Your Own MCP Server Matters

Instead of duct-taping 20 APIs directly into your LLM prompts, you can (see the sketch after this list):

  • ๐Ÿ› ๏ธ Define clear tools (updateVAT, sendInvoice, mergeQuotes)
  • ๐Ÿ“ก Expose them cleanly to your agents
  • ๐Ÿ›ก๏ธ Control validation and permissions
  • ๐Ÿš€ Add new capabilities without retraining the model
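
Here's a minimal sketch of what that looks like with the official TypeScript SDK (@modelcontextprotocol/sdk). The billing-tools server, the sendInvoice tool, and its handler are placeholders, not a real service:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "billing-tools", version: "0.1.0" });

// One tool = a name, a validated input schema, and a handler.
// The SDK rejects malformed calls before your handler ever runs.
server.tool(
  "sendInvoice",
  { customerId: z.string(), amountCents: z.number().int().positive() },
  async ({ customerId, amountCents }) => {
    // ...call your real billing service here...
    return {
      content: [
        { type: "text", text: `Invoiced ${customerId} for ${amountCents} cents` },
      ],
    };
  }
);

// Expose the server over stdio (handy for local agents and IDE clients).
await server.connect(new StdioServerTransport());
```

Notice what's missing: no prompt glue, no per-API dialect. The schema is the contract.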

In my next posts, I'll walk you through exactly how I'm building an MCP server, from defining tools to wiring up agents.

Because AI that talks is fun.

But AI that acts? That's the future.

And nobody needs another AI telling you how to feel about pineapple pizza.


🤔 Why You Should Care

Even if you're not a dev, MCP unlocks:

  • Better AI apps: assistants that book flights, generate invoices, send reports
  • Safer AI behavior: tightly controlled, validated actions
  • Faster innovation: plug-and-play tool libraries, not one-off hacks

When standards like MCP mature, building apps with AI will be like stacking Lego blocks, not coding from scratch every time.

The future isn't smarter LLMs. It's more capable ones, and MCP is how we get there.


The Next Big Shift in MCP 👀

Until recently, MCP mainly relied on Server-Sent Events (SSE) to stream tool responses back to your agent.

It worked, but it came with baggage:

  • 📡 Always-on connections
  • 🧹 Long-lived server state
  • 💸 Burned-through hosting credits on platforms like Cloudflare Workers

"We ate all our free compute credits in one day."

- Real Dev Battle Story


Turns out, streaming endless keep-alive messages isn't free. Who knew?

The fix? Streamable HTTP, officially added to the MCP spec in the 2025-03-26 revision.

✅ It lets MCP servers behave like classic REST APIs (simple POST/GET).
✅ It enables stateless, ephemeral function calls.
✅ You can still use SSE for truly real-time needs; it's optional.

This makes hosting an MCP server as easy as spinning up an Express app.
No funky connection timeouts, no compute-credit drain.
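
If you're curious what that looks like, here's a rough sketch using the TypeScript SDK's StreamableHTTPServerTransport in stateless mode; the /mcp route and the buildServer() helper are placeholders for your own setup:

```ts
import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

// Placeholder: construct an McpServer and register your tools on it.
function buildServer(): McpServer {
  const server = new McpServer({ name: "demo", version: "0.1.0" });
  // ...server.tool(...) registrations go here...
  return server;
}

const app = express();
app.use(express.json());

// Each POST is a self-contained MCP call: fresh transport, fresh server,
// nothing held open between requests.
app.post("/mcp", async (req, res) => {
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined, // stateless mode: no session tracking
  });
  const server = buildServer();
  res.on("close", () => {
    transport.close();
    server.close();
  });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3000);
```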

It's a big step toward making MCP servers easier, cheaper, and more scalable.


Takeaway:
If you're thinking of building an MCP server in 2025 and beyond, build for Streamable HTTP from day one.


🔮 Coming Next

  • 🚀 How I Designed My First MCP Server
  • 🛠️ Defining Tools That Don't Break Your Brain
  • ⚡ Connecting Supabase, Vercel, and GPT to My MCP Server

Want to Dive Deeper?

If you want a solid foundational read on what MCP really is and where it's headed, the official Anthropic blog post on Model Context Protocol is a must-read.

They break down:

  • Why tool usage matters for LLMs
  • How MCP formalizes the model → tool interaction
  • Where the future of multi-agent systems and AI orchestration is headed

Reading it gave me even more conviction that building your own MCP server is a cheat code for scalable, modular AI.

Go read it, then come back here as we build it together.



Made with ☕ and too many tabs open