If you're experimenting with AI agents and looking to move beyond simple LLM prompts, this guide is for you. In this session, Dino, Staff Engineer at Hygraph and the lead behind our AI Labs initiative, walks you through how to build a working TypeScript MCP client from scratch.
We’re going hands-on. No fluff, just real code. By the end, you’ll understand how to connect an AI model with external tools and APIs using the Model Context Protocol (MCP) and why that matters for building powerful AI-native applications.
#What we’re building
You’ll create a basic MCP client that allows an AI agent to:
- Receive a user message
- Dynamically query an MCP server
- Use an external tool like a weather API
- Return a structured, intelligent response
All using TypeScript, Node.js, and a simple frontend.
#What is MCP and why should you care?
MCP (Model Context Protocol) is an open standard that lets LLM-powered applications (MCP clients) discover and use external tools dynamically. Instead of hardcoding a fixed set of functions, an MCP server exposes a list of tools at runtime, and the AI knows what it can call based on the server’s capabilities.
MCP unlocks:
- Dynamic tool discovery
- Standardized schemas
- Plug-and-play agent extensions
In short, it gives your AI superpowers without relying on brittle prompt injection or custom wrappers.
#Step 1: Build the MCP server
Start by creating a small server that exposes two tools:
- getWeather (calls a weather API)
- searchWeb (mocked to show error handling)
Here’s what you need:
- Handle tools/list requests by returning a JSON Schema description of each tool
- Handle tools/call requests by parsing the tool name and arguments the AI chose and performing the right action
Dino used the Open-Meteo API for weather data and wrote basic logic to translate location names into coordinates, query the forecast, and return readable results.
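Dino’s exact implementation isn’t reproduced here, but the getWeather logic can be sketched roughly like this, assuming Open-Meteo’s public geocoding and forecast endpoints (the helper names are illustrative, not from the stream):

```typescript
// Sketch of a getWeather tool handler built on Open-Meteo's free APIs:
// first geocode the place name, then fetch the current forecast.

interface CurrentWeather {
  temperature: number; // °C
  windspeed: number;   // km/h
}

// Pure formatting step, kept separate so it's easy to test.
function formatWeather(place: string, current: CurrentWeather): string {
  return `Current weather in ${place}: ${current.temperature}°C, wind ${current.windspeed} km/h`;
}

// Network step: resolve the location to coordinates, then query the forecast.
async function getWeather(location: string): Promise<string> {
  const geoRes = await fetch(
    `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(location)}&count=1`
  );
  const geo = await geoRes.json();
  if (!geo.results?.length) throw new Error(`Unknown location: ${location}`);
  const { latitude, longitude, name } = geo.results[0];

  const wxRes = await fetch(
    `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current_weather=true`
  );
  const wx = await wxRes.json();
  return formatWeather(name, wx.current_weather);
}
```

Splitting the formatting out of the fetch logic keeps the tool testable without network access.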
Each tool’s schema should clearly describe the tool’s name, what it does, and the inputs it expects, so the model knows when and how to call it.
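For reference, a tools/list entry for getWeather might look like this (the field names follow the MCP tool shape; the description text is illustrative):

```typescript
// One tool entry as returned from tools/list: a name, a human-readable
// description for the model, and a JSON Schema for the inputs.
const getWeatherTool = {
  name: "getWeather",
  description: "Get the current weather for a city or place name",
  inputSchema: {
    type: "object",
    properties: {
      location: { type: "string", description: "City name, e.g. 'Berlin'" },
    },
    required: ["location"],
  },
};
```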
#Step 2: Set up the client in TypeScript
Next, you'll build the MCP client, which is technically your Node.js backend. It:
- Accepts messages from the user via HTTP
- Sends messages to the AI model (in this case, AWS Bedrock running Claude)
- Parses responses and checks for tool calls
- If needed, calls the MCP server and sends results back to the AI for final output
The conversation loop looks like this:
- User says “What’s the weather in Berlin?”
- AI responds: “I’ll use the weather tool”
- AI emits tool call: getWeather({ location: "Berlin" })
- Node.js client invokes the MCP server and sends results back to the AI
- AI finalizes the response and replies to the user
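The loop above can be sketched as a small driver function. The Model interface here is a stand-in for whatever Bedrock/Claude SDK wrapper you use, and callTool is the bridge to your MCP server (both names are illustrative):

```typescript
// The model either answers in text or asks for a tool call.
type ModelReply =
  | { type: "text"; text: string }
  | { type: "tool_call"; name: string; args: Record<string, unknown> };

interface ChatMessage {
  role: "user" | "assistant" | "tool";
  content: string;
}

interface Model {
  send(messages: ChatMessage[]): Promise<ModelReply>;
}

// Bridge to the MCP server: invoke a named tool, return its text result.
type ToolCaller = (name: string, args: Record<string, unknown>) => Promise<string>;

// Drive one user turn: keep looping while the model asks for tools,
// feeding each tool result back in, until it answers with plain text.
async function runTurn(model: Model, callTool: ToolCaller, userText: string): Promise<string> {
  const messages: ChatMessage[] = [{ role: "user", content: userText }];
  for (let round = 0; round < 5; round++) { // cap rounds to avoid infinite tool loops
    const reply = await model.send(messages);
    if (reply.type === "text") return reply.text;
    const result = await callTool(reply.name, reply.args);
    messages.push({ role: "assistant", content: `(tool call: ${reply.name})` });
    messages.push({ role: "tool", content: result });
  }
  throw new Error("Too many tool-call rounds");
}
```

Capping the number of rounds is a cheap guard against a model that keeps requesting tools without converging.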
Important: Dino used stdio (stdin/stdout) as the transport between client and MCP server for simplicity. MCP also supports HTTP-based transports if you're going the API route.
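Over the stdio transport, client and server exchange newline-delimited JSON-RPC 2.0 messages on stdin/stdout. A minimal sketch of framing a tools/call request (the framing helper itself is illustrative; the SDK normally does this for you):

```typescript
// Build one JSON-RPC 2.0 request line, terminated by a newline so the
// receiving side can split messages on line boundaries.
let nextId = 0;

function frameRequest(method: string, params: unknown): string {
  nextId += 1;
  return JSON.stringify({ jsonrpc: "2.0", id: nextId, method, params }) + "\n";
}

// Usage: write the framed line to the spawned server process's stdin, e.g.
// serverProc.stdin.write(frameRequest("tools/call", {
//   name: "getWeather",
//   arguments: { location: "Berlin" },
// }));
```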
#Step 3: Connect it all
Dino wired this all up in a lightweight Express app:
- /api/conversations manages in-memory chats
- /api/message triggers the AI+MCP pipeline
- Frontend is React, but very minimal
He also included logic to:
- Translate MCP tool schemas to match the AI provider’s expectations
- Handle multiple tools in a single conversation
- Filter/structure messages based on roles (user, assistant, tool)
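The schema-translation glue can be sketched like this: MCP describes a tool as { name, description, inputSchema }, while Bedrock's Converse API expects a toolSpec with the JSON Schema nested under inputSchema.json (shapes based on the public APIs; verify against your SDK version):

```typescript
// Translate one MCP tool description into the Bedrock Converse toolSpec shape.
interface McpTool {
  name: string;
  description?: string;
  inputSchema: Record<string, unknown>;
}

function toBedrockTool(tool: McpTool) {
  return {
    toolSpec: {
      name: tool.name,
      description: tool.description ?? "",
      inputSchema: { json: tool.inputSchema },
    },
  };
}
```

Keeping this as a pure function makes it trivial to swap in a different provider's tool format later.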
⚠️ Tip: Always manage the full conversation server-side to prevent prompt injection attacks. Don’t expose the full message history to the frontend.
#Step 4: Extend with Hygraph MCP
At the end of the livestream, Dino showed how easy it is to plug in Hygraph’s own MCP server. With one line added to your config, your AI agent can start accessing content models, running queries, and manipulating schema inside a real CMS.
If you’re building anything that touches content or structured data, this is where things get really exciting.
#Why this matters
LLMs are only as smart as the context and tools you give them. By integrating with MCP, your AI agents gain real-world awareness, decision-making power, and dynamic capabilities that go far beyond chat.
Whether you’re building internal automations, dev tools, or next-gen CMS experiences, the TypeScript MCP client approach gives you full control and flexibility.
→Want to see it in action? Watch the livestream with Dino to follow along as he builds and debugs live.
→Curious about AI-native content management?
Check out hygraph.ai to explore our vision for building with AI and structured content at the core.