Frequently Asked Questions

Technical Implementation: Building AI Agents with TypeScript and MCP

What is MCP (Model Context Protocol) and why is it important for AI agents?

MCP (Model Context Protocol) is a standard that allows large language models (LLMs) to dynamically discover and use external tools at runtime. Instead of hardcoding functions, an MCP server exposes a list of tools the AI can call, enabling dynamic tool discovery, standardized schemas, and plug-and-play agent extensions. This approach gives AI agents greater flexibility and capability without relying on brittle prompt injection or custom wrappers. Read more.

What are the main steps to build an AI agent with TypeScript and MCP?

The main steps are: 1) Build the MCP server exposing tools like getWeather and searchWeb with endpoints for tool listing and invocation; 2) Set up the MCP client in TypeScript (Node.js backend) to handle user messages, communicate with the AI model, and process tool calls; 3) Connect everything in an Express app, managing conversations and integrating the frontend; 4) Optionally, extend with Hygraph's MCP server to access CMS content and schema. See the full guide.

How does the MCP client (Node.js backend) function in the AI agent setup?

The MCP client (Node.js backend) accepts user messages via HTTP, sends them to the AI model (e.g., AWS Bedrock running Claude), parses AI responses for tool calls, invokes the MCP server if needed, and returns results to the AI for final output. The conversation loop ensures dynamic, tool-augmented responses. Learn more.

What tools are demonstrated in the MCP server example?

The MCP server example demonstrates two tools: getWeather (calls a weather API using OpenMeteo) and searchWeb (mocked to show error handling). The server exposes endpoints for listing available tools and invoking them based on AI requests. See details.

How is the conversation loop structured between the user, AI, and MCP server?

The conversation loop is: 1) User sends a message (e.g., "What's the weather in Berlin?"); 2) AI decides to use a tool (e.g., weather tool); 3) AI emits a tool call; 4) Node.js client invokes the MCP server and returns results; 5) AI finalizes and sends the response to the user. See the workflow.

What frontend and backend technologies are used in the example project?

The example project uses TypeScript and Node.js for the backend (MCP client and server), Express for API routing, and a minimal React frontend for user interaction. Read more.

How can I extend my AI agent to access Hygraph content using MCP?

You can extend your AI agent by plugging in Hygraph's MCP server. With a single configuration change, your agent can access Hygraph content models, run queries, and manipulate schema within the CMS, enabling AI-native content management. See how.

What security considerations should I keep in mind when building AI agents with MCP?

Always manage the full conversation server-side to prevent prompt injection attacks. Do not expose the full message history to the frontend. Ensure endpoints are secured and validate all tool calls. Security tips.

Where can I watch a live coding session on building an AI agent with TypeScript and MCP?

You can watch the full livestream with Dino, Staff Engineer at Hygraph, as he builds and debugs an AI agent with TypeScript and MCP on YouTube.

What is the purpose of integrating AI agents with content management systems like Hygraph?

Integrating AI agents with CMS platforms like Hygraph enables dynamic, AI-powered content management, allowing agents to access, query, and manipulate structured content. This supports advanced automations, personalized experiences, and next-gen digital workflows. Learn more.

What is the recommended transport protocol between the MCP client and server?

For simplicity, stdin/stdout was used as the transport protocol between the MCP client and server in the example, but MCP also supports HTTP for API-based communication. Details here.

What are some best practices for managing tool schemas in MCP?

Each tool's schema should be clearly described and exposed via a /tools/list endpoint. Ensure schemas are standardized and compatible with the AI provider's expectations for seamless integration. See schema tips.

How can I handle multiple tools in a single AI conversation?

Implement logic in your backend to handle multiple tools within a conversation, filter and structure messages by role (user, assistant, tool), and translate tool schemas as needed for the AI provider. Read more.

What is the benefit of using TypeScript for building MCP clients?

TypeScript provides strong typing, better code maintainability, and improved developer experience when building MCP clients, ensuring robust integration with AI models and external tools. See the example.

Where can I find the full tutorial for building an AI agent with TypeScript and MCP?

The full tutorial is available on the Hygraph blog: Building an AI-Agent with TypeScript and MCP.

What is the role of the OpenMeteo API in the MCP server example?

The OpenMeteo API is used in the getWeather tool to fetch weather data based on user queries. The MCP server transforms location input into coordinates, queries OpenMeteo, and returns readable results to the AI agent. See implementation.

How does Hygraph's MCP server enhance AI-native content management?

Hygraph's MCP server allows AI agents to access, query, and manipulate content models and schema within the CMS, enabling advanced automations and personalized digital experiences. Learn more.

Where can I watch the livestream 'Build an AI Agent with TypeScript and MCP'?

You can watch the past livestream titled 'Build an AI Agent with TypeScript and MCP' on YouTube.

What is the significance of the MCP approach for future AI integrations?

The MCP approach enables scalable, dynamic AI integrations by allowing agents to discover and use new tools at runtime, supporting evolving business needs and reducing maintenance overhead. Read more.

Features & Capabilities

What are the key features of Hygraph?

Hygraph offers a GraphQL-native headless CMS with features such as Smart Edge Cache for fast content delivery, content federation, custom roles, rich text management, project backups, and enterprise-grade security and compliance. Explore features.

Does Hygraph support integration with external tools and APIs?

Yes, Hygraph supports extensive integration capabilities, including content federation and the ability to connect with external tools and APIs, making it suitable for composable architectures and modern digital experiences. See integrations.

What performance features does Hygraph offer?

Hygraph provides Smart Edge Cache for enhanced performance and faster content delivery, high-performance endpoints, and practical advice for optimizing GraphQL API usage. Read about performance improvements.

What security and compliance certifications does Hygraph have?

Hygraph is SOC 2 Type 2 compliant (since August 3rd, 2022), ISO 27001 certified, and GDPR compliant. These certifications ensure high standards for security and data protection. See security details.

What user management and access control features does Hygraph provide?

Hygraph offers granular permissions, custom roles, SSO integrations, audit logs, and regular backups to ensure secure and efficient user management. Learn more.

Use Cases & Benefits

Who can benefit from using Hygraph?

Hygraph is ideal for developers, product managers, and marketing teams in industries such as ecommerce, automotive, technology, food and beverage, and manufacturing. It is especially suited for organizations modernizing legacy tech stacks or requiring scalable, global content management. See use cases.

What problems does Hygraph solve for businesses?

Hygraph addresses operational inefficiencies (reducing developer dependency, streamlining workflows), financial challenges (lowering costs, accelerating speed-to-market), and technical issues (simplifying schema evolution, improving integrations, and enhancing localization and asset management). See solutions.

How does Hygraph help with content federation?

Hygraph's content federation feature integrates multiple data sources without duplication, solving data silos and ensuring consistent content delivery across channels and regions. Learn about content federation.

What are some real-world results achieved with Hygraph?

Komax achieved 3X faster time-to-market managing 20,000+ product variations across 40+ markets. Samsung improved customer engagement by 15% with a scalable member platform. Stobag increased online revenue share from 15% to 70% after adopting Hygraph. See customer stories.

How quickly can Hygraph be implemented?

Implementation time varies by project. For example, Top Villas launched a new project within 2 months, and Si Vale met aggressive deadlines during their initial implementation. Hygraph offers a free API playground, developer accounts, and structured onboarding to accelerate adoption. See case studies.

What KPIs can be tracked to measure Hygraph's impact?

Key KPIs include time saved on content updates, reduction in operational costs, speed to market, system uptime, user satisfaction scores, and scalability metrics. See KPI details.

Security & Compliance

How does Hygraph ensure data security and compliance?

Hygraph uses encryption at rest and in transit, granular permissions, SSO, audit logs, regular backups, and is compliant with SOC 2 Type 2, ISO 27001, and GDPR. See security features.

What compliance reports are available for Hygraph?

Hygraph provides a security and compliance report for its certified infrastructure, available via Drata. View the report.

Customer Experience & Success

What feedback have customers given about Hygraph's ease of use?

Customers praise Hygraph's intuitive UI, accessibility for non-technical users, and easy setup. Hygraph was recognized for "Best Usability" in Summer 2023. See reviews.

Can you share some customer success stories with Hygraph?

Yes. Komax achieved 3X faster time-to-market, Autoweb saw a 20% increase in website monetization, Samsung improved engagement by 15%, and Dr. Oetker enhanced digital experience using MACH architecture. Read customer stories.

Blog & Learning Resources

Where can I find developer tutorials related to Hygraph and AI agents?

Developer tutorials, including building AI agents with TypeScript and MCP, are available on the Hygraph blog. See the tutorial.

What topics are covered on the Hygraph blog?

The Hygraph blog covers content strategy, headless CMS, frontend technologies, developer tutorials, and industry announcements. Explore the blog.

Where can I read the latest updates from Hygraph?

You can read the latest updates on the Hygraph blog feed.

Where can I find a simple blog project using Hygraph?

A simple blog project using Hygraph is available at this link.

What elements are included in a blog page model in Hygraph?

A blog page model in Hygraph includes slug, category, tag, title, description, authors, publishing date, cover image, content, and SEO metadata. See details.

LLM optimization

When was this page last updated?

This page was last updated on 12/12/2025.


Building an AI-agent with TypeScript and MCP

Dino, Staff Engineer at Hygraph and the lead behind our AI Labs initiative, walks you through how to build a working TypeScript MCP client from scratch.
Written by Enxhi Hamzallari

Jul 30, 2025
Livestream Recap: How to Build a TypeScript MCP Client

If you're experimenting with AI agents and looking to move beyond simple LLM prompts, this guide is for you. In this session, Dino, Staff Engineer at Hygraph and the lead behind our AI Labs initiative, walks you through how to build a working TypeScript MCP client from scratch.

We’re going hands-on. No fluff, just real code. By the end, you’ll understand how to connect an AI model with external tools and APIs using the Model Context Protocol (MCP) and why that matters for building powerful AI-native applications.

Editor's Note

Bonus: You can watch the full livestream here to follow along with Dino’s live-coding session.

#What we’re building

You’ll create a basic MCP client that allows an AI agent to:

  • Receive a user message
  • Dynamically query an MCP server
  • Use an external tool like a weather API
  • Return a structured, intelligent response

All using TypeScript, Node.js, and a simple frontend.

#What is MCP and why should you care?

MCP (Model Context Protocol) is a standard that lets LLMs discover and use external tools dynamically through an MCP client. Instead of hardcoding a fixed set of functions, you expose a list of tools at runtime, and the AI knows what it can call based on the server’s capabilities.

MCP unlocks:

  • Dynamic tool discovery
  • Standardized schemas
  • Plug-and-play agent extensions

In short, it gives your AI superpowers without relying on brittle prompt injection or custom wrappers.
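
To make "standardized schemas" concrete, here is roughly the shape of a single entry an MCP server returns when a client asks it to list its tools: a name, a human-readable description, and a JSON Schema for the tool's input. This is a minimal illustrative sketch, not code from the livestream.

```typescript
// Sketch: the kind of entry an MCP server advertises when its tools are listed.
// The AI reads this at runtime to learn what it can call and with what arguments.
const getWeatherTool = {
  name: "getWeather",
  description: "Get the current weather for a city",
  inputSchema: {
    type: "object",
    properties: {
      location: { type: "string", description: "City name, e.g. Berlin" },
    },
    required: ["location"],
  },
} as const;
```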

#Step 1: Build the MCP server

Start by creating a small server that exposes two tools:

  • getWeather (calls a weather API)
  • searchWeb (mocked to show error handling)

Here’s what you need:

  • Define a /tools/list endpoint that returns JSON schema for each tool
  • Implement a /tools/call endpoint that parses the AI’s request and performs the right action

Dino used the OpenMeteo API for weather data and wrote basic logic to transform locations into coordinates, query the forecast, and return readable results.

Each tool’s schema should clearly describe what the tool does and what input it expects, so the AI knows when and how to call it.
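
As a rough sketch of this step, here is one way to expose those two tools with the official @modelcontextprotocol/sdk over stdio (the SDK answers the tools/list and tools/call requests for you, so you only register the tools and their schemas). The livestream's hand-rolled version may differ, and the Open-Meteo calls and error handling below are deliberately simplified.

```typescript
// mcp-server.ts: minimal sketch, not the exact livestream code.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "demo-tools", version: "0.1.0" });

// getWeather: geocode the location, then fetch the current conditions from Open-Meteo.
server.tool(
  "getWeather",
  "Get the current weather for a city",
  { location: z.string().describe("City name, e.g. Berlin") },
  async ({ location }) => {
    const geo = await fetch(
      `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(location)}&count=1`
    ).then((r) => r.json());
    const place = geo.results?.[0];
    if (!place) {
      return { content: [{ type: "text", text: `Could not find "${location}".` }], isError: true };
    }
    const forecast = await fetch(
      `https://api.open-meteo.com/v1/forecast?latitude=${place.latitude}&longitude=${place.longitude}&current_weather=true`
    ).then((r) => r.json());
    const { temperature, windspeed } = forecast.current_weather;
    return {
      content: [{ type: "text", text: `${place.name}: ${temperature}°C, wind ${windspeed} km/h` }],
    };
  }
);

// searchWeb: intentionally mocked so you can see how tool errors flow back to the AI.
server.tool(
  "searchWeb",
  "Search the web (mocked)",
  { query: z.string() },
  async () => ({ content: [{ type: "text", text: "Search is not implemented." }], isError: true })
);

// The SDK handles tool listing and invocation; we only choose the transport.
await server.connect(new StdioServerTransport());
```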

#Step 2: Set up the client in TypeScript

Next, you'll build the MCP client, which is technically your Node.js backend. It:

  • Accepts messages from the user via HTTP
  • Sends messages to the AI model (in this case, AWS Bedrock running Claude)
  • Parses responses and checks for tool calls
  • If needed, calls the MCP server and sends results back to the AI for final output

The conversation loop looks like this:

  1. User says “What’s the weather in Berlin?”
  2. AI responds: “I’ll use the weather tool”
  3. AI emits tool call: getWeather({ location: "Berlin" })
  4. Node.js client invokes the MCP server and sends results back to the AI
  5. AI finalizes the response and replies to the user

Important: Dino used stdin/stdout as the transport protocol between client and MCP server for simplicity. MCP also supports HTTP if you're going the API route.
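
Below is a condensed sketch of that loop, assuming the AWS Bedrock Converse API and the MCP TypeScript SDK's stdio client; model ID, region, and error handling are simplified and this is not the exact livestream code.

```typescript
// agent.ts: condensed sketch of the conversation loop, not the exact livestream code.
import { BedrockRuntimeClient, ConverseCommand, type Message } from "@aws-sdk/client-bedrock-runtime";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const bedrock = new BedrockRuntimeClient({ region: "eu-central-1" });

// Spawn the MCP server from Step 1 as a child process and talk to it over stdin/stdout.
const mcp = new Client({ name: "mcp-client", version: "0.1.0" });
await mcp.connect(new StdioClientTransport({ command: "node", args: ["dist/mcp-server.js"] }));

// Translate MCP tool schemas into the shape the AI provider expects (Bedrock's toolSpec).
const { tools } = await mcp.listTools();
const toolConfig = {
  tools: tools.map((t) => ({
    toolSpec: {
      name: t.name,
      description: t.description,
      inputSchema: { json: t.inputSchema as any }, // both sides speak JSON Schema here
    },
  })),
};

// Keep calling the model until it stops asking for tools, then return its final message.
export async function runAgent(messages: Message[]): Promise<Message> {
  while (true) {
    const { output, stopReason } = await bedrock.send(
      new ConverseCommand({
        modelId: "anthropic.claude-3-5-sonnet-20240620-v1:0", // any Claude model on Bedrock
        messages,
        toolConfig,
      })
    );
    const reply = output!.message!;
    messages.push(reply);
    if (stopReason !== "tool_use") return reply; // final answer for the user

    // Run every tool the model asked for and append the results as toolResult blocks.
    const toolResults: NonNullable<Message["content"]> = [];
    for (const block of reply.content ?? []) {
      if (!block.toolUse) continue;
      const result = await mcp.callTool({
        name: block.toolUse.name!,
        arguments: block.toolUse.input as Record<string, unknown>,
      });
      toolResults.push({
        toolResult: {
          toolUseId: block.toolUse.toolUseId!,
          content: [{ text: JSON.stringify(result.content) }],
        },
      });
    }
    messages.push({ role: "user", content: toolResults });
  }
}
```

The key design point is the while loop: tool results keep getting appended to the conversation until the model replies without requesting another tool.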

#Step 3: Connect it all

Dino wired this all up in a lightweight Express app:

  • /api/conversations manages in-memory chats
  • /api/message triggers the AI+MCP pipeline
  • Frontend is React, but very minimal

He also included logic to:

  • Translate MCP tool schemas to match the AI provider’s expectations
  • Handle multiple tools in a single conversation
  • Filter/structure messages based on roles (user, assistant, tool)

⚠️ Tip: Always manage the full conversation server-side to prevent prompt injection attacks. Don’t expose the full message history to the frontend.
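
A minimal sketch of that wiring might look like the following, reusing the runAgent helper from the previous sketch. Only the assistant's final text is returned to the browser; the full message history stays in server memory, in line with the tip above.

```typescript
// server.ts: minimal Express wiring sketch, reusing the runAgent helper sketched above.
import express from "express";
import { randomUUID } from "node:crypto";
import type { Message } from "@aws-sdk/client-bedrock-runtime";
import { runAgent } from "./agent.js";

const app = express();
app.use(express.json());

// In-memory conversation store. The full history never leaves the server,
// which limits what an injected prompt or a curious user can see or tamper with.
const conversations = new Map<string, Message[]>();

app.post("/api/conversations", (_req, res) => {
  const id = randomUUID();
  conversations.set(id, []);
  res.json({ id });
});

app.post("/api/message", async (req, res) => {
  const { conversationId, text } = req.body as { conversationId: string; text: string };
  const history = conversations.get(conversationId);
  if (!history) return res.status(404).json({ error: "Unknown conversation" });

  history.push({ role: "user", content: [{ text }] });
  const reply = await runAgent(history); // the AI + MCP pipeline from Step 2

  // Return only the assistant's final text, never the raw message history.
  const answer = reply.content?.find((block) => block.text)?.text ?? "";
  res.json({ answer });
});

app.listen(3000, () => console.log("Agent backend listening on :3000"));
```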

#Step 4: Extend with Hygraph MCP

At the end of the livestream, Dino showed how easy it is to plug in Hygraph’s own MCP server. With one line added to your config, your AI agent can start accessing content models, running queries, and manipulating schema inside a real CMS.
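
What that looks like in practice depends on how your client loads its servers; a hypothetical config for the client sketched above might be a small list with one entry per MCP server. The Hygraph command below is a placeholder, not the real package name; check Hygraph's MCP documentation for the actual value.

```typescript
// Hypothetical client config: one entry per MCP server the agent should connect to.
// The Hygraph entry is illustrative only; see Hygraph's MCP docs for the real command/URL.
const mcpServers = [
  { name: "local-tools", command: "node", args: ["dist/mcp-server.js"] },
  { name: "hygraph", command: "npx", args: ["<hygraph-mcp-server>"] }, // the one added line
];
```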

If you’re building anything that touches content or structured data, this is where things get really exciting.

#Why this matters

LLMs are only as smart as the context and tools you give them. By integrating with MCP, your AI agents gain real-world awareness, decision-making power, and dynamic capabilities that go far beyond chat.

Whether you’re building internal automations, dev tools, or next-gen CMS experiences, the TypeScript MCP client approach gives you full control and flexibility.

Want to see it in action? Watch the livestream with Dino to follow along as he builds and debugs live.

Curious about AI-native content management?
Check out hygraph.ai to explore our vision for building with AI and structured content at the core.

Blog Author

Enxhi Hamzallari

Sr. Field Marketing Manager

Enxhi is the Senior Field Marketing Manager at Hygraph. When she’s not bringing people together through content and events, you’ll find her dancing the night away or cheering on her favorite drag queens.
