Frequently Asked Questions

GraphQL Schema Stitching & Technical Concepts

What is GraphQL schema stitching?

GraphQL schema stitching is the process of combining multiple schemas from various APIs into a single schema or API. This allows you to resolve data across different databases or services through a unified GraphQL endpoint, making it easier to query and manage distributed data sources. (Source)

How does schema stitching work in practice?

Schema stitching typically follows a four-step process: 1) Introspect the remote APIs to understand their schema structure, 2) Handle type name collisions, 3) Associate which fields get added to which types, and 4) Resolve the data. This enables you to combine data from different sources, such as a content API, weather API, and business listings API, into a single GraphQL endpoint. (Source)
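For illustration, here is a minimal sketch of steps 1 and 4 using apollo-link-http, node-fetch, and the graphql-tools v4-era API from the tutorial below; the endpoint URLs are placeholders, and steps 2 and 3 are covered in the answers that follow.

import { HttpLink } from 'apollo-link-http';
import fetch from 'node-fetch';
import {
  introspectSchema,
  makeRemoteExecutableSchema,
  mergeSchemas,
} from 'graphql-tools';

// 1) Introspect a remote API and wrap it as an executable schema.
const loadRemoteSchema = async (uri) => {
  const link = new HttpLink({ uri, fetch });
  const schema = await introspectSchema(link);
  return makeRemoteExecutableSchema({ schema, link });
};

// 4) Merge any number of executable schemas into one.
const buildGateway = async () => {
  const schemas = await Promise.all([
    loadRemoteSchema('https://content.example.com/graphql'),
    loadRemoteSchema('https://weather.example.com/graphql'),
  ]);
  return mergeSchemas({ schemas });
};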

When should you avoid using schema stitching?

Schema stitching may not be ideal if your endpoints are not versioned or reliable, if a single endpoint could become a single point of failure, if there are differences in data TTL (time-to-live), or if performance is critical and REST APIs could be optimized for better server-to-server communication. (Source)

What are the main benefits of schema stitching?

Schema stitching allows developers to combine multiple data sets into a single, distributable, explorable, and maintainable API. It is particularly useful for MVP projects or one-off sites, enabling fast development while staying within the GraphQL ecosystem and minimizing technical context switching. (Source)

What are some practical examples of schema stitching with Hygraph?

In the Hygraph blog tutorial, schema stitching is used to combine a Geocode Weather API, Yelp API, and a Hygraph-powered content API. This setup enables querying for conferences (from Hygraph), their locations (from the weather API), and nearby hotels or restaurants (from Yelp) in a single GraphQL query. (Source)

What tools and libraries are recommended for schema stitching?

The Hygraph tutorial recommends using Apollo Server, apollo-link-http, node-fetch, and graphql-tools for schema stitching. These tools help with introspection, merging schemas, transforming types, and resolving conflicts. (Source)

How do you handle naming conflicts when stitching schemas?

Naming conflicts are handled by transforming the schemas before merging. For example, you can use prefixes or rename types and root fields using utilities like RenameTypes and RenameRootFields from graphql-tools. This ensures that types with the same name from different APIs do not collide. (Source)
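As a minimal sketch (assuming an executable schema produced by makeRemoteExecutableSchema, and reusing the Location collision from the tutorial below):

import { transformSchema, RenameTypes, RenameRootFields } from 'graphql-tools';

// Prefix the colliding type and root field of one API before merging,
// so they can no longer clash with the same names from another API.
const prefixedSchema = transformSchema(executableSchema, [
  new RenameTypes((name) =>
    name === 'Location' ? `GCMS_${name}` : name),
  new RenameRootFields((operation, name) =>
    name === 'location' ? `GCMS_${name}` : name),
]);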

What is the role of resolvers in schema stitching?

Resolvers define how to fetch and combine data from different schemas. In schema stitching, resolvers can delegate queries to the appropriate remote schema and handle the logic for connecting fields across APIs. This enables seamless data retrieval from multiple sources in a single query. (Source)
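As a hedged sketch using the graphql-tools v4 API and the Conference/location example from the tutorial below (contentSchema, weatherSchema, and linkTypeDefs stand in for the executable schemas and link type definitions built earlier):

import { mergeSchemas } from 'graphql-tools';

const schema = mergeSchemas({
  schemas: [contentSchema, weatherSchema, linkTypeDefs],
  resolvers: {
    Conference: {
      location: {
        // Fields from the parent type that the resolver needs.
        fragment: `... on Conference { city, country }`,
        resolve(conference, args, context, info) {
          // Hand the field off to the remote weather schema.
          return info.mergeInfo.delegateToSchema({
            schema: weatherSchema,
            operation: 'query',
            fieldName: 'location',
            args: { place: `${conference.city}, ${conference.country}` },
            context,
            info,
          });
        },
      },
    },
  },
});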

Can you use schema stitching to combine Hygraph with other APIs?

Yes, schema stitching can be used to combine Hygraph's content API with other APIs, such as weather or business listing APIs, to create a unified GraphQL endpoint that aggregates data from multiple sources. (Source)

What are some alternatives to schema stitching?

Alternatives to schema stitching include GraphQL federation and content federation. While schema stitching combines schemas at the gateway level, federation allows for distributed schema composition and is often used for more complex, large-scale architectures. (Source)

Features & Capabilities

What features does Hygraph offer for API integration?

Hygraph offers robust GraphQL APIs, content federation, and integration capabilities with third-party systems. It supports schema stitching, remote source connections, and provides a high-performance content API for low latency and high throughput. (API Reference)

Does Hygraph support content federation?

Yes, Hygraph supports content federation, allowing you to integrate multiple data sources without duplication and deliver consistent content across channels. This is particularly useful for global teams and complex digital ecosystems. (Source)

What types of APIs does Hygraph provide?

Hygraph provides several APIs, including a Content API (read & write), High Performance Content API (optimized for low latency), MCP Server API (for AI assistants), Asset Upload API, and a Management API for project structure. (API Reference)

What integrations are available with Hygraph?

Hygraph integrates with digital asset management systems (e.g., Aprimo, AWS S3, Bynder, Cloudinary, Imgix, Mux, Scaleflex Filerobot), Adminix, Plasmic, and supports custom integrations via SDKs and APIs. The Hygraph Marketplace offers pre-built apps for commerce, PIMs, and more. (Integrations Documentation)

How does Hygraph ensure high performance for content delivery?

Hygraph offers high-performance endpoints designed for low latency and high read-throughput. The platform actively measures API performance and provides best practices for optimization, as detailed in the GraphQL Report 2024. (Performance Blog)

What technical documentation is available for Hygraph?

Hygraph provides comprehensive technical documentation, including API references, schema components, references, webhooks, and AI integrations. Resources are available at Hygraph Documentation.

Use Cases & Benefits

Who can benefit from using Hygraph?

Hygraph is designed for developers, product managers, content creators, marketing professionals, and solutions architects. It is suitable for enterprises, agencies, eCommerce platforms, media companies, technology firms, and global brands. (Case Studies)

What industries are represented in Hygraph's case studies?

Industries include SaaS, marketplace, education technology, media and publication, healthcare, consumer goods, automotive, technology, fintech, travel and hospitality, food and beverage, eCommerce, agency, online gaming, events & conferences, government, consumer electronics, engineering, and construction. (Case Studies)

What business impact can customers expect from using Hygraph?

Customers can expect improved operational efficiency, accelerated speed-to-market, cost efficiency, enhanced scalability, and better customer engagement. For example, Komax achieved a 3x faster time-to-market, and Samsung improved customer engagement by 15%. (Komax, Samsung)

Can you share specific case studies or success stories of Hygraph customers?

Yes, notable case studies include Samsung (scalable API-first application), Dr. Oetker (MACH architecture), Komax (3x faster time to market), AutoWeb (20% increase in monetization), BioCentury (accelerated publishing), Voi (multilingual scaling), HolidayCheck (reduced developer bottlenecks), and Lindex Group (accelerated global content delivery). (Case Studies)

What are some use cases relevant to the pain points Hygraph solves?

Operational: HolidayCheck reduced developer bottlenecks; Dr. Oetker adopted MACH architecture; Si Vale streamlined content creation. Financial: Komax achieved faster launches and lower costs; Samsung scaled globally while reducing maintenance. Technical: Hygraph case studies highlight simplified development and robust integrations. (Case Studies)

How long does it take to implement Hygraph?

Implementation time varies by project complexity. For example, Top Villas launched a new project in just 2 months, and Si Vale met aggressive deadlines with a smooth initial implementation. (Top Villas, Si Vale)

How easy is it to start using Hygraph?

Hygraph offers a free API playground, a free forever developer account, structured onboarding, training resources, extensive documentation, and a community Slack channel for support. (Documentation)

Pricing & Plans

What does the Hygraph Hobby plan cost?

The Hobby plan is free forever and is ideal for individuals working on personal projects or exploring the platform. It includes 2 locales, 3 seats, 2 standard roles, 10 components, unlimited asset storage, 50MB per asset upload, live preview, and commenting workflow. (Pricing)

What features are included in the Growth plan?

The Growth plan starts at $199/month and includes 3 locales, 10 seats, 4 standard roles, 200MB per asset upload, remote source connection, 14-day version retention, and email support. (Pricing)

What is included in the Hygraph Enterprise plan?

The Enterprise plan offers custom pricing and includes custom limits on users, roles, entries, locales, API calls, components, and more. It features scheduled publishing, dedicated infrastructure, global CDN, security controls, SSO, multitenancy, backup recovery, custom workflows, dedicated support, and custom SLAs. (Pricing)

How can I get started with a Hygraph plan?

You can sign up for the Hobby or Growth plan directly on the Hygraph website, or request a demo and a 30-day trial for the Enterprise plan. (Pricing)

Security & Compliance

What security certifications does Hygraph have?

Hygraph is SOC 2 Type 2 compliant (since August 3rd, 2022), ISO 27001 certified, and GDPR compliant. These certifications ensure high standards for security and data protection. (Security Features)

How does Hygraph ensure data security and compliance?

Hygraph provides granular permissions, audit logs, SSO integrations, encryption at rest and in transit, regular backups, and dedicated hosting options. It uses ISO 27001-certified providers and offers a process for reporting security incidents. (Security Features)

Customer Experience & Support

What feedback have customers given about Hygraph's ease of use?

Customers praise Hygraph for its intuitive user interface, ease of setup, and ability for non-technical users to manage content independently. Real-time changes and custom app integrations are also highlighted. (Try Hygraph, Enterprise)

What support resources are available for Hygraph users?

Hygraph offers extensive documentation, webinars, live streams, how-to videos, and a community Slack channel for support and knowledge sharing. (Documentation)

Competition & Market Position

How does Hygraph differentiate itself from other CMS platforms?

Hygraph is the first GraphQL-native Headless CMS, offering content federation, user-friendly tools, enterprise-grade features, and proven ROI. It ranked 2nd out of 102 Headless CMSs in the G2 Summer 2025 report and was voted the easiest to implement headless CMS for the fourth time. (Case Studies, G2 Report)

Why choose Hygraph over alternatives like Contentful or Sanity?

Hygraph stands out with its GraphQL-native architecture, content federation, cost efficiency, accelerated speed-to-market, robust integration capabilities, and enterprise-grade security. Its unique approach to schema evolution and user-friendly tools make it ideal for modern digital teams. (Case Studies)

Pain Points & Solutions

What core problems does Hygraph solve?

Hygraph addresses operational inefficiencies (eliminating developer dependency), modernizes legacy tech stacks, ensures content consistency, improves workflows, reduces costs, accelerates speed-to-market, simplifies schema evolution, and enhances localization and asset management. (Case Studies)

What pain points do Hygraph customers commonly express?

Customers often mention developer dependency, legacy tech stack challenges, content inconsistency, workflow inefficiencies, high operational costs, slow speed-to-market, scalability issues, complex schema evolution, integration difficulties, performance bottlenecks, and localization challenges. (Case Studies)


GraphQL Schema Stitching and Enhancing Content APIs

Have you heard about schema stitching yet? Wondering what it is, when to use it, or why? This is the post to get you started.
Written by Jesse Martin

Jun 05, 2019
Schema Stitching with a Headless CMS

UPDATE: While not entirely the same thing, there’s a new kid on the block for composing schemas called federation. We’ll be dropping our take on that in a few weeks. For now, enjoy the content below for a deep-dive into the land of GraphQL schemas.

GraphQL schema stitching is an excellent way to enhance APIs with a wealth of extra data. The concept is simple and the execution is straightforward. How it happens, however, is anything but: what lies beneath is a complex game of delayed requests, resolver assignments and more. Thanks to the helpful folks at Apollo, though, we can let the tooling do the heavy lifting and get the benefits of schema stitching with minimal effort.

#The High-Level Take on GraphQL Stitching

Schema stitching is the process of combining multiple schemas from various APIs into a single schema / API. GraphQL is a fantastic technology that allows the server to resolve our data across tables via a clean query syntax, but what happens when those tables are actually different databases altogether, living on different servers?

Wouldn’t it be great if we could resolve a list of hotels in one database with a list of regional activities in another through a single query? That’s what schema stitching allows us to do.
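In query form, the dream looks something like this (all field names here are purely illustrative):

query {
  hotels {
    name
    # Resolved from a different database, through the same endpoint.
    regionalActivities {
      title
    }
  }
}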

Schema stitching follows a four-step process:

  1. Introspect the remote APIs. (Finding out what schema structure you have to work with.)
  2. Handle type name collisions.
  3. Associate which fields get added to which types.
  4. Resolve the data.

#When to Ditch the Stitch

It's not always a good idea to stitch your schemas together. Here are some reasons why you might not want to:

  1. The endpoints are not versioned or reliable and might change on you without proper notice.
  2. One endpoint for all your data also means one endpoint to take down the project.
  3. Differences in TTL (time-to-live) for your data.
  4. Performance is a critical factor; REST can be optimized for better performance in server-to-server communication.

With the gotchas in mind, stitching is a great way to combine multiple data sets into a single distributable, explorable and maintainable API. Particularly in the API architecture for MVP projects or one-off sites, it's a great way to get developers up and running fast while staying in the GraphQL ecosystem and avoiding the overhead of technical context switching.

#Let us Begin

Our goal today is to combine three different APIs. We will combine a Geocode Weather API, Yelp and a content API powered by Hygraph.


Our data desires for the APIs are as follows:

  • Hygraph

    • Resource: Contains a list of conferences as well as their location
    • Shape: There are a number of fields present that pertain to Conferences such as date, Call-for-Paper deadlines, etc. Critical for our needs are the city, country and start date fields on the Conference type.
  • Geocode Weather (GeocodeQL)

    • Resource: Allows us to do a reverse lookup from the string location info and resolve that against current and historical weather data. If you are unfamiliar with reverse lookup, it is similar to a VLOOKUP in Excel: you give a known string value for a place, and it looks for the latitude (Lat) and longitude (Lon) coordinates that correspond to that location. Basically, you give the value “Seattle, United States” and it gives you a coordinate in the pattern of x.xxxxx, x.xxxxx. (See the sample query after this list.)
    • Shape: The primary field we are concerned with here is “location”, which takes a “place” argument, which will be our string value. Internally it will resolve fields such as Lat, Lon, Weather, Moon Phase, etc.
  • Yelp

    • Resource: Also contains a location field that resolves with fields such as nearby restaurants, hotels, etc.
    • Shape: The type we are concerned with here is called Businesses.
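For illustration, a reverse lookup against the weather API might look something like the following (the lat and lon field names are assumptions based on the shape described above):

query {
  location(place: "Seattle, United States") {
    lat
    lon
  }
}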

Steps

Now that you have all the requirements handled and we understand the goal of this tutorial, let’s begin. If you’d like to skip to the end, clone this repo to get started even faster.

Step 0: Build Tooling

The first step is to get our project directory created and ready to go. To begin, run the following command in the root directory where you’d like to create a new project. We will be using yarn, but you can use whichever package manager you would like.

yarn init

Follow the prompts to get your package.json file configured.

Since we will want to work with modern JavaScript, we are going to utilize some ES6 compile scripts with Babel.

Create a new file called .babelrc (note the leading period; it’s important). Your .babelrc file should contain the following content:

{
  "presets": ["@babel/preset-env"]
}

You’ll need to install this dependency as well, along with a few other Babel dependencies that we’ll get to in a minute, so we’ll add them now.

yarn add -D \
@babel/core \
@babel/cli \
@babel/preset-env \
@babel/node \
@babel/register
yarn add @babel/polyfill

To utilise Babel, we need to set up some development scripts. Inside of package.json, add or replace a property called “scripts” with the following code:

"scripts": {
"clean": "rm -rf build && mkdir build",
"build-babel": "babel -d ./build ./src -s",
"build": "npm run clean && npm run build-babel",
"start": "npm run build && node ./build/app.js",
"dev": "./node_modules/.bin/nodemon --exec babel-node ./src/app.js"
},

This will let us create builds of our server code that run on any platform that supports Node.js, and it gives us a development script that will restart the server whenever our code changes. Notice we’ve added another dependency, nodemon, so let’s add that with the following command:

yarn add -D nodemon

Create a new folder at the root of your project called src and navigate inside. Create two files as indicated below.

src/
|-app.js
|-index.js

Inside of app.js we will import our ES6 code, which allows Babel to transpile the code into ES5 syntax. Open app.js and include the following code:

import "babel-core/register";
import "babel-polyfill";
import "./index.js";

That’s it for this file; we won’t be revisiting it again. Its entire job is to tell Babel to work its magic on the index imported at the bottom.

Step 1: Create the Server

We will begin with a server. There are a large number of server libraries out there, but since we will be working with other Apollo libraries, and because Apollo gives us a GraphiQL interface for our web requests for free, we’ll use Apollo Server to begin with.

import { ApolloServer } from 'apollo-server';

// Server Function
async function run(){
  // Server Code - the end.
  const server = new ApolloServer();
  server.listen(8000).then(({ url }) => {
    console.log(`🚀 Server ready at ${url}`);
  });
}

try {
  console.log('get ready')
  run()
} catch (e) {
  console.log(e)
}

This code won’t do anything yet, though, since we have no schema to pass to the ApolloServer.

Step 2: Credentials

Next, we’ll define our endpoints and our credentials. We’ll be using the dotenv library to read our secret auth tokens and authorize our requests against the Yelp API.

yarn add dotenv

At the root of our project (not in src) we’ll create a new file called .env

You’ll need to get an auth token to use with the Yelp API; you can follow the instructions here to get one.

Inside of the .env file, add the following content, being sure to replace “YOUR_YELP_TOKEN” with the one you get from the process above.

YELP_TOKEN=YOUR_YELP_TOKEN

This would be a good time to ensure we don’t push our credentials or other cruft to our repository. Let’s create a new file (at the root) called .gitignore, which will specify names or patterns of names to avoid publishing. Add the following code to the .gitignore file.

node_modules/
.env
build/

Now, inside of our index.js file, we’ll add a line of code requiring our credentials and making them available for later use. At the top of the file, add the following line:

require('dotenv').config()

Additionally, we’ll add some constants for our API strings.

const WEATHER_API = 'https://localhost:9000';
const MY_API = 'https://api-euwest.hygraph.com/v1/cjslyzurw378n01bs1c3ip1ds/master';
const YELP_API = 'https://api.yelp.com/v3/graphql';

Note that if you are not running your weather API locally, you’ll need to swap in the URL for wherever you have that server running.

Our index.js file should now look like the following:

require("dotenv").config()
import { ApolloServer } from 'apollo-server';
// Our APIs
const WEATHER_API = 'https://localhost:9000';
const MY_API = 'https://api-euwest.hygraph.com/v1/cjslyzurw378n01bs1c3ip1ds/master';
const YELP_API = 'https://api.yelp.com/v3/graphql';
// Server Function
async function run(){
// Server Code - the end.
const server = new ApolloServer({ schema });
server.listen(8000).then(({ url }) => {
console.log(`? Server ready at ${url}`);
});
}
try {
console.log('get ready')
run()
} catch (e) {
console.log(e)
}

Step 3: Inspecting our APIs

As mentioned above, a critical step in API stitching is inspecting the types and fields that are available at the various endpoints. This makes the API available for transforming with additional Apollo tooling. To begin, we’ll create a helper function that will let us inspect the remote APIs and make them available for processing.

/* First we need to fetch our remote APIs,
inspect their content and then use
Apollo to merge their schemas. */
const createRemoteSchema = async (uri, settings) => {
  const config = { uri: uri, fetch, ...settings };
  try {
    const link = new HttpLink(config);
    // Introspection is what gives us
    // the self-documenting magic of GraphQL
    const schema = await introspectSchema(link);
    return makeRemoteExecutableSchema({
      schema,
      link,
    });
  } catch (error) {
    console.log(error)
  }
};

If you’re reading carefully, you’ll note two new dependencies called HttpLink and fetch - let’s add them now.

yarn add apollo-link-http node-fetch

We’ll import them below our ApolloServer imports.

import { HttpLink } from 'apollo-link-http';
import fetch from 'node-fetch';

Next, we’ll use the helper function to make those APIs available. For the Yelp API, we’ll pass our credentials from the .env file, reading them from the process.env global on our server (they won’t be available to our front-end).

// Process the APIs
const remoteWeatherAPI = await createRemoteSchema(WEATHER_API)
const myRemoteAPI = await createRemoteSchema(MY_API)
const remoteYelp = await createRemoteSchema(YELP_API, {
  credentials: "include",
  headers: {
    "Authorization": `Bearer ${process.env.YELP_TOKEN}`
  }
})

Step 4: Handle Conflicts

Since we are combining multiple APIs that are likely maintained by different teams, if not different companies, we need to check for name collisions. Apollo’s default behaviour in the case of a naming conflict is simply to take the last API in as the final definition. We can account for those conflicts by transforming the APIs before we merge them in our final step.

We’ll be transforming both our own content API as well as the Yelp API since they each have a naming collision with the Location type.

/* Here I rename some more collisions around the name 'location',
but I also remove all non-query operations just to keep things
cleaner. Those come from our Hygraph API. */
const myTransformedAPI = transformSchema(myRemoteAPI, [
  new FilterRootFields(
    (operation, rootField) => operation === 'Query'
  ),
  new RenameTypes((name) =>
    name === 'Location' ? `GCMS_${name}` : name),
  new RenameRootFields((operation, name) =>
    name === 'location' ? `GCMS_${name}` : name),
]);

const yelpTransformedAPI = transformSchema(remoteYelp, [
  new FilterRootFields(
    (operation, rootField) => operation === 'Query'
  ),
  new RenameTypes((name) =>
    name === 'Location' ? `Place` : name),
  new RenameRootFields((operation, name) =>
    name === 'location' ? `place` : name),
]);

In the first example, I simply add a prefix to the type, though it’s possible to completely rename the type, as is the case for the Yelp schema. We need to handle collisions on both the type name as well as the root field query.

Again, we’ve added a new dependency.

yarn add graphql-tools

Let’s import them below our Apollo imports. We’re including a few more helpers as well that will be used in our next steps.

import {
  makeRemoteExecutableSchema,
  mergeSchemas,
  transformSchema,
  FilterRootFields,
  RenameTypes,
  RenameRootFields,
  introspectSchema
} from 'graphql-tools';

Step 5: Create Relationships

The final phase before merging the schemas is to create a new schema that connects our different schemas together. We’ll add this block, written in the Schema Definition Language (SDL):

/* This is an important step, it lets us tell the schema
which fields should be connected between the schemas. */
const linkTypeDefs = `
extend type Conference {
location: Location
}
extend type Location {
hotels: Businesses
food: Businesses
}
`;

Here we’ve told our Conference type coming from Hygraph to add a new location field that resolves to the Location type from our weather API. We’ve told the Location type to include hotels and food fields that connect to Yelp’s Businesses type.

Step 6: Merge the Schemas

The moment has come: we finally merge the schemas together with the following method.

const schema = mergeSchemas({
  schemas: [...],
  resolvers: {...}
});

The schemas property is simply a list of our final transformed APIs.

// Merge these Schemas
schemas: [
  remoteWeatherAPI,
  myTransformedAPI,
  yelpTransformedAPI,
  linkTypeDefs,
],

The resolvers property is a map of types with their fields and the logic to be applied when resolving the data. Let’s break down one of them.

resolvers: {
  // Which type gets the new fields
  Conference: {
    // Which field (defined in our linkTypeDefs)
    location: {
      // What's the 'value' we will pass
      // in from our existing Schema,
      // effectively a subquery.
      fragment:
        `... on Conference { city, country, startDate }`,
      resolve(response, args, context, info) {
        return info.mergeInfo.delegateToSchema({
          // Which Schema returns the data for the field above
          schema: remoteWeatherAPI,
          // What's the operation it should perform
          operation: 'query',
          // Which field is it querying ON the delegated Schema?
          fieldName: 'location',
          // What arguments do we pass in -
          // from our query above, which is a JSON response?
          args: {
            place: `${response.city}, ${response.country}`,
            date: `${response.startDate}`
          },
          context,
          info,
          transforms: myTransformedAPI.transforms
        });
      },
    },
  },
}

So, our type Conference (from GCMS) has the field location as defined in our linkTypeDefs.

The location field essentially runs a subquery for city, country and startDate from Conference, passing that data into a resolver as the response. The resolve method takes the response, pulls the variables it needs off the response object and passes them as args to the schema it delegates to for this data, our remoteWeatherAPI schema. It defines the operation (a query, though it could also have been a mutation) and which field on the root Query type to check for the response data. It then merges in the remaining info of the schema: context, info and the transforms we defined above.

Add the remainder of our resolvers, along with the rest of our code, and we have a complete server delivering a stitched schema for our consumption.

require("dotenv").config()
import { ApolloServer } from 'apollo-server';
import { HttpLink } from 'apollo-link-http';
import {
makeRemoteExecutableSchema,
mergeSchemas,
transformSchema,
FilterRootFields,
RenameTypes,
RenameRootFields,
introspectSchema
} from 'graphql-tools';
import fetch from 'node-fetch';
// Our APIs
const WEATHER_API = 'https://localhost:9000';
const MY_API = 'https://api-euwest.hygraph.com/v1/cjslyzurw378n01bs1c3ip1ds/master';
const YELP_API = 'https://api.yelp.com/v3/graphql';
// Server Function
async function run(){
/* First we need to fetch our remote APIs,
inspect their content and then apply the use
Apollo to merge their schemas. */
const createRemoteSchema = async (uri,settings) => {
const config = {uri: uri, fetch, ...settings}
try {
const link = new HttpLink(config);
// Introspection is what gives us
//the self documenting magic of GraphQL
const schema = await introspectSchema(link);
return makeRemoteExecutableSchema({
schema,
link,
});
} catch (error) {
console.log(error)
}
};
// Process the APIs
const remoteWeatherAPI = await createRemoteSchema(WEATHER_API)
const myRemoteAPI = await createRemoteSchema(MY_API)
const remoteYelp = await createRemoteSchema(YELP_API, {
credentials: "include",
headers: {
"Authorization": `Bearer ${process.env.YELP_TOKEN}`
}
})
/* Here I rename some more collisions around the name 'location'
- but I also remove all non query operations just to keep things
cleaner. We see those from our Hygraph API. */
const myTransformedAPI = transformSchema(myRemoteAPI, [
new FilterRootFields(
(operation, rootField) => operation === 'Query'
),
new RenameTypes((name) =>
name === 'Location' ? `GCMS_${name}` : name),
new RenameRootFields((operation, name) =>
name === 'location' ? `GCMS_${name}` : name),
]);
const yelpTransformedAPI = transformSchema(remoteYelp, [
new FilterRootFields(
(operation, rootField) => operation === 'Query'
),
new RenameTypes((name) =>
name === 'Location' ? `Place` : name),
new RenameRootFields((operation, name) =>
name === 'location' ? `place` : name),
]);
/* This is an important step, it lets us tell the schema
which fields should be connected between the schemas. */
const linkTypeDefs = `
extend type Conference {
location: Location
}
extend type Location {
hotels: Businesses
food: Businesses
}
`;
/* Finally we merge the schemas but also add the resolvers
which tells GraphQL how to resolve our newly added fields. */
const schema = mergeSchemas({
// Merge these Schemas
schemas: [
remoteWeatherAPI,
myTransformedAPI,
yelpTransformedAPI,
linkTypeDefs,
],
// Resolve them here
resolvers: {
// Which type gets the new fields
Conference: {
// Which field
location: {
// What's the 'value' we will pass in from our existing Schema
fragment: `... on Conference { city, country, startDate }`,
resolve(response, args, context, info) {
return info.mergeInfo.delegateToSchema({
// Which Schema returns the data for the field above
schema: remoteWeatherAPI,
// What's the operation it should perform
operation: 'query',
// What field is is querying ON the delegated Schema?
fieldName: 'location',
// What arguments do we pass in -
// from our query above which is a JSON response?
args: {
place: `${response.city}, ${response.country}`,
date: `${response.startDate}`
},
context,
info,
transforms: myTransformedAPI.transforms
});
},
},
},
Location: {
// Which field
hotels: {
// What's the 'value' we will pass in from our existing Schema
fragment: `... on Conference { city, country }`,
resolve(response, args, context, info) {
return info.mergeInfo.delegateToSchema({
// Which Schema returns the data for the field above
schema: remoteYelp,
// What's the operation it should perform
operation: 'query',
// What field is is querying ON the delegated Schema?
fieldName: 'search',
// What arguments do we pass in -
// from our query above which is a JSON response?
args: {
location: `${response.city}, ${response.country}`,
term: "Hotels"
},
context,
info,
transforms: yelpTransformedAPI.transforms
});
},
},
food: {
// What's the 'value' we will pass in from our existing Schema
fragment: `... on Conference { city, country }`,
resolve(response, args, context, info) {
return info.mergeInfo.delegateToSchema({
// Which Schema returns the data for the field above
schema: remoteYelp,
// What's the operation it should perform
operation: 'query',
// What field is is querying ON the delegated Schema?
fieldName: 'search',
// What arguments do we pass in -
// from our query above which is a JSON response?
args: {
location: `${response.city}, ${response.country}`,
term: "Burgers"
},
context,
info,
// transforms: myTransformedAPI.transforms
});
},
}
}
}
});
// Server Code - the end.
const server = new ApolloServer({ schema });
server.listen(8000).then(({ url }) => {
console.log(`? Server ready at ${url}`);
});
}
try {
console.log('get ready')
run()
} catch (e) {
console.log(e)
}
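With the server running (yarn dev, then open the GraphiQL interface at http://localhost:8000), a single stitched query can pull from all three sources. The query below is a hedged example: the conferences root field, the lat and lon weather fields, and the business subfields on Yelp's Businesses type are assumptions based on the shapes described above.

query {
  conferences {
    city
    country
    startDate
    location {
      lat
      lon
      hotels {
        business {
          name
          rating
        }
      }
      food {
        business {
          name
        }
      }
    }
  }
}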

#Closing

If you’ve tracked with all of that, good for you! There was a lot of ground covered, but hopefully you found it helpful.

Again, not every situation (and perhaps not even most situations) calls for schema stitching. It is, however, a powerful tool in the GraphQL toolbox for the situations where it is needed, and Apollo makes it very straightforward to work with. Using tools like schema stitching allows you to combine features like a static content hub in Hygraph with real-time data like the weather.

As always, if you found something helpful or an error that needs to be corrected, let us know in the community Slack! We’re always hanging around. Thanks for reading!
