Frequently Asked Questions

API Limits & Technical Details

What are the API limits for Hygraph's GraphQL API?

Hygraph enforces API limits to ensure optimal performance and availability for all users. These limits include request size, requests per second, and concurrent operations. Limits vary by plan (Hobby, Growth, Enterprise) and are enforced on uncached GraphQL queries for shared regions. Limits can be lifted on dedicated clusters and enterprise plans. For more details, see the API Limits documentation.

What is the maximum request size for queries and mutations in Hygraph?

The maximum request size depends on your subscription plan:

  • Hobby: 10 KB for queries, 30 KB for mutations
  • Growth: 15 KB for queries, 70 KB for mutations
  • Enterprise: 20 KB for queries, 80 KB for mutations

Exceeding these limits results in a 413 error. Test your queries during development and break large requests into smaller, focused queries for best results.

How many uncached requests per second can I send to Hygraph's Content API?

The requests per second (RPS) limit varies by plan:

  • Hobby: 5 requests per second
  • Growth: 25 requests per second
  • Enterprise: up to 500 requests per second

Exceeding this limit results in a 429 error. Cached requests served via CDN are not subject to these limits.

What are the concurrency limits for queries and mutations in Hygraph?

Concurrency limits (the number of uncached GraphQL operations running simultaneously per environment) are:

  • Hobby: 10 queries, 5 mutations
  • Growth: 30 queries, 10 mutations
  • Enterprise: 60 queries, 20 mutations

Exceeding these limits also results in a 429 error. Use connection pooling and request queuing to manage load efficiently.

How do requests per second and concurrent operations differ in Hygraph?

Requests per second (RPS) measures how many new requests you start each second, while concurrent operations count how many queries or mutations are running at the same time, even if they came from the same request. You can stay within one limit while exceeding the other, so always design your queries and requests to remain under both limits. See examples in the documentation.

What strategies and best practices can help me avoid hitting Hygraph's API limits?

To avoid hitting API limits, Hygraph recommends:

  • Testing queries against the limits during development and breaking large queries into smaller, focused requests.
  • Using GraphQL fragments to avoid repetition, and pagination instead of long, complex variable filters.
  • Implementing retry logic with exponential backoff and distributing requests over time rather than in bursts.
  • Applying connection pooling and request queuing, and monitoring your application's concurrent request patterns.

For code examples and more details, see the API Limits documentation.

How do I handle API rate limits in Next.js, Gatsby, and Nuxt when using Hygraph?

Hygraph provides framework-specific strategies for handling API rate limits:

  • Next.js: disable multithreaded builds via the experimental config, throttle API calls with pThrottle, and retry with exponential backoff.
  • Gatsby: override the queryConcurrency option of the official source plugin and throttle fetches.
  • Nuxt: limit build concurrency and add a delay interval in nuxt.config.js.

See the documentation for code samples and more details.

Features & Capabilities

What performance features does Hygraph offer for content management and delivery?

Hygraph delivers exceptional performance through features like Smart Edge Cache for faster global content delivery, high-performance endpoints for reliability and speed, and measured GraphQL API performance. These features help businesses with high traffic and global audiences achieve optimal results. Learn more in the performance improvements blog post.

What security and compliance certifications does Hygraph have?

Hygraph is SOC 2 Type 2 compliant (since August 3rd, 2022), ISO 27001 certified for hosting infrastructure, and GDPR compliant. These certifications ensure robust data protection and adherence to international standards. For more details, visit the security features page and security report.

What are the key capabilities and benefits of Hygraph?

Hygraph offers operational efficiency (eliminates developer dependency, streamlines workflows), financial benefits (reduces costs, accelerates speed-to-market), technical advantages (GraphQL-native architecture, content federation, enterprise-grade security), and unique features like Smart Edge Cache, custom roles, rich text management, and project backups. Proven results include 3x faster time-to-market (Komax), 15% improved customer engagement (Samsung), and increased online revenue share (Stobag). Source: Hygraph Case Studies.

Pricing & Plans

What is Hygraph's pricing model?

Hygraph offers a Free Forever Developer Account, self-service plans (e.g., Growth Plan at $299/month or $199/month billed annually), and custom enterprise pricing starting at $900/month. Plans include 1,000 entries, with add-ons available for additional entries, locales, API calls, asset traffic, and content stages. For full details, visit the Hygraph Pricing Page.

Support & Implementation

How easy is it to get started with Hygraph, and what resources are available?

Hygraph makes onboarding simple with a free API Playground, Free Forever Developer Account, and structured onboarding (introduction call, account provisioning, business/technical/content kickoffs). Training resources include webinars, live streams, how-to videos, and extensive documentation (Hygraph Documentation). Customers can request a demo for larger projects.

What customer service and support options does Hygraph provide?

Hygraph offers 24/7 support via chat, email, and phone, real-time troubleshooting through Intercom chat, a community Slack channel (join here), extensive documentation, training resources, and a dedicated Customer Success Manager for enterprise customers. Structured onboarding ensures a smooth start for all users. Source: Hygraph Documentation.

How does Hygraph handle maintenance, upgrades, and troubleshooting?

Hygraph is a cloud-based platform that handles all deployment, updates, security, and infrastructure maintenance. Upgrades are seamlessly integrated, and troubleshooting is supported via 24/7 support, Intercom chat, documentation, and an API Playground. Enterprise customers receive a dedicated Customer Success Manager. Source: manual.

Use Cases & Benefits

Who is the target audience for Hygraph?

Hygraph is designed for developers, product managers, and marketing teams in industries such as ecommerce, automotive, technology, food and beverage, manufacturing, transportation, staffing, and science. It is ideal for organizations modernizing legacy tech stacks or requiring localization, asset management, and content federation. Source: ICPVersion2_Hailey.pdf.

What core problems does Hygraph solve?

Hygraph solves operational inefficiencies (developer dependency, legacy tech stack modernization, content inconsistency), financial challenges (high costs, slow speed-to-market, scalability), and technical issues (schema evolution, integration difficulties, cache and performance bottlenecks, localization, asset management). Source: Hailey Feed .pdf.

What business impact can customers expect from using Hygraph?

Customers can expect improved speed-to-market (e.g., Komax achieved 3x faster launches), enhanced customer engagement (Samsung saw a 15% increase), increased revenue (Stobag's online share rose from 15% to 70%), cost efficiency, scalability, and a future-proof solution. Source: Hygraph Case Studies.

Can you share specific case studies or success stories of customers using Hygraph?

Yes. Examples include:

  • Komax: 3x faster time-to-market.
  • Samsung: 15% improvement in customer engagement.
  • Stobag: online revenue share grew from 15% to 70%.

See more at the Hygraph Case Studies Page.

Competition & Comparison

Why should a customer choose Hygraph over alternatives?

Hygraph offers unique features (Smart Edge Cache, content federation, rich text management, custom roles, project backups), business benefits (speed-to-market, lower total cost of ownership, scalability), technical advantages (developer-friendly APIs, seamless integration, security/compliance), and proven success with global brands. Hygraph is best for organizations seeking a future-proof, composable, and scalable content management solution. Source: Hygraph Case Studies.


#API Limits

API limits are technical safeguards that ensure your GraphQL API performs optimally and remains available for all users. These limits guard against common problems, such as big requests, inefficient queries, or high traffic, that could otherwise impact performance.

#Request size

The maximum size of GraphQL queries and mutations, including the text and variables, as it reaches our API. This helps prevent oversized requests that could slow down your API. When you exceed this limit, you'll get a 413 error.

| Plan | Limit |
| --- | --- |
| Hobby | Queries: 10 KB. Mutations: 30 KB. |
| Growth | Queries: 15 KB. Mutations: 70 KB. |
| Enterprise | Queries: 20 KB. Mutations: 80 KB. |

Follow these steps to check your query or mutation request size.

  1. In your browser, go to Developer Tools > Network.
  2. Run the GraphQL request in the API Playground.
  3. Click on the request and check the Request Payload or Size column.
  4. Compare the size against our limits.
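You can also estimate the payload size in code before sending a request. The sketch below is illustrative: requestSizeBytes is a hypothetical helper, not part of any Hygraph SDK, and the 10 KB figure is the Hobby plan query limit from the table above.

```typescript
// Rough, illustrative size check for a GraphQL request body before sending.
// requestSizeBytes is a hypothetical helper, not part of any Hygraph SDK.
const HOBBY_QUERY_LIMIT_BYTES = 10 * 1024; // 10 KB query limit on the Hobby plan

function requestSizeBytes(query: string, variables?: Record<string, unknown>): number {
  // The limit applies to the request as it reaches the API:
  // the query text plus its serialized variables.
  const body = JSON.stringify({ query, variables });
  return new TextEncoder().encode(body).length;
}

const query = `query GetProduct($slug: String!) {
  product(where: { slug: $slug }) { name }
}`;

const size = requestSizeBytes(query, { slug: 'example-product' });
if (size > HOBBY_QUERY_LIMIT_BYTES) {
  console.warn(`Query payload is ${size} bytes; consider splitting it.`);
}
```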

We recommend that you:

  • Test your queries against limits during development.

  • Break large queries into smaller, focused requests. A single oversized query can usually be split into several smaller ones.

  • Use GraphQL fragments to avoid repeating the same selection sets.

  • Instead of relying on long and complex variable filters, use pagination to handle large data sets.

We recommend that you do not:

  • Fetch unnecessary fields in your queries.
  • Create overly complex nested queries.
  • Ignore limit violation errors without addressing root causes.
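To make the splitting, fragment, and pagination tactics concrete, here is a sketch with made-up model and field names (products, reviews, and Product are illustrative, not a real schema):

```graphql
# One oversized query fetching everything at once...
query Everything {
  products(first: 100) {
    id
    name
    description
    reviews {
      id
      rating
    }
  }
}

# ...can be split into smaller, focused queries sent separately:
query ProductList {
  products(first: 100) {
    id
    name
  }
}

query ProductReviews($id: ID!) {
  product(where: { id: $id }) {
    reviews {
      id
      rating
    }
  }
}

# A fragment avoids repeating the same selection set, and
# pagination (first/skip) replaces long variable filters:
fragment ProductFields on Product {
  id
  name
}

query FirstPage {
  products(first: 20, skip: 0) {
    ...ProductFields
  }
}

query NextPage {
  products(first: 20, skip: 20) {
    ...ProductFields
  }
}
```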

#Requests per second

The number of uncached requests you can send to the Content API per second. A single request can contain multiple queries and mutations. When you exceed this limit, you'll get a 429 error.

| Plan | Limit |
| --- | --- |
| Hobby | 5 req/sec |
| Growth | 25 req/sec |
| Enterprise | Up to 500 req/sec |

#Concurrent operations

The number of uncached GraphQL operations (queries / mutations) that run simultaneously per environment. Multiple operations bundled in a single request count toward the concurrency limit. This helps prevent resource exhaustion during traffic spikes. When you exceed this limit, you'll get a 429 error.

| Plan | Limit per environment |
| --- | --- |
| Hobby | Queries: 10. Mutations: 5. |
| Growth | Queries: 30. Mutations: 10. |
| Enterprise | Queries: 60. Mutations: 20. |

To stay below the limit, count the total number of queries and mutations sent simultaneously, not just the number of fired requests. If a request contains five queries, it counts as five concurrent queries. You can also measure the total number of in-flight operations at any moment by using a counter or a concurrency control library in your code.
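One way to keep a counter is to funnel every operation through a small concurrency gate. This is a sketch (ConcurrencyGate is a made-up helper, not a Hygraph utility); libraries such as p-limit provide the same pattern off the shelf.

```typescript
// Minimal concurrency gate: never lets more than `max` tasks run at once.
// ConcurrencyGate is illustrative; 30 is the Growth plan's query limit.
class ConcurrencyGate {
  private inFlight = 0;
  private waiters: Array<() => void> = [];

  constructor(private readonly max: number) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    // Wait until a slot frees up, re-checking after every wake-up.
    while (this.inFlight >= this.max) {
      await new Promise<void>((resolve) => this.waiters.push(resolve));
    }
    this.inFlight++;
    try {
      return await task();
    } finally {
      this.inFlight--;
      this.waiters.shift()?.(); // wake one queued task, if any
    }
  }
}

const gate = new ConcurrencyGate(30); // e.g. Growth plan query limit
// Usage: await gate.run(() => hygraphClient.request(query, variables));
```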

We recommend that you:

  • Implement proper retry logic with exponential backoff to handle temporary errors.
  • Distribute requests over time rather than sending large bursts at once.
  • Apply connection pooling and request queuing to manage load efficiently.
  • Monitor your application's concurrent request patterns.

We recommend that you do not:

  • Retry immediately after receiving a 429 error.
  • Send large batches of requests simultaneously.
  • Ignore concurrent limit violations in your error handling.

#Requests per second vs. Concurrent operations

These two limits measure different things:

  • Requests per second (RPS): How many new requests you start each second.
  • Concurrent operations: How many queries or mutations are running at the same time, even if they came from the same request.

It’s possible to stay within one limit while exceeding the other. Always design your queries and requests to remain under both the RPS and concurrency limits.

| Scenario | Requests per second | Concurrent operations | Result |
| --- | --- | --- | --- |
| 20 requests per second, each finishing in ~50 ms | RPS = 20. Within Growth plan limit. | Concurrency ≈ 1 at any time, since requests complete quickly. Within Growth plan limit. | Both within limits. |
| 5 requests per second, each containing 10 queries; each query takes ~2 seconds to finish | RPS = 5. Within Growth plan limit. | Concurrency = 50 (5 requests × 10 queries still running). Exceeds Growth plan limit. | Exceeds concurrency limit. |
| 40 requests sent at once, each with 1 query | RPS = 40. Exceeds Growth plan limit. | Concurrency = 40. Exceeds Growth plan limit. | Exceeds both limits. |
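As a back-of-envelope check on scenarios like those above, steady-state concurrency is roughly the number of operations started per second multiplied by the average operation duration. The helper below (estimatedConcurrency is illustrative, not a Hygraph API) reproduces the first scenario:

```typescript
// Rough steady-state estimate:
// concurrency ≈ requests/sec × operations per request × avg duration (s).
// estimatedConcurrency is an illustrative helper, not a Hygraph API.
function estimatedConcurrency(
  requestsPerSecond: number,
  operationsPerRequest: number,
  avgDurationSeconds: number
): number {
  return requestsPerSecond * operationsPerRequest * avgDurationSeconds;
}

// First scenario above: 20 req/sec, 1 query each, ~50 ms per query.
const fast = estimatedConcurrency(20, 1, 0.05); // ≈ 1 concurrent operation

// Slow operations pile up: the same 20 req/sec at 2 s per query
// would hold roughly 40 operations in flight.
const slow = estimatedConcurrency(20, 1, 2);
```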

#Handling API rate limits

In this section, learn how to handle API rate limits with Next.js, Gatsby, and Nuxt.

#Next.js

#Thread limiting

You can use this experimental setting in Next.js to disable multithreading:

// Your Next.js config file (next.config.js)
module.exports = {
  // ...
  experimental: {
    workerThreads: false,
    cpus: 1,
  },
  // ...
};

This setting forces the build to be single-threaded, which limits the speed at which requests are made within getStaticProps.

As a result, the build runs slower but completes without errors.

#Throttling

The following Next.js example uses pThrottle to control the number of API calls per interval.

import React from 'react';
import { allProducts } from '../../utils/getProducts';
import { gql } from '../../utils/hygraph-client';
import { throttledFetch } from '../../utils/throttle';

// Singular query used in getStaticProps
const query = gql`
  query GetSingleItem($slug: String!) {
    product(where: { slug: $slug }) {
      name
      slug
    }
  }
`;

export async function getStaticPaths() {
  // One call to get all paths.
  // No need to throttle this unless you have a LOT of these calls.
  const products = await allProducts();
  const paths = products.map((product) => ({
    params: { slug: product?.slug },
  }));
  return { paths, fallback: false };
}

export async function getStaticProps({ params }) {
  // For each path, there will be an API call, so we need to throttle it.
  // The throttle must be global, so throttledFetch lives in a shared utility
  // file and is imported by all dynamic route files:
  /*
  import pThrottle from 'p-throttle';
  import hygraphClient from './hygraph-client';

  // Set the limit of # of calls per interval in ms (5 per second)
  const throttle = pThrottle({ limit: 5, interval: 1000 });

  export const throttledFetch = throttle(async (...args) => {
    const [query, vars] = args;
    const data = await hygraphClient.request(query, vars);
    return data;
  });
  */
  const product = await throttledFetch(query, { slug: params.slug });
  return {
    props: product,
  };
}

export default function Page({ product }) {
  // Each page produced by paths and props
  return (
    <>
      <h1>{product.name}</h1>
    </>
  );
}

#Exponential backoff

Combining query execution with exponential backoff retries ensures applications remain reliable, even when encountering concurrent operation limits.

  1. The graphqlFetchWithRetry function provides error handling and a retry strategy with exponential backoff, ensuring that errors such as concurrent operation limits or temporary rate limiting are retried effectively.

    // lib/graphql.ts
    type GraphQLRequest = {
      endpoint: string;
      token?: string;
      query: string;
      variables?: Record<string, unknown>;
      maxRetries?: number;
      baseDelayMs?: number; // starting delay (e.g., 250ms)
      maxDelayMs?: number; // cap delay (e.g., 5000ms)
      signal?: AbortSignal;
    };

    type GraphQLResponse<T> = {
      data?: T;
      errors?: Array<{ message: string; [key: string]: unknown }>;
    };

    function sleep(ms: number, signal?: AbortSignal): Promise<void> {
      return new Promise((resolve, reject) => {
        if (signal?.aborted) return reject(new DOMException('Aborted', 'AbortError'));
        const timer = setTimeout(resolve, ms);
        signal?.addEventListener('abort', () => {
          clearTimeout(timer);
          reject(new DOMException('Aborted', 'AbortError'));
        });
      });
    }

    // Exponential backoff with jitter and a hard minimum of 1000ms per retry.
    function computeBackoffMs(
      attempt: number,
      baseDelayMs: number,
      maxDelayMs: number,
      minimumMs = 1000
    ): number {
      const exp = Math.max(minimumMs, baseDelayMs * Math.pow(2, attempt));
      const jitter = Math.floor(Math.random() * Math.min(250, exp));
      return Math.min(exp + jitter, maxDelayMs);
    }

    function containsConcurrencyError(errors?: Array<{ message: string }>): boolean {
      if (!errors) return false;
      return errors.some(
        (e) =>
          typeof e.message === 'string' &&
          e.message.toLowerCase().includes('concurrent operations limit exceeded')
      );
    }

    export async function graphqlFetchWithRetry<T = unknown>({
      endpoint,
      token,
      query,
      variables,
      maxRetries = 5,
      baseDelayMs = 250,
      maxDelayMs = 5000,
      signal,
    }: GraphQLRequest): Promise<T> {
      let lastError: unknown;
      for (let attempt = 0; attempt <= maxRetries; attempt++) {
        try {
          const res = await fetch(endpoint, {
            method: 'POST',
            headers: {
              'content-type': 'application/json',
              ...(token ? { authorization: `Bearer ${token}` } : {}),
            },
            body: JSON.stringify({ query, variables }),
            signal,
            cache: 'no-store',
          });
          const is429 = res.status === 429;
          if (res.ok) {
            const json = (await res.json()) as GraphQLResponse<T>;
            const concurrencyHit = containsConcurrencyError(json.errors);
            if (!concurrencyHit) {
              if (json.errors) {
                const messages = json.errors.map((e) => e.message).join('; ');
                throw new Error(`GraphQL error: ${messages}`);
              }
              return json.data as T;
            }
            // Concurrency limit via GraphQL errors – backoff and retry
            if (attempt < maxRetries) {
              const delay = computeBackoffMs(attempt, baseDelayMs, maxDelayMs, 1000);
              await sleep(delay, signal);
              continue;
            }
            throw new Error('Exceeded retries due to concurrent operations limit.');
          }
          // HTTP 429 – backoff and retry (minimum 1s)
          if (is429 && attempt < maxRetries) {
            const delay = computeBackoffMs(attempt, baseDelayMs, maxDelayMs, 1000);
            await sleep(delay, signal);
            continue;
          }
          // Other non-OK statuses – surface
          const body = await res.text().catch(() => '');
          throw new Error(`HTTP ${res.status}: ${body || res.statusText}`);
        } catch (err) {
          lastError = err;
          const isAbort = err instanceof DOMException && err.name === 'AbortError';
          if (isAbort) throw err;
          if (attempt < maxRetries) {
            // Network or transient error – backoff with at least 1s
            const delay = computeBackoffMs(attempt, baseDelayMs, maxDelayMs, 1000);
            await sleep(delay, signal);
            continue;
          }
          break;
        }
      }
      throw lastError instanceof Error ? lastError : new Error('Request failed.');
    }
  2. This example demonstrates how to execute a GraphQL query using the graphqlFetchWithRetry function defined in the previous step.

    // app/api/products/route.ts
    import { NextResponse } from 'next/server';
    import { graphqlFetchWithRetry } from '@/lib/graphql';

    const GRAPHQL_ENDPOINT = process.env.GRAPHQL_ENDPOINT!;
    const GRAPHQL_TOKEN = process.env.GRAPHQL_TOKEN!; // Store server-side only

    export async function GET() {
      try {
        const query = `
          query Products($first: Int!) {
            products(first: $first) {
              id
              name
            }
          }
        `;
        const data = await graphqlFetchWithRetry<{
          products: Array<{ id: string; name: string }>;
        }>({
          endpoint: GRAPHQL_ENDPOINT,
          token: GRAPHQL_TOKEN,
          query,
          variables: { first: 10 },
          maxRetries: 5,
          baseDelayMs: 300,
          maxDelayMs: 5000,
        });
        return NextResponse.json({ ok: true, data });
      } catch (error) {
        return NextResponse.json(
          { ok: false, message: (error as Error).message },
          { status: 429 }
        );
      }
    }

#Gatsby

#Concurrency override

You can use queryConcurrency with our official Gatsby source plugin for Hygraph projects.

This key sets the number of promises run at once when executing queries; its default value is 10.
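For example, assuming the gatsby-source-graphcms plugin (check your plugin's documentation for the exact option name), the override might look like this sketch:

```javascript
// gatsby-config.js — a sketch; the plugin name, endpoint variable, and
// chosen value are illustrative and should match your own setup.
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-graphcms',
      options: {
        endpoint: process.env.HYGRAPH_ENDPOINT,
        // Run fewer queries in parallel than the default of 10.
        queryConcurrency: 5,
      },
    },
  ],
};
```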

#Throttling

The following Gatsby example uses pThrottle to allow at most one request per 0.5-second interval.

import { createHttpLink } from "apollo-link-http";
import pThrottle from "p-throttle";

// Throttle fetches to at most 1 request per 0.5-second interval.
const throttle = pThrottle({ limit: 1, interval: 500 });
const throttledFetch = throttle((...args) => fetch(...args));

const link = createHttpLink({ uri: "/graphql", fetch: throttledFetch });

#Nuxt

#Thread limiting

You can add the following to your nuxt.config.js to avoid 429 errors. It prevents GraphQL requests from exceeding Hygraph's API limits during the build.

// Your Nuxt config file (nuxt.config.js)
export default {
  generate: {
    // Maximum number of requests per thread: only 250 pages are generated
    // at a time, based on the API rate limit.
    concurrency: 250,
    // Delay between batches (0.2s). Increase this if you still run into issues.
    interval: 200,
  },
};