
#API Limits

API limits are technical safeguards that ensure your GraphQL API performs optimally and remains available for all users. These limits guard against common problems, such as big requests, inefficient queries, or high traffic, that could otherwise impact performance.

#Request size

The maximum size of GraphQL queries and mutations, including the text and variables, as it reaches our API. This helps prevent oversized requests that could slow down your API. When you exceed this limit, you'll get a 413 error.

| Plan | Queries | Mutations |
| --- | --- | --- |
| Hobby | 10 KB | 30 KB |
| Growth | 15 KB | 70 KB |
| Enterprise | 20 KB | 80 KB |

Follow these steps to check your query or mutation request size.

  1. In your browser, go to Developer Tools > Network.
  2. Run the GraphQL request in the API Playground.
  3. Click on the request and check the Request Payload or Size column.
  4. Compare the size against our limits.
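The same check can be scripted. Below is a minimal sketch that measures the serialized request body and compares it against the 10 KB Hobby query limit; the query and its field names are illustrative, not taken from a real Hygraph schema.

```typescript
// Sketch: measure a GraphQL request's on-the-wire size before sending it.
// The query and its fields are illustrative examples, not a real schema.
const query = `
  query Products($first: Int!) {
    products(first: $first) {
      id
      name
      slug
    }
  }
`;
const variables = { first: 100 };

// The request-size limit counts the query text plus serialized variables.
const body = JSON.stringify({ query, variables });
const sizeInBytes = new TextEncoder().encode(body).length;
const limitInBytes = 10 * 1024; // Hobby plan: 10 KB per query

console.log(sizeInBytes, sizeInBytes <= limitInBytes);
```

Running a check like this in CI helps catch queries that creep toward the limit before they fail in production.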

We recommend that you:

  • Test your queries against limits during development.

  • Break large queries into smaller, focused requests. Instead of fetching several unrelated content types in one large query, split them into separate queries.

  • Use GraphQL fragments to avoid repeating the same field selections across queries.

  • Instead of relying on long and complex variable filters, use pagination to handle large data sets.

We recommend that you do not:

  • Fetch unnecessary fields in your queries.
  • Create overly complex nested queries.
  • Ignore limit violation errors without addressing root causes.
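As an illustration of the fragment recommendation above, here is a hedged sketch; the `Product` type and its fields are assumptions for the example, not part of a real schema.

```typescript
// Sketch: reuse a GraphQL fragment instead of repeating field selections.
// The Product type and its fields are assumptions for illustration only.
const productFields = `
  fragment ProductFields on Product {
    id
    name
    slug
  }
`;

// Both selections share one fragment, keeping the query text small.
const query = `
  query FeaturedAndRecent {
    featured: products(where: { featured: true }) {
      ...ProductFields
    }
    recent: products(first: 5) {
      ...ProductFields
    }
  }
  ${productFields}
`;

console.log(query.includes('...ProductFields'));
```

The fragment is defined once and spread in both selections, so adding or removing a field is a one-line change.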

#Requests per second

The number of uncached requests you can send to the Content API per second. A single request can contain multiple queries and mutations. When you exceed this limit, you'll get a 429 error.

| Plan | Limit |
| --- | --- |
| Hobby | 5 req/sec |
| Growth | 25 req/sec |
| Enterprise | Up to 500 req/sec |

#Concurrent operations

The number of uncached GraphQL operations (queries / mutations) that run simultaneously per environment. Multiple operations bundled in a single request count toward the concurrency limit. This helps prevent resource exhaustion during traffic spikes. When you exceed this limit, you'll get a 429 error.

| Plan | Queries (per environment) | Mutations (per environment) |
| --- | --- | --- |
| Hobby | 10 | 5 |
| Growth | 30 | 10 |
| Enterprise | 60 | 20 |

To stay below the limit, count the total number of queries and mutations sent simultaneously, not just the number of fired requests. If a request contains five queries, it counts as five concurrent queries. You can also measure the total in-flight operations at any moment by using a counter or a concurrency control library in your code.
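One way to enforce such a cap in code is a small counter-based semaphore. The sketch below is an illustration, not a Hygraph SDK feature; in practice a library such as p-limit does the same job, and `max` would be your plan's per-environment limit.

```typescript
// Sketch: a minimal semaphore that caps in-flight operations.
// Illustrative only; a library like p-limit provides the same behavior.
class ConcurrencyLimiter {
  private active = 0;
  private waiters: Array<() => void> = [];

  constructor(private readonly max: number) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    // Wait until a slot is free.
    while (this.active >= this.max) {
      await new Promise<void>((resolve) => this.waiters.push(resolve));
    }
    this.active++;
    try {
      return await task();
    } finally {
      this.active--;
      this.waiters.shift()?.(); // wake one queued task
    }
  }
}

// Hobby plan allows 10 concurrent queries per environment.
const limiter = new ConcurrencyLimiter(10);
```

Wrapping every operation in `limiter.run(() => client.request(query, vars))` guarantees that no more than `max` operations are in flight at once, regardless of how many requests your code fires.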

We recommend that you:

  • Implement proper retry logic with exponential backoff to handle temporary errors.
  • Distribute requests over time rather than sending large bursts at once.
  • Apply connection pooling and request queuing to manage load efficiently.
  • Monitor your application's concurrent request patterns.

We recommend that you do not:

  • Retry immediately after receiving a 429 error.
  • Send large batches of requests simultaneously.
  • Ignore concurrent limit violations in your error handling.

#Requests per second vs. Concurrent operations

These two limits measure different things:

  • Requests per second (RPS): How many new requests you start each second.
  • Concurrent operations: How many queries or mutations are running at the same time, even if they came from the same request.

It’s possible to stay within one limit while exceeding the other. Always design your queries and requests to remain under both the RPS and concurrency limits.

| Scenario | Requests per second | Concurrent operations | Result |
| --- | --- | --- | --- |
| 20 requests per second, each finishes in ~50 ms | RPS = 20. Within Growth plan limit. | Concurrency ≈ 1 at any time, since requests complete quickly. Within Growth plan limit. | Both within limits. |
| 5 requests per second, each contains 10 queries. Each query takes ~2 seconds to finish. | RPS = 5. Within Growth plan limit. | Concurrency = 50 (5 requests × 10 queries still running). Exceeds Growth plan limit. | Exceeds concurrency limit. |
| 40 requests sent at once, each with 1 query | RPS = 40. Exceeds Growth plan limit. | Concurrency = 40. Exceeds Growth plan limit. | Exceeds both limits. |

#Handling API rate limits

In this section, learn how to handle API rate limits with Next.js, Gatsby, and Nuxt.

#Next.js

#Thread limiting

You can use this experimental setting in Next.js to disable multithreading:

// next.config.js
module.exports = {
  // ...
  experimental: {
    workerThreads: false,
    cpus: 1,
  },
  // ...
};

This setting forces the build to be single-threaded, which limits the rate at which requests are made from getStaticProps. As a result, the build runs slower but completes without errors.

#Throttling

The following Next.js example uses pThrottle to control the number of API calls per interval.

import React from 'react';
import { allProducts } from '../../utils/getProducts';
import { gql } from '../../utils/hygraph-client';
import { throttledFetch } from '../../utils/throttle';

// Singular query used in getStaticProps
const query = gql`
  query GetSingleItem($slug: String!) {
    product(where: { slug: $slug }) {
      name
      slug
    }
  }
`;

export async function getStaticPaths() {
  // One call to get all paths.
  // No need to throttle this unless you have a LOT of these calls.
  const products = await allProducts();
  const paths = products.map((product) => ({
    params: { slug: product?.slug },
  }));
  return { paths, fallback: false };
}

export async function getStaticProps({ params }) {
  // For each path, there will be an API call, so we need to throttle it.
  // The throttle must be shared globally, so throttledFetch lives in a
  // utility file used by all dynamic route files:
  /*
  import pThrottle from 'p-throttle';
  import hygraphClient from './hygraph-client';

  // Set the limit of calls per interval in ms (5 per second).
  const throttle = pThrottle({ limit: 5, interval: 1000 });

  export const throttledFetch = throttle(async (...args) => {
    const [query, vars] = args;
    const data = await hygraphClient.request(query, vars);
    return data;
  });
  */
  const product = await throttledFetch(query, { slug: params.slug });
  return {
    props: product,
  };
}

export default function Page({ product }) {
  // Each page produced by paths and props
  return (
    <>
      <h1>{product.name}</h1>
    </>
  );
}

#Exponential backoff

Combining query execution with exponential backoff retries ensures applications remain reliable, even when encountering concurrent operation limits.

  1. The graphqlFetchWithRetry function provides error handling and a retry strategy with exponential backoff, ensuring that errors such as concurrent operation limits or temporary rate limiting are retried effectively.

    // lib/graphql.ts
    type GraphQLRequest = {
      endpoint: string;
      token?: string;
      query: string;
      variables?: Record<string, unknown>;
      maxRetries?: number;
      baseDelayMs?: number; // starting delay (e.g., 250ms)
      maxDelayMs?: number; // cap delay (e.g., 5000ms)
      signal?: AbortSignal;
    };

    type GraphQLResponse<T> = {
      data?: T;
      errors?: Array<{ message: string; [key: string]: unknown }>;
    };

    function sleep(ms: number, signal?: AbortSignal): Promise<void> {
      return new Promise((resolve, reject) => {
        if (signal?.aborted) return reject(new DOMException('Aborted', 'AbortError'));
        const timer = setTimeout(resolve, ms);
        signal?.addEventListener('abort', () => {
          clearTimeout(timer);
          reject(new DOMException('Aborted', 'AbortError'));
        });
      });
    }

    // Exponential backoff with jitter and a hard minimum of 1000ms per retry.
    function computeBackoffMs(
      attempt: number,
      baseDelayMs: number,
      maxDelayMs: number,
      minimumMs = 1000
    ): number {
      const exp = Math.max(minimumMs, baseDelayMs * Math.pow(2, attempt));
      const jitter = Math.floor(Math.random() * Math.min(250, exp));
      return Math.min(exp + jitter, maxDelayMs);
    }

    function containsConcurrencyError(errors?: Array<{ message: string }>): boolean {
      if (!errors) return false;
      return errors.some(
        (e) =>
          typeof e.message === 'string' &&
          e.message.toLowerCase().includes('concurrent operations limit exceeded')
      );
    }

    export async function graphqlFetchWithRetry<T = unknown>({
      endpoint,
      token,
      query,
      variables,
      maxRetries = 5,
      baseDelayMs = 250,
      maxDelayMs = 5000,
      signal,
    }: GraphQLRequest): Promise<T> {
      let lastError: unknown;

      for (let attempt = 0; attempt <= maxRetries; attempt++) {
        try {
          const res = await fetch(endpoint, {
            method: 'POST',
            headers: {
              'content-type': 'application/json',
              ...(token ? { authorization: `Bearer ${token}` } : {}),
            },
            body: JSON.stringify({ query, variables }),
            signal,
            cache: 'no-store',
          });

          const is429 = res.status === 429;

          if (res.ok) {
            const json = (await res.json()) as GraphQLResponse<T>;
            const concurrencyHit = containsConcurrencyError(json.errors);

            if (!concurrencyHit) {
              if (json.errors) {
                const messages = json.errors.map((e) => e.message).join('; ');
                throw new Error(`GraphQL error: ${messages}`);
              }
              return json.data as T;
            }

            // Concurrency limit via GraphQL errors – backoff and retry
            if (attempt < maxRetries) {
              const delay = computeBackoffMs(attempt, baseDelayMs, maxDelayMs, 1000);
              await sleep(delay, signal);
              continue;
            }
            throw new Error('Exceeded retries due to concurrent operations limit.');
          }

          // HTTP 429 – backoff and retry (minimum 1s)
          if (is429 && attempt < maxRetries) {
            const delay = computeBackoffMs(attempt, baseDelayMs, maxDelayMs, 1000);
            await sleep(delay, signal);
            continue;
          }

          // Other non-OK statuses – surface
          const body = await res.text().catch(() => '');
          throw new Error(`HTTP ${res.status}: ${body || res.statusText}`);
        } catch (err) {
          lastError = err;
          const isAbort = err instanceof DOMException && err.name === 'AbortError';
          if (isAbort) throw err;

          if (attempt < maxRetries) {
            // Network or transient error – backoff with at least 1s
            const delay = computeBackoffMs(attempt, baseDelayMs, maxDelayMs, 1000);
            await sleep(delay, signal);
            continue;
          }
          break;
        }
      }

      throw lastError instanceof Error ? lastError : new Error('Request failed.');
    }
  2. This example demonstrates how to execute a GraphQL query using the graphqlFetchWithRetry function defined in the previous step.

    // app/api/products/route.ts
    import { NextResponse } from 'next/server';
    import { graphqlFetchWithRetry } from '@/lib/graphql';

    const GRAPHQL_ENDPOINT = process.env.GRAPHQL_ENDPOINT!;
    const GRAPHQL_TOKEN = process.env.GRAPHQL_TOKEN!; // Store server-side only

    export async function GET() {
      try {
        const query = `
          query Products($first: Int!) {
            products(first: $first) {
              id
              name
            }
          }
        `;

        const data = await graphqlFetchWithRetry<{
          products: Array<{ id: string; name: string }>;
        }>({
          endpoint: GRAPHQL_ENDPOINT,
          token: GRAPHQL_TOKEN,
          query,
          variables: { first: 10 },
          maxRetries: 5,
          baseDelayMs: 300,
          maxDelayMs: 5000,
        });

        return NextResponse.json({ ok: true, data });
      } catch (error) {
        return NextResponse.json(
          { ok: false, message: (error as Error).message },
          { status: 429 }
        );
      }
    }

#Gatsby

#Concurrency override

You can use queryConcurrency with our official Gatsby source plugin for Hygraph projects.

This key sets the number of promises run at once when executing queries. Its default value is 10.
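A hedged sketch of lowering it in gatsby-config.js follows; the plugin resolve name and environment variable are assumptions here, so check the source plugin's own documentation for your project.

```javascript
// gatsby-config.js — sketch; plugin name and env var are assumptions.
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-graphcms', // Hygraph's official source plugin
      options: {
        endpoint: process.env.HYGRAPH_ENDPOINT,
        // Lower the default of 10 to stay under your plan's concurrency limit.
        queryConcurrency: 5,
      },
    },
  ],
};
```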

#Throttling

The following Gatsby example uses pThrottle to allow at most 1 fetch per 0.5-second interval.

import { createHttpLink } from "apollo-link-http";
import pThrottle from "p-throttle";

// Throttle fetches to at most 1 call per 0.5-second interval.
const throttle = pThrottle({ limit: 1, interval: 500 });
const throttledFetch = throttle((...args) => {
  return fetch(...args);
});

const link = createHttpLink({ uri: "/graphql", fetch: throttledFetch });

#Nuxt

#Thread limiting

You can add the following to your nuxt.config.js to avoid 429 errors. It stops GraphQL requests from exceeding Hygraph's API limits during builds.

// nuxt.config.js
export default {
  generate: {
    concurrency: 250, // maximum number of requests per thread; only 250 pages are generated at a time
    interval: 200, // delay of 0.2s between requests; increase this if you still run into issues
  },
};