
Batching & Caching in GraphQL

Optimizing performance is essential in GraphQL applications. Two key strategies for improving efficiency are batching and caching. These techniques reduce redundant data fetching, lower latency, and help an API scale under load.


Batching

Batching combines multiple operations into a single request, or groups many individual resource lookups (typically in resolvers) into one backend call.


1. Request Batching (Client-side)

Some clients (e.g., Apollo Client, Relay) support batching multiple GraphQL operations into a single HTTP request.

Example:

POST /graphql
[
  { "query": "{ user { id name } }" },
  { "query": "{ posts { title } }" }
]

Benefit: Reduces the number of HTTP requests.

⚠️ Requires server support for parsing batched JSON arrays.
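
For example, Apollo Client's BatchHttpLink collects operations fired within a short window and sends them as one request. A minimal sketch (the uri and tuning values are placeholders):

import { ApolloClient, InMemoryCache } from "@apollo/client";
import { BatchHttpLink } from "@apollo/client/link/batch-http";

const client = new ApolloClient({
  link: new BatchHttpLink({
    uri: "/graphql",
    batchMax: 5,       // at most 5 operations per HTTP request
    batchInterval: 20, // wait up to 20 ms to collect operations
  }),
  cache: new InMemoryCache(),
});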


2. Resolver Batching with DataLoader

On the server, batching is often done using tools like DataLoader to group and cache resolver calls.

Without batching (N+1 problem):

query {
  posts {
    title
    author {
      name
    }
  }
}

If there are 10 posts, this can trigger 1 query for posts and 10 separate queries for authors.
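
The naive implementation behind that pattern is a per-post resolver like this sketch (db.authors.findById and post.authorId are assumed names):

const resolvers = {
  Post: {
    // Called once per post: one DB round trip per author, so
    // 10 posts means 10 separate author lookups.
    author: (post) => db.authors.findById(post.authorId),
  },
};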

With DataLoader:

import DataLoader from "dataloader";

const authorLoader = new DataLoader(async (authorIds) => {
  // One query for every author requested in this batch.
  const authors = await db.authors.find({ id: { $in: authorIds } });
  // Results must align one-to-one with the input keys.
  return authorIds.map((id) => authors.find((a) => a.id === id));
});

Now only 2 DB calls are made: one for posts, one for authors.
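
In a real server the loader is typically created per request in the GraphQL context and called from the resolver. A minimal sketch, assuming a createAuthorLoader() factory that wraps the definition above:

// New loader per request: its cache lives for a single operation.
const context = () => ({ authorLoader: createAuthorLoader() });

const resolvers = {
  Post: {
    // load() queues the ID; DataLoader batches all IDs queued in the
    // same tick into a single call to the batch function.
    author: (post, args, { authorLoader }) => authorLoader.load(post.authorId),
  },
};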


Caching

Caching reduces load and latency by reusing previously fetched or computed data.

1. Client-side Caching

GraphQL clients such as Apollo Client automatically cache query results.

Features:

  • Normalized cache
  • Cache-first / network-only fetch policies
  • Automatic UI updates

For example, selecting a fetch policy with Apollo Client's useQuery hook:

const { data } = useQuery(GET_USER, {
  fetchPolicy: "cache-first", // read from cache; hit the network only on a miss
});
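
The normalized cache itself is configured when the client is created; a minimal sketch:

import { ApolloClient, InMemoryCache } from "@apollo/client";

// InMemoryCache stores entities normalized by typename + id, so the same
// object fetched by different queries is kept once and stays in sync.
const client = new ApolloClient({
  uri: "/graphql",
  cache: new InMemoryCache(),
});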

2. Server-side Caching

Server-side caching can be implemented at several levels:

a. Query Result Caching

Store the full response and reuse it for identical queries.

// Apollo Server 4 example
import { ApolloServer } from "@apollo/server";
import { InMemoryLRUCache } from "@apollo/utils.keyvaluecache";
import responseCachePlugin from "@apollo/server-plugin-response-cache";

const server = new ApolloServer({
  typeDefs, resolvers, // your schema and resolvers
  cache: new InMemoryLRUCache(),
  plugins: [responseCachePlugin()],
});
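
The plugin decides how long a result may be reused from cache hints, which a resolver can set dynamically. A sketch, assuming Apollo Server's default cache control plugin is active and db.posts.findAll is a placeholder:

const resolvers = {
  Query: {
    posts: async (parent, args, context, info) => {
      // Tell the response cache this result may be reused for 60 s.
      info.cacheControl.setCacheHint({ maxAge: 60 });
      return db.posts.findAll();
    },
  },
};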

b. Field-level Caching

Cache individual resolver outputs, often using tools like Redis or memoization.

import Redis from "ioredis"; // assumes an ioredis client

const redis = new Redis();

const cachedResolver = async (parent, args, context) => {
  const key = `user:${args.id}`;
  // Serve from Redis while the entry is still warm.
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  const user = await db.getUser(args.id);
  // "EX", 3600 gives the key a one-hour TTL.
  await redis.set(key, JSON.stringify(user), "EX", 3600);
  return user;
};

c. Persisted Queries

Use query hashes to cache pre-approved query responses, reducing parsing overhead and attack surface.
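
As a sketch of the idea behind Apollo-style automatic persisted queries: the client sends a SHA-256 hash of the operation, and the server only asks for the full text if the hash is unknown:

import { createHash } from "crypto";

const query = "{ user { id name } }";
const sha256Hash = createHash("sha256").update(query).digest("hex");

// The request carries only the hash; known queries are served from the
// server's registry without parsing the full document.
const body = JSON.stringify({
  extensions: { persistedQuery: { version: 1, sha256Hash } },
});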


Best Practices

  • Use DataLoader or an equivalent tool for resolver-level batching.
  • Avoid caching sensitive or user-specific data unless the cache key is scoped correctly (see the sketch after this list).
  • Configure appropriate TTLs (time-to-live) for server-side caches.
  • Choose Apollo's fetch policies deliberately to manage the client-side cache.
  • Enable persisted queries on public APIs to reduce request payload size and attack surface.
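
For the scoping point above, a minimal sketch (reusing the Redis client from the field-level example; context.userId is assumed to come from authentication):

const myProfile = async (parent, args, context) => {
  // The key includes the caller's ID, so entries are per-user and
  // one user's data is never served to another.
  const key = `profile:${context.userId}`;
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  const profile = await db.getProfile(context.userId);
  await redis.set(key, JSON.stringify(profile), "EX", 300); // short TTL
  return profile;
};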

Tools & Libraries

  • DataLoader: batches and caches resolver calls per request
  • Apollo Client / Relay: client-side caching and request batching
  • Apollo Server responseCachePlugin: full query result caching
  • Redis: external store for field-level caching

Summary

Technique            Scope          Benefit
Request Batching     Client → API   Fewer network requests
Resolver Batching    API → DB       Solves the N+1 query problem
Client-side Caching  Client         Faster UI, reduced traffic
Server-side Caching  API layer      Reduced computation and latency

Applying batching and caching effectively leads to faster, more scalable GraphQL APIs.