Refactoring an entire Next.js application to server components

Introduction

Next.js 13 introduced the app directory and React Server Components. On Star Wars day (May 4th), a couple of months later, React Server Components were marked as stable.

Server components separate page rendering, with some parts rendered on the server and others on the client. The key difference is that server components are always rendered on the server, not prerendered on the server and hydrated on the client. Server-side rendering existed before, but now for the first time it’s possible to mix it with client-side rendering on the same page.

I was intrigued by these changes and spent 3 days migrating the product I’m working on, https://looking-glass.app.

Before I started, the client rendered everything; every layout and page had the "use client" directive at the top.

The goal I set myself was to move as much rendering to the server as I could, meaning client components should sit as far down the tree as possible.

User experience

In the early days of the internet, all pages were rendered on the server. The (simplified) flow was

  • Navigation start
  • Server requests data
  • Server renders html
  • Browser displays page with data
  • Navigation finish

The rise of single-page applications changed this.

  • Navigation start
  • SPA changes DOM to show new page
  • Navigation finish
  • SPA calls API and shows loading state
  • Server responds with data
  • Browser renders data

With SPAs, navigation is instant, but retrieving the data to display on the new page takes time. Most applications show a loading state while the API calls are in flight.
With pages rendered on the server, navigation might take longer, but the page arrives complete; there’s no need to load additional data.

Server components make it possible to get the best of both worlds. For every element on a page, developers can choose the pattern that best fits their needs.

It is possible to show loading states with server components as well. I opted to not do that because subjectively the application feels snappier without loading states.

There is a cutoff, though. If rendering the new page takes too long, users might wonder whether anything is still happening. In my case it was possible to keep the time to 200-300ms, which is fast enough that a loading state is superfluous. A couple of performance improvements were needed to get there, though.

Performance

Initially the product performed terribly with server components. Every server-side render took multiple seconds to complete. I attribute this to mistakes I made while refactoring, not to a flaw in server components. With one additional day of investigation and performance improvements it was at an acceptable level.

[Image: request timing before the improvements]

This is the timing pattern most requests had before the improvements. Content download takes so long because the server starts streaming the response immediately, while component rendering is still in progress. Rendering includes any async data fetching as well; the more of it there is, the longer it takes.

After the improvements, the typical pattern looked like this. From 1.2s down to 0.2s, a reduction of over 80%!

[Image: request timing after the improvements]

Improvement: Combining data requests

At first some server components fetched data themselves. The component that displays library content is an example. Here’s how it looks.

[Image: the component that displays library content]

Besides the content data, which is passed as a prop, it needs the icon URL and a list of all tags. The component fetched this additional data itself.

With the standard page size of 20, this resulted in 40 requests for each page (icon URL and tags for every entry). As you can imagine, this accounted for a big chunk of the request duration. After moving these database calls to the page and combining them into one, render time was reduced by ~700ms.
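
As a rough sketch of the change (the helper and component names here are made up, not my real code), the page now fetches the shared data once and passes it down, instead of every entry fetching for itself:

// Sketch only; getLibraryContent, getIconUrls, getAllTags and LibraryEntry
// are hypothetical stand-ins for the actual helpers and component.
export default async function LibraryPage() {
  const content = await getLibraryContent(); // 20 entries per page
  const iconUrls = await getIconUrls(content.map((entry) => entry.sourceId)); // one combined query
  const tags = await getAllTags(); // one query instead of one per entry

  return (
    <>
      {content.map((entry) => (
        <LibraryEntry
          key={entry.id}
          entry={entry}
          iconUrl={iconUrls[entry.sourceId]}
          tags={tags}
        />
      ))}
    </>
  );
}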

Improvement: Caching

When multiple components require the same data, caching can substantially decrease processing times. In my case user data is used by the middleware, layouts, and pages. Without caching it would be fetched 3 times.

fetch is extended to memoize calls by default.

Next.js extends the native fetch Web API to allow you to configure the caching and revalidating behavior for each fetch request on the server. React extends fetch to automatically memoize fetch requests while rendering a React component tree.
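
As a rough illustration (the endpoint is made up): two server components can request the same URL during one render pass, and only a single network request is sent.

// Both components run during the same server render; the URL and
// options match, so React memoizes the fetch and it runs only once.
async function UserAvatar() {
  const user = await fetch("https://api.example.com/me").then((res) => res.json());
  return <img src={user.avatarUrl} alt={user.name} />;
}

async function UserGreeting() {
  const user = await fetch("https://api.example.com/me").then((res) => res.json());
  return <p>Hello, {user.name}</p>;
}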

Async code that doesn’t use fetch, database calls for example, needs custom caching code.

With React’s cache it’s possible to cache data for the same request. It takes a function and returns a cached function. During the request, if the function is called multiple times with the same parameters, it will execute only once and always return the result of the first execution.

Unfortunately at the time of writing it’s only available in the canary and experimental release channels of React.
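
Here is a minimal sketch of wrapping a database call with it; getUserFromDb is a hypothetical stand-in for the real query.

import { cache } from "react";

// Hypothetical database query, standing in for the real one.
declare function getUserFromDb(userId: string): Promise<{ id: string; name: string }>;

// Within a single request, repeated calls with the same userId run the
// query only once; later calls reuse the result of the first execution.
export const getUserData = cache(async (userId: string) => getUserFromDb(userId));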

With Next.js’ unstable_cache it’s possible to reuse the results of expensive operations across multiple requests. This also means caching across users, which creates opportunities for even greater performance improvements but also introduces the risk of accidentally exposing data to the wrong user.

The best way to avoid that risk is to pass the user ID to the cached function. unstable_cache creates a key to store cached data with. This key includes the function itself, the key parts passed as a second parameter, and the arguments given to the function.

const joinedKey =
  `${cb.toString()}-${Array.isArray(keyParts) && keyParts.join(',')}-${JSON.stringify(args)}`
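
In practice that looks roughly like this. Because the user ID is an argument of the cached function, it ends up in the cache key, so one user’s cached data can’t be served to another (the helper name is made up).

import { unstable_cache } from "next/cache";

// Hypothetical database query, standing in for the real one.
declare function fetchUserSettings(userId: string): Promise<Record<string, string>>;

export const getUserSettings = unstable_cache(
  // userId is an argument, so it becomes part of the cache key.
  async (userId: string) => fetchUserSettings(userId),
  ["user-settings"], // key parts
  { revalidate: 60 } // reuse across requests for up to 60 seconds
);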

Note that unstable_cache only works with JSON-serializable values. Whatever data is cached will be returned in JSON form: Date objects, and anything else without a direct JSON representation, are returned as strings.

// TODO: handle non-JSON values?
body: JSON.stringify(result),
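
A minimal sketch of the gotcha: the cached callback returns a Date, but once the value is served from the cache, it is whatever JSON.parse produces.

import { unstable_cache } from "next/cache";

// The callback returns a Date...
const getDeployTime = unstable_cache(async () => new Date(), ["deploy-time"]);

const deployedAt = await getDeployTime();
// ...but when the value comes out of the cache it is an ISO string,
// even though TypeScript still treats it as a Date.
console.log(deployedAt instanceof Date, typeof deployedAt);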

This is unexpected because the resulting function type is the same as the callback provided.

export function unstable_cache<T extends Callback>(
  cb: T, // The callback type
  keyParts?: string[],
  options: {
    revalidate?: number | false
    tags?: string[]
  } = {}
): T { // Returns the same callback type, indicating that the return value is the same as well
  // …
}

It’s still unstable, so I expect there will be improvements, but it’s something to be aware of.

In my case it was useful to cache user data for the duration of the request, but not longer. I used cache and shaved off another 100ms.

Parallelizing data requests

Pages often make more than one async call. If they’re not dependent on each other it’s possible to parallelize them.

For example changing

const sources = await getSourcesOverview(cookieStorage);
const inbox = await getInbox(cookieStorage, sourceId, page, 20, sortProperty, sortDirection);
const userData = await getUserData(cookieStorage);

into

const [sources, inbox, userData] = await Promise.all([
  getSourcesOverview(cookieStorage),
  getInbox(cookieStorage, sourceId, page, 20, sortProperty, sortDirection),
  getUserData(cookieStorage)
]);

In my case it didn’t affect performance at all, which leads me to believe that the database provider might use only one connection. It’s an area for further investigation.

Refactoring process

The strategy I chose was to refactor from the outside in. Starting with layouts, continuing with pages, and then components further and further down the tree. This approach gradually led to more and more content being rendered on the server.

The first step of every layout, page, or component refactoring was to convert it to a server component: removing "use client" and marking it async.

To handle API calls, the application uses React Query. Initially, every GET API call became one awaited function call; the improvements mentioned under Performance were left for later.
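
As a sketch (the helper and component names are illustrative, not my actual code), a useQuery hook turns into a plain await in the server component:

// Hypothetical data helper and list component.
type InboxItem = { id: string; title: string };
declare function fetchInbox(page: number): Promise<InboxItem[]>;
declare function InboxList(props: { items: InboxItem[] }): JSX.Element;

// Before, in a client component, the data came from a React Query hook:
//   const { data: items } = useQuery({ queryKey: ["inbox", page], queryFn: () => fetchInbox(page) });
// After, in a server component, the same call is simply awaited:
export default async function InboxPage() {
  const items = await fetchInbox(1);
  return <InboxList items={items} />;
}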

After that, only event handling code was left: functions that change data based on user actions or update stale data. Moving them into a new client component that encapsulates these actions was enough to remove them from the current page or component. This added a bunch of new files to the application, which were cleaned up in the last step of the refactoring.
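
For example, a delete action ends up in a small client component like this (the name and endpoint are made up), while the surrounding page stays a server component:

"use client";

// Hypothetical client component encapsulating a single user action.
export function DeleteEntryButton({ entryId }: { entryId: string }) {
  async function handleDelete() {
    // Illustrative endpoint, not my real API.
    await fetch(`/api/entries/${entryId}`, { method: "DELETE" });
    // Refreshing the UI afterwards is covered under "UI updates" below.
  }

  return <button onClick={handleDelete}>Delete</button>;
}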

During the cleanup, multiple components with similar purposes were combined into one. For example, the TagButton has a callback prop that is executed when a tag is selected. Multiple entities can be tagged, so the tag endpoint to call is context-specific. At first there were multiple wrappers based on the entity being tagged: ContentEntryTagButton and ContentSubscriptionTagButton.

The combined component can either receive a callback or a string defining which entity should be updated.

interface Props {
  onSelect: "updateUserContent" | "updateContentSubscription" | ((tag: string) => void);
  elementId: string | null;
}

If "updateUserContent" or ”updateContentSubscription” is passed, then the component does the API call. If a function is passed then it calls the function.

I’m sure there are better solutions, but for the size of this application it was the most straightforward one.

Reflections

Server components significantly impact the development of Next.js applications. What I’m writing here won’t even scratch the surface, and over time new ideas will be developed and best practices will evolve.

What to render where

One idea I had is that all data loading should happen in server components, and user interaction should be in client components. Then there’s no need for GET API endpoints or serialization of return data anymore.

It used to be good practice to only return the data the UI needs, nothing more. That becomes moot as well: even if the backend fetches all the data in the world, only the rendered components are returned to the client.

Because of these changes I could remove a lot of code, which always feels nice. And I’m excited about server actions for the same reason.

On the other hand, there’s a big difference in how much data is transferred. Rendered components are larger than raw data. For this reason it might be beneficial to do most rendering on the server except for lists, where raw data is transferred and the same component rendered repeatedly. The raw data plus component code is probably smaller than the rendering result in these cases.

UI updates

Server components are updated by calling router.refresh(). Every server component on the page is rerendered and transferred to the client. The more components there are, the larger the response. If a page consisted only of server components and one list item out of hundreds were removed, the whole list would be transferred again.

It’d be amazing if only the diff were transferred.

A nice consequence of centralizing data retrieval in server components is that following any kind of user action, calling router.refresh() is enough to ensure the UI is up to date. No more granular data invalidation. This of course leads to excessive database usage, but makes development easier. It’s a tradeoff. If at some point in the future we get a way to update only certain server components, we’d have the best of both worlds.
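
A sketch of that pattern, as a hypothetical helper hook that every action in the app could share:

"use client";

import { useRouter } from "next/navigation";

// Run any mutation, then re-render the page's server components
// so the UI reflects the new data.
export function useActionWithRefresh() {
  const router = useRouter();
  return async (action: () => Promise<unknown>) => {
    await action();
    router.refresh();
  };
}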

The prerequisite here is that retrieving data and rendering server components is fast enough. If it’s slow, the user experience will suffer.

At the moment I don’t have a way to show a loading state while server components refresh. My application is fast enough that it’s not needed; however, I am curious, and it’s on my list to look into soon.

Note

One thing to keep in mind is that on page load, client components are prerendered on the server. There’s no need to switch to server components if that’s all you want. Component trees like ServerC -> ClientC -> ServerC can still be fully prerendered on the server. When the page hydrates, the state of ClientC takes over and adapts the UI as required.
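
The usual way to build such a tree is to pass the inner server component to the client component as children, roughly like this (the component names are made up):

"use client";

import { useState, type ReactNode } from "react";

// A client component that holds UI state and receives a
// (possibly server-rendered) subtree as children.
export function CollapsiblePanel({ children }: { children: ReactNode }) {
  const [open, setOpen] = useState(true);
  return (
    <section>
      <button onClick={() => setOpen((value) => !value)}>Toggle</button>
      {open && children}
    </section>
  );
}

// Used from a server component, which passes another server component
// as children (ServerRenderedList is hypothetical):
//   <CollapsiblePanel>
//     <ServerRenderedList />
//   </CollapsiblePanel>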

Conclusion

Server components are a significant change, and still relatively new. We don’t know all the implications they will have on web application development yet. Big companies need time to develop trust in new technologies and features, and adapting their huge applications takes even longer.

My application is small, and I only just refactored it. Discovering all the benefits and tradeoffs takes time, and I’m looking forward to finding out more.

Besides that, the next innovation is already waiting: Server Actions were released as stable with Next.js 14, on October 26th.