Guillermo Rauch Vercel CEO interview

Interview with Guillermo Rauch on OSS, the edge, the cloud, and latency.


Podcast Notes

How long do things take on the web, from a physical point of view? Numbers every frontend developer should know: latency.

Web page load times and responsiveness to user actions in web apps are primary drivers of user satisfaction, and both are often dominated by network latency.

While individual latency numbers might seem small, they compound quickly, creating a waterfall effect. React Server Components can help reduce these waterfalls.
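A rough sketch of that waterfall (file names, endpoints, and data shapes here are hypothetical): in the client version the second fetch cannot start until the first round trip completes, while the Server Component resolves both fetches next to the data and ships a single rendered response.

```tsx
// profile-client.tsx: classic client-side waterfall.
'use client';
import { useEffect, useState } from 'react';

export function ProfileClient({ userId }: { userId: string }) {
  const [user, setUser] = useState<{ name: string; teamId: string } | null>(null);
  const [team, setTeam] = useState<{ name: string } | null>(null);

  useEffect(() => {
    // Round trip 1: fetch the user.
    fetch(`/api/users/${userId}`).then(r => r.json()).then(setUser);
  }, [userId]);

  useEffect(() => {
    // Round trip 2 only starts after round trip 1 resolves: that is the waterfall.
    if (user) fetch(`/api/teams/${user.teamId}`).then(r => r.json()).then(setTeam);
  }, [user]);

  if (!user || !team) return <p>Loading…</p>;
  return <p>{user.name} ({team.name})</p>;
}

// profile-server.tsx: React Server Component alternative. Both fetches run on the
// server, close to the data, and the client receives the already-rendered result once.
export async function ProfileServer({ userId }: { userId: string }) {
  const user = await fetch(`https://api.example.com/users/${userId}`).then(r => r.json());
  const team = await fetch(`https://api.example.com/teams/${user.teamId}`).then(r => r.json());
  return <p>{user.name} ({team.name})</p>;
}
```

The server version still has sequential awaits, but that waterfall happens over a tiny hop to the data instead of a long trip back to the user.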

A packet from one side of the world to the other takes roughly 150ms.

How much time passes between hovering over a button and actually clicking it?

Around 300ms. Surprise: that's a lot. Going back to the packet trip above, the hover-to-click gap is worth a couple of trips around the world. A lot could be done in that window (prefetching, for example).
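A hedged sketch of using that window, assuming a Next.js App Router app and a made-up /dashboard route: next/link already prefetches routes in production, but the same idea can be spelled out with useRouter().prefetch on hover.

```tsx
'use client';
import { useRouter } from 'next/navigation';

export function DashboardButton() {
  const router = useRouter();
  return (
    <button
      // Hover signals intent: start fetching the route payload right away…
      onMouseEnter={() => router.prefetch('/dashboard')}
      // …so by the time the click lands ~300ms later, navigation can be near-instant.
      onClick={() => router.push('/dashboard')}
    >
      Open dashboard
    </button>
  );
}
```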

  • When a website gets slower, engagement drops and the page effectively starts to die.

  • Everyone is on us-east-1, and everything feels faster there: you are closer to the cloud. Mileage-wise, us-east-1 is roughly the "center" of the planet between Europe and US-WEST, which may be why everyone deploys there.

  • Offering a consistent experience with client-side rendering gets harder the further the user is from us-east-1, because of the round trips the packets have to make (see the latency sketch after this list).
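A back-of-the-envelope sketch of that last bullet; the RTT figures are rough assumptions, not measurements:

```ts
// Rough model: total network latency of a sequential CSR waterfall at different RTTs.
const RTT_MS = {
  nearUsEast1: 10, // user sitting close to the region
  europe: 150,     // user in Europe talking to us-east-1
};

// A typical CSR page: HTML, then JS bundle, then a session call, then the data call.
const SEQUENTIAL_REQUESTS = 4;

for (const [where, rtt] of Object.entries(RTT_MS)) {
  // Each dependent request costs at least one full round trip.
  console.log(`${where}: ~${SEQUENTIAL_REQUESTS * rtt}ms of pure network latency`);
}
// Prints ~40ms near us-east-1 vs ~600ms from Europe: same code, very different feel.
```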

Edge vs Data Center

  • 150ms for a packet to go from us-east-1 to Europe (worst case). Edge rendering should fix this. The problem? The data needs to be at the edge too; if it is not, response time goes up because the data still has to be fetched from far away. The best experience is to render right where the data lives and stream the response as the data fetching and compute happen (see the sketch below).
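A sketch of that trade-off with a Next.js edge route handler; the origin URL is made up. Running at the edge makes the handler start close to the user, but the fetch still has to travel to wherever the data actually lives.

```ts
// app/api/profile/route.ts
export const runtime = 'edge'; // run this handler close to the user

export async function GET() {
  // Fast: this code starts executing at the nearest edge location.
  // Slow: the data is still in us-east-1, so this fetch pays the long round trip.
  const res = await fetch('https://db-proxy.us-east-1.example.com/profile');
  const data = await res.json();

  return Response.json(data);
}
```

This is why the note above lands on rendering next to the data and streaming: moving the compute to the edge does not move the data.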

PPR (Partial Prerendering)

Allows a user accessing from Madrid (while the app's region is us-east-1) to get a very fast experience within the first ~10ms. This is achieved by generating, at build time, a stencil of the page: a prerender of what it would look like if the dynamic data (and the components that need it) were not there, even before the Server Components that do the data fetching run. It is like a layout with holes that get filled in as data is fetched.

This part can be served from the edge without hurting response time because, at this point, no data needs to be fetched (it is just the prerender); the request then connects to where the data is and fills the holes. This way the user has something tangible that gets filled in over time as data arrives.

The critical distinction is that the PPR shell runs no compute: it is generated at build time as static HTML, and the dynamic stream is then concatenated onto that static payload. The client does not need to load any JS for this part since it is all HTML; it is the edge proxy that asks for the data to be fetched and then streams it in.
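A minimal sketch of that shape, with a hypothetical fetchOrders-style call standing in for the real data source: the page shell and the Suspense fallback are the build-time stencil, and the Orders subtree is the hole that gets streamed in from where the data lives.

```tsx
// app/page.tsx
import { Suspense } from 'react';

// Static shell: prerendered to HTML at build time and served instantly.
export default function Page() {
  return (
    <main>
      <h1>Dashboard</h1>
      {/* The Suspense boundary marks a hole in the stencil. The fallback ships as
          part of the static HTML; the real content is streamed in later. */}
      <Suspense fallback={<OrdersSkeleton />}>
        <Orders />
      </Suspense>
    </main>
  );
}

function OrdersSkeleton() {
  return <p>Loading orders…</p>;
}

// Dynamic part: runs at request time, close to the database, and streams into the hole.
async function Orders() {
  // Hypothetical call; `no-store` marks the fetch as dynamic so it is not prerendered.
  const res = await fetch('https://api.example.com/orders', { cache: 'no-store' });
  const orders: string[] = await res.json();
  return (
    <ul>
      {orders.map((order) => (
        <li key={order}>{order}</li>
      ))}
    </ul>
  );
}
```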

At the moment, if the experimental flag is turned on, Next.js does a pass at build time and tries to generate the PPR shell for every page of the app.
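A sketch of that flag, assuming a recent Next.js release; the exact option shape has changed between versions, so treat it as illustrative:

```ts
// next.config.ts
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  experimental: {
    // `true` asks the build pass to attempt PPR for every page; newer releases also
    // accept 'incremental' so individual routes can opt in.
    ppr: true,
  },
};

export default nextConfig;
```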

PPR works with Suspense boundaries (also acting as a sort of linter on them), rendering the skeleton where the Suspense boundaries are.

So the whole layout is served as PPR from the edge (a static page load), plus a dynamic SSR stream rendered where the database lives.

[LEFT AT MIN 35]

Links