For a user:
Some stats:
You want good SEO to rank highly. A chart was shared showing how steep the click-through drop-off is after the first three results.
Google blends the Core Web Vitals into its page ranking, so you need to be fast to reach the top results.
Online advertising is important for anyone involved with ecommerce sites. If your site is slow, you lose the value of the clicks you paid for.
Here is a good website on the correlation between performance and money: https://wpostats.com/
This section covers the importance of diving into waterfall charts.
They measure time, but each data point contains multiple streams of information.
In general, the chart tells you, in order:
The charts also break down into categories:
These are considered "legacy" performance metrics in the sense that they are no longer used as the primary metrics for web performance.
This covers:
After client-side rendering arrived around 2010, applications changed their structure.
Because of this, the app shell could be initialised straight away, so the load event fired right away (before the page had actually finished loading).
There was no longer any meaningful connection between these metrics and the user experience.
What it measures:
This translates to the following metrics:
How fast your site visibly loads the most important element (according to Google).
All we care about:
What counts?
Entropy is counted as the number of bits per visible pixel. If we have a 3.9MB image (~31 million bits) and we render it at 2800x1200 (3,360,000 pixels), the entropy is calculated as 31M / 3.36M ≈ 9.3 bits per pixel. This image would qualify as an LCP candidate.
If your placeholder image has entropy that's too low, it won't count as the LCP candidate, and Google will wait for the real image to load.
This code can find all the images on the page and calculate the entropy for you:
console.table(
  [...document.images].map((img) => {
    const entry = performance.getEntriesByName(img.currentSrc)[0];
    const bytes = (entry?.encodedBodySize ?? 0) * 8;
    const pixels = img.width * img.height;
    return { src: img.currentSrc, bytes, pixels, entropy: bytes / pixels };
  })
);
Things to note:
How smoothly and predictably elements load into the page.
An example could be ads loading into websites (which is an anti-pattern).
https://shifty.site
The calculation shown is:
Impact Fraction * Distance Fraction = Layout shift value
For the example, we had a banner shift everything down:
The viewport height is 768px, and the impact size is 708px, so the calculation for impact fraction is 708/768 = 0.922. The distance was the promo banner height (180px) so the distance fraction is 180px / 768px = 0.234.
So the score is 0.922 * 0.234 ≈ 0.216.
There was another score calculated for mobile as well.
The CLS final score is the sum of all of these scores.
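The calculation above can be sketched as a small function (values are from the demo; this is just the per-shift math, not a full CLS implementation):

```javascript
// Layout shift score = impact fraction * distance fraction.
// impactPx: height of the viewport area affected by the shift
// distancePx: how far the content moved (the banner height in the demo)
function layoutShiftScore(impactPx, distancePx, viewportPx) {
  const impactFraction = impactPx / viewportPx;
  const distanceFraction = distancePx / viewportPx;
  return impactFraction * distanceFraction;
}

// 768px viewport, 708px affected, 180px promo banner shift:
const score = layoutShiftScore(708, 180, 768);
console.log(score.toFixed(3)); // ~0.216
```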
What's not included:
How bad can it be? You want less than 0.1.
This is important to understand INP, the final metric.
The flame chart measures tasks over time.
It covers a few actions:
How quickly can users interact.
What's an interaction?
The main tactic here is yielding the main thread by deferring execution (using promises, etc).
Considerations:
The metrics you want:
This is a recent change: it's no longer used for the Core Web Vitals (effectively retired).
It was measured by the first input delay, as a check on whether you were sending too much JavaScript down the page.
It emphasised blocking time over processing time, which wasn't a great fit for core web vitals.
It's a metric of how quickly your host responds. It has nothing to do with your code, only your server.
TTFB is when the first byte is received.
This has an impact on LCP, but doesn't directly affect things like SEO.
It's the first time we go from white to showing something.
Not impactful to core web vitals but still useful for understanding LCP.
We can capture all of these programmatically.
https://developer.mozilla.org/en-US/docs/Web/API/Performance
You'll likely use a tool built for this, but if you wanted to capture it yourself, you could.
It covered the following:
performance.now()
performance.getEntries()
-- this has a lot of information available to you
The problem with the Performance API is that you slow down the metrics by trying to measure them.
The PerformanceObserver differs by deferring the work until the browser is idle.
This is also good because you can specify the type of entry you want to observe.
Google has its own package, web-vitals, that lets you subscribe to web vitals.
Browser engines:
Blink
Webkit
Gecko
Webkit doesn't support LCP, CLS, INP
Gecko doesn't support CLS, INP
Where do we measure from?
You want to get as much field data as possible to get an accurate representation for users visiting the site.
To sum up, lab data is diagnostic, while field data is experience.
At the time of the course, a list of some of the tools available to you:
There is also a point made about popping out the dev tools panel so it doesn't impact local performance testing.
In the Lighthouse score example, the following is done to try to emulate a real user experience:
In the example, the score was so bad that there wasn't even a score set.
Something very useful about this is that you can view the trace, including the waterfall chart and flame chart, to see how things ran.
The network tab was also demonstrated.
A useful extension that also logs a bunch of useful information.
It also shows field data.
This is the CrUX (Chrome UX Report) dataset. It only comes from opted-in Chrome users.
There is also another website where you can request these metrics https://requestmetrics.com/resources/tools/crux/
Another one is https://pagespeed.web.dev/
There are no INP scores for the synthetic tests, since no user takes any action.
This one is not Google affiliated, but it gives you much more control over running synthetic tests.
This also gives you an interactive waterfall for more information.
RUM tooling comes in for collecting this information.
RUM gives the following benefits:
There are a lot of these tools: DataDog, Sentry, Request Metrics, etc.
It's said that Akamai is the go-to RUM tool for enterprise companies. They built Boomerang.
Fast is subjective; it depends on your users. It's about perceived performance.
This comes back to the psychology of waiting and how long people are willing to wait for different things.
Things to remember:
A company like TurboTax introduced an intentional slow-down because it's perceived as more trustworthy.
It's about finding business metrics to relate back to performance metrics.
User experience:
You need to relate this back to your performance metrics. Always remember that correlation is not always causation.
Weber's law also suggests there needs to be around a 20% difference for users to notice it. So focus on the right things.
For this, the browser announces which encodings it supports (via the Accept-Encoding header), and then it's up to us on the server to serve something supported. Gzip is normally better for smaller files, so you should decide based on file size whether to compress and which algorithm to use.
In the demo, it was a compression middleware that enabled this.
This compares HTTP/1.1 with HTTP/2 and HTTP/3.
There is a small chat about TCP vs UDP here too.
Drawbacks of HTTP/3:
For the demos, he used https://caddyserver.com/
The rule here is to right-size the server for your workload.
To demonstrate this, the server was initially throttled to a 1s response time, then reduced to 50ms to show in the waterfall how long the request took to respond.
Finally, it was about moving the host closer to the user.
A website added to help with figuring out the impact of proximity https://wondernetwork.com/
The solution discussed was using CDNs to get data closer to the user.
Make sure this is a problem first.
Your swiss army knife for resolving this:
Sequence chains are resources that require more resources. CSS and fonts are render-blocking and will prevent the page from rendering until they have finished loading.
In the example, it's demonstrated using the CSS @import syntax and the @font-face syntax.
If one request starts after another, you could guess that it's chained.
The solution is normally to use bundling. The example they use is to use https://lightningcss.dev/
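The chain might look like this in CSS (filenames are hypothetical); each nested request is only discovered after its parent downloads, which is why bundling helps:

```css
/* main.css -- each nested URL is a new request the browser discovers
   only after this file has downloaded, creating a sequential chain. */
@import url("theme.css"); /* request discovered after main.css arrives */

@font-face {
  font-family: "Body";
  src: url("body.woff2") format("woff2"); /* discovered later still */
}
```

A bundler like Lightning CSS inlines the @import targets so the styles arrive in one request instead of a chain.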
This is about starting the critical path resources as fast as we can.
Within our <link/> tags we can add rel=preload to instruct the browser to load a resource before it knows that it needs it.
We can preload the following:
Interestingly, he mentions that you should host fonts yourself rather than pulling them from a third party, so that their URLs can be preloaded from your own origin.
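A sketch of what preloading might look like (paths are hypothetical; note that preloading fonts requires the crossorigin attribute):

```html
<!-- Tell the browser about late-discovered critical resources up front -->
<link rel="preload" href="/css/main.css" as="style" />
<link rel="preload" href="/fonts/body.woff2" as="font" type="font/woff2" crossorigin />
```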
JavaScript is parser-blocking: the HTML parser stops until the script has downloaded and executed, preventing the rest of the file from being parsed.
We can use defer to help with that. There is also async.
In most cases, you will want to use defer.
If you use type=module, then it is always deferred.
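A sketch of the script-loading options (paths are hypothetical):

```html
<!-- Parser-blocking: HTML parsing stops until this downloads and runs -->
<script src="/js/app.js"></script>

<!-- defer: downloads in parallel, runs in document order after parsing -->
<script src="/js/app.js" defer></script>

<!-- async: downloads in parallel, runs as soon as it arrives (any order) -->
<script src="/js/analytics.js" async></script>

<!-- Modules are deferred by default -->
<script type="module" src="/js/app.mjs"></script>
```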
By improving FCP and TTFB, we also improve LCP.
The LCP element is usually a resource, so we've already fixed much of the "resource delay"; next is resource duration, and then (almost never a problem) the render delay.
We need the LCP image as soon as possible. The other images, maybe not so much.
For resource delay, the image/iframe tag can use loading="lazy".
Hypothetically, the hero image would be the LCP, so loading="lazy" was removed from it.
fetchpriority="high" can be used for critical resources. The hero image had this signal added to help the browser know its priority.
This works only with Chrome for now, so the alternative is to use preload for the resource with the <link/> tags.
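A sketch of how these attributes might be combined (paths are hypothetical):

```html
<!-- Below-the-fold images: defer until they approach the viewport -->
<img src="/img/footer-photo.jpg" loading="lazy" alt="Footer photo" />

<!-- Hero image (the likely LCP element): never lazy-load it, and hint
     its priority to the browser -->
<img src="/img/hero.jpg" fetchpriority="high" alt="Hero" />

<!-- Fallback where fetchpriority isn't supported: preload it instead -->
<link rel="preload" href="/img/hero.jpg" as="image" />
```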
This section covers some other possible image formats. It didn't go too deep.
Some options: JPG, PNG, WebP, AVIF.
The difference between WebP and AVIF is minimal, so it's best to use whatever is easiest.
If you need to use PNG, there is also another site that was brought up https://tinypng.com/
Sometimes we want hi-resolution, sharp images. But a lot of the time, we might want to ship different-sized images based on the viewport.
In the demo, the <picture> tag was demonstrated with different source tags added for different sizes.
This section worked through resizing some images, initially using libraries like jimp and imagemin.
Afterwards, an example using the picture tag was made.
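A hypothetical responsive-image setup with the picture tag (filenames and breakpoints are made up for illustration):

```html
<!-- The browser picks the first matching source; smaller screens
     download smaller files -->
<picture>
  <source media="(max-width: 600px)" srcset="/img/hero-600.webp" type="image/webp" />
  <source media="(max-width: 1200px)" srcset="/img/hero-1200.webp" type="image/webp" />
  <img src="/img/hero-2000.jpg" width="2000" height="1000" alt="Hero" />
</picture>
```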
There was also a reference to the SVGOMG website for optimising SVGs, but they're generally not the problem.
This section covers using caching headers to help understand whether we need to fetch an image or not. Properties such as ETag can help us with 304 not modified responses.
In addition to this, we can use cache control locally on the device.
The one thing you need to do: give size hints for how big an image is going to be.
There was a note that you should not add px to the numbers: the width and height attributes are unitless.
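A sketch of the size hint (numbers are hypothetical; note the unitless attributes):

```html
<!-- The browser uses the width/height ratio to reserve space before the
     image downloads, avoiding a layout shift when it arrives -->
<img src="/img/hero.jpg" width="1600" height="900" alt="Hero" />
```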
Reminder, this is how quickly users can interact with your website.
There is only one solution to this as well (at least 99% of the time): yielding the main thread.
A couple of ways we can solve this:
setTimeout lets you schedule code to run in the future.
requestAnimationFrame lets you schedule code to run just before the next paint.
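A minimal sketch of yielding between chunks of work using a promise-wrapped setTimeout (the names yieldToMain and processInChunks are made up here, not from the course):

```javascript
// Hand control back to the event loop so pending input events can run.
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Break a long task into chunks, yielding the main thread between them.
async function processInChunks(items, handle, chunkSize = 100) {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) handle(item);
    await yieldToMain(); // clicks, scrolls, etc. get a turn here
  }
}
```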