The overlap between people who don't know what TCP Slow Start is and those who should care about their website loading a few milliseconds faster is incredibly small. A startup should focus on, well, starting up, not performance; a corporation large enough to optimise speed at that level will have a team of experienced SREs who know over which detail to obsess.
When your approach is "I don't care because I have more important things to focus on", you never care. There's always something you can do that's more important to a company than optimising the page load to align with the TCP window size used to access your server.
This is why almost all applications and websites are slow and terrible these days.
Performance isn't seen as sexy, for reasons I don't understand. Devs will be agog at how McMaster-Carr manages to make a usable and incredibly fast site, but they don't put that same energy back into their own work.
People like responsive applications - you can't tell me you've never seen a non-tech person tapping their screen repeatedly in frustration because something is slow.
To add to this, bloated performance is often 'death by a thousand cuts' - i.e. there isn't just one thing that makes it slow, it's the cumulative combination of many individual choices - where each choice doesn't make that much difference on its own, but the cumulative effect does.
I.e. if you have 100 code changes, each one adding 'just' 10 milliseconds, suddenly you are a second slower - and yet fixing any one problem has a minimal effect.
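The arithmetic, as a toy sketch (numbers invented):

```python
# toy numbers: 100 changes, each adding "just" 10 ms
cuts_ms = [10] * 100
print(sum(cuts_ms))               # 1000 ms slower in total
print(sum(cuts_ms) - cuts_ms[0])  # 990 ms: fixing any single cut wins back 1%
```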
> This is why almost all applications and websites are slow and terrible these days.
The actual reason is almost always some business bullshit. Advertising trackers, analytics, etc. No amount of trying to shave kilobytes off a response can save you if your boss demands you integrate code from a hundred “data partners” and autoplay a marketing video.
Blaming bad web performance on programmers not going for the last 1% of optimization is like blaming climate change on Starbucks not using paper straws. More about virtue signaling than addressing the actual problem.
SPAs are great for highly interactive pages - something like a mail client. It's fine if it takes 2-3 seconds extra when opening the SPA; it's much more important to have instant feedback when navigating.
SPAs are really bad for mostly static websites. News sites, documentation, blogs.
Well, half a second is a small difference. So yeah, there will probably be better things to work on, up to the point where you have people working exclusively on your site.
> This is why almost all applications and websites are slow and terrible these days.
But no, there are way more things broken on the web than a lack of this kind of over-optimization.
> Most of us are aiming for under a half second total for response times.
I know people working on that exist. "Most of us" absolutely are not; if there were that many, the web wouldn't be like it is now.
Anyway, most people working towards instantaneous response aren't optimizing for the very-high-latency case where the article's eventual 0.5s slowdown shows up. And almost nobody gets to the extremely low-gain kinds of optimizations there.
"More than 10 years ago, Amazon found that every 100ms of latency cost them 1% in sales. In 2006, Google found an extra .5 seconds in search page generation time dropped traffic by 20%."
Battery and frame rate. There are a couple of SteamOS boxes that have Windows variants, and the Windows version is bad. Microsoft has been trying to manage the PR on this; last I heard they claimed the next update would free up 2 GB more available memory.
2GB is not good news. That’s evidence that they did not give two shits about mobile before the bad press.
The decision to use React for the start menu wasn't out of competency. The guy said on Twitter that that's what he knew, so he used it [1]. Didn't think twice. Head empty, no thoughts.
It is indeed an impressive feat of engineering to make the start menu take several seconds to launch in the age of 5 GHz many-core CPUs, unlimited RAM, and multi-GB/s SSDs. As an added bonus, I now have to reboot every couple of days or the search function stops working completely.
I googled the names of the people giving the talk and they're both employed by Microsoft as software engineers; I don't see any reason to doubt what they're presenting. Not the whole start menu is React Native, but parts are.
That tweet is fake. As repeatedly stated by Microsoft engineers, the start menu is of course written in C#; the only part using React Native is a promotional widget within the start menu. Even that is a strange move, but all the rest is just FUD spread via social media.
It is funny how quickly this became normalized. In the Vista days everyone was absolutely shitting on its awful performance; now that PCs have become faster, it is apparently fine to use one dog-ass-slow managed language (C#) over another (JS with RN).
Even though RN for Windows is just a thin wrapper over WinRT - but who gives a shit, right? Because JSLOLLOLOL.
Doesn't have to be a choice; it could just be the default. My billion cells/checkboxes [1] demos both use datastar and so are just over 10 kB. It can make a big difference on mobile networks and 3G. I did my own tests, and being over 14 kB often meant an extra 3 s load time on bad connections. The nice thing is I got this for free, because the datastar maintainer cares about TCP slow start even though I might not.
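For the curious, the budget comes from the initial congestion window: typically 10 segments of ~1460 bytes, so ~14.6 kB can go out in the first round trip. A rough way to check whether a page fits (a sketch: it gzips the body itself as an estimate and ignores header overhead; the URL is just an example):

```python
# rough sketch: does a page's gzipped HTML fit in what a server can send
# in the first round trip (typically 10 segments x ~1460 bytes)?
import gzip
import urllib.request

INITCWND_BYTES = 10 * 1460  # common initial congestion window

def first_rtt_fit(url: str) -> None:
    req = urllib.request.Request(url, headers={"User-Agent": "slow-start-check"})
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
    gzipped = len(gzip.compress(body))
    verdict = "fits" if gzipped <= INITCWND_BYTES else "does NOT fit"
    print(f"{url}: {gzipped} bytes gzipped - {verdict} in {INITCWND_BYTES} bytes")

first_rtt_fit("https://example.com/")
```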
A corporation large enough to have a team of experienced SREs who know which details to obsess over will also have enough promotion-hungry POs and middle managers telling the devs to add 50 MB of ads and trackers to the web page. Maybe another 100 MB for an LLM wrapper too.
I don’t see what size of corporation has to do with performance or optimization. Almost never do I see larger businesses doing anything to execute more quickly online.
Too many cooks spoil the broth. If you've got multiple people pushing an agenda to use their favorite new JS framework, disregarding simplicity in order to chase some imaginary goal or hip thing to bolster their CV, it's not gonna end well.
This idea that performance is irrelevant gets under my skin. It's how we ended up with Docker and Kubernetes and the absolute slop stack that is destroying everything it touches.
Performance matters.
We've spent so many decades misinterpreting Knuth's quote about optimization that we've managed to chew up 5-6 orders of magnitude in hardware performance gains and still deliver slow, bloated and defective software products.
Performance does in fact matter and all other things equal, a fast product is more pleasurable than a slow one.
Thankfully some people like the folks at Figma took the risk and proved the point.
Even if we're innovating on hard technical problems (which most of us are not), performance still matters.
You mean creating a different container that is exactly equal to the previous one?
It's absolutely possible, but I'm not sure there's any tool out there with that command... because why would you? You'll get about the same result as forking a process inside the container.
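(The closest thing I know of is Docker's experimental checkpoint/restore, which uses CRIU under the hood - but it snapshots and resumes the same container rather than forking a copy. A rough sketch, assuming a daemon in experimental mode with criu installed:)

```python
# sketch of docker's experimental CRIU-based checkpoint/restore; needs
# {"experimental": true} in the daemon config and criu installed.
# It freezes and later resumes the SAME container - not a true fork.
import subprocess

def docker(*args: str) -> None:
    subprocess.run(["docker", *args], check=True)

docker("run", "-d", "--name", "demo", "busybox", "sh", "-c",
       "i=0; while true; do i=$((i+1)); echo $i; sleep 1; done")
docker("checkpoint", "create", "demo", "cp1")   # stops 'demo', saves its state
docker("start", "--checkpoint", "cp1", "demo")  # resumes counting where it left off
```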
It's useful if you want to bring up a containerized service, optionally update the OS, run tests, and, if everything is good, copy that instance a bunch of times rather than starting fresh.
It lets you scale out a batch of VMs remarkably quickly, while leaving the original available for OS/patch updates.
If I'm willing to pay the cost of keeping an idle VM around, subsequent launches are probably an order of magnitude faster than docker hello-world.
Your cloud provider may be doing it for you. Ops informed me one day that AWS was pushing out a critical security update to their host OS. So of course I asked if that meant I needed to redeploy our cluster, and they responded no, and in fact they had already pushed it.
Our cluster keeps stats on when processes start - so we can alert on crashes, and because new processes (cold JIT) can skew the response numbers and are inflection points for analyzing performance improvements or regressions. There were no restarts that morning. So they pulled the tablecloth out from under us. TIL.
None of this is making live forking a container desirable to me; I'm not a cloud hosting company (and if I were, I'd be happy to provide a VPS as a VM rather than a container).
For the VM case, I may well have benefited from it, if DigitalOcean has been able to patch something live without restarting my VPS. Great. Nothing I need to care about, so I have never cared about live forking a VM. It hasn't come up in my use of VMs.
It's not a feature I miss in containers, is what I'm saying.
That's another reason they're so infuriating. Containers are intended to make things faster and easier. But the allure of virtualization has made most work much, much slower and much, much worse.
If you're running infra at Google, of course containers and orchestration make sense.
If you're running apps/IT for an SMB or even small enterprise, they are 100% waste, churn and destruction. I've built for both btw.
The contexts in which they are appropriate and actually improve anything at all are vanishingly small.
I have wasted enough time caressing Linux servers to accommodate different PHP versions that I know what good containers can do. An application gets tested, built, and bundled with all its system dependencies in CI, then pushed to the registry and deployed to the server. All automatic. Zero downtime. No manual software installation on the server. No server update downtimes. No subtle environment mismatches. No forgotten dependencies.
I fail to see the churn and destruction. Done well, you decouple the node from the application, even, and end up with raw compute that you can run multiple apps on.
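What that pipeline boils down to, as a minimal sketch (registry, image, and host names are placeholders; docker and ssh access are assumed):

```python
# minimal sketch of the build/push/deploy loop described above;
# registry.example.com and app.example.com are placeholders.
import subprocess

IMAGE = "registry.example.com/myapp"

def sh(*args: str) -> None:
    subprocess.run(args, check=True)

def deploy(tag: str) -> None:
    sh("docker", "build", "-t", f"{IMAGE}:{tag}", ".")  # app + system deps in one artifact
    sh("docker", "push", f"{IMAGE}:{tag}")
    # pull and swap on the server; assumes the compose file references ${TAG}.
    # compose only replaces the changed service, hence the near-zero downtime.
    sh("ssh", "deploy@app.example.com",
       f"docker pull {IMAGE}:{tag} && TAG={tag} docker compose up -d")

deploy("v1.2.3")
```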
Part of why I adopted containers fairly early was the time we decided to build VMs for QA with our software on them. They kept fucking up installs and reporting ghost bugs that were caused by a bad install or by running an older version, claiming the bugs we'd fixed weren't fixed.
Building disk images was a giant pain in the ass but less disruptive to flow than having QA cry wolf a couple times a week.
Performance matters, but at least initially only as far as it doesn't complicate your code significantly. That's why a simple static website often beats some hyper-modern latest-framework optimization-journey site. You've got to maintain that shit. And you are making sacrifices elsewhere: in accessibility, and possibly privacy and ethics.
So yeah, make sure not to lose performance unreasonably, but also don't obsess with performance to the point of making things unusable or way too complicated for what they do.
Notably, this is subjective. I’ve had devs tell me that joins (in SQL) are too complicated, so they’d prefer to just duplicate data everywhere. I get that skill is a spectrum, but it’s getting to the point where I feel like we’ve passed the floor, and need to firmly state that there are in fact some basic ideas that are required knowledge.
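For reference, the join being avoided is about as basic as SQL gets - a sketch with an invented schema:

```python
# one copy of each customer, however many orders they have - instead of
# duplicating name/email onto every order row. Schema invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         total REAL);
    INSERT INTO customers VALUES (1, 'Ada', 'ada@example.com');
    INSERT INTO orders VALUES (10, 1, 99.5), (11, 1, 12.0);
""")
for row in db.execute("""
        SELECT o.id, c.name, c.email, o.total
        FROM orders o JOIN customers c ON c.id = o.customer_id"""):
    print(row)  # (10, 'Ada', 'ada@example.com', 99.5) ...
```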
Yes, at the most absurd limits, some autists may occasionally obsess and make things worse. We're so far from that problem today, it would be a good one to have.
IME, making things fast almost always also makes them simpler and easier to understand.
Building high-performance software often means building less of it, which translates into simpler concepts, fewer abstractions, and shorter times to execution.
It's not a trade-off, it's valuable all the way down.
Treating high performance as a feature and low performance as a bug impacts everything we do, and ignoring both for decades is how you get the rivers of garbage we're swimming in.
Agreed, though containers and K8s aren’t themselves to blame (though they make it easier to get worse results).
Debian Slim is < 30 MB. Alpine, if you can live with musl, is 5 MB. The problem comes from people not understanding what containers are, and how they’re built; they then unknowingly (or uncaringly) add in dozens of layers without any attempt at reducing or flattening.
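To make the layer point concrete, here's a quick-and-dirty sketch that lists an image's layers biggest-first so the bloat is obvious (assumes docker is installed; the image name is just an example):

```python
# list an image's layers biggest-first; sizes come from `docker history`
import re
import subprocess

def parse_size(s: str) -> float:
    """Convert docker's human sizes ('27.8MB', '1.2kB', '0B') to bytes."""
    m = re.match(r"([\d.]+)\s*(B|kB|MB|GB)", s)
    if not m:
        return 0.0
    return float(m.group(1)) * {"B": 1, "kB": 1e3, "MB": 1e6, "GB": 1e9}[m.group(2)]

def fattest_layers(image: str, top: int = 10) -> None:
    out = subprocess.run(
        ["docker", "history", "--no-trunc",
         "--format", "{{.Size}}\t{{.CreatedBy}}", image],
        capture_output=True, text=True, check=True,
    ).stdout
    rows = [line.split("\t", 1) for line in out.splitlines() if "\t" in line]
    rows.sort(key=lambda r: parse_size(r[0]), reverse=True)
    for size, created_by in rows[:top]:
        print(f"{size:>8}  {created_by[:90]}")

fattest_layers("python:3.12")  # any local image works
```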
Similarly, K8s is of course just a container orchestration platform, but since it’s so easy to add to, people do so without knowing what they’re doing, and you wind up with 20 network hops to get out of the cluster.
Docker is just making all the same promises we were made in 1991 that never came to fruition. Preemptive multitasking OSes with virtual memory were supposed to solve all of our noisy-neighbor problems.