
I don't agree. The complexity isn't wildly different for complicated scenarios. Scenarios falling between the simplest and the most complicated absolutely benefit from running on a managed Kubernetes service. Rolling your own Kubernetes is useful only at scales where you can't otherwise wrangle the complexity of self-service operations for developers, and even then it's still very questionable unless your headcount is astounding.

I would have agreed with you three years ago, but today there are a ton of surprisingly good off-the-shelf solutions; you can get very, very far in a short amount of time with a single engineer. A ton of the functionality exposed by Kubernetes is absolutely necessary at any scale and very difficult to do yourself. "Don't let any process on this host talk to this port except these ones" is so much easier in K8s than anything else I've ever encountered.
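For reference, that kind of port restriction maps pretty directly onto a Kubernetes NetworkPolicy. A minimal sketch (all names and labels here are hypothetical, and it assumes a CNI plugin that enforces NetworkPolicy, like Calico or Cilium):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-ingress            # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: db                 # applies to pods labeled app=db
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: api       # only pods labeled role=api...
      ports:
        - protocol: TCP
          port: 5432          # ...may reach this port
```

Everything not explicitly allowed by a matching policy is dropped, which is the "except these ones" part done declaratively.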



I feel that you’ve kind of talked past my point, though maybe I missed yours.

My point was that unless you’re running on metal, the cloud people built all this shit, and much, much better than k8s (or Tupperware for that matter).

They’ll stick your Docker container inside a VM happily because people pay them to do it, but it’s redundant.

Google didn’t OSS Tensorflow out of the goodness of their hearts, they did it to sell TPU hours on GCP.

k8s is likewise not charitable giving: it’s to sell managed Kubernetes on GCP, and subtle nudges to get people to make it their careers are uh, just business I guess.

Provisioning as code is in a sad state: Chef/Ansible/Salt/friends can’t deliver on idempotency, and Nix is too complicated for most. So people use a container primitive (Docker) to work around that.
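A sketch of that workaround: instead of converging a live host toward a desired state (where partial failures leave it somewhere in between), you rebuild an immutable image from a clean base every time. The base image and package here are purely illustrative:

```dockerfile
# Every build starts from the same known base layer, so the result is
# reproducible by construction rather than converged in place.
FROM debian:bookworm-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*
# Hypothetical app config baked into the image at build time
COPY app.conf /etc/nginx/conf.d/app.conf
CMD ["nginx", "-g", "daemon off;"]
```

If a build fails you throw the image away and rebuild; there is no half-configured host to repair, which is the idempotency problem Chef-style tools struggle with.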

This has nothing, absolutely nothing, to do with service discovery and failover and autoscaling and other Kube shit.

AWS does all that already. Better.

This is a borderline-cynical business strategy on the part of the cloud providers, and judging from the religious fervor around k8s (someone upthread has it in their username for Christ’s sake), it’s working.


https://twitter.com/kelseyhightower/status/85193508753294540...

This rings true every time I see someone talking about K8s. Companies just want something that is customizable to their workflow, they don't want a ready-made PaaS offering. Google's play was to provide an industry standard API interface that is available everywhere (which is essentially what Kubernetes is). Now that the control plane for compute is commoditized, GCP et al can sell complements - BigQuery, TensorFlow, GCS etc.

I think Anthos-like solutions are the future. Just buy the cheapest hardware from your local datacenter or bare-metal providers like Hetzner and run GCP, AWS, or some other flavor of software on top of it. This is also where telco clouds are moving with OpenRAN and 5G: they are tired of being locked in to full-stack vendors like Nokia and Ericsson and want interchangeable solutions compatible through a common interface.


Hat tip on the tweet, we just made that a meme in our Phorge instance.

Well done, sir or madam.


Sure, there are better primitives within AWS for doing X, like running a containerized service, but k8s is more than that. Coupling to k8s is significantly more portable than a cloud provider's API. I can run the entire stack locally. I can run on GKE and EKS and my metal and your metal.

It's a powerful abstraction and it's very liberating.

I understand the cynicism, don't get me wrong, but that doesn't mean it's all bad. It definitely feels like a net positive to society even if at times it's unbearable.


AWS does it better on AWS. Making your entire business process dependent on AWS APIs is not the right strategy for every company.


Right, and GCP and Azure do it better on GCP and Azure. I've got no attachment to AWS, I just have to use it right now so it was the first one that came to mind.

Somewhere between "I'm small enough that all of the cloud providers are about the same" and "I'm big enough that I can leverage them against each other to get a better deal, but still small enough to not be on metal" there is maybe a sweet spot where I really do need to be mobile across providers in a push-button way. People in e.g. blockchain use k8s to good effect because they want to run a big distributed system with loose-ish central organization, and that's a real thing. It's also a niche.

I never said Kubernetes was useless full stop. I said that places where it's the right call are niche.

My current gig is humble, a usually-ramen-profitable type garage band; we're like 5 engineers with Ethernet cables in our living rooms and a bigger AWS bill than feels fair, and if I proposed to any of the ex-FAANG type people involved that we should get serious about Kubernetes they'd laugh themselves pink in the face and resume typing. We can give our machines names and remember them all (there are a few dozen of them), but that's still "pets".

If we're fortunate and hard-working enough to need 1k machines, I'll spin up a ZK instance or something and write a little Nix/bash.

I won't be networking at KubeCon 2023 unless I lose a bet to one of my peers. I've got work to do.


Conversely, I recently worked with a five-person company using Kubernetes to great effect with nearly a hundred customers. Being able to just commit a new .yaml to stand up or tear down resources is a real force multiplier.
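For anyone who hasn't seen it, the .yaml in question is typically something like the sketch below: a Deployment manifest committed to the repo (name, labels, and image are all stand-ins here):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                   # hypothetical service name
spec:
  replicas: 2                 # bump this number to scale out
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # stand-in image
          ports:
            - containerPort: 80
```

`kubectl apply -f web.yaml` stands the resource up, `kubectl delete -f web.yaml` tears it down, and the same file works in CI, which is where the force-multiplier effect comes from.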

I do entirely agree with you, I think: there are too many companies using Kubernetes prematurely, but...

It's kind of turned out okay? Because of this, the ecosystem seems to be improving and it's easier than ever for small shops to get started. And selling software that targets Kubernetes is leagues better than targeting Ubuntu or Red Hat or specific kernels.

> I'll spin up a ZK instance

If I never have to run another ZooKeeper instance again I'll die happy.


I fully agree with you that the complexity isn’t all that different for the typical scenarios.

One of the scenarios I’ve seen Kubernetes replace is something like tying together Packer, Chef, Jenkins, Terraform, and Vault to get an EC2 instance propped up, and that’s just to deploy the code.

What was particularly annoying to me was that the developers were already using containers to build their code and another container for unit testing. So switching to Kubernetes would’ve been far easier. I had to go in and deconstruct the work they were doing with containers to then get their app deployed to an EC2 instance.

Although at this level you’re still not saving a ton by going the Kubernetes route, you get a ton of benefits with it. Even tiny things like being able to pull the live config from a deployment using kubectl are so good. And then you can modify it, too! That’s been super helpful for troubleshooting.
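For example, pulling and then modifying the live config looks something like this (the deployment name is hypothetical, and these commands assume a configured cluster context):

```shell
# Dump the live config of a Deployment as YAML
kubectl get deployment my-app -o yaml

# Open the live object in $EDITOR; saving applies the change
# and triggers a rolling update
kubectl edit deployment my-app
```

Because the edit goes through the API server, the change is validated and rolled out the same way a committed manifest would be, which is what makes it safe enough for quick troubleshooting.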


I can confirm your viewpoint. I'm single engineer building self-managed k8s for my company. We're using the OpenStack provider, but I wrote all the necessary scripts and it works just fine. We're starting to migrate to it soon; all the preliminary tests were fine and I think it's completely manageable. I'm using a simple kubeadm setup without any issues. I've read plenty of terrifying articles about how hard k8s is, but it actually looks very easy so far.


I hope you take this as an admittedly weird compliment, but my lords and masters give me a few hours a week to code on my own shit and I'm trying to get a language model to talk like HN.

I'm going to try single-shot learning on one of the newer language models with "I can confirm your viewpoint. I'm single engineer building self-managed k8s for my company." because that remark, reasonable or not, so neatly encapsulates the current zeitgeist.

If you're trolling, respect. And if you're not, my 2c: flash an AMI and work on your product.



