
It’s only “insane costs” if you don’t know what you’re doing.


Or need a good amount of RAM, which should be really cheap these days.


My life on AWS the last five or so years really would have been a lot simpler if every new generation of EC2 servers didn't have the exact same ratio of RAM to cores.


At this point the memory:vCPU ratio is the defining characteristic of the main general-purpose C/M/R series; I'd think it would be pretty disruptive to change that significantly now. There's also the special extra-high-memory X series. I'd say EC2 is pretty flexible in this regard: you have options for 2/4/8/16/32 GiB per vCPU. It's mostly a problem if you need even less memory than the C series provides, or need some special features.
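To see those ratios for yourself, here's a minimal sketch (assuming boto3 and configured AWS credentials; the GiB-per-vCPU bucketing is my own illustration, not an AWS concept) that groups current-generation instance types by memory per vCPU:

    import boto3
    from collections import defaultdict

    # Page through all current-generation EC2 instance types.
    ec2 = boto3.client("ec2")
    pages = ec2.get_paginator("describe_instance_types").paginate(
        Filters=[{"Name": "current-generation", "Values": ["true"]}]
    )

    # Bucket each instance type by its rounded GiB-per-vCPU ratio.
    by_ratio = defaultdict(list)
    for page in pages:
        for it in page["InstanceTypes"]:
            vcpus = it["VCpuInfo"]["DefaultVCpus"]
            mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
            by_ratio[round(mem_gib / vcpus)].append(it["InstanceType"])

    # Per the ratios above: C clusters near 2 GiB/vCPU, M near 4,
    # R near 8, X at 16 and beyond.
    for ratio in sorted(by_ratio):
        print(f"{ratio:>3} GiB/vCPU: {len(by_ratio[ratio])} types")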


As products age they tend to use more memory, and space/time tradeoffs keep asking for more on top of that. You either get stuck applying the brakes to keep the memory creep at bay, or you give in and jump to 2x the memory pool, which will disappear too.

The old on-prem solution was to populate machines with 2/3 to 3/4 of their max addressable memory and push back on the expensive upgrade as long as possible, or at least until prices came down on the most expensive modules. Faster disks or new boxes were the step after that.


RAM in the cloud is expensive because, as far as I know, it's the only resource that still can't be over-provisioned without hurting performance.


And even if you do, it's usually a system-design problem that you're maintaining.

On one hand, I can see how that's an unfalsifiable standard; on the other, I can see the utility of smoothing over friction for people who messed up.



