

NM, I had it in my head that absolute zero is -253.15, but it’s -273.15


Did you read my entire comment? I know it’s more than one sentence, but if you had read the whole thing, you’d see your entire comment is irrelevant.


How many R’s in strawberry?


That would break physics (assuming you’re using Celsius)


Can you name a more reliable alternative?
Stop using hyperscalers. Then when an outage does occur, it doesn’t take down half the internet, and instead only affects a much smaller subset of services.


How many people in your city know what self-hosting even is, though?
WAAAAAY more than you’re giving credit for
I’ve decided people need to learn the hard way
Bold of you to assume people will learn. Didn’t you hear about that couple whose kid died from measles, and who said afterwards that they still felt their decision not to vaccinate was right?
I read your comment. You basically repeated back what I said.
As for “not actually anything extra reliability”, that’s not true. This is literally the definition of putting all your eggs in one basket. If all these services were instead spread out amongst smaller providers, there wouldn’t even have been any news about it, because the outage would have affected just a few services. Instead, half the internet went down.
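Rough numbers to make the blast-radius point concrete (totally made up, just to show the shape of the argument):

```typescript
// Purely illustrative hosting shares, not real market data.
const concentrated: Record<string, number> = { aws: 0.5, azure: 0.3, others: 0.2 };
const spread: Record<string, number> = Object.fromEntries(
  Array.from({ length: 20 }, (_, i): [string, number] => [`provider${i}`, 0.05]),
);

// Worst-case blast radius of a single provider outage = the largest single share.
const blastRadius = (shares: Record<string, number>): number =>
  Math.max(...Object.values(shares));

console.log(blastRadius(concentrated)); // 0.5  -> "half the internet is down" headlines
console.log(blastRadius(spread));       // 0.05 -> a small subset of services, no headlines
```

Per-provider reliability is the same in both cases; what changes is how much of the internet goes down at once when any one of them has a bad day.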
Even one of the applications I manage was down because of a single RTE npm dependency used on the forms. Our prod instances kept erroring out for everyone, which is how we discovered that the npm module wasn’t bundling the editor’s JS at all but dynamically pulling it from a CDN hosted on AWS. (No, I did not write this application, and I’m already replacing the dependency.)
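If anyone’s wondering what that pattern looks like, here’s a minimal sketch (hypothetical module and URL, not the actual dependency’s code): instead of shipping the editor in the bundle, the package injects a CDN-hosted script at runtime, so every page that renders the form fails when that CDN is unreachable.

```typescript
// Hypothetical sketch of the failure mode, not the real dependency's source.
function loadEditor(): Promise<void> {
  return new Promise((resolve, reject) => {
    const script = document.createElement("script");
    // Made-up URL; the point is that it resolves to a CDN backed by AWS.
    script.src = "https://cdn.example-rte.com/editor.min.js";
    script.onload = () => resolve();
    // If the CDN is down, every form page errors out, even though the app itself is fine.
    script.onerror = () => reject(new Error("RTE failed to load from CDN"));
    document.head.appendChild(script);
  });
}
```

A quick grep of the dependency’s dist output for `script.src` or a hard-coded CDN hostname is one way to catch this before an outage does.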
The argument isn’t about spending thousands for a lateral shift in reliability; it’s about decoupling everything from a single point of failure.