The cloud is just someone else’s computer. And that computer is busy printing AI videos of the President pooping out of a fighter jet, so now your files are inaccessible
Can you imagine this sentence one year ago, much less five years ago?
The President, of course, being convicted felon and rapist Donald J. Trump.
That’s convicted felon, rapist, and pedophile Donald J. Trump to you, Mr. Twopi.
One year ago? Easily.
Five years ago? Depends on whether I was visiting 4chan at the moment.
Six months ago, I would have been surprised to hear this was done by the president’s administration.
If you properly divide your instances between providers and regions, and use load balancing with a quorum-of-3 availability model, then zero downtime is pretty much guaranteed.
People be cheap and easy tho, so 🤷‍♂️
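A rough sketch of the quorum idea above, purely illustrative: the endpoints and the `healthy` helper are invented for the example, and a real setup would lean on the load balancer’s own health checks rather than hand-rolled polling like this.

```python
# Illustrative quorum-of-3 availability check across providers/regions.
# Endpoints are hypothetical placeholders, not real services.
import urllib.request

ENDPOINTS = [
    "https://app.aws-region-a.example.com/health",
    "https://app.gcp-region-b.example.com/health",
    "https://app.azure-region-c.example.com/health",
]

def healthy(url: str, timeout: float = 2.0) -> bool:
    """Treat any 2xx response as healthy, anything else as down."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False

def quorum_available(endpoints=ENDPOINTS, quorum: int = 2) -> bool:
    """With 3 replicas, a quorum of 2 survives any single provider/region outage."""
    return sum(healthy(u) for u in endpoints) >= quorum
```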
Yup. And I think I’ll add:
“What do you mean we’ve blown our yearly budget in the first month?”
Screw the compute budget; the tripled team size without shipping any more features is the bigger problem here.
I’ve seen the opposite. “Oh, you moved your app to the cloud and rebuilt it to be fully CI/CD and self-healing? Cool. Your team of 15 is now 3.”
I’m not sure if you are referring to the same thread.
I’m talking about the effort to build multi-region and multi-cloud applications, which is incredibly difficult to pull off well and presents seemingly endless challenges.
Not the effort to move to the cloud.
Dividing between providers is not what people would be doing if the resilience of cloud services were as bad as is being memed about.
Doing so is phenomenally expensive.
It’s demonstrably only a little more expensive than running more instances on the same provider. I say -little- because there is a marginal administrative overhead.
Only if you engineered your stack using vendor-neutral tools, which is not what each cloud provider encourages you to do.
Otherwise the administrative overhead of multi-cloud gets phenomenally painful.
This is why OpenTofu exists.
Yeah, Terraform or its FOSS fork would be ideal, but many of these infrastructures are set up by devs using the “immediately in front of them” tools that each cloud presents. Decoupling everything back to neutral is the same nightmare as migrating any stack to any other stack.
Infrastructure is there to be used by apps/services. It doesn’t matter how it’s created if infrastructure across providers doesn’t expose the same API. You can’t use the GCP storage SDK to call AWS S3. And even if the API were the same, nothing guarantees consistent behavior. Just like JPA provides a common API, but implementations and database behavior are inconsistent.
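To make that concrete, here’s a minimal sketch of the vendor-neutral wrapper you end up writing yourself. The `BlobStore` interface and class names are hypothetical; the underlying calls are the standard boto3 and google-cloud-storage ones.

```python
# Hypothetical vendor-neutral blob-storage interface over two real SDKs.
from typing import Protocol


class BlobStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...


class S3Store:
    """AWS backend via boto3."""
    def __init__(self, bucket: str):
        import boto3
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()


class GCSStore:
    """GCP backend via google-cloud-storage."""
    def __init__(self, bucket: str):
        from google.cloud import storage
        self._bucket = storage.Client().bucket(bucket)

    def put(self, key: str, data: bytes) -> None:
        self._bucket.blob(key).upload_from_string(data)

    def get(self, key: str) -> bytes:
        return self._bucket.blob(key).download_as_bytes()
```

And even behind an interface like this, you still inherit each provider’s quirks around consistency, error types, and pagination, which is exactly the JPA point.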
It also requires AWS to do the same thing, which they sometimes don’t …
“But we have our load balancing with 3 different AWS buckets!!!”
I remember SLAs including ‘five nines’ assurances. That meant 99.999% uptime, or an allowance of about 26 seconds of downtime a month. That would be unheard of nowadays, because no cloud provider can ensure they will have that uptime.
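For reference, the arithmetic behind the nines; the 26-second figure checks out for a 30-day month:

```python
# Downtime allowance per 30-day month at common SLA tiers.
SECONDS_PER_MONTH = 30 * 24 * 60 * 60  # 2,592,000

for nines, uptime in [("three", 0.999), ("four", 0.9999), ("five", 0.99999)]:
    downtime = SECONDS_PER_MONTH * (1 - uptime)
    print(f"{nines} nines ({uptime:.3%}): {downtime:,.0f} s/month of downtime")
# five nines -> ~26 s/month, i.e. the figure quoted above
```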
Amazon has so much redundancy built into EC2 that I genuinely thought they’d be able to avoid this.
Hardware? Yes.
Network misconfiguration? Well… you should plan for your app to be multi-regional if it needs to be up that much.
That’s what I’m saying. I took an Amazon class last summer and that seemed to be the baseline.
I blame the customers being cheap or app teams being dumb, not Amazon, if apps are still down after a few hours of regional downtime.
I may be mistaken, but I really could’ve sworn that a lot of the really strict SLA guarantees Amazon gives assume you are doing things across availability zones and/or regions. Like they’re saying “we guarantee 99.999% uptime across regions” sort of thing. Take this with a grain of salt; it’s something I only half remember from a long time ago.
Remember when the Internet was supposed to be decentralised for resilience?
No, sorry, I’m not that old :P
Remember: you’re never too young to have a Vietnam flashback!