

At a minimum, this multiplies the storage needed to host a piece of content by the number of instances that federate its stream, even if that storage is ephemeral. Not so big a problem at 100,000 users, but at 100,000,000 users this is a lot of storage cost we are talking about. Unless somehow the user/client doesn't cache the content they pull from an instance locally on their device when they view it?
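As a rough back-of-envelope illustration (every number below is an assumption made up for the arithmetic: the post size, the posting rate, and the instance count):

```python
# Back-of-envelope sketch: storage amplification from federated caching.
# Every figure here is an illustrative assumption, not a measurement.

post_size_kb = 50             # assumed average post size, with metadata
posts_per_user_per_day = 5    # assumed posting rate
users = 100_000_000           # the 100M figure from the discussion
federating_instances = 1_000  # assumed instances caching a given stream

origin_gb_per_day = users * posts_per_user_per_day * post_size_kb / 1_000_000
federated_gb_per_day = origin_gb_per_day * federating_instances

print(f"Stored once at the origin:  {origin_gb_per_day:,.0f} GB/day")
print(f"Cached across the network:  {federated_gb_per_day:,.0f} GB/day")
```

With these made-up inputs the same content balloons from roughly 25 TB/day at the origin to 25 PB/day network-wide, which is the amplification being pointed at.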
Worry more about the bandwidth. Your instance would have to serve your content to all those 100M users. As it stands, much of the load goes to the instance where a user is registered, which means an instance can control its hosting costs by closing registrations.
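To put a rough number on that, here is the same kind of sketch for egress if the author's own server had to serve every reader directly (again, the post size and read count are assumptions):

```python
# Rough egress estimate if federation caching is removed and the author's
# own server has to serve every read directly. All numbers are assumptions.

post_size_kb = 50        # assumed average post size
readers = 100_000_000    # the 100M figure from the discussion
reads_per_reader = 1     # assume each reader fetches the post just once

origin_egress_tb = readers * reads_per_reader * post_size_kb / 1_000_000_000
print(f"Egress from the author's server for one widely-read post: {origin_egress_tb:.1f} TB")
```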
My point was this isn't an issue when all content is self-hosted, because the author, as the host, can edit, delete, or migrate all they want and maintain full, direct control over the source of that content the client interacts with whenever a pull comes in. Yes, the user caches the content when they read it, but there is no intermediary copy.
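A minimal sketch of that model, assuming a hypothetical origin URL and a simple TTL for the on-device cache (both are placeholders, not part of any existing protocol):

```python
# Minimal sketch of the self-hosted model described above: the client
# always pulls from the author's origin server and keeps only a local,
# ephemeral cache; no instance in the middle holds a copy.
import json
import time
import urllib.request

CACHE = {}                # in-memory, per-device cache: url -> (fetched_at, body)
CACHE_TTL_SECONDS = 300   # assumed freshness window for the local copy

def fetch_post(url: str) -> bytes:
    """Return the post body, using only this device's local cache."""
    cached = CACHE.get(url)
    if cached and time.time() - cached[0] < CACHE_TTL_SECONDS:
        return cached[1]                       # fresh enough, reuse local copy
    with urllib.request.urlopen(url) as resp:  # otherwise go to the origin
        body = resp.read()
    CACHE[url] = (time.time(), body)           # cache locally on this device only
    return body

# Hypothetical usage: the author can edit or delete at the origin at any
# time; once local caches expire, every reader sees the change.
post = fetch_post("https://author.example/posts/123.json")
print(json.loads(post)["content"])
```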
There’s the fundamental problem. What you think of as “your” data, other people think of as “their” data. That can’t be resolved. What’s worse is that controlling “your” data requires controlling other people’s computers and devices, as with DRM.
Many things are fundamentally feasible. I see two things you are arguing for.
One is changing the caching strategy. I don't think that's wise in terms of load sharing, but it's certainly feasible on a small scale. In certain circumstances, it may be preferred.
The other is using older protocols and standards. The practical reason to do that would be to reuse existing tooling, libraries, and code, but I'm not seeing such opportunities. I'm not that familiar with these protocols, but it seems they would have to be extended anyway, so I don't really see the point.