Coder, Artist, Blogger (https://fungiverse.wordpress.com/, https://philpapers.org/archive/BINAKR.pdf, https://philpapers.org/rec/BINFPT-3, https://pinocchioverse.org/), former admin of https://diagonlemmy.social/, Programmer of MyceliumWebServer
i’m no longer sure if you’re envisioning a web browser or a website builder. your terminology is all over the place.
It’s blurring the line in between. It’s trying to set the interaction with the web at a lower level, closer to the data. It’s like you are live-coding the website you want to use for a specific use case, but then just call the high-level API endpoints right away. Basically, it makes the dev tools and the dev console of browsers the main way to interact with the web (which assumes a web that is built in a similar fashion).
and no, the semantic web is in no way an open, global codebase. it’s just a way of structuring html. i know berners-lee wanted the web to be more like what you are describing but the web we have today is not that. you’d need a new protocol.
Yeah, that’s true :(
I don’t know. Basically, if you already know what you want, maybe you only want to type down a couple of statements (maybe even from a template or a tutorial that you found online), modify some stuff and then hit enter. And maybe this modifying of language could be the “browsing” part of the browser.
If you look at it like this, it would also be immediate and precise. You would only need to add very good code completion tools, e.g. when you click on a noun, you see all the attributes it has in your ontology, much like in an IDE. There you also “browse” the space of all potential programs through the interface of language, with code completion for keywords and defined concepts, which act like links in traditional browsers (a rough sketch follows below).
In contrast, the semantic web is like an open, global code base to which everybody can contribute. And while traditional browsers could not successfully implement a language interface because the code base had no defined semantics, this would be possible for the semantic web. And using LLMs, it could be propagated into other web paradigms.
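To make the code-completion idea a bit more concrete, here is a minimal Python sketch using rdflib; the tiny inline ontology and the Recipe class are made up for illustration, not taken from any real vocabulary:

```python
# A minimal sketch of the "click a noun, see its attributes" idea, assuming
# the ontology is available as RDF (here a tiny inline example instead of a
# real vocabulary). Every property whose rdfs:domain is the clicked class
# becomes a completion candidate, the way keywords do in an IDE.
from rdflib import Graph

ONTOLOGY = """
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <https://example.org/ontology#> .

ex:Recipe      a rdfs:Class .
ex:ingredient  rdfs:domain ex:Recipe ; rdfs:label "ingredient" .
ex:cookingTime rdfs:domain ex:Recipe ; rdfs:label "cooking time" .
"""

g = Graph()
g.parse(data=ONTOLOGY, format="turtle")

# "Clicking" on ex:Recipe triggers a lookup of everything you could say about it
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?property ?label WHERE {
    ?property rdfs:domain <https://example.org/ontology#Recipe> .
    OPTIONAL { ?property rdfs:label ?label }
}
"""

for prop, label in g.query(query):
    print(prop, "->", label)  # each row is one completion the interface could offer
```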
there are already text-based browsers like qutebrowser
hypercard
Awesome! Thanks for the references, didn’t know there were already some applications in this direction
Would be cool to have a link on the original blog. I totally missed that the whole thing moved. But great project in general.
I think it can very well be applied to the Threadiverse.
I think the most pressing issue is sin#7 if applied to communities.
In an abstract sense, I see the Threadiverse as an inversion of Mastodon: instead of posting messages to a personal account, whose tags may be interesting to you for discovering other similar content, in the Threadiverse users post to hashtags, and who posted them is only of secondary importance to you, though it may be used to discover more content by the same account.
Cool. Well, the feedback until now was rather lukewarm. But that’s fine, I’m now going more in a P2P direction. It would be cool to have a way for everybody to participate in the training of big AI models in case HuggingFace enshittifies.
Yeah, that’s a good point. Also, given that nodes could be fairly far apart from one another, this could become a serious problem.
Currently the nodes only recommend music (and are not really good at it tbh). But theoretically, it could be all kinds of machine learning problems (then again, there is the issue with scaling and quality of the training results).
Good point
Yeah, the whole thing was a bit low-effort. Next post will be more professional.
It was just a demo. But when I develop it further, it will be either a client or a whole instance-configurator (hopefully).
It’s similar to what the muni-town/weird-people tried to do, but this time with language.
Thanks :) I guess I shouldn’t have linked it to vibe coding.
Isn’t NodeBB compatible with the Fediverse by now?
What do you think of this? I’m all ears for your thoughts :)
I think a link between your idea and mine can be found in the work of writer Evgeny Morozov (https://mondediplo.com/2024/08/07ai-cold-war), who did some interesting research on alternative forms the internet could have taken, including a project by the Chilean government called “Cybersyn” (covered in his podcast “The Santiago Boys”, https://open.spotify.com/show/7xlRxnooUnl48JVo726YXn). Although it was pretty centralized and not exactly an Amazon, more like a socialist distribution system between industries. Well, it’s a very interesting podcast anyway …
I made a first prototype here: https://github.com/bluebbberry/MyceliumWebServer. It recommends songs to the users. You can see it here: https://techhub.social/@myceliumweb and try it out by posting to #babyfungus on Mastodon.
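For anyone curious how the hashtag interaction roughly works, here is a hedged sketch with Mastodon.py; it is not the actual MyceliumWebServer code, and the token and recommendation function are placeholders:

```python
# Rough sketch of the hashtag interaction, assuming the Mastodon.py library.
# This is not the actual MyceliumWebServer code, just an illustration of how
# a node could pick up #babyfungus posts and answer with a recommendation.
from mastodon import Mastodon

mastodon = Mastodon(
    access_token="YOUR_TOKEN",             # placeholder credentials
    api_base_url="https://techhub.social"  # instance the bot account lives on
)

def recommend(text: str) -> str:
    # Placeholder for the node's actual recommendation model
    return "you could try: https://example.org/some-song"

for status in mastodon.timeline_hashtag("babyfungus"):
    reply = recommend(status["content"])
    mastodon.status_post(
        f"@{status['account']['acct']} {reply}",
        in_reply_to_id=status["id"],
    )
```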
You can do AI in an ethical way by making it more decentralized. The idea behind the mycelial web is to realize it based on volunteer computing, meaning that everybody can contribute computing power. And then I can say, for example: use my model, which was trained together with all these other models, on this Amazon alternative to recommend me stuff. And the AI model was trained on my PC and runs on my PC (it just wasn’t trained solely with my computing power or my data alone).
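One way to picture the volunteer-computing part is a federated-averaging loop like the toy sketch below; this isn’t necessarily how MyceliumWebServer merges models, just an assumed scheme where peers train locally and only exchange weights:

```python
# Toy federated-averaging sketch (an assumption, not the project's actual
# merging scheme): every peer trains on its own private data, and only the
# model weights are exchanged and averaged.
import numpy as np

def local_update(weights, data, labels, lr=0.01):
    # Toy linear model trained on the peer's own machine
    preds = data @ weights
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def merge(peer_weights):
    # Combine what the swarm learned without sharing any raw data
    return np.mean(peer_weights, axis=0)

# Three hypothetical peers, each with private data that never leaves the PC
rng = np.random.default_rng(0)
global_w = np.zeros(4)
peers = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]

for _ in range(10):  # a few federation rounds
    updates = [local_update(global_w.copy(), X, y) for X, y in peers]
    global_w = merge(updates)

print(global_w)
```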
For me, social media clients already act as a kind of browser. Theoretically, if all sites on the web were connected to ActivityPub, you could access the whole web through a social web client. There exist bridges to the semantic web, and of course (regardless of whether this is positive or negative) you also have bots connecting the social web to AI.
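As a small illustration of that point: any ActivityPub actor can already be fetched as structured JSON with one content-negotiated request, so a client only needs code like this (the account is just the demo bot mentioned earlier):

```python
# Small illustration of why a social client can double as a browser: anything
# that speaks ActivityPub can be fetched as structured JSON-LD with a single
# content-negotiated request.
import requests

resp = requests.get(
    "https://techhub.social/@myceliumweb",
    headers={"Accept": "application/activity+json"},
)
actor = resp.json()

# The same client code works for any actor or object on any instance
print(actor["type"], actor.get("preferredUsername"))
print(actor["outbox"])  # link to the account's posts, again as plain JSON
```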