I think the best web page is a photo of a handwritten page of paper; several photos if one has a lot to say.
Today’s bandwidth and powerful computers can easily handle it.
I mean, that would work, and it would be a fun site to visit; but visually impaired users and non-English speakers would have difficulty without alt text, and at that point you may as well just write a regular page. However, as I said, it would be fun.
yeah but also fuck brave
I flip back and forth between Brave and Tor Browser, depending on which one appears less fingerprintable; and I’ve disabled all of the analytics.
The more things you block, the more unique and fingerprintable you become. Blocking JavaScript altogether may mitigate some of that, but you can be fingerprinted even without JS.
Tor is a little better because they make your browser blend pretty well with other Tor browsers, so instead of being unique 1 of 1 you’re more like 1 out of all Tor users.
I haven’t looked into this in a couple years, but that is my takeaway last time I went down the privacy/fingerprint rabbit hole.
I know, and I’m still researching the best way to mitigate this. So far, I’ve come away with the impression that Tor Browser and Brave do the best jobs of minimising fingerprinting, otherwise I would have just disabled JS in Vanadium and called it a day.
(Not talking about a specific browser, just in general) Maybe I’m misunderstanding but when the VPN makes a request for the page information the request isn’t forwarding the browser information is it? So wouldn’t most of that be mitigated there?
As in, the VPN’s server making the request should show when they scrape that information, not the end user. Maybe I’m not understanding that though.
A VPN doesn’t alter the requests your browser is making. It just masks your IP address. So any information about your browser is still sent. The exception would be if your VPN provides some sort of tracker/ad blocking feature where it can block certain requests. But it’s not really a magic switch that prevents websites from tracking you.
Yeah, if it’s sending the data rather than the request coming from the server itself, then having higher security would almost make it easier to track you, because it would make your identifiers more unique if everyone else’s aren’t… That’s food for thought
it’s still owned by a homophobe that loves crypto, and is likely an antivaxxer.
He was run out of Mozilla after only eleven days as CEO, and he helped found it!
the guy is an asshole, and he’s very likely using brave money for evil shit.
As it happens, Brave started crashing for no apparent reason shortly after I posted this, so I’m back on Tor Browser. ¯\_(ツ)_/¯
He’s also one of the inventors of Javascript as a browser feature. I feel like that would matter to OP.
Yes, I’m aware of the irony.
holy shit he’s more evil than i thought
People in this thread who aren’t web devs: “web devs are just lazy”
Web devs: Alright buddy boy, you try making a web site these days with the required complexity with only HTML and CSS. 😆 All you’d get is static content and maybe some forms. Any kind of interactivity goes out the door.
Non web devs: “nah bruh this site is considered broken for the mere fact that it uses JavaScript at all”
Making a static site is a piece of piss. There are even generators on npm.
Not sure that was the issue. I mean more that if you use only HTML and CSS all you’ll be able to create would be static sites that only change the contents of the page by full reloads. 🙂
A lot of this interactivity is complete bullshit, especially on sites that are mostly just static data like news articles; the JS is there for advertising and analytics and social media and other bullshit
News site dev here. I’ll never build a site for this company that relies on js for anything other than video playback (yay hls patents, and they won’t let me offer mp4 as an alternative because preroll pays our bills, despite everyone feeling entitled to free news with no ads)
it sounds like you’re saying there’s an easy solution to get websites that don’t have shit moving on you nonstop with graphics and non-content frames taking up 60% of the available screen
it’s crazy that on a 1440p monitor, I still can’t just see all the content I want on one screen. nope, gotta show like 20% of it and scroll for the rest. and even if you zoom out, it will automatically resize to keep proportion, it won’t show any of the other 80%
I’m not a web dev. but I am a user, and I know the experience sucks.
if I’m looking at the results of a product search and I see five results at a time because of shitty layout, I just don’t buy from that company
I had a bit of trouble following that first paragraph. I don’t understand what it is that you say it sounds like I’m saying.
Either way, none of what you wrote I disagree with. I feel the same. Bad design does not elicit trust.
I’m saying your point about static content being all we would get sounds great
lol, no argument here, to be fair 😄
I’ll take an API and a curl call over JavaScript any day of the week.
If I didn’t input it myself with a punch card I refuse to run it.
I unironically use Lynx from my home lab when I’m ssh’d in since it’s headless. Sometimes at work I miss the simplicity. I used to use Pine for Gmail as well. 😁
😆 that do be what they sound like
That site is literally just static content. Yes JS is needed for interactivity, but there’s none here
If you have static content, then sure, serve up some SSR HTML. But pages with even static content usually have some form of interactivity, like searching (suggestions/auto-complete), etc. 🤷♂️
Search is easier to implement without Javascript than with.
```html
<form method="GET" action="/search">
  <input name="q">
  <input type="submit">
</form>
```
Does that little snippet include suggestions, like I mentioned? Of course it’s easier with less functionality.
Back in my day, we’d take that fully-functional form and do progressive enhancement to add that functionality on top with js. You know, back when we (or the people paying us) gave a fuck.
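Something like this, maybe: a hedged sketch where the form keeps working with no JS at all, and a bit of script layers suggestions on top. The /suggest endpoint and its JSON-array response are made up for illustration.

```html
<form method="GET" action="/search">
  <input name="q" list="q-suggestions" autocomplete="off">
  <datalist id="q-suggestions"></datalist>
  <button type="submit">Search</button>
</form>
<script>
  // Progressive enhancement: this only runs if JS is available; the form
  // above keeps working without it. "/suggest?q=" is a hypothetical endpoint
  // assumed to return a JSON array of suggestion strings.
  const input = document.querySelector('input[name="q"]');
  const list = document.getElementById('q-suggestions');
  input.addEventListener('input', async () => {
    if (input.value.length < 2) return;
    const res = await fetch('/suggest?q=' + encodeURIComponent(input.value));
    const terms = await res.json();
    list.replaceChildren(...terms.map(t => new Option(t, t)));
  });
</script>
```

If the script fails or JS is off, the user just submits the form the old-fashioned way.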
It’s not about using js or not, it’s about failing gracefully. An empty page instead of a simple written article is not acceptable.
An empty page isn’t great, I would indeed agree with that.
I can do it but it’s hard convincing clients to double their budget for customers with accessibility needs they’re not equipped to support in other channels.
That being said, my personal sites and projects all do it. And I’m thankful for accessible website laws where I’m from that make it mandatory for companies over a certain size to include accessible supports that need to work when JS is disabled.
What country or area would that be?
And what do you mean by “do it”? What is it exactly that you do or make without JavaScript?
Some provinces in Canada have rules that businesses’ websites must meet or exceed the WCAG 2.0 accessibility guidelines when they exceed a certain employee headcount, which includes screen reader support that ensures all content must be available to a browser that doesn’t have JavaScript enabled.
Also the EU and technically a lot of US sites that provide services to or for the government have similar requirements. The latter is largely unenforced though unless you’re interacting with states that also have accessibility laws.
And honestly a ton of sites that should be covered by these requirements just don’t care or get rubber stamped as compliant. Because unless someone actually complains they don’t have a reason to care.
I kind of thought the EU requirements that have some actual penalties would change this indifference but other than some busy accessibility groups helping people that already care, I haven’t heard a lot about enforcement that would suggest it’s actually changed.
That’s excellent.
And what do you make that doesn’t include JavaScript? Like what kind of software/website/content? If you don’t mind sharing, of course.
Mostly marketing and informational websites for the public. Businesses, tourism spots, local charities and nonprofits, etc. Nothing that’s going to change the world but hopefully makes somebody’s day a little easier when they need to look something up.
Good stuff!
It doesn’t have to not include JavaScript, that would be quite difficult and unreasonable. Accessible sites are not about limiting functionality but providing the same functionality.
I haven’t gone fully down the rabbit hole on this but my understanding is even something like Nuxt if you follow best practices will deliver HTML that can be interacted with and serve individual pages.
That said, screen readers and other support shouldn’t require running without any JavaScript. Having used them to test sites that might be the smart approach but they actually have a lot of tools for announcing dynamic website changes that are built into ARIA properties at the HTML level so very flexible. There are of course also JavaScript APIs for announcing changes.
They just require additional effort and forethought to implement and can be buggy if you do really weird things.
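As a rough illustration of the HTML-level side of that, a hedged sketch of an aria-live region; the id and the message are made up:

```html
<!-- A polite live region: screen readers announce whatever text lands in here -->
<div id="cart-status" aria-live="polite"></div>
<script>
  // After some dynamic update, write the new status into the region.
  // No focus change is needed; the announcement is handled by the browser
  // and the assistive tech.
  document.getElementById('cart-status').textContent = '3 items in your cart';
</script>
```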
Ehhhhh it kinda’ depends. Most things that are merely changing how something already present on the page is displayed? Probably don’t need JS. Doing something cool based on the submit or response of a form? Probably don’t need JS. Changing something dynamically based off of what the user is doing? Might not need JS!
Need to do some computation off of the response of said form and change a bunch of the page? You probably need JS. Need to support older browsers simply doing all of the previously described things? Probably need JS.
It really, really depends on what needs to happen and why. Most websites are still in the legacy support realm, at least conceptually, so JS sadly is required for many, many websites. Not that they use it in the most ideal way, but few situations are ideal in the first place.
A lot of this is just non-tech savvy people failing to understand the limitations and history of the internet.
(this isn’t to defend the BS modern corporations pull, but just to explain the “how” of the often times shitty requirements the web devs are dealing with)
Virtually any form validation besides the basics HTML provides is enough to require JS, and input validation (paired with server-side validation ofc) saves both user frustration and bandwidth
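For reference, a hedged sketch of roughly what those HTML basics get you without any JS (the /signup action is made up):

```html
<form method="POST" action="/signup">
  <!-- The browser enforces these constraints before the form will submit -->
  <label>Email <input type="email" name="email" required></label>
  <label>Password <input type="password" name="pw" minlength="8" required></label>
  <button type="submit">Sign up</button>
</form>
<!-- Anything beyond this (cross-field checks, async "is this username taken?"
     lookups, custom error UI) is where client-side JS comes in. -->
```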
Of course it depends, like all things. But in my mind, there’s a few select, very specific types of pages that wouldn’t require at least a bit of JavaScript these days. Very static, non-changing, non-interactive. Even email could work/has worked with HTML only. But the experience is severely limited and reduced, of course.
Stop, can only get so erect. Give me that, please, rather than the bullshit I have to wade through today to find information. When is the store open? Email address/phone. Like fuck if I want to engage
😆 F—ck, I hear you loud and clear on that one. But that’s a different problem altogether, organizing information.
People suck at that. I don’t think they ever even use their own site or have it tested on anyone before shipping. Sometimes it’s absolutely impossible to find information about something, like what a product even is or does. So stupid.
You can say fuck on the internet
I also have the right to self-censor myself for effect. 👍👍
I would argue that a lot of this scripting can and should be done server side.
That would make the website feel ultra slow since a full page load would be needed every time. Something as simple as a slide out menu needs JavaScript and couldn’t really be done server side.
And if you said to just send the parts of the page that changed, that dynamic content loading would still be JavaScript. Maybe an iframe could get you somewhere but that’s a hacky workaround and you couldn’t interact between different frames
a slide out menu needs JavaScript
A slide out menu can be done in pure CSS and HTML. Imho, it would look bad regardless.
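For example, a hedged sketch of the classic checkbox-hack version (the styles would normally live in your stylesheet; names and content are made up):

```html
<input type="checkbox" id="nav-toggle" hidden>
<label for="nav-toggle">☰ Menu</label>
<nav id="nav">
  <a href="/">Home</a>
  <a href="/menu">Menu</a>
  <a href="/contact">Contact</a>
</nav>
<style>
  #nav {
    position: fixed;
    top: 0;
    left: 0;
    height: 100%;
    padding: 1rem;
    background: #eee;
    transform: translateX(-100%);
    transition: transform 0.2s ease;
  }
  /* Checking the (hidden) checkbox slides the menu in; unchecking slides it out */
  #nav-toggle:checked ~ #nav {
    transform: translateX(0);
  }
</style>
```

A plain details/summary element gets you most of the way there too, with even less fuss.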
And if you said to just send the parts of the page that changed, that dynamic content loading would still be JavaScript
OP is trying to access a restaurant website that has no interactivity. It has a bunch of static information, a few download links for menu PDFs, a link to a different domain to place an order online, and an iframe (to a different domain) for making a table reservation.
The web dev using javascript on that page is lazy, yet also creating way more work for themself.
https://htmx.org/ solves the problem of full page loads. Yes, it’s a JavaScript library, but it’s a tiny JS library (14k over the wire) that is easily cached. And in most cases, it’s the only JavaScript you need. The vast majority of content can be rendered server side.
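A hedged sketch of what that looks like in practice; the self-hosted script path and the /news/page/2 endpoint are made up, and the server is assumed to return an HTML fragment:

```html
<script src="/js/htmx.min.js"></script>

<div id="articles">
  <!-- server-rendered articles go here -->
</div>

<!-- Fetches more server-rendered HTML and appends it, no full page reload -->
<button hx-get="/news/page/2" hx-target="#articles" hx-swap="beforeend">
  Load more
</button>
```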
While fair, now you have to have JavaScript enabled in the page, which I think was the point. It was never about having only a little bit. It was that you had to have it enabled
Yes, it is unfortunate that this functionality is not built-in to HTML/browsers to begin with. The library is effectively a patch for the deficiencies of the original spec. Hopefully it can one day be integrated into HTML proper.
Until then, HTMX can still be used by browsers that block third party scripts, which is where a lot of the nasty stuff comes from anyway. And JS can be whitelisted on certain sites that are known to use it responsibly.
So, your site still doesn’t work without JS but you get to not use all the convenience React brings to the table? Boy, what a deal! Maybe you should go talk to Trump about those tariffs. You seem to be at least as capable as Flintenuschi!
JS is just a janky hotfix.
As it was, HTML was all sites had. When these were called “ugly”, CSS was invented for style and presentation stuff. When the need for advanced interactivity arose (not doable on Internet speeds of 20-30 years ago), someone just said “fuck it, do whatever you want” and added scripting to browsers.
The real solution came in the form of HTML5. You no longer needed, and I can’t stress this enough, Flash to play a video in-browser. For other things as well.
Well, HTML5 is over 15 years old by now. And maybe the time has come to bring new functionality into either HTML, CSS or a new, third component of web sites (maybe even JS itself?)
Stuff like menus. There’s no need for them to be limited by the half-assed workaround known as CSS pseudoclasses or for every website to have its own JS implementation.
Stuff like basic math. HTML has had forms since forever. Letting it do some more, like counting down, accessing its equivalent of the Date and Math classes, and tallying up a shopping cart on a webshop seems like a better fix than a bunch of frameworks.
Just make a standardized “framework” built directly into the browser - it’d speed up development, lower complexity, reduce bloat and increase performance. And that’s just the stuff off the top of my head.
Something as simple as a slide out menu needs JavaScript and couldn’t really be done server side.
I’m not trying to tell anyone how to design their webpages. I’m also a bit old fashioned. But I stopped making animated gimmicks many years ago. When someone is viewing such things on a small screen, in landscape mode, it’s going to be a shit user experience at best. That’s just my 2 cents from personal experience.
I’m sure there are examples of where js is necessary. It certainly has its place. I just feel like it’s overused. Now if you’re at the mercy of someone else that demands x y and z, then I guess you gotta do what you gotta do.
If you want to zoom into a graph plot, you want each wheel scroll tick to be sent to the server to generate a new image and a full page reload?
How would you even detect the mouse wheel scroll?
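Concretely, it’s a few lines of client-side script, sketched here with a made-up image path:

```html
<img id="plot" src="/plot.png" alt="Graph of the data">
<script>
  // Without JS the browser never tells the page about wheel input at all.
  // With it, the zoom stays client-side instead of a round trip per tick.
  let zoom = 1;
  document.getElementById('plot').addEventListener('wheel', (e) => {
    e.preventDefault(); // keep the page from scrolling while zooming
    zoom *= e.deltaY < 0 ? 1.1 : 0.9;
    e.target.style.transform = `scale(${zoom})`;
  });
</script>
```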
All interactivity goes out the door.
“nah bruh this site is considered broken for the mere fact that it uses JavaScript at all”
A little paraphrased, but that’s the gist.
Isn’t there an article just today that talks about CSS doing most of the heavy-lifting java is usually crutched to do?
I did webdev before the framework blight. It was manual PHP, it was ASP, it was soul-crushing. That’s the basis for my claim that javascript lamers are just lazy, and supply-chain exploits waiting to manifest.
CSS doing most of the heavy-lifting java is usually crutched to do
JavaScript you mean? Some small subset of things that JavaScript was forced to handle before can be done in CSS, yes, but that only goes for styling and layout, not interactivity, obviously.
I did webdev before the framework blight. That’s the basis for my claim that javascript lamers are just lazy
There is some extremely heavy prejudice and unnecessary hate going on here, which is woefully misdirected. We’ll get to that. But the amount of time that has passed since you did web dev might put you at a disadvantage to make claims about web development these days. 👍
Anyway. Us JavaScript/TypeScript “lamers” are doing the best with what we’ve got. The web platform is very broken and fragmented because of its history. It’s not something regular web devs can do much about. We use the framework or library that suits us best for the task at hand and the resources we are given (time, basically). It’s not like any project will be your dream unicorn project where you get to decide the infrastructure from the start or get to invent a new library or a new browser to target that does things differently and doesn’t have to be backwards compatible with the web at large. Things don’t work this way.
Don’t you think we sigh all day because we have to monkey patch the web to make our sites behave in the way the acceptance criteria demand? You call that lazy, but we are working our knuckles to the bone to make things work reasonably well for as many people as we can, including accessibility for those with reduced function. It’s not an easy task.
… “Lazy.” I scoffed in offense, to be honest with you.
It’s like telling someone who made bread from scratch they’re lazy for not growing their own wheat, ffs.
Let’s see you do better. 👍👍👍👍👍👍
Skill issue - on the devs side.
A lot of pages even fail if you only disable 3rd-party scripts (my default setting on mobile).
I consider them broken, since the platform’s job is to render a Document Object Model; scripting is secondary functionality and having no fallbacks is bad practice.
Imagine if that were a pdf/epub.
no fallbacks is bad practice.
This is how you know they’re extra lazy – no “please enable javascript because we suck and have no noscript version”.
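The bare minimum being something like this hedged sketch (wording invented):

```html
<noscript>
  <p>Sorry, this page needs JavaScript to display anything at all.</p>
</noscript>
```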
people who don’t know what graceful degradation is make me sad
It reminds me of flash when it first gained popularity.
“Please enable flash so you can see our unnecessary intro animation and flash-based interface” at, like, half of local restaurant websites
wild thing is that with modern css and local fonts (nerdfonts, etc), you can make a simple page with a modern grid and nested css without requiring a single third party library or js.
devs are just lazy.
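To be concrete about the modern-CSS point above, a hedged sketch of a framework-free, JS-free layout; the markup and content are made up, and native CSS nesting assumes a reasonably current browser:

```html
<style>
  /* A responsive card grid with nested CSS: no framework, no JS */
  main {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(20rem, 1fr));
    gap: 1rem;

    & article {
      border: 1px solid #ccc;
      padding: 1rem;

      & h2 { margin-top: 0; }
    }
  }
</style>
<main>
  <article><h2>Post one</h2><p>…</p></article>
  <article><h2>Post two</h2><p>…</p></article>
</main>
```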
Devs are lazy but also product people and design request stuff that even modern CSS cannot do
devs are just lazy.
*cost-efficient. At this point it’s a race to the bottom.
and it’s not even the devs. It’s the higher-ups forcing them to do shit that won’t work.
Yep, burnout-rate increasing, companies favoring quantity over quality.
Have you ever built anything more complex with CSS and HTML5? It is a massive pain
if I’m building something complex I’m using an actual server-side language, not a JavaScript framework
You use both
Personally, I love server-side rendering, I think it’s the best way to ensure your content works the way YOU built it. However, offloading the processing to the client saves money, and makes sense if you’re also planning on turning it into an electron app.
I feel it’s better practice to use a DNS that blocks traffic for known telemetry and malware.
Personally, I used to blacklist all scripts and turn them on one at a time till I had the functionality I needed.
please never turn anything into an electron app
But they’re not pdf/e-pub, they’re live pages that support changing things in the DOM dynamically. I’m sorry, I’m not trying to be mean, but people not wanting scripting on their sites are a niche inside a niche, so in terms of prioritising fixes, that’s a very small audience with a very small ROI, and fixing it might require a huge rewrite. It’s just not financially feasible for not much of a reason other than puritan ones.
Simpler websites have some advantages, like less work to maintain, and responsiveness and accessibility by default.
Sure, for what already exists, that is. It already starts at choosing the frameworks.
All modern browsers have Javascript enabled by default. A good dev targets and tests for mainstream systems.
I have 13 sites whitelisted to allow JS. The internet is fairly usable for me without JS.
Same. This is the way.
because modern webdevs cant do anything without react
It’s like JavaScript is used way over its reasonable use cases and you need a thick layer of framework indirection to be able to do anything, and yet still sucks.
There are plenty of modern frameworks, most of which are better. Even the lightweight ones need JavaScript.
I’m a webdev. I agree. I like react.
Reactjs exists mostly due to how well known it is
I disagree, I did fullstack for years without react, I used the much superior Vue.js
I like React, but Svelte really hits the spot. But no matter what framework you use, let’s all be glad that we’re not like those reality averse people complaining in this thread 🙏
True, lol
because
~~modern~~ young/unskilled webdevs cant do anything without react
Yes.
Many people won’t even know what we’re talking about; to them it’s like saying “the sheer amount of websites that are unusable without HTML”. But I use uBlock Origin in expert mode and block js by default; this allows me to click on slightly* fishy links without endangering my setup or immediately handing my data over to some 3rd party.
So I’m happy to see news websites that do not require js at all for a legible experience, and enraged that others even hide the fucking plain text of the article behind a script. Even looking at the source code does not reveal it. And I’m not talking about paywalls.
* real fishy links go into the Tor browser, if I really want to see what’s behind them.
Said it on a top-level comment as well, but I use “medium mode” on uBlock (weirdly not advertised, but easy enough to enable: https://github.com/gorhill/ublock/wiki/Blocking-mode:-medium-mode). I’ve found it to be a good middle ground between expert mode which is basically noscript, and rawdogging it.
If I encounter a site that I can’t visit unless I enable JS, then I leave.
I use uBlock medium mode, and if I can’t get a website to work without having to enable JavaScript, then I just leave the website.
I generally do the same. In fact, on desktop, uBO is set to hard mode. Unfortunately, I do need to access these sites from time to time.
If I wanted to write a site with js-equivalent functionality and ux without using js, what would my options be?
WASM and cry because you can’t directly modify the DOM without JS.
You can’t modify the DOM.
But most dynamicity can stay - sites can be built freely server-side, and even some “dynamic” functionality like menus can be made using css pseudoclasses.
Sure, you won’t have a Google Docs or Gmail webapp, but 90% of stuff doesn’t actually need one.
A basic website doesn’t require js.
A webshop, for example, does for the part around adding to cart and checkout - but it doesn’t for merely browsing.
For a web store you probably only need Javascript for payment processing. Insofar as I’ve seen pretty much all of the widgets provided by the card processors outright require Javascript (and most of them are also exceedingly janky, regardless of what they look like on the outside to the user).
You definitely don’t need Javascript just for a shopping cart, though. That can all be done server side.
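A hedged sketch of the no-JS cart flow; the paths, field names, and redirect behaviour are assumptions, not any particular shop’s API:

```html
<form method="POST" action="/cart/add">
  <input type="hidden" name="sku" value="MUG-001">
  <label>Qty <input type="number" name="qty" value="1" min="1"></label>
  <button type="submit">Add to cart</button>
</form>
<!-- The server updates the cart in the session and redirects back to the
     product page (or to /cart), so the whole flow works with JS disabled. -->
```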
You can’t use web assembly without JavaScript to initialize it.
HTML and CSS can do quite a lot, and you can use PHP or cgi-bin for some scripting. Of course, it’s not a perfect alternative. JavaScript is sometimes the only option; but a website like the one I was trying to use could easily have just been a static site.
The problem is that HTML and CSS are extremely convoluted and unintuitive. They are the reason we don’t have more web engines.
htmx or equivalent technologies. The idea is to render as much as possible server side, and then use JS for the things that can’t be rendered there or require interactivity. And at the very least, serve the JS from your server, don’t leak requests to random CDNs.
Htmx requires JS. At that point you already failed in the eyes of the purists. And CDNs exist for a reason. You can’t expect a website to guarantee perfect uptime and response times without the use of CDNs. And don’t get me started on how expensive it would be to host a globally requested website without a CDN. That’s a surefire way to get a million dollar bill from amazon!
I mean you could build a site in next.js, ironically. Which is very counter intuitive because it literally is js you are writing, but you can write it to not do dynamic things so it effectively would be a static server rendered site that, if js is enabled, gets for free things like a loader bar and quick navigation transitions. If js is disabled it functions just like a standard static site.
I just use NOSCRIPT to do this and it’s annoying to visit websites that need Javascript, but it’s handy with noscript cause I just turn on the Javascript the website needs for functionality (this should also speed up load times)
Sometimes, if I’m using a browser without extension support (like GNOME Web), I just disable Javascript on websites or frontends that don’t need it, like Invidious. If I’m facing issues, I just add any site that breaks without js to my “sites to eradicate” adguard filter and send it to /dev/null
🤨 The sheer NUMBER of websites, you mean? Yeah, that is sometimes annoying.
Ok boomer
As a web developer, I see js as a quality improvement. No page reloads, nice smooth ui. Luckily, the PHP era has ended, but even in the PHP era disabling jQuery could cause problems.
We could generate static html pages. It just adds complexity.
Personally I use only client-side rendering, and I think that’s the best from a dev perspective. Easy setup, no magic, nice ui. And that results in a blank page when you disable js.
If your motivation is to stop tracking:
- replace all foreign-domain sources with file URIs, e.g. load Google Fonts from a local cache.
- disable all foreign script files unless they’re legitimate, like js packages from public CDNs, in which case load them from a local cache.
If your motivation is to see old html pages, with minimal style, well it’s impossible to do them reliably. If you are worried about closed-source js. You shouldn’t be. It’s an isolated environment. If something is possible for js and you want to limit its capability, contribute to browsers. That’s the clear path.
I can be convinced. What’s your motivation?
If your motivation is to see old html pages, with minimal style, well it’s impossible to do them reliably.
Not only should your site be legible without JS, it should be legible without CSS, and in fact without rendering the effects of the HTML tags (plain text after stripping the tags).
At one point in time this was the standard, that each layer was an enhancement on top of the one below it. It seems that web devs now cannot even imagine writing a news article or a blog post as something that has the entirety of its content contained within its text. A plain .txt file renders “reliably” on anything. You are the one adding extra complexity in there and then complaining that you’re forced to add even more to deal with the consequences of your actions.
What I meant is that you cannot turn any existing webpages to a basic page with some simple tricks like disabling js. That would be a never-ending fight.
You are the one adding extra complexity
I’m not the one defining the business requirement. I could build a site with true progressive enhancement. It’s just extra work, because the requirement is a modern page with actions, modals, notifications, etc.
There are two ways I can fulfill this. SSR with scripts that feel like hacks. Or CSR. I choose CSR, but then progressive enhancement is extra work.
Luckily, the PHP era has ended
I guess I earn my living with nothing then. What an absurd take. PHP powers WordPress, Shopware, Typo3 and many other CMS systems and is still very strong. Especially in Europe.
(Apart from that, a lot of people shitting on PHP base it on outdated knowledge or have never used it at all. With modern OOP practices, you can write really clean code.)
I was developing with Laravel until 2019. I agree that you can write clean code with it. Still, there are many better options nowadays. I switched to nodejs because I can use typescript for both backend and frontend, and I’m happier with it. Although js is not a great language, typescript is almost perfect. But it’s not only me who switched; people are ditching php because there are better options.
This community is full of older people who have never done modern development
It suggests using minimal js, I use react the same way, whatever I can do with css, I do it with css. But I am not going to footgun myself. I start the app with react because at some point I will need react.
Fuck yeah!
Bookmarked for future use. CSS has developed a lot since I started getting acquainted with it.
I didn’t read it completely, is browser coverage addressed in the article?
The only non-heated comment. I appreciate it. I will read it.
The only non-heated comment.
You mean people replying to you? I wouldn’t call those heated, rather derisive. Just like your own original comment. You come across as presumptuous and pretending to be more knowledgeable than you really are. People react.
If your motivation is to see old html pages, with minimal style
Huh? i just want to see a web page. Usually a news article, i.e. text with few styling elements. In other words, HTML.
For most use cases JS is not required.
well it’s impossible to do them reliably
Huh again? Why?
If you are worried about closed-source js.
Isn’t it always open, i.e. one can read the script the browser loads if one is so inclined? No, that’s not the point at all. JS increases the likelihood of data mining, by orders of magnitude. And most addons that block js also block 3rd party requests generally.
Use as much js as you like (most third party stuff is not really up to the web dev anyhow), but the page must always fail gracefully for those who do not like it, or browse the web in some non-standard way. An empty page is not an option.
Please also read some of the other (top level) comments here.
You were completely fine with slow page reloads blinding you when the theme was dark. I’m speaking to those who appreciate modern tech.
But anyways, unfortunately javascript obfuscation is a common thing.
Obfuscation, OK.
Look, I’m willing to have a conversation with you, but you need to address my points first, that is if you want one too.
I can’t take it seriously because of the noise in your text like “Huh?”. If you like to have a conversation, please be more open next time.
Source code is the code before some kind of transpilation. Obfuscated code is not source code.
I get it, you just need the content. But why would you reload the page when you’re just about to get the next news in the page. Isn’t it better to just update that part?
Why is it “impossible to do them reliably” - without js presumably?
why would you reload the page when you’re just about to get the next news in the page. Isn’t it better to just update that part?
Sounds like you’re thinking about web apps, when most people here think about web pages.
Why is it “impossible to do them reliably” - without js presumably?
What I meant is that you cannot turn any existing webpages to a basic page with some simple tricks like disabling js. That would be a never-ending fight.
In your words I hear that as a web dev, you rely 100% on javascript.
even in the PHP era disabling jQuery could cause problems.
WTF. Do you think jQuery is what JavaScript used to be called or something? Pretty much everything you wrote is insane, and I specifically think that because I’ve been building webpages for 25 years. You sure never heard of progressive enhancement.
It seems you misunderstood me.
There were horrible tricks and hacks that were adding not only ux improvements but useful content. We used jquery for many of those things. That’s why I wrote it, and for the legacy vibe.
Disabling js would have broken that site as well, reinforcing my point that it was never a reliable solution to disable js.
As a web dev, and primarily user, I like my phone having some juice left in it.
The largest battery hog on my phone is the browser. I can’t help wonder why.
I’d much rather wait a second or two rather than have my phone initialize some js framework 50 times per day.
Dynamic HTML can be done - and is - server-side. Of course, not using a framework is harder, and all the current ones are client-side.
Saying making unbloated pages is impossible to do right just makes it seem like you’re ill informed.
On that note - “Closed-source” JS doesn’t really exist (at least client-side) - all JS is source-available in-browser - some may obfuscate, but it isn’t a privacy concern.
The problem is that my phone does something it doesn’t have to.
Having my phone fetch potentially 50 MB (usually 5-15) for each new website is a battery hog. And on a slow connection - to quote your words, “great UX”.
The alternative is a few KB for the HTML, CSS and a small amount of tailor-made JS.
A few KB which load a hundred times faster, don’t waste exorbitant amounts of computing power - while in essence losing nothing over your alternative.
“Old pages with minimal style” is a non-sequitur. Need I remind you, CSS is a thing. In fact, it may be more reliable than JS; since it isn’t Turing-complete, it’s much simpler for browser interpreters to not fuck it up. Also, not nearly the vulnerability vector JS is.
And your message for me and people like me, wanting websites not to outsource their power-hogging frameworks to my poor phone?
Go build your own browser.
What a joke.
You can build some very light pages with JavaScript. JavaScript isn’t the issue, it is the large assets.
Who said making unbloated pages is impossible? Your comment would be more serious without your emotions.
Source code is the code which gets transformed into some target code. Obfuscated code is not source code.
A reminder, in the past, large pages downloaded all stuff at once. In contrast, with dynamic imports the first load is much much faster. And that matters most. And any changes in dynamic content would just require the dynamic data to be downloaded. My phone lasts at least 2 days with one charge (avg usage), but I charge it every night, that’s not an issue.
Source code is the code devs write.
For compiled languages like C, only the compiled machine code is made available to the user.
JS is interpreted, meaning it doesn’t get compiled, but an interpreter interprets source code directly during runtime.
Obfuscated code, while not technically unaltered source code, is still source code. Key word being unaltered. It isn’t unaltered source code by virtue of not being straight from the source (i.e. because it’s altered).
However, obfuscated code is basically source code. The only things to obfuscate are variable and function names, and perhaps some pre-compile order of operations optimizations. The core syntax and structure of the program has to remain “visible”, because otherwise the interpreter couldn’t run the code.
Analyzing obfuscated code is much closer to analyzing source code than reverse-engineering compiled binaries.
It may not be human-readable. But other programs and systems can analyze it (as they can even with compiled code), and more importantly - they can alter it in a trivial manner. Because it’s source code with basically the names censored out. Which makes evaluating the code only a bit harder than if it were truly “closed-source”.
That’s why website source code is basically almost source-available.
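A hedged before/after to make that concrete; the function and all the names are invented:

```html
<script>
  // What the dev wrote:
  function addToCart(item, quantity) {
    return { item: item, quantity: quantity };
  }

  // What ships after minification/obfuscation: the names are gone, but the
  // structure the interpreter needs is still all there, readable and
  // trivially modifiable by other tools.
  function a(b, c) { return { item: b, quantity: c }; }
</script>
```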
A reminder, in the past, large pages downloaded all stuff at once. In contrast, with dynamic imports the first load is much much faster. And that matters most. And any changes in dynamic content would just require the dynamic data to be downloaded.
Unfortunately, you’re very mistaken.
In the past, pages needed to download any stuff they want to display to the user. Now, here’s the kicker: that hasn’t changed!
Pages today are loaded more dynamically and sensibly. First basic stuff (text), then styles, then scripts, then media.
However, it’s not Angular, React, Bootstrap or any other framework doing the fetching. It’s the browser. Frameworks don’t change that. What they do, instead, is add additional megabytes of (mostly) bloat to download every day or week (depending on the timeout).
Any web page gets HTML loaded first, since the dawn of the Web. That’s the page itself. Even IE did that. At first, browsers loaded sequentially, but then they figured out it’s better UX to load CSS first, then the rest. Media probably takes precedence over frameworks as well (because that’s what the user actually sees).
Browsers are smart enough to cache images themselves. No framework can do it even if it wanted to because of sandboxing. It’s the browser’s job.
What frameworks do is make devs’ lives easier. At the cost of performance for the user.
That cost is multiple-fold: first the framework has to load. In order to do that, it takes bandwidth, which may or may not be a steeply-priced commodity depending on your ISP contract. Loading also takes time, i.e. waiting, i.e. bad UX.
Other than that, the framework needs to run. That uses CPU cycles, which wastes power and lowers battery life. It’s also less efficient than the browser doing it because it’s a higher level of abstraction than letting the browser do it on its own.
With phones being as trigger-happy about killing “unused” apps, all the frameworks in use by various websites need to spin up from being killed as often as every few minutes. A less extreme amount of “rebooting” the framework happens when low-powered PCs run out of RAM and a frameworked site is chosen by the browser to be “frozen”.
What a framework does is, basically, fill a hole in HTML and CSS - it adds functionality needed for a website which is otherwise unattainable. Stuff like cart, checkout, some complex display styles, etc.
All of this stuff is fully doable server-side. Mind you, login is so doable it didn’t even slip onto my little list. It’s just simpler to do it all client-side for the programmer (as opposed to making forms and HTML requests that much more often, together with the tiny UX addition of not needing to wait for the back-and-forth to finish).
Which itself isn’t really a problem. In fact, the “white flashes” are more common on framework sites than not.
When a browser loads any site, it loads HTML first. That’s “the site”. The rest is just icing on the cake. First is CSS, then media and JS (these two are heavily browser dependent as far as load priority goes).
Now comes the difference between “classic”, “js-enhanced” and “fully js-based” sites.
A classic site loads fast. First HTML. The browser fetches the CSS soon enough, not even bothering to show the “raw HTML” for a few hundred milliseconds if the CSS loads fast enough. So the user doesn’t even see the “white flash” most of the time, since networks today are fast enough.
As the user moves through different pages of the site, the CSS was cached - any HTML page wishing to use the same CSS won’t even need to wait for it to load again!
Then there’s the js-enhanced site. It’s like the classic site, but with some fancy code to make it potentially infinitely more powerful. Stuff like responsive UIs and the ability to do fancy math one would expect of a traditional desktop/native app. Having JS saves sending every little thing that needs some computation to the server, when the browser can do it. It’s actually a privacy benefit, since a lot less things need to leave the user’s device. It can even mend its HTML, its internal structure and its backbone to suit its needs. That’s how powerful JS is.
But, as they say, with great power comes great responsibility. The frameworked-to-hell site. Initially, its HTML is pretty much empty. It’s less like ordering a car and more like building a house. When you “buy the car” (visit the site), it has to get made right in front of your eyes. Fun the first few times, but otherwise very impractical.
A frameworked site also loads slower by default - the browser gets HTML first, then CSS. Since there’s no media there yet, it goes for the JS. Hell, some leave even CSS out of the empty shell of the page when you first enter so you really get blasted by the browser’s default (usually white, although today theme-based) CSS stylesheet. Only once the JS loads the framework can the foundation of the site (HTML) start being built.
Once that’s been built, it has CSS, and you no longer see the white sea of nothing.
As you move through pages of the site, each is being built in-browser, on-demand. Imagine the car turning into a funhouse where whenever you enter a new room, the bell rings. An employee has to hear it and react quickly! They have to bring the Build-A-Room kit quickly and deploy it, lest you leave before that happens!
Not only is that slow and asinine, it’s just plain inefficient. There’s no need for it in 99% of cases. It slows stuff down, creates needless bandwidth, wastes needless data and wastes energy.
There’s another aspect to frameworked sites’ inefficiency I’d like to touch.
It’s the fact that they’re less “dynamic” and more “quicksand”.
They change. A lot. Frameworks get updates, and using multiple isn’t even unheard of. Devs push updates left and right, which are expected to be visible and deployed faster than the D-Day landings.
Which in practice means that max resource age is set very low. Days, maybe even hours. Which means, instead of having the huge little 15 MB on-average framework fetched once a week or month, it’s more like 4 to dozens of times per week. Multiply by each site’s preferred framework and version, and add to that their own, custom code which also takes up some (albeit usually less-than-framework) space.
That can easily cross into gigabytes a month. Gigabytes wasted.
Sure, in today’s 4K HDR multimedia days that’s a few minutes of video, but it isn’t 0 minutes of nothing.
My phone also reliably lasts a day without charge. It’s not about my battery being bad, but about power being wasted. Do you think it’s normal that, checking battery use, Chrome used 64% according to the Android settings?
You bet I tried out Firefox the very same day. Googling for some optimizations led me down a privacy rabbit-hole. Today I use Firefox, and battery use fell from 64% to 24%. A 40 percentage point decrease! I still can’t believe it myself!
I admit, I tend to use my phone less and less so my current 24% may not be the best metric, but even before when I did, the average was somewhere between 25% and 30%.
There’s a middle-ground in all of this.
Where the Web is today is anything but.
The old days, while not as golden as they might seem to me, are also not as brown as you paint them out to be.