So now Proton is completely blocking account creation through their onion address? I have standard protection and JavaScript enabled. Time to switch, for those who use this service, since they're ditching Tor and Switzerland?
Sure, but with all the mistakes I see LLMs making in places where professionals should be quality-checking their work (lawyers, judges, internal company email summaries, etc.), it gives me pause, considering this is a privacy- and security-focused company.
It’s one thing for AI to hallucinate cases, and another entirely to forget there’s a difference between `=` and `==` when the AI bulk-generates code.
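To make that concrete, here’s a minimal C sketch (made-up names, not from any real codebase) of how that one-character slip turns a comparison into an assignment:

```c
#include <stdio.h>

/* Stand-in for a real credential check; hardcoded to fail here. */
static int check_password(const char *input) {
    (void)input;
    return 0; /* 0 = wrong password */
}

int main(void) {
    int authenticated = check_password("wrong-password");

    /* Intended: if (authenticated == 1)
     * The single-character typo below ASSIGNS 1 to authenticated,
     * and the expression evaluates to 1 (true), so the privileged
     * branch runs for everyone. */
    if (authenticated = 1) {
        printf("access granted\n");
    } else {
        printf("access denied\n");
    }
    return 0;
}
```

Compilers like GCC and Clang do warn about this under `-Wall`, but warnings only help if someone actually reads them before the bulk-generated code ships.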
One slip-up like that and my security and privacy could be compromised.

You’re welcome to buy into the AI hype. I remember the dot-com bubble.
We’ve been using ‘AI’ for quite some time now, well before the advent of AI Rice Cookers. It’s really not that new.
I use AI when I master my audio tracks. I am clinically deaf, and there are some frequency ranges I can’t hear well enough to master, so I lean heavily on AI there. I also use AI to explain unfamiliar code to me.

Now, I don’t run and implement such code in a production environment. You have to do your due diligence. If you searched for the same info in a search engine, you’d still have to do your due diligence; search engine results aren’t always authoritative. It’s just that Grok is much faster at searching and, in fact, lists the sources it pulled the info from. Again, much faster than engaging a search engine and slogging through site after site.
If you want to trade accuracy for speed, that’s your prerogative.
AI has its uses: transcribing subtitles, searching images by description, things like that. But too many times I’ve seen AI summaries that, when you actually read the article the AI cited, turn out to be flatly wrong.
What’s the point of a summary that doesn’t actually summarize the facts accurately?
Just because I find an inaccurate search result does not mean DDG is useless. Never trust, always verify.
There it is. The bald-faced lie.
“I don’t blindly trust AI, I just ask it to summarize something, read the output, then read the source article too. Just to be sure the AI summarized it properly.”
Nobody is doing double the work. If you ask AI a question, the answer gets a vibe check at best.
Hey there BluescreenOfDeath, sup. Good to meet you. My name is ‘Nobody’.
It’s easy to post on a forum and say so.
Maybe you really are asking AI questions and then researching whether or not the answers are accurate.
Perhaps you really are the world’s most perfect person.
But even if that’s true, which I very seriously doubt, you’re in the extreme minority. People will ask AI a question, and if they like the answer, they’ll look no further. If they don’t like the answer, they’ll reword the question until the AI gives them the one they want.
There is no cost associated with your disbelief.
You can’t practically “trust but verify” with LLMs. Say I task an LLM with summarizing an article. If I want to check its work, I have to go read the whole article myself, and that checking takes as much time as writing the summary myself would have. It’s even worse with code: you have to deconstruct the AI’s code and work out its internal logic, and by the time you’ve done that, it would have been easier to just write the code yourself.
It’s not that you can’t verify the work of AI. It’s that if you do, you might as well just create the thing yourself.