A great deal of energy, hardware and software went into providing that wrong answer.
We should leave AI to the realm of producing fringe/impossible porn, like it was meant for and like what everyone actually wants from it. All this “search engine” stuff is just cover like when you buy some non-lube products like groceries along with the tube of astroglide at 1:00 AM.
If you read the whole thing, it’s not wrong. It just highlighted a part that is wrong when taken out of context
What you’re referring to as “highlighting” here is what most of us consider the thing “answering the question”.
“Where are you from?”
“Connecticut. I was born and raised in Utah …”
That first sentence is the answer to the question.
AI is statistically generated word salad.
Yah I’m so happy every major internet and tech company is deciding to deliberately power every system we use with random word salad generators; there’s no chance it will cause any problems.
So is human speech
No it fucking isn’t lol
Kind of depends on how many nuts you add to yours.
I don’t put any nuts in my mouth, thank you.
They’re the most reliable source of protein. You can crush them up and make a milk out of them too.
I thought this was fake or a bad result or something, but totally just duplicated it. Wow.
If you read the block of text… it doesn’t make sense either.
I expect if you follow the references you’d find one of them to be one of those “if Earth was a grain of sand” analogies.
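If that guess is right, the absurd-sounding number would come from a scale analogy rather than the real figure. A toy computation of what such an analogy involves (a minimal sketch; the 1 mm grain size is a hypothetical value, not something from the thread):

```python
EARTH_DIAMETER_KM = 12_742   # commonly cited mean diameter of Earth
SAND_GRAIN_MM = 1.0          # hypothetical grain-of-sand diameter

# How much an analogy has to shrink Earth to fit it into a grain of sand.
earth_diameter_mm = EARTH_DIAMETER_KM * 1_000_000  # km -> mm
scale = earth_diameter_mm / SAND_GRAIN_MM

print(f"scale factor: {scale:.3g}")  # scale factor: 1.27e+10
```

Numbers produced at a ~10-billion-to-one scale make no sense when quoted out of context as literal measurements, which is exactly the failure mode being described.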
People like laughing at AI but usually these silly-sounding answers accurately reflect the information the search returned.
It’s in the quote that they scaled it.
The point is that the entire alleged value is the ability to parse the reading material and extract the key points, but because it doesn’t resemble intelligence in any way, it isn’t actually capable of meaningfully doing so.
Yes, not being able to distinguish between the real answer and a “banana for scale” analogy is a big problem that shows how fucking useless the technology is.
Except it is capable of meaningfully doing so, just not in every conceivable situation. And those rare flubs are the ones that get spread around and laughed at, such as this example.
There’s a nice phrase I commonly use, “don’t let the perfect be the enemy of the good.” These AIs are good enough at this point that I find them to be very useful. Not perfect, of course, but they don’t have to be as long as you’re prepared for those occasions, like this one, where they give a wrong result. Like any tool you have some responsibility to know how to use it and what its capabilities are.
No, it isn’t.
You’re allowing a simple tool with literally zero reading comprehension to do your reading for you. It’s not surprising your understanding of what the tech is is lacking.
Your comment is simply counterfactual. I do indeed find LLMs to be useful. Saying “no you don’t!” is frankly ridiculous.
I’m a computer programmer. Not directly experienced with LLMs themselves, but I understand the technology around them and have written programs that make use of them. I know what their capabilities and limitations are.
Your claim that it’s capable of doing what it claims isn’t just false.
It’s an egregious, massively harmful lie, and repeating it is always extremely malicious and inexcusable behavior.
I have genuinely found LLMs to be useful in many contexts. I use them to brainstorm and flesh out ideas for tabletop roleplaying adventures, to write song lyrics, to write Python scripts to do various random tasks. I’ve talked with them to learn about stuff, and verified that they were correct by checking their references. LLMs are demonstrably capable of these things. I demonstrated it.
Go ahead and refrain from using them yourself if you really don’t want to, for whatever reason. But exclaiming “no it doesn’t!” in the face of them actually doing the things you say they don’t is just silly.
It probably grabbed the info off some random number-confusing dude like me, who recently posted the Earth’s diameter would be about 6 km instead of 6000.
Edit: oops, did it again. Meant radius, not diameter…
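For anyone keeping score, the mix-up is easy to sanity-check (a minimal sketch; 6371 km is the commonly cited mean radius, the rest follows from it):

```python
# Commonly cited mean radius of Earth, in kilometers.
EARTH_MEAN_RADIUS_KM = 6371

# Diameter is simply twice the radius.
earth_diameter_km = 2 * EARTH_MEAN_RADIUS_KM

# The "6 km" slip is a factor-of-1000 unit error: the radius is
# roughly 6 *thousand* kilometers, i.e. millions of meters.
earth_radius_m = EARTH_MEAN_RADIUS_KM * 1000

print(earth_diameter_km)  # 12742
print(earth_radius_m)     # 6371000
```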
Like every tool, it has its uses…but they are not those being advertised. LLMs are great for things where mistakes don’t detract from the result (or even add to it) like brainstorming, art, music, disinformation…all that good stuff.
That’s what I think too. AI is mainly useful for things that don’t have right or wrong answers.
Although this incorrect answer is obvious, what about all the times where an incorrect answer from AI is not obvious?
@Gsus4 @btaf45 That’s true for AI that has been trained for the general public to provide an answer to any question, meaning it is forced to respond to a prompt even when it is wrong and may even know it is wrong. It just doesn’t know the answer and can’t say so, because that’s commercially bad.
I do believe that for scientific research AI models are much more precise because they have been trained with the right datasets and are tasked with answering specific questions.
So, AI is suited to be a CEO or in marketing…
@jj4211 For sure. I’d even say it is more suited to be a CEO than it is to do specialised work.
brainstorming
Sure thing, but you have to remember to include “no bad ideas” in the prompt for best results.
That’s the point of brainstorming: all ideas are allowed, filter later.
google > bing
The 4th dimension shortcut