The worst is in the workplace. When people routinely tell me they looked something up with AI, I now have to assume that I can’t trust what they say any longer, because there is a high chance they are just repeating some AI hallucination. It is really a sad state of affairs.
I am way less hostile to GenAI (as a tech) than most, and even I’ve grown to hate this scenario. I am a subject matter expert on some things, and I’ve still had people waste my time making me prove their AI hallucinations wrong.
I’ve started seeing large AI-generated pull requests in my coding job. Of course I have to review them, and the “author” doesn’t even warn me it’s from an LLM. It’s just allowing bad coders to write bad code faster.
Do you also check if they listen to Joe Rogan? Fox News? Nobody can be trusted. AI isn’t the problem; it’s that it was trained on human data, and people are an unreliable source of information.
AI also just makes things up. Like how RFK Jr.’s “Make America Healthy Again” report cites studies that don’t exist and never have, or literally a million other examples. You’re not wrong about Fox News and how corporate and Russian-backed media distorts the truth and pushes false narratives, and you’re not wrong that AI isn’t the problem, but it is certainly a problem, and a big one at that.
SO DO PEOPLE.
Tell me one thing that AI does that people themselves don’t also commonly do each and every day.
Real researchers make up studies to cite in their reports? Real lawyers and judges cite fake cases as precedents in legal proceedings? Real doctors base treatment plans on white papers they completely fabricated in their heads? Yeah, I don’t think so, buddy.
But but but . . . !!!
AI!!
I think they’re saying that the kind of people who take LLM-generated content as fact are the kind of people who don’t know how to look up information in the first place. Blaming the LLM for it is like blaming a search engine for showing bad results.
Of course LLMs make stuff up; they are machines that make stuff up.
Sort of an aside, but doctors, lawyers, judges and researchers make shit up all the time. A professional designation doesn’t make someone infallible or even smart. People should question everything they read, regardless of the source.
Except we give it the glorifying title “AI”. It’s supposed to be far better than a search engine; otherwise, why not stick with a search engine (which uses a tiny fraction of the power)?
I don’t know what point you’re arguing. I didn’t call it AI, and even if I did, I don’t know of any definition of AI that includes infallibility. I didn’t claim it’s better than a search engine, either. Even if I did, “better” does not equal “always correct.”
To take an older example, there are smaller image recognition models that were trained on correct data to differentiate between dogs and blueberry muffins, but they obviously still made mistakes on the test data set.
AI does not become perfect if its data is.
Humans do make mistakes, make stuff up, and spread false information. However, they generally make stuff up considerably less often than AI currently does (unless told to).
It does become more accurate the larger the model is, though. At least, that was the low-hanging fruit during this boom. I highly doubt you’d get a modern model to fail on a test like this today.
Just as an example, nobody is typing “Blueberry Muffin” into a Stable Diffusion model and getting a photo of a dog.
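To put a rough number on the “AI does not become perfect if its data is” point, here is a minimal sketch (Python with scikit-learn; the dataset is synthetic and purely hypothetical, standing in for the dog-vs-muffin photos): every training label is correct, yet the classifier still misses some held-out examples because the classes overlap.

```python
# Minimal sketch: a classifier trained on perfectly labeled (synthetic) data
# can still misclassify held-out examples when the classes overlap.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# flip_y=0.0 -> no label noise at all: every training label is correct
X, y = make_classification(n_samples=2000, n_features=20, class_sep=0.8,
                           flip_y=0.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))
print("test accuracy: ", clf.score(X_test, y_test))  # typically below 1.0 despite clean labels
```

The exact numbers don’t matter; the point is that clean training data removes one source of error, not all of them.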
Joe Rogan doesn’t tell them false domain knowledge 🤷
LOL riiiiiight.
OK, please show me the Joe Rogan episode where he confidently talks BS about process engineering for wastewater treatment plants 🙄