Blaming the LLM for it is like blaming a search engine for showing bad results.
Except we give it the glorified title of “AI”. It’s supposed to be far better than a search engine; otherwise, why not stick with a search engine (which uses a tiny fraction of the power)?
I don’t know what point you’re arguing. I didn’t call it AI, and even if I had, I don’t know of any definition of AI that includes infallibility. I didn’t claim it’s better than a search engine, either. Even if I had, “better” does not mean “always correct.”