To take an older example: there are smaller image-recognition models that were trained on correctly labelled data to tell dogs apart from blueberry muffins, yet they still made mistakes on the test set.
AI does not become perfect if its data is.
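As a rough illustration, here's a minimal sketch (assuming torchvision is installed and the image filename is made up): an ImageNet-pretrained ResNet-18, trained on cleanly labelled data, still only reaches roughly 70% top-1 accuracy on its held-out validation set, so any single prediction can come out wrong.

```python
# Sketch: a well-trained classifier is still wrong on a chunk of its test set.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.IMAGENET1K_V1
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()  # resize/crop/normalize the model expects

img = Image.open("muffin_or_chihuahua.jpg")  # hypothetical local test image
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))

label = weights.meta["categories"][logits.argmax().item()]
print(label)  # trained on clean labels, yet this can still be the wrong class
```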
Humans do make mistakes, make stuff up, and spread false information. However, they generally make up considerably less than AI currently does (unless told to).
It does get more accurate the larger the model is, though. At least, that was the low-hanging fruit during this boom. I highly doubt you'd get a modern model to fail a test like this today.
Just as an example, nobody is typing "blueberry muffin" into a Stable Diffusion model and getting a photo of a dog.
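A minimal sketch of what I mean, using Hugging Face's diffusers library (the model id, GPU use, and output filename are my assumptions, not anything specific from above):

```python
# Sketch: prompt a text-to-image diffusion model; expect a muffin, not a dog.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model id for illustration
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of a blueberry muffin").images[0]
image.save("muffin.png")  # a dog showing up here would be a genuine surprise
```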