• xthexder@l.sw0.com
    17 hours ago

    I’m not sure how you arrived at lime the mineral being a more likely interpretation than lime the fruit. I’d expect someone asking about kidney stones to also be asking about foods that are commonly consumed.

    This just goes to show there are multiple ways something can be interpreted. Maybe a smart human would ask for clarification, but the AIs of today will happily spit out the first answer that comes up. LLMs are extremely “good” at making up answers to leading questions, even when those answers are completely false.

    • Knock_Knock_Lemmy_In@lemmy.world
      2 hours ago

      A well-trained model should consider both kinds of lime. The failure here is likely down to temperature and other sampling settings; it isn’t a measure of intelligence.
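
      A minimal sketch of how temperature does that, with made-up logits for the two readings of “lime” (none of these numbers come from a real model): at low temperature the softmax sharpens and one interpretation crowds out the other.

      import math

      def apply_temperature(logits, temperature):
          # Rescale logits by temperature, then softmax into probabilities.
          scaled = [l / temperature for l in logits]
          m = max(scaled)
          exps = [math.exp(s - m) for s in scaled]  # subtract max for stability
          total = sum(exps)
          return [e / total for e in exps]

      # Hypothetical logits for the two readings of "lime" in a kidney stone question.
      readings = ["lime (fruit)", "lime (mineral)"]
      logits = [2.0, 1.5]

      for t in (0.2, 1.0):
          probs = apply_temperature(logits, t)
          print(t, dict(zip(readings, [round(p, 2) for p in probs])))
      # t=0.2 -> {'lime (fruit)': 0.92, 'lime (mineral)': 0.08}: one reading dominates
      # t=1.0 -> {'lime (fruit)': 0.62, 'lime (mineral)': 0.38}: both stay in play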

    • JohnEdwa@sopuli.xyz
      3 hours ago

      Making up answers is kind of their entire purpose. LLMs are fundamentally just text-generation algorithms: they are designed to produce text that looks like it could have been written by a human. And they are amazing at it, especially once you account for how many paragraphs of instructions you can give them, which they tend to follow rather successfully.

      The one thing they can’t do is verify whether what they’re saying is true, because it’s all just slapping words together using probabilities. If they could, they would stop being LLMs and start being AGIs.
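
      A toy illustration of that last point: a hand-built probability table stands in for a trained model, and the loop just samples word after word. Nothing in it ever asks whether the output is true.

      import random

      # Hand-built next-word table standing in for a model's learned probabilities.
      NEXT = {
          "limes": [("prevent", 0.5), ("cause", 0.5)],
          "prevent": [("kidney", 1.0)],
          "cause": [("kidney", 1.0)],
          "kidney": [("stones", 1.0)],
      }

      def generate(word, max_steps=3):
          out = [word]
          for _ in range(max_steps):
              options = NEXT.get(out[-1])
              if not options:
                  break
              words, weights = zip(*options)
              out.append(random.choices(words, weights=weights)[0])  # sample by probability
          return " ".join(out)

      # "limes prevent kidney stones" and "limes cause kidney stones" come out
      # equally fluent; the loop has no notion of which one is correct.
      print(generate("limes"))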