• Xerxos@lemmy.ml · 6 hours ago

    There was a paper about this not long ago. The problem is how LLMs get trained: a right answer earns a point, and everything else earns nothing. That rewards guessing (which sometimes produces a point) over answering “I don’t know/I can’t do this” (which never does).
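The incentive the comment above describes can be sketched with a tiny simulation. This is illustrative only, not the paper's actual setup: it assumes a grader that awards 1 point for a correct answer and 0 for anything else, including an honest "I don't know".

```python
import random

def expected_score(p_correct: float, abstain: bool, trials: int = 100_000) -> float:
    """Average score under a grader that gives 1 point for a right answer,
    0 for a wrong answer, and 0 for abstaining ("I don't know")."""
    random.seed(0)  # fixed seed so the simulation is reproducible
    total = 0
    for _ in range(trials):
        if abstain:
            continue  # honesty earns nothing under this scoring rule
        if random.random() < p_correct:
            total += 1  # a lucky guess earns a point
    return total / trials

# Even a 10%-likely guess beats always abstaining on average.
print(expected_score(0.10, abstain=False))  # roughly 0.10
print(expected_score(0.10, abstain=True))   # exactly 0.0
```

Under this rule, any nonzero chance of guessing right strictly dominates abstaining, which is the claimed training pressure toward confident answers.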

    • ipkpjersi@lemmy.ml · 50 minutes ago

      It’s like when developers give a wrong answer during technical interviews rather than saying “I’d have to look it up” or “I’d have to check the documentation,” etc.

  • NigelFrobisher@aussie.zone · 7 hours ago

    This is actually a pretty great way to illustrate what LLMs do. It gives you an answer regardless of whether it makes sense to do so.

    • Lulzagna@lemmy.world · 2 hours ago

      This is a dumb misconception. The high emissions and energy consumption come from training models, not from individual prompts.

      • Kilgore Trout@feddit.it · 58 minutes ago

        False. It’s been shown that resolving prompts also drives major energy consumption, though perhaps not much higher than regular search queries.

    • Galapagon@sh.itjust.works · 2 hours ago

      I think you should be more concerned about the automatic AI responses on every other search than about people having a bit of fun with these.

    • GamingChairModel@lemmy.world · 6 hours ago

      AI drives 48% increase in Google emissions

      That’s not even supported by the underlying study.

      Google’s emissions went up 48% between 2019 and 2023, but a lot of things changed in 2020, especially the growth of video chat and cloud collaboration, which dramatically expanded demand for data center storage and processing. Even without AI, we could have expected data center electricity use to go up dramatically between 2019 and 2023.

  • Evil_Shrubbery@thelemmy.club · 12 hours ago

    It won’t explain:
    - “two eggplants in one pot situation” meaning
    - “the 7 fucks from 7 barren fields” meaning
    - “like a stuffed beaver in a museum” meaning
    - “better a dick tater on plate than a diddler on the roof” meaning
    - “two winds is one too many farts in a storm” meaning

    Can someone organically semi-intelligent explain these to me, please??
    (It would be so embarrassing if I’m using these phrases wrong.)

    Ducky Ducky Go Go Go:

    • DeathByBigSad@sh.itjust.works · 11 hours ago

      “two eggplants in one pot situation”
      two girls one cup, but its two guys

      “better a dick tater on plate than a diddler on the roof”
      better eat trump for dinner than defenestrate epstein

      I did it!

      (wait, am I ChatGPT? 🤔🤖)

    • sunbytes@lemmy.world · 11 hours ago

      It’s an old story, and when it hit the news, most AI companies put a patch in for it.

      They didn’t fix the hallucinations; they just hardcoded a workaround for the one big hallucination that went viral.
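The "hardcoded workaround" theory in the comment above would amount to something like a block-list guard in front of the model. This is pure speculation sketched in code; no vendor has published how (or whether) they patched this, and `llm_generate`, the idiom list, and the normalization are all made up for illustration:

```python
def llm_generate(query: str) -> str:
    # Stand-in for the real model call; still confidently invents meanings.
    return f"'{query}' is a well-known saying that means..."

# Hypothetical allow-list of known-fake idioms that went viral.
KNOWN_FAKE_IDIOMS = {"you can't lick a badger twice"}

def answer(query: str) -> str:
    """Intercept the one viral failure case instead of fixing the behavior."""
    normalized = (
        query.strip().lower().rstrip("?").removesuffix(" meaning").strip().strip('"')
    )
    if normalized in KNOWN_FAKE_IDIOMS:
        return "I couldn't find this saying; it may not be a real idiom."
    return llm_generate(query)  # every other made-up phrase slips through
```

The point of the sketch is the commenter's complaint: a guard like this suppresses one embarrassing output while leaving the underlying behavior untouched.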

  • humanspiral@lemmy.ca · 12 hours ago

    To be fair, I’d assume the same meaning. Just because the answer you’d prefer is “you stupid moron, this phrase has never been uttered in human history, so no one has any right to make it up” doesn’t mean I’d prefer that response from either an AI or a person.

    • Jason@feddit.uk · 9 hours ago

      The point of the made up saying is to demonstrate that LLMs are just yes men.

      What if you mishear a saying or phrase you hear in the street? You ask an LLM and it confidently tells you some BS about what it means, how often it’s used, and its origins, even providing sources that lead, at best, to unrelated phrases.

      How is that helpful?

      A real person would likely say, “I’ve never heard that one before” rather than try to gaslight you (hopefully!)

  • fakeman_pretendname@feddit.uk · 13 hours ago

    Duckduckgo’s AI:

    “The phrase “a lady butterfly meme a day keeps the doctor away” is a playful twist on the well-known saying “an apple a day keeps the doctor away.” This original phrase suggests that eating an apple daily can contribute to good health and potentially reduce the need for medical attention.”

    “In this variation, the mention of “lady butterfly meme” likely refers to a specific meme or a lighthearted, humorous image that brings joy or laughter. The implication is that engaging with fun or uplifting content, like memes, can have a positive effect on one’s mental health and overall well-being, similar to how healthy eating is associated with physical health.”

    “Overall, it emphasizes the importance of humor and positivity in daily life as a way to maintain good health, particularly mental health.”

  • MacN'Cheezus@lemmy.today · 14 hours ago

    Broke: playing ding-dong-ditch or making prank phone calls

    Woke: getting the Google AI to hallucinate the origin story of made up idioms

  • Wolf@lemmy.today · 14 hours ago

    I can confirm it works for DuckDuckGo as well.

    I love how it even lists ‘sources’ lol (I checked, that phrase does not appear in either “source”)