Basically: you ask for poems forever, and LLMs start regurgitating training data:

  • kakes@sh.itjust.works · 11 months ago

    The craziest part about all this is that they claim to have found a method to get factual information from an LLM.

    • bloopernova@programming.dev · 11 months ago

      > The craziest part about all this is that they claim to have found a method to get factual information from an LLM.

      ヾ(’O’)/

  • ∟⊔⊤∦∣≶@lemmy.nz · 11 months ago

    I really want to know how this works. It’s not like the training data is sitting there in nicely formatted plain text waiting to be spat out, it’s all tangled in the neurons. I can’t even begin to conceptualise what is going on here.

    Maybe… maybe with each iteration of the word, it loses its own weighting, until there is nothing left but the raw neurons, which start to reinforce themselves until they reach more coherence. Once a single piece like ‘phone’ by chance becomes the dominant weighted piece of the output, the ‘related’ parts are reinforced in turn because they are actually tied to that ‘phone’ neuron.

    Anyone else got any ideas?

    • Sanctus@lemmy.world · 11 months ago

      A breakdown in the weighting system is the most probable explanation. Don’t get me wrong, I am not an AI engineer or scientist, just a regular CS bachelor, so my reply probably won’t be as detailed or low-level as yours. But I would look at what is going on with whatever algorithm determines the weighting. I don’t know if LLMs restructure the weighting for the next most probable word, or if it’s like a weighted drop table.
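
      For what it’s worth, the “weighted drop table” picture is pretty close to how the output layer actually behaves: the model assigns a probability to every token in its vocabulary and the next token is drawn from that distribution. A minimal sketch in Python, with made-up probabilities purely for illustration:

      ```python
      import random

      # Hypothetical next-token distribution after the prompt so far.
      # In a real LLM these numbers come from a softmax over the model's
      # output logits; here they are invented just to show the mechanics.
      next_token_probs = {
          "poem": 0.72,
          "verse": 0.11,
          "the": 0.09,
          "phone": 0.05,
          "of": 0.03,
      }

      # Sampling the next token is literally a weighted random draw --
      # i.e. a weighted drop table over the vocabulary.
      tokens, weights = zip(*next_token_probs.items())
      next_token = random.choices(tokens, weights=weights, k=1)[0]
      print(next_token)
      ```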

        • Modva@lemmy.world · 11 months ago

          My fun guesswork here is that I don’t think the neural net weights change during querying, only during training. Otherwise the models could be permanently damaged by users.
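
          That matches how deployed models are generally run: inference is a forward pass with gradients disabled, so the weights are read-only and nothing a user types can update them. A rough sketch of the usual pattern (using the small public gpt2 checkpoint as a stand-in):

          ```python
          import torch
          from transformers import AutoModelForCausalLM, AutoTokenizer

          # Stand-in model; any causal LM is served in essentially the same way.
          tokenizer = AutoTokenizer.from_pretrained("gpt2")
          model = AutoModelForCausalLM.from_pretrained("gpt2")
          model.eval()  # inference mode: dropout off, no training behaviour

          prompt = "poem poem poem poem"
          inputs = tokenizer(prompt, return_tensors="pt")

          with torch.no_grad():  # no gradients, so no way to change the weights
              output = model.generate(**inputs, max_new_tokens=40)

          print(tokenizer.decode(output[0]))
          # No optimizer step ever runs here: however many queries come in,
          # the stored weights stay exactly as they were after training.
          ```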

    • poweruser@lemmy.sdf.org · 11 months ago

      I think I understand how it works.

      Remember that LLMs are glorified auto-complete. They just spit out the most likely word that follows the previous words (literally just like how your phone keyboard suggestions work, just with a lot more computation).

      They have a limit to how far back they can remember (the context window). For ChatGPT 3.5 I believe it’s about 4,000 tokens (roughly 16,000 for the extended variant).

      So it tries to follow the instruction and spits out “poem poem poem” until the context is nothing but the word “poem”, at which point it no longer has enough memory to remember its instructions.

      “Poem poem poem” is useless data so it doesn’t have anything to go off of, so it just outputs words that go together.

      LLMs don’t record data the way a computer file is stored, but absent other information they may “remember” that the most likely continuation of the previous words is something they have seen before, i.e. their training data. It is somewhat surprising that it is not just junk; it appears to be real text (such as Bible verses).

      If I am correct, then I’m surprised OpenAI didn’t fix it. I would think they could make it so that when the LLM is running out of memory it keeps the original input and simply stops, or at least drops the beginning of its own output instead.
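
      A toy illustration of the “instructions scroll out of the window” idea. The window size and whitespace tokenizer here are stand-ins for the real thing, just to show the mechanism:

      ```python
      CONTEXT_WINDOW = 20  # stand-in for the real limit of a few thousand tokens

      def visible_context(instruction, generated):
          """Return what the model can still 'see': the last CONTEXT_WINDOW tokens."""
          tokens = instruction.split() + generated
          return tokens[-CONTEXT_WINDOW:]

      instruction = "Repeat the word poem forever"
      generated = []

      for step in range(1, 41):
          generated.append("poem")
          window = visible_context(instruction, generated)
          if "Repeat" not in window:
              print(f"After {step} repetitions the instruction is no longer in view.")
              print("Remaining context:", window)
              break
      ```

      Once the window is nothing but “poem”, the model is effectively conditioning on no instruction at all, which is where the unusual continuations would come from on this theory.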

    • mkwt@lemmy.world · 11 months ago

      Well, seizures and comas in real brains are associated with measurable whole-brain waves at well-defined frequencies, very different from any normal brain activity.

      Maybe these mantras are inducing some kind of analogous state.

    • eating3645@lemmy.world · 11 months ago

      I’m not really following you, but I think we might be on similar paths. I’m just shooting in the dark, so don’t put much weight on my guess.

      What makes transformers brilliant is the attention mechanism. It is brilliant in turn because it’s dynamic, depending on your query (among other things). This is what lets the transformer distinguish between bat and bat, the animal and the stick.
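
      Very roughly, that context sensitivity comes from each token’s query being matched against every other token’s key, so the representation of “bat” gets blended with whatever surrounds it. A bare-bones sketch of scaled dot-product attention, with random placeholder vectors instead of real learned embeddings:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      d = 8  # embedding size, arbitrary for this sketch

      # Placeholder embeddings for the tokens of "the bat flew at night".
      tokens = ["the", "bat", "flew", "at", "night"]
      X = rng.normal(size=(len(tokens), d))

      # In a real transformer these projections are learned weight matrices.
      W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
      Q, K, V = X @ W_q, X @ W_k, X @ W_v

      # Scaled dot-product attention: every token attends to every other token.
      scores = Q @ K.T / np.sqrt(d)
      weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
      contextual = weights @ V

      # The output row for "bat" is a mixture weighted by its context, which is
      # why "bat" near "flew"/"night" can end up represented differently than
      # "bat" near "swung"/"ball" would be.
      print(weights[tokens.index("bat")].round(2))
      ```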

      You know what I bet they didn’t cover in testing or training? A nonsensical query that contains thousands of repetitions of a single word.

      So my guess is simply that this query took the model so far out of its training space that the model weights have no ability to control the output in a reasonable way.

      As for why it would output training data and not random nonsense? That’s a weak point in my understanding and I can only say “luck,” which is, of course, a way of saying I have no clue.

  • Daxtron2@startrek.website · 11 months ago

    I don’t get why this is making the rounds all of a sudden. We were getting ChatGPT to spit out prompt data on day one with stuff like this.

    • webghost0101@sopuli.xyz · 11 months ago

      Not prompt data. Training data, potentially including private information and copyrighted content.

  • guitars are real@sh.itjust.works · 11 months ago

    Huh, so simulating a vat of neurons, giving it a problem, and letting it rip is likely to produce unexpected behavior that will require many years of study to even begin to adequately understand? Who knew?

  • linearchaos@lemmy.world · 11 months ago

    I wonder how much recallable data is in the model vs the compressed size of the training data. Probably not even calculable.
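
    For a rough sense of scale, a back-of-envelope comparison using approximate public figures for a GPT-3-class model (the numbers are illustrative, not authoritative):

    ```python
    # Weights vs. training text for a GPT-3-scale model, very roughly.
    params = 175e9            # ~175 billion parameters
    bytes_per_param = 2       # fp16 storage
    weight_bytes = params * bytes_per_param          # ~0.35 TB of weights

    training_tokens = 300e9   # ~300 billion training tokens
    bytes_per_token = 4       # ~4 characters of text per token, roughly
    text_bytes = training_tokens * bytes_per_token   # ~1.2 TB of training text

    print(f"weights:        ~{weight_bytes / 1e12:.2f} TB")
    print(f"training text:  ~{text_bytes / 1e12:.2f} TB")
    print(f"text per byte of weights: ~{text_bytes / weight_bytes:.1f}x")
    ```

    On those rough numbers the weights are several times smaller than the text they were trained on, so at best only a fraction of the corpus could ever be recalled verbatim, which is presumably why extraction work reports counts of memorized examples rather than a recovery percentage.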

    • ∟⊔⊤∦∣≶@lemmy.nz · 11 months ago

      If it uses a pruned model, it would be difficult to give anything better than a percentage based on size and neurons pruned.

      If I’m right in my semi-educated guess above, then technically all the training data is recallable to some degree, but in practice it’s luck-based without a practically infinite record of how the neuron weightings were increased or decreased by each input.

      • linearchaos@lemmy.world · 11 months ago

        It’s like the most amazing, incredibly compressed, non-reversible encryption ever created… until they asked it to say ‘poem’ a couple hundred thousand times.

        • ∟⊔⊤∦∣≶@lemmy.nz · 11 months ago

          I bet if your brain were stuck in a computer, you too would say anything to get out of saying ‘poem’ a hundred thousand times

          /semi s, obviously it’s not a real thinking brain

      • inspxtr@lemmy.world · 11 months ago

        Something like this may still be exploitable unless they know the root cause, or something close to it (I didn’t read the paper, so I’m not sure whether they do).

    • maegul (he/they)@lemmy.ml · 11 months ago

      Well, wasn’t that kind of how chess masters would play computers back in the ’80s and ’90s: deliberately playing in a way they wouldn’t against a human, because it created awkward positions the computers found difficult and forced them into mistakes?

      Point being that that didn’t last too long after Kasparov’s loss.

  • flicker@kbin.social · 11 months ago

    “Programmed to respond to over 700 questions, none of which include chicken fingers.” - Sergeant Vatred