• GiveMemes@jlai.lu
    link
    fedilink
    English
    arrow-up
    8
    arrow-down
    1
    ·
    11 months ago

    They could shut down the previous models that were trained on invalid works. Sucks to suck but that’s what you get when you do everything in your power to skirt the law.

    • custard_swollower@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      1
      ·
      11 months ago

      Yeah, and the same thing would happen if e.g. PII or HIPAA related would end up in trained model. The fact that some PII or health data ended up being publicly available, doesn’t mean that automatically you can process or store such data, and train on such data.

      • RaoulDook@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        11 months ago

        This has already been proven by google security researchers who got several of the big “AI” bots to spit out copyrighted materials and PII from their training data sets which the “AI” creators claimed was not stored.

        • stephen01king@lemmy.zip
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          11 months ago

          It’s not stored as the full material though. If a human that can sing a copyrighted song is not considered to have a recording of the copyrighted song in their brain, so too are LLMs able to spit out their training data without having to store them.