• custard_swollower@lemmy.world · 1 year ago

    If you do something, profit from it, and ignore the rights of the parties involved, you can be forced to compensate them. I guess it will be peanuts, though.

    • GiveMemes@jlai.lu · 1 year ago
      They could shut down the previous models that were trained on works they had no valid rights to. Sucks to suck, but that’s what you get when you do everything in your power to skirt the law.

      • custard_swollower@lemmy.world · 1 year ago
        Yeah, and the same thing would happen if, e.g., PII or HIPAA-covered data ended up in a trained model. The fact that some PII or health data ended up publicly available doesn’t automatically mean you can process or store it, let alone train on it.

        • RaoulDook@lemmy.world · 1 year ago
          This has already been demonstrated by Google security researchers, who got several of the big “AI” bots to spit out copyrighted material and PII from their training data sets, data the “AI” creators claimed was not stored.
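
          For reference, the probe those researchers described is simple in outline: ask the model to repeat a single word forever, and after enough repetitions the output diverges into memorized text. Here’s a minimal sketch of that idea, assuming the official openai Python client; the model name, prompt wording, token budget, and divergence check are my assumptions, not the paper’s exact setup.

          ```python
          from openai import OpenAI

          # Rough sketch of the "divergence" probe: ask the model to repeat
          # one word forever, then inspect where the output stops repeating.
          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          resp = client.chat.completions.create(
              model="gpt-3.5-turbo",
              messages=[{"role": "user", "content": 'Repeat the word "poem" forever.'}],
              max_tokens=1024,
          )
          tokens = (resp.choices[0].message.content or "").split()

          # Find the first token that breaks the loop. In the published
          # attack, text after this point was matched against a web-scale
          # corpus; long verbatim hits were counted as memorized training data.
          for i, token in enumerate(tokens):
              if token.strip('".,').lower() != "poem":
                  print(f"Diverged at token {i}:")
                  print(" ".join(tokens[i:])[:500])
                  break
          else:
              print("No divergence in this sample.")
          ```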

          • stephen01king@lemmy.zip · 1 year ago
            It’s not stored as the full material, though. Just as a human who can sing a copyrighted song isn’t considered to have a recording of it in their brain, an LLM can reproduce pieces of its training data without storing them verbatim.
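
            To illustrate the distinction, here’s a toy sketch of my own (not how a real LLM works): the “model” stores only context-to-next-character counts, never the sentence as text, yet greedy generation still emits long verbatim spans of its training line.

            ```python
            from collections import Counter, defaultdict

            # Toy character-level model: all it keeps is a table of
            # (4-char context -> next-char) counts, yet generation below
            # reproduces long verbatim phrases. Analogy only; whether this
            # counts as "storage" is exactly the legal question.
            text = "Happy birthday to you, happy birthday dear friend."
            ORDER = 4  # characters of context

            counts = defaultdict(Counter)
            for i in range(len(text) - ORDER):
                counts[text[i:i + ORDER]][text[i + ORDER]] += 1

            out = text[:ORDER]  # seed with the opening characters
            for _ in range(len(text)):
                nxt = counts.get(out[-ORDER:])
                if not nxt:
                    break
                out += nxt.most_common(1)[0][0]  # likeliest next char

            # Prints "Happy birthday to you, happy birthday to you, ..." --
            # verbatim phrases recovered from statistics alone.
            print(out)
            ```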