While I think this is a bit sensationalized, any company that allows user-driven generative AI, especially one as open as permitting LoRAs and arbitrary checkpoints, needs very good protection against synthetic CSAM like this. To the best of my knowledge, only the AI Horde has taken this sufficiently seriously until now.
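
For a sense of what that protection can look like in practice, here is a minimal sketch of a post-generation filter in the spirit of what the AI Horde describes: score every output image against "underage" and "explicit" text concepts with CLIP, and refuse to return the image when both scores are high. The model name, concept lists, and threshold below are illustrative assumptions, not the Horde's actual values.

```python
# Sketch of a CLIP-based safety filter for generated images.
# Assumptions: openai/clip-vit-large-patch14 as the scoring model,
# toy concept lists, and an arbitrary 0.25 threshold.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

UNDERAGE_CONCEPTS = ["a photo of a child", "a photo of a teenager"]
EXPLICIT_CONCEPTS = ["explicit nude photo", "pornographic image"]

def concept_scores(image: Image.Image, concepts: list[str]) -> torch.Tensor:
    """Cosine similarity between the image embedding and each concept."""
    inputs = processor(text=concepts, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img @ txt.T).squeeze(0)

def is_blocked(image: Image.Image, threshold: float = 0.25) -> bool:
    """Reject when the image scores high on BOTH concept axes."""
    underage = concept_scores(image, UNDERAGE_CONCEPTS).max().item()
    explicit = concept_scores(image, EXPLICIT_CONCEPTS).max().item()
    return underage > threshold and explicit > threshold
```

The point of checking both axes separately is that neither concept alone is disqualifying; it is the combination that a platform has to refuse to serve.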

  • starbreaker@kbin.social · 11 months ago

    I suspect that Marc Andreessen is fine with AI-generated kiddie porn. He probably tells himself that no children were actually harmed. Never mind that the AI had to get its training data somewhere…

    • db0@lemmy.dbzer0.com (OP, mod) · 11 months ago

      To be fair, the AI doesn’t have to see CSAM to be able to generate CSAM. It just has to understand the concept of a child and the various lewd concepts, and it can then mix them together.

      • starbreaker@kbin.social · 11 months ago

        Which only further supports my opinion that no programmer should be considered employable until they’ve read and understood Mary Shelley’s Frankenstein, and realized that they are in Frankenstein’s position.

    • CJOtheReal@ani.social · 11 months ago

      The AI is able to merge images, so you can definitely combine child pics with “normal” porn. Technically, that means it can produce “CSAM” without any actual CSAM in the training data. The first versions of some of the image AIs were already able to do that, and there was definitely no CSAM in their training data.

      It’s not good to have that around, but it is definitely better than actual CSAM.