• phx@lemmy.ca
    16 points · 19 hours ago

    Yeah that was my thought. Don’t reject them - that’s obvious and they’ll work around it. Feed them shit data - but not too obviously shit - and they’ll not only swallow it but let it build up to levels that compromise their models.

    I’ve suggested the same for plain old non-AI data stealing. Make the data useless to them, make separating good from bad cost more work than it’s worth, and they’ll eventually either sod off or die.

    A low-power AI actually seems like a good way to generate a ton of believable - but bad - data that can be used to fight the bad AIs. It doesn’t need to be done in real time either, as datasets can be generated in advance.
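You don’t even need a neural model for the cheap end of this. A minimal sketch (illustrative names, toy corpus - not anyone’s actual pipeline): a word-level Markov chain produces locally fluent but globally meaningless text offline, at near-zero cost, which is roughly the "believable but bad" profile described above.

```python
import random

def build_chain(corpus, order=2):
    """Build a word-level Markov chain: maps each word n-gram to observed successors."""
    words = corpus.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def generate(chain, length=50, seed=None):
    """Emit fluent-looking but meaningless decoy text by walking the chain."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    order = len(key)
    out = list(key)
    for _ in range(length - order):
        successors = chain.get(tuple(out[-order:]))
        if not successors:
            # dead end: restart from a random n-gram so output keeps flowing
            out.extend(rng.choice(list(chain)))
            continue
        out.append(rng.choice(successors))
    return " ".join(out)
```

Batches of this can be dumped into scrapeable pages ahead of time, so nothing has to run when the crawler actually shows up.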

    • SorteKanin@feddit.dk
      3 points · 6 hours ago

      A low-power AI actually seems like a good way to generate a ton of believable - but bad - data that can be used to fight the bad AIs.

      Even “high power” AIs would produce bad data for this. It’s well known that feeding AI-generated data back into an AI model degrades its quality, and repeating the cycle makes it worse and worse (what’s been called model collapse). So yeah, this is definitely viable.
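The degradation-on-repeat effect can be shown with a toy simulation (this is an illustration of the statistical mechanism, not how anyone measures real model collapse): "train" by fitting a Gaussian to the previous generation’s samples, then sample the next generation only from that fit. Estimation error compounds and the distribution’s spread collapses.

```python
import random
import statistics

def fit_and_resample(samples, rng):
    """Fit a Gaussian to the samples, then draw a new 'generation' from the fit."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return [rng.gauss(mu, sigma) for _ in range(len(samples))]

rng = random.Random(0)
# generation 0: "real" data from a unit Gaussian (tiny sample = noisy "training")
data = [rng.gauss(0.0, 1.0) for _ in range(5)]
spread = [statistics.stdev(data)]
for _ in range(500):
    # each generation learns only from the previous generation's output
    data = fit_and_resample(data, rng)
    spread.append(statistics.stdev(data))
# spread[-1] ends up far below spread[0]: diversity is lost generation by generation
```

With real models the mechanism is analogous: each generation re-estimates from the last one’s output, rare cases drop out first, and the tails of the distribution disappear.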