Article on popular science: prompt injection attacks are a new risk associated with the interation if llms into other services. •prompt injection attacks imply the use of a prompt that bypass safety restrictions of a given ai / llm, which cannot differentiate between illicit instructions and inputs. •a proper prompt injection attacks can thus use an assistant to interact with a service and complete a sets of instructions. I’d like to hear what you think about this

  • wizardbeard@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    2
    ·
    edit-2
    1 year ago

    On one hand, this just sounds like it could be mitigated by sanitizing inputs, and creating clear delineation between accesible data and commands. That should be a key part of designing any system capable of taking arbitrary input. Also, customer transaction data should never come through the same pipeline to the LLM as customer queries. It simply should not be possible (through intentional design) for the LLM to interpret that data as commands (outside of the user copy pasting it back into the user facing prompt input). Also, LLMs should never be given direct access to take actions in systems. They should be able to initiate a prompt in another system asking if the user wants to take an action (with confirm/deny). Put the onus on the other system to do proper informing/checking consent, and on the user to understand what they are doing.

    On the other hand, SQL injection is still one of the leading causes of data breaches. If devs can’t consistently sanitize input for SQL queries, therecs no way to trust sanitization of LLM queries and data access, and it would just be that much harder to detect. Plus, companies are likely to prioritize a “seamless experience” instead of having an LLM open up a confirmation/prompt in a different system that the user has to navigate to and confirm. On top of that, expecting anyone to actually read is a fools errand. This shouldn’t be a problem, but it absolutely will be.