Armitage Archive

Prompt Injection – AI tools built on LLMs are inherently prone to prompt-injection attacks which can cause them to behave in ways they were not designed – such as potentially revealing previously uploaded, sensitive information. This is particularly concerning as we give AI more autonomy and agency to take actions in our environments. An AI email application with inbox access could start sending out confidential emails, or forward password reset emails to provide a route in for an attacker, for example.