Armitage Archive

Shadow AI – Should I Be Worried?

by ByAlastair Paterson

Original article

This page contains highlights I saved while reading Shadow AI – Should I Be Worried? by ByAlastair Paterson. These quotes were collected using Readwise.

Highlights

Instead, the emphasis needs to be on educating and guiding users regarding responsible AI use whereby security teams truly understand the needs of end users and what they are looking to achieve.

Permalink to this highlight


Prompt Injection – AI tools built on LLMs are inherently prone to prompt-injection attacks which can cause them to behave in ways they were not designed – such as potentially revealing previously uploaded, sensitive information. This is particularly concerning as we give AI more autonomy and agency to take actions in our environments. An AI email application with inbox access could start sending out confidential emails, or forward password reset emails to provide a route in for an attacker, for example.

Permalink to this highlight


As we have seen with 'Shadow IT' – if employees believe there are productivity gains to be made by using their own devices or third party services then they'll do it

Permalink to this highlight


Right now there are some 12,000 AI tools available promising to help with over 16,000 job tasks and we're seeing this number grow at around 1,000 every month.

Permalink to this highlight


Most of the 12,000 tools listed above are in the main thin veneers over ChatGPT with clever prompt engineering designed to make them appear differentiated and aligned to help with a specific job function. However, unlike ChatGPT which at least has some safeguards against data protection, these GPTs offer no defence about where company data will ultimately end up – it can still be sent to any number of spurious third party websites with unknown security controls.

Permalink to this highlight


In the UK alone [Deloitte](https://www.peoplemanagement.co.uk/article/1830277/one-million-people-uk-used-generative-ai-work-study-finds#:~:text=One%20in%2012%20(8%20per,believe%20their%20employer%20would%20approve.) has found that 1m employees (or 8% of the adult workforce) have used GenAI tools for work. And only 23% of this number believe their employer would have approved of this use. This indicates that either their employer doesn't have a policy about using AI safely or they're just ignoring it in the hope that the perceived productivity gain is worth the risk.

Permalink to this highlight


Many firms have chosen to block AI outright, but this approach is ineffective at best and counterproductive at worst. Overzealous policies and blanket bans on AI tools risk forcing users underground to use unknown tools with unknown consequences. It can also mean that firms miss out on the huge productivity gains that are made possible by using GenAI apps safely.

Permalink to this highlight


We analyzed the most popular GPTs and found that forty percent of the apps can involve uploading content, code, files, or spreadsheets for AI assistance. This raises some data leakage questions if employees are uploading corporate files of those types, such as:

• How do we know that the data does not contain any sensitive information or PII? • Will this cause compliance issues?

Permalink to this highlight


Want more like this? See all articles or get a random quote.