GCHQ warns of emerging security threat posed by AI-chatbots
By Laura Enfield | 15th March 2023
Cheltenham-based spy agency GCHQ has warned of the emerging security threat posed by ChatGPT and other AI-powered chatbots.
In an advisory note published on Tuesday, the National Cyber Security Centre (NCSC) said the companies behind them are able to read and store queries typed into the chatbots, and may use them to develop future versions.
Released in late 2022, ChatGPT is one of the fastest-growing consumer applications ever, thanks to how easy it is to query.
Developed by US tech startup OpenAI, it is based on GPT-3, a language model released in 2020 that uses deep learning to produce human-like text, though the underlying large language model (LLM) technology has been around much longer.
Cyber security experts from the NCSC, a GCHQ agency, warned the technology can "hallucinate" incorrect facts, be biased or gullible, and be "coaxed into creating toxic content".
The note also cautioned that curious office workers experimenting with chatbot technology could reveal sensitive information through their search queries.
"The query will be visible to the organisation providing the LLM (so in the case of ChatGPT, to OpenAI). Those queries are stored and will almost certainly be used for developing the LLM service or model at some point.
"This could mean that the LLM provider (or its partners/contractors) are able to read queries, and may incorporate them in some way into future versions."
Experts also said there is a risk that criminals might use LLMs to help carry out cyber attacks beyond their own current capabilities.
If an attacker is struggling to escalate privileges or find data, they might ask an LLM and receive an answer that is not unlike a search engine result, but with more context.
They may also use the technology to write convincing phishing emails in multiple languages.
The note concluded: "It's an exciting time for LLMs, and ChatGPT in particular has gripped the world's imagination.
"As with all technology developments, there will be people keen to use it and to investigate what it has to offer, and those who may never use it.
"There are undoubtedly risks involved in the unfettered use of public LLMs, as we've outlined above. Individuals and organisations should take great care with the data they choose to submit in prompts.
"You should ensure that those who want to experiment with LLMs are able to, but in a way that doesn't place organisational data at risk."
Copyright 2023 Moose Partnership Ltd. All rights reserved. Reproduction of any content is strictly forbidden without prior permission.