British spies urged to use artificial intelligence to fight security threats
British spooks will need to use artificial intelligence (AI) to fight a range of threats to national security, according to a report published today.
Intelligence agencies have been urged to use the technology to detect and block cyber attacks, analyse video and audio evidence, and automate administrative tasks.
However, the report said AI was unlikely to predict upcoming threats from criminals or terrorists and could not replace human judgement.
The report, published by the Royal United Services Institute (Rusi) think tank and commissioned by GCHQ, was based on access to top-secret British intelligence.
While Rusi promoted the use of AI across the UK’s national security community, it also warned that the technology could raise additional privacy and human rights concerns.
It said enhanced policy and guidance would be required to ensure these considerations were reviewed on an ongoing basis.
The report comes amid concerns that the UK faces national security threats from criminals using increasingly sophisticated methods.
Researchers said malicious actors would “undoubtedly” try to use AI to attack the UK, and that most hostile states were likely developing, or had already developed, offensive AI capabilities.
Potential threats to political security include the use of deepfake technology to spread disinformation, with the aim of manipulating public opinion or interfering in elections.
The UK could also be vulnerable to so-called polymorphic malware, which constantly mutates to evade detection, and to automated social engineering attacks such as phishing that target individuals.
Rusi said threats to physical security were a less immediate concern, but warned that the growing uptake of the internet of things, through connected cars and household devices, would expose the country to more threats.
Andrew Tsonchev, director of technology at cybersecurity firm Darktrace, said AI would be key both for defending digital networks and boosting privacy.
“Both government agencies and private corporations use AI as a de facto technology specifically to minimise the risk of breaches of privacy, the result of cyber-attacks, by detecting malicious activity within their systems,” he said.
“This means there are less human eyes on raw data, and instead the computer algorithms can handle the process from the detection of an incident through to its resolution autonomously. This is a win for privacy and for security.”
“The modern-day ‘information overload’ is perhaps the greatest technical challenge facing the UK’s national security community,” the report stated.
“The ongoing, exponential increase in digital data necessitates the use of more sophisticated analytical tools to effectively manage risk and proactively respond to emerging security threats.”