If you’ve scrolled through LinkedIn or kept tabs on tech industry news recently, there’s one topic we’re sure you’ve seen discussed often: generative AI.
More specifically, you’ve likely heard of AI models such as ChatGPT, which is taking the world by storm. By harnessing artificial intelligence and machine learning, these tools can answer questions and produce text and images in moments at little to no cost, presenting exciting possibilities for automation in business.
However, like any emerging technology, ChatGPT and other AI models aren’t without their flaws or limitations, especially when it comes to cyber security.
So, how do these tools work — and how can users ensure they don’t fall foul of the risks they could present?
Paving the way for automation
There are several generative AI applications on the market, each offering attractive capabilities for end users.
For example, ChatGPT is an AI chatbot developed by US tech start-up OpenAI using GPT-3.5, a large language model (LLM) that harnesses deep learning to produce human-like text.
ChatGPT’s algorithm is trained on large volumes of text-based data from across the internet. When prompted, it predicts which words are most likely to follow one another based on the patterns it has learned, producing anything from scientific papers and legal documents to essays and social media captions.
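To give a flavour of how simple this looks from a developer’s point of view, here’s a minimal sketch of sending a prompt to a GPT-style model through OpenAI’s Python library. The model name and prompt are illustrative examples only, not a recommendation:

```python
# Minimal sketch: prompting a GPT-style model via OpenAI's Python SDK.
# Assumes the `openai` package is installed and an API key is set in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{
        "role": "user",
        "content": "Draft a short, friendly reminder email about Friday's meeting.",
    }],
)

# The model returns generated text, just as the chat interface does.
print(response.choices[0].message.content)
```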
Having launched only in November 2022, ChatGPT has gained impressive traction in a short period, leading many other companies to produce their own LLM tools — including Google’s Bard chatbot, released in March 2023.
And it’s not just text that generative AI can produce. Several tools, such as GitHub Copilot, have coding capabilities, and others can create images and videos on demand. For example, in April 2022, OpenAI released DALL-E 2, the latest iteration of its image and art generation platform, offering improved resolution and tighter safety safeguards than its predecessor.
These inexpensive applications have captured the imagination of people worldwide, taking us closer to the automated future we’ve seen depicted in science fiction. Although generative AI is still in its infancy and often produces imperfect results, it could begin taking on mundane tasks like email writing and admin, saving business resources and potentially even replacing some human jobs.
So, what’s the catch?
Despite its advantages, generative AI still has its drawbacks. Most significantly, in the rush to embrace the latest technology, one key factor could make or break the successful rollout of these tools: cyber security.
Understanding the risks of generative AI
Generative AI is rapidly going mainstream, with plenty of users keen to test it out and make the most of opportunities to make their lives easier.
However, the rising popularity of ChatGPT and other generative AI models has raised eyebrows regarding security. Why?
First and foremost, AI can be used to convincingly imitate human conversation, giving cyber criminals a new platform for creating bots and targeting people with clever social engineering scams.
For example, ChatGPT could be used to write more convincing phishing emails. Phishing is one of the most common cyber attacks, in which scammers imitate trusted organisations or individuals to dupe victims into sharing personal information or clicking malicious links.
These tools can also be used to generate malware capable of spreading through systems and causing irreversible damage and data loss. Coaxing an LLM into producing working malicious code still takes genuine malware expertise, but it’s certainly possible, and it will only become easier as these models grow more sophisticated.
What’s more, there are concerns over the collection and storage of prompts that users submit to AI tools. Not only are queries visible to the company that owns the LLM platform, presenting issues with data privacy, but these queries could also be hacked or leaked — with potentially disastrous implications should they contain personally identifiable information (PII).
Of course, AI developers are considering cyber security when designing these tools — from encrypting data and restricting access to establishing incident response plans and data handling best practices.
Still, there aren’t currently any specific regulations governing ChatGPT or other AI chatbots and systems, meaning using them isn’t without risk.
So, what can users do to avoid online threats whilst making the most of public LLMs’ advantages?
Stepping cautiously into the future
It’s clear that as it becomes increasingly tricky to determine what’s been produced by a human and what’s the work of computers, generative AI could present several cyber security threats.
But that doesn’t mean we should write off the idea altogether. By following these straightforward IT security practices and staying vigilant, you can minimise risk and continue experimenting with this exciting new technology…
Keep private information private
Avoid sharing PII with generative AI tools and never (we repeat — NEVER) hand out data such as names, addresses, bank details or account numbers unless you’re sure you’re talking to a legitimate contact.
You may also want to use pseudonyms to remain anonymous when interacting with AI chatbots. After all, it’s better to be safe than sorry — and you wouldn’t want to expose sensitive business data to the wrong people unintentionally…
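As a rough illustration of that principle, a business could strip obvious identifiers from a prompt before it ever leaves the network. The sketch below uses a handful of regular expressions and is nowhere near exhaustive (real PII detection is far harder); the patterns are simplified assumptions for demonstration only:

```python
import re

# Simplified, illustrative patterns only; real PII detection is far harder.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b0\d{9,10}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Email jane.doe@example.com or call 07123456789 about invoice 42."))
# -> Email [EMAIL REDACTED] or call [UK_PHONE REDACTED] about invoice 42.
```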
Stay up to date with security policies
Before you or your company use generative AI for business purposes, review the service’s privacy and data retention policies first.
Doing so will ensure you’re in the loop with how your conversations will be stored and used (and for how long), which may influence how you choose to use the service.
Remain suspicious of unfamiliar messages
As generative AI tools become more refined, detecting a potential scam will become more challenging.
So, never open or respond to communications from an unknown source and keep an eye out for any irregularities in the sender’s address or website that could indicate fraudulent behaviour. And remember: if you do think you’ve been targeted by a scam, report it to your IT provider and the National Cyber Security Centre.
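For a flavour of what spotting those irregularities can look like in practice, the sketch below compares a sender’s domain against a list of domains you normally deal with, using Python’s standard difflib module. The trusted domains here are made-up examples:

```python
import difflib

# Domains you legitimately deal with (made-up examples).
TRUSTED_DOMAINS = ["example.com", "mycompany.co.uk", "hmrc.gov.uk"]

def check_sender(address: str) -> str:
    """Flag sender addresses whose domain merely resembles a trusted one."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return f"{address}: domain recognised"
    # A near-miss (e.g. one character swapped) is a classic phishing trick.
    lookalikes = difflib.get_close_matches(domain, TRUSTED_DOMAINS, n=1, cutoff=0.8)
    if lookalikes:
        return f"{address}: SUSPICIOUS - resembles {lookalikes[0]}"
    return f"{address}: unknown domain - treat with caution"

print(check_sender("accounts@examp1e.com"))  # lookalike of example.com
print(check_sender("info@mycompany.co.uk"))  # genuine
```

No script like this replaces human vigilance, but it shows just how small the visual difference between a genuine and a fraudulent address can be.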
Implement robust IT security measures
In the unfortunate event that you or your business does succumb to an AI-generated attack, it’s crucial to be prepared.
Ensure all your anti-virus software and operating systems are kept up to date so known vulnerabilities are patched before hackers can exploit them. Plus, always back up your critical data so you’re never forced to pay a ransom to recover it.
You should also implement access security measures such as multi-factor authentication to provide additional protection — and establish a suitable incident response plan to ensure you’re equipped with the correct processes to bounce back from an AI-driven breach.
Interested in safely embracing automation within your small to medium-sized business? Contact our team at 0800 988 2002 to discover how our range of tailored cloud solutions and cyber security services can help.