Skynet who?
There’s no shortage of sci-fi movies about the end of the world being brought on by self-aware AI.
You can relax for now. ChatGPT is no such monster, but it does have an impact on our cyber-resilience.
We’re watching universities and other educators work out how to test students when ChatGPT can write such convincing answers for them. That’s because it’s a language model. The clue is in the name, “Chat Generative Pre-trained Transformer”: it uses language patterns to create content based on what it has seen before, which is a lot.
For example, ask ChatGPT to write an election manifesto for an environmental group and you will get a strong but predictable output. That’s because it isn’t creative; it can only recombine what it has already seen. The nearest it gets is when you have a bit of fun: try “write an election manifesto for an environmental group as a country and western song” (there’s a taste of the result further down). But even then, there are no new ideas.
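If you want to repeat the experiment programmatically, here is a minimal sketch assuming the pre-1.0 openai Python package that was current at the time of writing; the placeholder key and model name are yours to substitute.
```python
import openai

openai.api_key = "sk-..."  # substitute your own API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write an election manifesto for an environmental "
                   "group as a country and western song.",
    }],
)

print(response.choices[0].message.content)
```
Run it twice and you will get two slightly different songs, but both assembled from the same well-worn patterns.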
However, the challenge remains that repeating what has gone before is quite enough to cause concern.
We are already seeing machine learning used in cyber-security, to look for adverse patterns and defend against them. There are AI tools in our new KARE Foundation support plans, as part of the endpoint protection, the Microsoft 365 monitoring and the screening of inbound email content.
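To make that idea concrete, here is a toy sketch of pattern-based anomaly detection using scikit-learn’s IsolationForest. The login features are invented for the example; the real tools in those products are far more sophisticated.
```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented features per login event: [hour_of_day, failed_attempts, mb_downloaded]
normal_logins = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15], [16, 1, 10],
])

# Learn what "normal" activity looks like from historical events
model = IsolationForest(contamination=0.1, random_state=0).fit(normal_logins)

# A 3 a.m. login with repeated failures and a huge download stands out
suspicious = np.array([[3, 7, 900]])
print(model.predict(suspicious))  # -1 is scikit-learn's label for an anomaly
```
The same principle, learn the normal pattern and flag the deviation, sits behind commercial endpoint and mailbox monitoring.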
But can these same technologies be used against us? ChatGPT can write code, and it places that skill in the hands of a much broader audience. For now, the current generation spits out sub-standard, unreliable code (an invented illustration of the kind of flaw is sketched below) and its knowledge is limited to content from before 2021. That will change: it has a lot to learn, but that development is clearly ongoing and accelerating.
NIST has just put out a paper, its AI Risk Management Framework, that lays out standards for responsible AI. It’s a daunting read, covering the roles of AI actors, the risks around the reliability and bias of the source content, the learning regimes, complexity, privacy and so on. The implication is that a discipline is emerging around the governance of AI tools such as ChatGPT and, more importantly, its successors. Not surprisingly, OpenAI, the developers of ChatGPT, are encouraging the debate and have released papers on the topic. The challenge is that most people will skip right over all of this and just see how they can use it.
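On that point about code quality: the snippet below is an invented illustration, not real ChatGPT output, of the kind of subtle flaw that naively generated code often carries. It “works” in a demo yet concatenates user input straight into SQL, a classic injection hole.
```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # UNSAFE: passing "alice' OR '1'='1" as username returns every row
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safely(conn: sqlite3.Connection, username: str):
    # The parameterised query a careful reviewer would insist on
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```
The danger is not that such code fails to run; it runs fine, which is exactly why an inexperienced “developer” would ship it.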
As promised, here’s a taste of ChatGPT’s country and western manifesto:
(Verse 1)
Well, the world’s in a mess, it’s time for a change,
This country needs leaders who won’t act so strange,
We’re an environmental group, and we’re here to say,
We’ll fight for Mother Nature each and every day.
(Chorus)
We’re the Green Country Riders, riding for the land,
Protecting our planet with a firm, caring hand,
Clean air, clean water, that’s what we strive,
In this election, let our message thrive.
(Verse 2)
We’ll reduce carbon emissions, it’s time to go green…
So can the criminal community harness ChatGPT and similar tools to help them commit their crimes?
They are certainly experimenting, just like everyone else. Check Point Research published a paper back in January outlining the activity they were observing. AI can sift through huge amounts of data quickly and collate whatever it is directed to find, from stolen data on the dark web to public data, and even the content you share with it when you use it.
Even if it isn’t being used for hacking, the opportunity to misuse ChatGPT is limited only by your imagination. For example, it can create credible content at such scale that, fed into public forums and social media, it could influence opinion and maybe even change elections.
Clearly, the work on these tools is accelerating. Even if this generation is still ‘fairly’ harmless, the next generation will be here sooner than we expect and smarter than we can imagine.
Maybe then, voice recognition will finally work!