Artificial intelligence (AI) is rapidly changing how we work and communicate. Whether you’re fully on board or observing from the sidelines, it’s clear that AI tools are being adopted at a remarkable pace.
With this growth, however, come risks. Security, privacy and trust issues are already emerging, and it’s worth understanding the key challenges now rather than later:
Generative AI can produce content that sounds authoritative but is actually incorrect. These “hallucinations”, if left unchecked, can lead to poor decisions or the spread of misinformation.
It’s not uncommon for people to paste sensitive company information into public AI tools without realising the risk. Many of these platforms retain user input and use it to improve their models. That means internal, sensitive data could end up being stored, analysed or even surfaced in future outputs.
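To make this concrete, here is a minimal Python sketch of the kind of safeguard some organisations put in front of public AI tools: a redaction pass that strips obvious identifiers before a prompt leaves the business. The patterns and example prompt are hypothetical and far from exhaustive; treat it as an illustration of the idea, not a complete data loss prevention control.

```python
import re

# A minimal, hypothetical redaction pass run before text is sent to a public
# AI tool. The patterns below are illustrative only; real deployments need far
# broader coverage (names, contracts, source code, credentials, and so on).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise this email from jane.doe@example.com about key sk-abc123def456ghi789."
print(redact(prompt))
# Summarise this email from [EMAIL REDACTED] about key [API_KEY REDACTED].
```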
Attackers can craft inputs that trick an AI system into behaving in ways it wasn’t meant to. These techniques, known as “jailbreaking” and prompt injection, are especially risky when the AI is connected to internal systems such as APIs or databases, because a manipulated model can be coaxed into misusing that access. Another risk is data poisoning, where training data is deliberately manipulated to produce harmful or biased outputs.
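As an illustration of one technical safeguard here, the sketch below (with hypothetical tool names and rules) applies an allow-list check to every action an AI assistant requests, so that even a successfully manipulated model can only trigger approved operations with valid parameters.

```python
# A minimal, hypothetical guardrail for an AI assistant that can call internal
# tools. Rather than trusting the model's output, every requested action is
# checked against an allow-list and argument rules before anything runs.
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    args: dict

# Only explicitly approved tools, each with a validator for its arguments.
ALLOWED_TOOLS = {
    "lookup_order_status": lambda args: set(args) == {"order_id"}
    and str(args["order_id"]).isdigit(),
}

def execute(call: ToolCall) -> str:
    if call.name not in ALLOWED_TOOLS:
        return f"Blocked: '{call.name}' is not an approved tool."
    if not ALLOWED_TOOLS[call.name](call.args):
        return f"Blocked: invalid arguments for '{call.name}'."
    # ... dispatch to the real internal API here ...
    return f"OK: running {call.name} with {call.args}"

# A prompt-injected request to export the customer database is rejected,
# even if the model was tricked into asking for it.
print(execute(ToolCall("export_customer_database", {"format": "csv"})))
print(execute(ToolCall("lookup_order_status", {"order_id": "10492"})))
```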
As AI becomes part of more business workflows, it’s essential to apply the same rigour to its use as we would with any other technology. Awareness, clear policies and technical safeguards all play a role in ensuring responsible and secure use.
To learn more about managing AI-related risks, refer to the Australian Signals Directorate’s 2025 guidance, Engaging with AI, available at cyber.gov.au.
Written by Justin Fielke