IPC New South Wales’ Generative AI guidance targets privacy risks in Australia

The guidance warns NSW agencies about privacy, data breach, and offshore processing risks of using generative AI tools.

The Information and Privacy Commission New South Wales has issued guidance for public sector agencies in Australia on managing privacy risks associated with the use of generative AI tools.

The guide states that the Privacy and Personal Information Protection Act 1998 applies to the handling of personal information through generative AI tools. It is intended to help agencies understand and comply with privacy obligations when adopting tools such as ChatGPT, Gemini, Claude, Perplexity, and Copilot.

Generative AI can support workplace tasks such as drafting, editing, document analysis, research, translation, transcription, and process automation. However, the IPC warns that these tools can create privacy risks when prompts, uploaded files, or outputs include personal or health information.

The guide highlights risks including unexpected use or disclosure of personal information, cross-border data transfers, unauthorised disclosure, data breaches, extended retention of personal information, generation of new personal information, inaccurate or discriminatory outputs, and loss of transparency or control for data subjects.

Some generative AI providers may collect customer data, including prompts, uploaded files, and outputs, to train or improve their models, according to the IPC. Agencies should assess whether personal or health information uploaded to a generative AI service may be processed offshore or used for purposes beyond the original collection purpose.

Recommended measures include privacy impact assessments, updates to privacy management plans and data breach response policies, clear public notices, consent where required, acceptable use policies for staff, training, pre-deployment testing, third-party vendor assessments, and data residency in Australia where possible.

Human review is also presented as an important safeguard, especially where generative AI outputs inform decisions affecting individuals’ access to services, opportunities, or benefits. The IPC urges agencies to avoid a ‘set and forget’ approach and continuously monitor generative AI use, governance, culture, and emerging privacy risks.
