The Implications of Using ChatGPT and Other Free GenAI Tools: An Instructive Case Study from Australia

Recent news from Australia

Following a recent investigation, the privacy regulator in the Australian state of Victoria has imposed a ban on the use of ChatGPT within a government department.

This case highlights the dual nature of Generative AI (GenAI) tools, which offer significant benefits but also pose risks if not managed with robust policies, training, and education.

The Incident

The investigation by the Office of the Victorian Information Commissioner (OVIC) focused on a Protection Application Report (PA Report) prepared by a Child Protection worker at the Department of Families, Fairness and Housing (DFFH). The report, submitted to the Victorian Children’s Court, contained overly sophisticated language and inaccurate information, including inconsistent references to a child’s doll. This led to a misrepresentation of the child’s risk factors, downplaying the severity of potential harm.

Privacy Concerns

The primary concerns were twofold:

  1. Release of Sensitive Information: The free version of ChatGPT, used to draft the report, disclosed sensitive personal information to OpenAI, an offshore company. This information was then outside the control of the Department.
  2. Inaccurate Content: The content generated by ChatGPT included inaccuracies which, in this case, downplayed the risks to the child involved.

OVIC’s Findings

OVIC’s investigation under the Privacy and Data Protection Act 2014 (Vic) concluded that DFFH breached Information Privacy Principles (IPPs) by:

  • Failing to mitigate risks that ChatGPT would collect, use, and disclose inaccurate personal information.
  • Failing to prevent unauthorized disclosure of personal information.

OVIC criticized DFFH’s policies and protections, finding them insufficient to ensure compliance with the IPPs. Consequently, OVIC issued Compliance Notices requiring DFFH to block access to GenAI tools like ChatGPT for Child Protection staff and to regularly scan for similar tools.
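
To illustrate what the second requirement might look like in practice, the sketch below scans a web-proxy access log for requests to well-known GenAI endpoints. It is a minimal illustration only: the domain list and the Squid-style log format are assumptions for the example, not details from OVIC's notices.

# Minimal sketch: flag proxy-log entries that request known GenAI endpoints.
# Assumptions: a Squid-style access log with the URL in one whitespace-
# separated field, and an illustrative (not exhaustive) domain blocklist.

GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_genai_requests(log_lines):
    """Yield (line_number, domain) for each request to a listed domain."""
    for number, line in enumerate(log_lines, start=1):
        for field in line.split():
            for domain in GENAI_DOMAINS:
                if domain in field:
                    yield number, domain

if __name__ == "__main__":
    sample_log = [
        "10.0.0.5 - GET https://chatgpt.com/backend/conversation",
        "10.0.0.7 - GET https://example.org/index.html",
    ]
    for number, domain in flag_genai_requests(sample_log):
        print(f"line {number}: request to {domain}")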

Implications for New Zealand

Under New Zealand’s Privacy Act 2020, a similar outcome would likely occur. The equivalent privacy principles in New Zealand emphasize the need for agencies to ensure personal information is accurate and protected against unauthorized access and misuse.

Key Takeaways

For New Zealand organisations, this case underscores the importance of robust policies, education, and training on GenAI use.

Effective education should extend beyond managers and leaders to the general workforce, and specific departmental rules and technical controls should be in place to manage the use of GenAI tools.
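
To make "technical controls" concrete, here is a deliberately simple sketch of a pre-submission check that flags likely personal identifiers before text is sent to a GenAI tool. The patterns are illustrative assumptions, far from a reliable detector; a production control would sit behind a proper data-loss-prevention product.

# Minimal sketch: flag text that appears to contain personal identifiers
# before it is submitted to a GenAI tool. The patterns are illustrative
# assumptions, not a complete or reliable detector.
import re

PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone number": re.compile(r"\b(?:\+?64|0)[\s-]?\d[\d\s-]{6,}\b"),  # NZ-style numbers
    "date of birth": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def flag_personal_info(text):
    """Return (label, match) pairs for content that should be reviewed."""
    hits = []
    for label, pattern in PATTERNS.items():
        hits.extend((label, match) for match in pattern.findall(text))
    return hits

if __name__ == "__main__":
    draft = "Client J. Smith, DOB 04/07/2015, contact jsmith@example.com"
    for label, match in flag_personal_info(draft):
        print(f"Possible {label}: {match!r} - review before submitting")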

Free, public GenAI tools like ChatGPT may use the content of your interactions to train the large language models (LLMs) that power them, so anything you enter can end up outside your control. Commercial GenAI offerings, such as the paid version of Microsoft Copilot, commit to preserving data privacy and to not using your content to train the underlying models.

Conclusion

This case serves as a cautionary tale for organizations using GenAI tools. It highlights the need for comprehensive policies, ongoing education, and stringent controls to manage the risks associated with these technologies.

Does your organisation have a clear policy on the use of GenAI tools? Such a policy should define how the tools may be used, which activities are acceptable, how outputs are verified, and which GenAI tools are approved.

Ultimately, like any tool, GenAI can help workers be more productive, completing work more quickly and to a higher standard. It just needs to be used properly.

For advice on these issues, please get in touch. We would be delighted to assist.