AI in the workplace raises critical governance and shadow use challenges
A growing number of employees are using AI tools without proper oversight, exposing organisations to data risks, compliance issues, and governance gaps. Experts call for clearer rules and stronger internal controls.
AI adoption in the workplace is accelerating faster than corporate governance frameworks are evolving. Experts warn that many organisations are unprepared for the risks associated with widespread AI use, creating gaps in oversight and accountability.
A study by the University of Melbourne and KPMG found that nearly half of surveyed professionals admitted to misusing AI at work. Many employees also reported witnessing colleagues misuse AI tools, often without formal authorisation.
Commonly reported practices include uploading sensitive company data to public AI platforms, using AI during internal assessments, and presenting AI-generated work as one's own. A significant share of employees also admitted putting in less effort because they rely on AI assistance.
Experts caution that this trend creates an illusion of productivity and competence. Managers may receive polished reports generated by AI, while employees may not fully understand or verify the content, exposing organisations to poor decision-making, security vulnerabilities, and compliance risks.
Data protection concerns are particularly significant. Feeding confidential or proprietary information into public AI systems can lead to data leakage and legal exposure, especially when misuse results in financial harm or regulatory breaches.
To address these risks, experts recommend clear internal rules, approved AI tools, monitoring of sensitive data flows, and mandatory human oversight in critical processes. Training programmes should focus on practical guidance and reinforce that employees remain responsible for the accuracy and legality of AI-assisted work.
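The "monitoring of sensitive data flows" recommendation can, in its simplest form, be sketched as a pre-submission filter that screens prompts before they leave the organisation. The patterns and function names below are illustrative assumptions only; real data-loss-prevention tooling is far more sophisticated than this minimal sketch.

```python
import re

# Hypothetical patterns an organisation might flag before a prompt
# is sent to an external AI tool; a real DLP system would cover
# many more categories and use context-aware detection.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "classification": re.compile(r"\b(confidential|internal only|proprietary)\b", re.I),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of all sensitive patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def allow_submission(text: str) -> bool:
    """Allow a prompt to reach an external AI tool only if it matches
    no sensitive pattern; otherwise it should be blocked for review."""
    return not screen_prompt(text)
```

A filter like this would sit alongside, not replace, the human-oversight and training measures described above: it can flag obvious leaks, but employees remain responsible for what they submit.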
Analysts note that similar patterns emerged during the early stages of internet adoption. As AI use expands, governance frameworks, enforcement mechanisms, and organisational cultures will need to evolve to manage long-term risks.
