Responsible AI gaps highlighted in UNESCO and Thomson Reuters Foundation report

New research covering 3,000 companies finds a gap between responsible AI commitments and day-to-day operational practice.

Image: UNESCO and Thomson Reuters Foundation logos.

A new global report from UNESCO and the Thomson Reuters Foundation suggests that companies are adopting AI faster than they are building the internal systems needed to govern it responsibly, exposing significant gaps in oversight, accountability, and risk management. Based on data from 3,000 companies, the report found that 44% have an AI strategy, but only 10% are publicly committed to following an AI governance framework.

The gap, according to the report, is no longer one of awareness but of implementation. Many companies now present responsible AI as a principle or ambition, yet provide far less detail on where AI is used, how risks are managed in practice, who is responsible when systems fail, or how concerns are escalated internally. Governance is often described at a conceptual level, but much less often backed by visible operational mechanisms.

Some of the sharpest weaknesses lie in areas central to public-interest AI governance. Only 11% of companies said they assess environmental impact, while just 7% evaluate the human rights impact of the AI they use. Human oversight also remains limited, with only 12% reporting a policy that ensures human supervision of AI systems.

The report also points to weak accountability and data governance structures. Only a small minority of companies could identify who is responsible for ethical risks across the AI lifecycle, while three-quarters showed no evidence of policies to verify the quality of AI training data.

Fewer than one in five reported conducting privacy or data protection impact assessments specific to AI, and only one in five had policies governing data sharing with third-party AI vendors.

Workforce preparedness appears similarly underdeveloped. While 30% of companies said they offer AI training programmes, only 12% provide structured training with comprehensive coverage. The report argues that many businesses now acknowledge the importance of skills development and workforce transition, but rarely explain how workers are supported in practice or how concerns can be raised and addressed.

Taken together, the findings suggest that the main test for responsible AI is shifting from principle to proof. The issue is no longer whether companies say the right things about ethical AI, but whether they can demonstrate that accountability, oversight, and remedies actually work when AI systems are deployed.
