MIT research highlights embedded and enacted risks in AI
Advanced AI tools like RAG and autonomous agents can improve results but increase exposure to sensitive data.
Generative AI offers major productivity and growth opportunities, but also brings new risks as organisations move from experiments to full deployment. MIT research highlights key risk areas, including training data, foundation models, user prompts, and system prompts.
Researchers identify two types of risk.
Embedded risks come from the technology itself, shaped by model behaviour, data quality, and vendor updates, and are mostly outside an organisation’s control.
Enacted risks arise from choices in deploying AI, from prompt design to agent permissions, and require strong governance.
Advanced uses such as retrieval-augmented generation (RAG) and autonomous AI agents increase exposure. RAG grounds model outputs in internal data, but it can surface sensitive information where access controls have gaps. AI agents acting across multiple tools risk ‘autonomy creep’, performing tasks without proper oversight.
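The control-gap risk in RAG can be illustrated with a minimal sketch. The toy retriever below, with hypothetical document names and a made-up `retrieve` helper, shows how retrieval without a clearance check pulls restricted internal content into the model's context, while an explicit filter (an enacted-risk control) keeps it out.

```python
# Illustrative sketch only: a toy RAG retriever. Documents, fields, and
# the retrieve() helper are hypothetical, not part of any real library.

DOCS = [
    {"text": "Public product FAQ", "sensitivity": "public"},
    {"text": "Board meeting minutes: acquisition plans", "sensitivity": "restricted"},
    {"text": "Employee salary bands", "sensitivity": "restricted"},
]

def retrieve(query, docs, user_clearance=None):
    """Return documents whose text matches any query word.

    If user_clearance is given, drop documents the user is not
    cleared to see before they reach the model's context window.
    """
    words = query.lower().split()
    hits = [d for d in docs if any(w in d["text"].lower() for w in words)]
    if user_clearance is not None:
        # Enacted-risk control: filter by the caller's clearance level.
        hits = [
            d for d in hits
            if d["sensitivity"] == "public" or user_clearance == "restricted"
        ]
    return hits

# Without a clearance check, a broad query leaks restricted content:
leaky = retrieve("plans minutes salary product", DOCS)
# With the filter applied for a low-clearance user, only public docs remain:
safe = retrieve("plans minutes salary product", DOCS, user_clearance="public")
```

The design point is that the filtering happens at retrieval time, before any text enters the prompt; relying on the model itself to withhold restricted passages it has already been shown is exactly the kind of control gap the research warns about.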
To manage AI risk, organisations should map tools, assign ownership, track outputs, and use separate strategies for embedded and enacted risks. Vendor engagement, governance frameworks, and technical controls are essential for safe AI use.
