New study suggests leading AI models fall short of responsible AI standards set by EU AI Act

The research compared 10 models and found most companies were far from compliant. Transparency on risk mitigation measures was particularly lacking.


A recent study conducted by Stanford researchers has found that major AI models are falling short of the responsible AI standards set by the EU’s Artificial Intelligence Act. The study compared 10 prominent AI models with the requirements outlined in the draft EU law and discovered that most companies were not meeting the necessary criteria.

Among the models assessed, Hugging Face's BLOOM received the highest score, 36 out of 48, while ChatGPT, a market leader, ranked in the middle. In contrast, Aleph Alpha, a German AI company, scored only 5 out of 48, and Anthropic, which is backed by Google, managed 7 points. Transparency was generally higher among open-source models, while closed or proprietary models performed better on risk mitigation, though both groups left significant room for improvement.

The research emphasises the lack of transparency from AI providers about their risk mitigation measures, which makes it difficult to assess the potential risks associated with foundation models. The researchers stressed that implementing and enforcing the EU AI Act will be essential to drive positive change across the AI ecosystem. In conclusion, the study highlights how unprepared AI providers are to comply with forthcoming regulations that require disclosure and risk mitigation.