New OSI guidelines clarify open source standards for AI

OSI sets transparency standards for AI models, addressing industry and regulatory needs.


The Open Source Initiative (OSI) has introduced version 1.0 of its Open Source AI Definition (OSAID), setting new standards for AI transparency and accessibility. Developed over several years in collaboration with academia and industry, the OSAID aims to establish clear criteria for what qualifies as open-source AI. The OSI says the definition will help align policymakers, developers, and industry leaders on a common understanding of ‘open source’ in the rapidly evolving field of AI.

According to OSI Executive Vice President Stefano Maffulli, the goal is to ensure that AI models labelled as open source provide enough detail for others to recreate them and disclose essential information about their training data, such as its origin and processing methods. The OSAID also emphasises that open source AI should grant users the freedom to modify and build upon the models without restrictive permissions. While the OSI lacks enforcement power, it plans to advocate for its definition as the AI community’s reference point, aiming to combat “open source” claims that don’t meet OSAID standards.

The new definition comes as some companies, including Meta and Stability AI, use the open-source label without fully meeting its transparency requirements. Meta, a financial supporter of the OSI, has voiced reservations about the OSAID, citing the need for protective restrictions around its Llama models. In contrast, the OSI contends that AI models should be openly accessible to enable a truly open-source AI ecosystem, not restricted by proprietary data and usage limitations.

Maffulli acknowledges that the OSAID may need frequent updates as technology and regulations evolve. The OSI has created a committee to monitor the definition’s application and adjust it as necessary, with an eye toward refining it to address emerging issues such as copyright and proprietary data.