Dear readers,
The drafting of the EU’s Code of Practice for general-purpose AI (GPAI) signals a crucial moment in European AI regulation and a global benchmark for managing innovation and risk. Leading academics, including AI pioneer Yoshua Bengio, are at the centre of this initiative, tasked with weaving a framework that balances transparency, safety, and innovation. The cast of academics, from seasoned professors to PhD candidates, showcases the EU’s desire to root its AI regulation in deep technical and legal expertise. Yet, as polished as this effort appears, questions linger about its timing and inclusivity—critical voices from industry and civil society are already showing signs of divergence.
The EU AI Act, hinging significantly on this Code of Practice, will not see final standards before 2026. Thus, this interim period, overseen by academic chairpersons, holds immense weight. While the presence of global AI figures like Bengio underscores the Code’s gravitas, the timing of their appointment, just after Parliament’s intervention, leaves a slightly bitter aftertaste. The process could have benefited from earlier transparency, with the ‘pity’ expressed by digital policy advisors reflecting broader concerns about the bureaucratic backlog. But there is no doubt about the intellectual firepower gathered here: the mix of AI technical savants, legal minds, and governance experts is the EU’s bet on building a human-centered and safe AI future.
Yet, the road ahead is bumpy. The first plenary, attended by nearly 1,000 stakeholders, unveiled the deep fault lines between general-purpose AI providers—like ChatGPT’s creators—and other participants. The latter group, which includes civil society and academia, overwhelmingly pushed for stringent transparency on training datasets, supporting disclosure of licensed content, open data, and even web-scraped material. GPAI providers, however, were notably less enthusiastic, baulking at demands for greater data disclosure, particularly when it came to open datasets. Their preference for self-policed data transparency, rather than third-party audits, exposes a friction between innovation-driven autonomy and regulation-enforced accountability.
While academia and civil society rally behind risk assessment and strict audit trails, providers shy away from measures they deem overly stringent. Perhaps this is the core tension of the GPAI Code: can a framework fuel cutting-edge AI development and satisfy the public’s call for ethical safeguards? The European Commission’s ongoing consultation shows the battle for compromise is still in its early stages. With over 430 responses already collected, there is a palpable risk that the sheer diversity of opinions could derail progress, a possibility echoed by those close to the drafting process.
Creating this Code of Practice feels like a high-stakes balancing act. On the one hand, there is pressure to guard against AI’s ‘black box’ nature, ensuring transparency and responsibility. On the other, the EU must remain competitive in AI, not shackling its innovators with undue restrictions. The stakes could not be higher. As Bengio puts it, this Code will have to stand the test of time, and it will be watched closely well beyond Europe.
In other news, the Department of Government Efficiency (DOGE) token witnessed a staggering rise of over 33,000% in September before stabilising at approximately USD 0.02309. The surge was triggered by a playful comment from Elon Musk after a discussion with Donald Trump, who floated the idea of establishing a new government efficiency department, with Musk potentially at its helm if Trump wins the upcoming election. Amidst a closely contested race between Trump and Kamala Harris, meme coins, including politically themed tokens like DOGE, are seeing a resurgence, with trading volumes surging to over USD 10 million in 24 hours.
Marko and the Digital Watch team
Highlights from the week of 27 September–4 October 2024
The first draft of the EU AI Code is expected by November, with finalisation planned for 2025.
At the 79th UN General Assembly, 18 nations endorsed a joint statement emphasising the critical importance of securing undersea cable infrastructure, highlighting the need for policies that ensure its resilience,…
The EU seeks to understand how these platforms’ algorithms could influence civic discourse, mental health, and child protection.
Analysts predict a new cryptocurrency supercycle, driven by the resurgence of meme coins and politically themed tokens like MAGA and ConstitutionDAO.
Russia’s digital ministry confirms Google’s account creation restrictions and warns users to back up data and consider alternative two-factor authentication methods.
Concurrently, the US is enhancing financial and technological support for allies like Israel, which raises ethical concerns amid ongoing regional conflicts.
Companies offer advanced AI training, including quantum physics.
The initiative aims to improve participation in the digital economy, telehealth, and distance learning, with grant applications open until 7 February 2025.
X is likely to pay the fines but may challenge an additional $1.8 million penalty imposed by Brazil’s Supreme Court after a brief platform reappearance.
The project is expected to benefit local fishermen, tourism, shipping, and marine research, ultimately unlocking new economic opportunities for local communities.
Reading corner
The conceptual and terminological confusion surrounding the use of “digital,” “cyber,” and “tech” diplomacy has practical consequences, as highlighted by a recent US Government Accountability Office report, which identifies this ambiguity as a major barrier to effective cyber and digital diplomacy. The key takeaway is that clarity in terminology is crucial, not only for clear communication but also for effective diplomatic action, underscoring the importance of understanding the context in which these terms are used.