Digital policy trends in June 2023

The top 3 digital policy trends in June 2023 were formulating guardrails for governing AI, digital identities gaining traction, and companies gearing up for the Digital Services Act (DSA).


Governing AI: What are the appropriate AI guardrails? 

AI governance remains the number one trend in digital policy as national, regional and global efforts to shape AI guardrails continue.

The EU’s risk-based approach

The European Parliament’s approval of the AI Act is a groundbreaking development. The regulation classifies AI systems based on risk levels, includes safeguards for civil rights, and imposes severe fines for violations. Next in the legislative process are the so-called trilogues, in which the European Parliament, the EU Council, and the Commission must agree on a final version of the act; this agreement is expected to be reached by the end of the year.

A new study from Stanford suggests that leading AI models still fall far short of the responsible AI standards set by the AI Act (in the version agreed by the European Parliament), notably lacking transparency on risk mitigation measures. But some in the industry argue that the rules impose too heavy a regulatory burden. A recent open letter signed by some of the largest European companies (e.g. Airbus, Renault, Siemens) warns that the AI Act could harm the EU’s competitiveness and compel them to relocate to less restrictive jurisdictions. Companies are, in fact, doing their best to shape the rules: OpenAI, for example, successfully lobbied in the EU for its general-purpose AI systems not to be classified as high risk under the forthcoming AI Act, a designation that would trigger stringent legal requirements such as transparency, traceability, and human oversight. OpenAI’s arguments align with earlier lobbying by Microsoft and Google, which argued that stringent regulation should apply only to companies that explicitly deploy AI in high-risk use cases, not to companies that build general-purpose AI systems.

Given the EU’s track record on data protection rules, its proposed AI Act was anticipated to serve as an inspiration to other jurisdictions. In June, Chile’s Parliament initiated discussions on a proposed AI Bill, focusing on legal and ethical aspects of AI’s development, distribution, commercialisation, and use.

More regional rules are in the works: ASEAN countries are reportedly planning an AI guide that will tackle governance and ethics, in particular the use of AI to generate misinformation online. The guide is expected to be adopted in 2024, with momentum likely to build during Singapore’s chairmanship of ASEAN that year.

Business-friendlier approaches

Considering that Singapore itself is taking a collaborative approach to AI governance and is focused on working with businesses to promote responsible AI practices, the ASEAN guide is not likely to be particularly stringent (watch out, EU?). Softer, more collaborative approaches are also expected to be formulated in Japan and the UK, which believe such an approach will help them position themselves as AI leaders. 

Another country taking a more collaborative approach to AI governance is the USA. Last month, President Biden met with Big Tech critics from civil society to discuss AI’s potential risks and its implications for democracy, including the spread of misinformation and the exacerbation of political polarisation. The US Commerce Department will create a public working group to address the potential benefits and risks of generative AI and develop guidelines to manage those risks effectively. The working group will be led by NIST and comprise representatives from various sectors, including industry, academia, and government.


As countries continue their AI race, we could end up with a patchwork of legislation, rules, and guidelines espousing conflicting values and priorities. It is no surprise that calls for global rules and an international body are also gaining traction. A future global AI agency inspired by the International Atomic Energy Agency (IAEA), an idea first put forward by OpenAI CEO Sam Altman, has garnered support from UN Secretary-General António Guterres.

France is advocating for global AI regulation, with President Macron proposing that the G7 and the Organisation for Economic Co-operation and Development (OECD) would be good platforms for this purpose. France wants to work alongside the EU’s AI Act while advocating for global regulations and also intends to collaborate with the USA in developing rules and guidelines for AI. Similarly, Microsoft’s President Brad Smith called for collaboration between the EU, the USA, and G7 nations, adding India and Indonesia to the list, to establish AI governance based on shared values and principles. 

In plain sight: SDGs as guardrails

However, the road to global regulation is typically long and politically tricky, and its success is not guaranteed. Diplo’s Executive Director Dr Jovan Kurbalija argues that humanity is overlooking valuable AI guardrails hiding in plain sight: the SDGs. They are current, comprehensive, rigorously researched, and immediately applicable. They already enjoy global legitimacy without being centralised or imposed. These are just a handful of the reasons why the SDGs can play a crucial role; there are 15 reasons why we should use the SDGs for governing AI.

Digital identification schemes gain traction 

Actors worldwide are pushing for more robust, secure and inclusive digital ID systems and underlying policies. 


The OECD Council approved a new set of recommendations on the governance of digital identity centred on three pillars. The first addresses the need for systems to be user-centred and integrated with existing non-digital systems. The second focuses on strengthening the governance structure of the existing digital systems to address security and privacy concerns, while the third pillar addresses the cross-border use of digital identity.

Most recently, the EU Parliament and the Council reached a preliminary agreement on the main aspects of the digital identity framework put forward by the Commission in 2021. Previously, several EU financial institutions cautioned that specific sections of the regulation are open to interpretation and could require significant investments by the financial sector, merchants, and global acceptance networks. 

At the national level, a number of countries have adopted regulatory and policy frameworks for digital identification. Australia released the National Strategy for Identity Resilience to promote trust in the identity system across the country, while Bhutan endorsed the proposed National Digital Identity Bill, except for two clauses that await deliberation in the joint sitting of the Parliament. The Sri Lanka Unique Digital Identity Project (SL-UDI) is underway, and the Thai government introduced the ThaID mobile app to simplify access to services requiring identity confirmation.

Content moderation: gearing up for the DSA

Preparations for the DSA are in full swing, even though the European Commission has already faced its first legal challenge over the regulation, and it did not come from Big Tech as many would have expected. German e-commerce company Zalando filed a lawsuit against the Commission, contesting its designation as a very large online platform and criticising the lack of transparency and consistency in platform designation under the DSA. Zalando argues that it does not meet the criteria for such a classification and does not pose the same systemic risks as Big Tech.

Meanwhile, European Commissioner for Internal Market Thierry Breton visited Big Tech executives in Silicon Valley to remind them of their obligations under the DSA. Although Twitter owner Musk previously said that Twitter would comply with the DSA content moderation rules, Breton visited the company headquarters to perform a stress test evaluating Twitter’s handling of potentially problematic tweets as defined by EU regulators. Breton also visited the CEOs of Meta, OpenAI, and Nvidia. Meta agreed to a stress test in July to assess its readiness for the EU’s online content regulations, a decision prompted by Breton’s call for immediate action by Meta regarding its content targeting children.

European Commissioner for Internal Market Thierry Breton. Credit: European Commission

The potential of the EU to exert its political and legal power over Big Tech will be demonstrated in the coming months, with the DSA becoming fully applicable in early 2024.