
DW Weekly #136 – 13 November 2023


Dear all,

The ongoing Middle East conflict has made us realise how dangerous and divisive hate speech can be. With illegal content on the rise, governments are piling on the pressure and launching new initiatives to help curb the spread. But can these initiatives truly succeed, or are they just another drop in the ocean?

In other news, policymakers are working towards semantic alignment in AI rules, while tech companies are offering indemnity for legal expenses related to copyright infringement claims originating from AI technology.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Governments ramp up pressure on tech companies to tackle fake news and hate speech

Rarely have we witnessed a week quite like the last one, where so much scrutiny was levelled at social media platforms over the rampant spread of disinformation and hate speech. You can tell that leaders are worried about AI’s misuse by terrorists and violent extremists for propaganda, recruitment, and the orchestration of attacks. The fact that so many elections are around the corner raises the stakes even more.

Christchurch Call. In a week dominated by high-stakes discussions, global leaders, including French President Emmanuel Macron and former New Zealand leader Jacinda Ardern, gathered in Paris for the annual Christchurch Call meeting. The focal point was a more concerted effort to combat online extremism and hate speech, a battle that has gained momentum since the far-right shooting at a New Zealand mosque in 2019.

Moderation mismatch. In Paris, Macron seized the opportunity to criticise social media giants. In an interview with the BBC, he slammed Meta and Google for what he termed a failure to moderate terrorist content online. The revelation that Elon Musk’s X platform had only 2,294 content moderators, significantly fewer than its counterparts, fuelled concerns about the platforms’ efficacy.

UNESCO’s battle cry. Meanwhile, UNESCO’s Director-General, Audrey Azoulay, sounded an alarm about the surge in online disinformation and hate speech, labelling it a ‘major threat to stability and social cohesion’. UNESCO unveiled an action plan (in the form of guidelines), backed by global consultations and a public opinion survey, emphasising the urgent need for coordinated action against this digital scourge. But while the plan is ambitious, its success hinges on adherence to non-binding recommendations. 

Political ads. On another front, EU co-legislators reached a deal on the transparency and targeting of political advertising. Stricter rules will now prohibit targeted ad-delivery techniques involving the processing of personal data in political communications. A public repository for all online political advertising in the EU is set to be managed by an EU Commission-established authority. ‘The new rules will make it harder for foreign actors to spread disinformation and interfere in our free and democratic processes. We also secured a favourable environment for transnational campaigning in time for the next European Parliament elections,’ lead MEP Sandro Gozi said. In the EU’s case, success hinges not on adherence, but on effective enforcement. 

Use of AI. Simultaneously, Meta, the parent company of Facebook and Instagram, published a new policy – first disclosed by the press – in response to the growing impact of AI on political advertising. Starting next year, Meta will require organisations placing political ads to disclose when they use AI software to generate part or all of those ads. Meta will also prohibit advertisers from using AI tools built into Meta’s ad platform to generate ads in a variety of categories, including housing, credit, financial services, and employment. Although we’ve come to look at self-regulation with mixed feelings, the new policy – which will apply globally – is ‘one of the industry’s most significant AI policy choices to come to light to date’, to quote Reuters.

Crack-down in India. Even India joined the fray, with its Ministry of Electronics and Information Technology issuing a stern statement on the handling of misinformation. Significant social media platforms with over 5 million users must comply with strict timeframes for identifying and deleting false content.

As policymakers and tech giants grapple with the surge of online extremism and disinformation, it’s clear that much more needs to happen. The scale of the problem demands a tectonic change, one that goes beyond incremental measures. The much-needed epiphany could lie in the shared understanding and acknowledgement of the severity of the problem. While it might not bring about an instant solution, collective recognition of the problem could serve as a catalyst for a significant breakthrough.


Digital policy roundup (6–13 November)

// AI //

OECD updates its definition of AI system

The OECD’s council has agreed to a new definition of AI system, which reads: ‘An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.’

Compared with the 2019 version, the new definition adds content to the list of possible outputs, a nod to generative AI systems.

Why is it relevant? First, the EU, which aligned its AI Act with the OECD’s 2019 definition, is expected to integrate the revised definition into its draft law, presently at trilogue stage. As yet, no documents reflecting the new definition have been published. Second, the EU’s push towards semantic alignment extends further. The EU and USA are currently working on a common taxonomy, or classification system, for key concepts, as part of the EU-US Trade and Technology Council’s work. The council is seeking public input on the draft taxonomy and other work areas until 24 November.


Hollywood actors and studios reach agreement over use of AI 

Hollywood actors have finally reached a (tentative) deal with studios, bringing an end to a months-long strike. One of the disagreements was on the use of AI: Under the new deal, producers will be required to obtain consent from, and compensate, actors for the creation and use of their digital replicas, whether created on set or licensed for use.

The film and television industry faced significant disruptions due to a strike that began in May. The underlying rationale was this: While it’s impossible to halt the progress of AI, actors and writers could fight for more equitable compensation and fairer terms. Hollywood’s film and television writers reached an agreement in October, but negotiations between studios and actors were at an impasse until last week’s deal.

Why is it relevant? First, it’s a prime example of how AI has been disrupting creative industries and drawing concerns from actors and writers, despite earlier scepticism. Second, as The Economist argues, AI could make a handful of actors omnipresent, and hence, eventually boring for audiences. But we think fans just want a good storyline, regardless of whether the well-loved artist is merely a product of AI.


OpenAI’s ChatGPT hit by DDoS attack

OpenAI was hit by a cyberattack last week, resulting in a major outage to its ChatGPT and API. The attack was suspected to be a distributed denial of service (DDoS) attack, which is meant to disrupt access to an online service by flooding it with too much traffic. When the outage first happened, OpenAI reported that the problem was identified, and a fix was deployed. But the outage continued the next day, with the company confirming that it was ‘dealing with periodic outages due to an abnormal traffic pattern reflective of a DDoS attack’.
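For readers curious about the mechanics, one common defence against traffic floods is rate limiting. The sketch below – a generic token-bucket limiter in Python, purely illustrative and unrelated to OpenAI’s actual infrastructure – shows why a service that can handle only so many requests per second must drop the excess when hit with a sudden burst:

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # serve the request
        return False      # drop the request: traffic exceeds capacity

bucket = TokenBucket(rate=10, capacity=10)
# A sudden burst of 1,000 requests: only roughly the first 10 get through.
served = sum(bucket.allow() for _ in range(1000))
```

In a real DDoS, the flood comes from thousands of distributed sources at once, which is what makes it hard to filter out without also blocking legitimate users.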

Responsible. Anonymous Sudan claimed responsibility for the attack, which the group said was in response to OpenAI’s collaboration with Israel and the OpenAI CEO’s willingness to invest more in the country.

Screenshot of a message from Anonymous Sudan entitled ‘Some reasons why we targeted OpenAI and ChatGPT’ lists four reasons: (1) OpenAI’s cooperation with the state of Israel, (2) use of AI for weapons and oppression, (3) it is an American company, and (4) it has a bias toward Israel (summary of the list).



// COMPETITION //

G7 ready to tackle AI-driven competition risks; more discussion on genAI needed

Competition authorities from G7 countries believe they already have the legal authority to address AI-driven competitive harm, a power that could be further complemented by AI-specific policies, according to a communiqué published at the end of last week’s summit in Tokyo.

When it comes to emerging technologies such as generative AI, however, the G7 competition authorities say that ‘further discussions among us are needed on competition and contestability issues raised by those technologies and how current and new tools can address these adequately.’

Why is it relevant? Unlike other areas of AI governance, competition issues are not a matter of which new laws to enact, but rather how to interpret existing legal frameworks. How could this be done? Competition authorities have suggested that government departments, authorities, and regulators should (a) give proper consideration to the role of effective competition alongside other issues and (b) collaborate closely with each other to tackle systemic problems consistently.


// COPYRIGHT //

OpenAI launches Copyright Shield to cover customers’ legal fees for copyright infringement claims

Sam Altman, the CEO of OpenAI, has announced that the company will cover the legal expenses of business customers faced with copyright infringement claims stemming from using OpenAI’s AI technology. The decision responds to the escalating concern that industry-wide AI technology is being trained on protected content without the authors’ consent. 

This initiative, called Copyright Shield, was announced together with a host of other improvements to ChatGPT. Here’s the announcement: ‘OpenAI is committed to protecting our customers with built-in copyright safeguards in our systems. Today, we’re going one step further and introducing Copyright Shield – we will now step in and defend our customers, and pay the costs incurred, if you face legal claims around copyright infringement. This applies to generally available features of ChatGPT Enterprise and our developer platform.’

Why is it relevant? Covering customers’ legal costs has become a trend. In September, Microsoft announced legal protection for users of its Copilot AI services facing copyright infringement lawsuits; Google followed suit a month later, adding a second layer of indemnity to also cover AI-generated output. Details of how these protections will be implemented are not yet entirely clear.


Screenshot of a Meta info sheet entitled ‘Want to subscribe or continue using our Products for free with ads?’ It explains that laws are changing in the region, so Meta is introducing a new choice about how it uses personal info for ads, with two options: (1) Subscribe to use Instagram without ads, starting at EUR 12.99/month (inclusive of applicable taxes), in which case the user’s info won’t be used for ads; or (2) Use the products for free with ads, agreeing to Meta continuing to use account info and cookies to personalise ads and measure how they perform. The sheet adds that users can change their choice or adjust settings at any time, and that Meta is updating its Terms and Privacy Policy to reflect these changes.

// PRIVACY //

Meta tells Europeans: Pay or Okay

Meta has rolled out a new policy for European users: Allow Facebook and Instagram to show personalised ads based on user data, or pay a subscription fee to remove ads. But there’s a catch – even if subscribers sign up to remove ads, the company will still gather their data – it just won’t use that data to show them ads. Privacy experts have seen this coming. A legal fight is definitely on the horizon.


// TAXATION //

Apple suffers setback over sweetheart tax case involving Ireland

The Apple-Ireland state aid case, which has been ongoing for almost a decade, is set to be decided by the EU’s Court of Justice, and things don’t look too good for Apple. The current chapter of the case involves a decision by the European Commission, which found that Apple owed Ireland EUR 13 billion (USD 13.8 billion) in unpaid taxes, ruling that the tax arrangement Ireland granted Apple amounted to illegal state aid. In 2020, the General Court annulled that decision, and the European Commission appealed.

Last week, the Court of Justice’s advocate general said the General Court had made legal errors and that the annulment should be set aside. Advocate General Giovanni Pitruzzella advised the court to refer the case back to the lower court for a new decision.

Why is it relevant? First, the new opinion confirms the initial reaction of the European Commission, which at the time had said that the General Court made legal errors. Second, although the advocate general’s opinion is non-binding, it is usually given considerable weight by the court. 

Case details: Commission v Ireland and Others, C-465/20 P


The week ahead (13–20 November)

13–16 November: Cape Town, South Africa, will host the Africa Tech Festival, a four-day event that is expected to bring together around 12,000 participants from the policy and technology sectors. There are three tracks: AfricaCom is dedicated to telecoms, connectivity, and digital infrastructure; AfricaTech explores innovative and disruptive technologies; AfricaIgnite is dedicated to entrepreneurs.

15 November: The much-anticipated meeting between US President Joe Biden and Chinese President Xi Jinping will take place on the sidelines of the Asia-Pacific Economic Cooperation (APEC) leaders’ meeting in San Francisco. Both sides will be looking for a way to smooth relations, not least on technology issues.

20 November–15 December: The ITU’s World Radiocommunication Conference, taking place in Dubai, UAE, will review the international treaty governing the use of the radio-frequency spectrum and the geostationary-satellite and non-geostationary-satellite orbits. Download the agenda and draft resolutions.


#ReadingCorner
Cover of a news magazine

The scourge of disinformation and hate speech during elections

There is no doubt that the use of social media as a daily source of information has grown substantially over the past 15 years. But did you know that it has now surpassed print media, radio, and TV? This leaves citizens particularly exposed to disinformation and hate speech, which are highly prevalent on social media. The Ipsos UNESCO survey on the impact of online disinformation and hate speech sheds light on the growing problem, especially during elections.


Screenshot of a Telegeography submarine cable map

One world, two networks? Not yet…

One of the biggest fears among experts is that the tensions between the USA and China could fragment the internet. Telegeography research director Alan Mauldin assesses the impact on the submarine cable industry. If you’re into slide decks, download Mauldin’s presentation.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation