DW Weekly #126 – 4 September 2023


Dear all,

We’re starting September with a heightened sense of urgency around AI rules. In focus right now: how to compensate copyright holders for infringements, and how to treat content that is part human-authored and part AI-generated (heads-up: we’re discussing this in more depth in our monthly issue, out this week). Meanwhile, cybercrime convention negotiations concluded in New York last week. There’s been limited headway, to no one’s surprise.

Let’s get started.
Stephanie and the Digital Watch team


// HIGHLIGHT //

UN’s cybercrime treaty: Limited headway as sixth round of negotiations concludes in New York

The sixth round of UN negotiations on a new cybercrime treaty (technically, a convention on countering the use of ICTs for criminal purposes) concluded in New York last week (the report is in two parts: first and second). The discussions have been captured in a longer, predominantly red-tracked draft document.

And on it goes. The outcome is similar to previous rounds: lots of proposed additions and deletions to the draft text, with limited headway in resolving the primary disagreements.

Disagreements persist. One of them relates to the scope of the convention and the related definitions: Is the convention trying to address core cybercrime offences (backed by the USA, the EU, etc.), or the broader use of ICTs for criminal purposes (backed by Russia, China, etc.)? The other relates to the lack of human rights safeguards: without them, the convention risks expanding government surveillance powers, criminalising legitimate activities, and broadening cross-border government access to personal data.

Drawing ire. Human rights organisations have repeatedly decried the apparent lack of human rights safeguards. The current wording goes far beyond tackling cybercrime, they said in a press conference. As it stands, the draft doesn’t address issues such as overreaching powers or judicial oversight.

Meanwhile, Microsoft has put forward a set of recommendations that could mitigate some of these same concerns. For instance, it has suggested that the definition of cybercrime shouldn’t be expanded in a way that could encompass online content. And that the convention should concern itself only with acts involving criminal intent (to avoid criminalising the work of ethical hackers and cybersecurity researchers).

Why is it relevant? The process, kicked off by Russia a few years ago, is seeing countries such as the USA, China, and Russia, together with the EU, working alongside each other on how to tackle transnational internet crime. That’s not a common occurrence, given all that’s currently going on. These negotiations will (or at least should) result in an international cybercrime treaty backed by the UN, the first of its kind.

What happens next? There’s still time for the US government’s optimism (Russia, arguably, is less optimistic) to become reality. After all, according to unnamed diplomatic sources, ‘anything capable of getting a vote at the General Assembly next year would be seen as a win’. Informal consultation groups have until mid-October to send their texts to the chair, after which a revised draft convention will be circulated by the end of November.

Keep track: Our UN Cybercrime Ad Hoc Committee page is tracking the developments.


Digital policy roundup (28 August–4 September)
// AI GOVERNANCE //

US Copyright Office eyeing new copyright rules for AI

The US Copyright Office is seeking public comment on issues it has been facing since the stellar rise in popularity of generative AI tools. There are four challenges:

  • The first relates to the use of copyrighted works to train AI models: What permission and/or compensation for copyright owners is or should be required when their works are included? 
  • The second concerns the copyrightability of material generated using AI systems: Where and how should the line be drawn between human creation and AI-generated content?
  • The third is about liability for infringement when, for instance, content generated using AI systems is very similar to copyrighted content: How should liability be apportioned between the user whose instructions prompted the output and the developers of the system that was trained on copyrighted content?
  • The fourth relates to the treatment of generative AI outputs that imitate the persona or style of human artists: Although these personal attributes are not generally protected by copyright law, their imitation may implicate other protections under differing state laws.

Why is it relevant? The Copyright Office held public listening sessions and webinars to gather information and then published a notice of inquiry. This is typically the last step before new measures or rules are proposed, so we might be looking at proposals for new legislation before the year’s end.

In the meantime. On one of the four challenges – copyrightability – the Copyright Office has already adopted the approach that individuals who use AI technology in creating a work may claim copyright protection for their own contributions to that work, as long as the (human) author demonstrates significant creative control over the AI-generated components of the work.


More companies rushing to block OpenAI’s web crawler GPTBot

Dozens of large companies, including Amazon, The New York Times, CNN, and several European online news portals, have rushed to block GPTBot, OpenAI’s new web crawler, which scrapes data used to train the models behind its popular chatbot, ChatGPT.
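
Blocking the crawler is straightforward: GPTBot identifies itself with its own user-agent token and respects the robots.txt convention, so shutting it out typically takes two lines in a site’s robots.txt file (a minimal sketch of the standard approach):

    User-agent: GPTBot
    Disallow: /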

The latest update by Originality.ai, a company that checks content to see if it’s AI-generated or plagiarised, reveals that 12% of the top 1,000 websites (as ranked by Google) are now blocking OpenAI’s crawler. 

Why is it relevant? First, authors are taking matters into their own hands with an ex-ante solution (if you can call that an advantage). Second, it makes one wonder whether crawlers could be modified to filter out specific content, such as copyrighted material.


UK MPs urge government to introduce new AI rules

UK members of parliament are urging the government to pass new AI rules or risk being left behind. In its latest report, the Science, Innovation and Technology Committee calls for a new law to be introduced by November (in the upcoming King’s Speech). 

In July, British Prime Minister Rishi Sunak told Parliament that AI guardrails could be developed around existing laws, at least initially. But to tackle the AI challenges outlined in its report, the committee thinks new rules are required to avoid a repeat of what happened with data protection regulation, where the UK was left playing catch-up.

Why is it relevant? The British MPs are feeling the pressure from the Brussels effect, where EU rules become a de facto standard, saying: ‘We see a danger that if the UK does not bring in any new statutory regulation for three years it risks the Government’s good intentions being left behind by other legislation.’ But the UK government may be trying to balance regulation with AI-friendlier measures (which is, by the way, what companies have just asked Australia to do). 


Polish data protection authority investigating ChatGPT privacy breaches

OpenAI, the company behind ChatGPT, is being investigated for GDPR breaches in Poland. The complaint was filed by security and privacy researcher Lukasz Olejnik, who argues that OpenAI processed his data ‘unlawfully, unfairly, and in a non-transparent manner’.

Why is this relevant? This investigation adds to the many cases that OpenAI is currently dealing with on issues related to data protection and privacy breaches.


// CYBERCRIME //

Qakbot malware halted by mega operation; USD9 million in crypto seized

The Qakbot malware, which infected over 700,000 devices, has been disrupted by a mega operation led by the USA, and involving France, Germany, the Netherlands, the UK, Romania, and Latvia. Infamous cybercrime gangs are known to have used the malware, also known as Qbot. 

The FBI said the criminals extorted over USD58 million in ransom payments between October 2021 and April 2023 alone, from victims that included financial services and healthcare entities.

Malware removed. The best part, at least for the owners of infected computers, is that the FBI managed to remove the malware by redirecting Qakbot botnet traffic to servers controlled by law enforcement. From there, infected computers were instructed to download an uninstaller that would remove the malware and prevent the installation of any additional malware.

In perspective. Security company Check Point reported in its mid-year report that Qakbot was the most prevalent malware globally.

Why is it relevant? According to the FBI, this was one of the largest-ever USA-led enforcement actions against a botnet. But just to be clear: No arrests were made, so an improved version of the malware could return in one form or another.


Meta disrupts largest known influence operation (and it’s linked to China)

Meta took down thousands of accounts and pages linked to Spamouflage, which it described as the largest known covert influence operation in the world. The company said it also managed to link the campaign to individuals associated with Chinese law enforcement. Active since 2018, Spamouflage has been used to disseminate positive content about China while criticising the USA and disparaging Western foreign policies.

China’s foreign ministry said it was not aware of the findings, and added that individuals and institutions have often launched campaigns against China on social media platforms.

Why is it relevant? The sheer number of accounts and pages involved set this operation apart. In spite of its enormous size, though, the campaign’s impact was limited, partly because it used accounts formerly associated with unrelated purposes, resulting in irrelevant and incoherent messaging – a classic tell-tale sign of inauthentic content.




// ANTITRUST //

Microsoft unbundles Teams from Office to appease EU’s antitrust concerns

Just a month after the European Commission opened an antitrust investigation into Microsoft’s bundling tactics and its limits on the interoperability of competing offerings, Microsoft announced it would unbundle its communication and collaboration platform Teams from its Office 365 software suite, starting in October. These changes apply to Microsoft’s users in the EU and Switzerland; there’s no change for customers elsewhere.

Why is this relevant? The European Commission has so far declined to comment. If we had to bet on its reaction, though, we’d say that while it’s pleased that Microsoft is cooperating with the investigation, the unbundling doesn’t resolve the potentially anticompetitive behaviour that occurred during the height of COVID-19.

A likely outcome? The commission will not want to give companies the idea that it’s fine to engage in anticompetitive behaviour (assuming this is confirmed), benefit from it (Teams is now a firmly established product), and emerge unscathed (without any fine). Avoiding a fine is quite unlikely.


// ONLINE NEWS //

Canada wants Google and Meta to contribute at least CAD230 million to local media

The Canadian government wants tech giants Google and Meta to contribute a minimum of CAD230 million (EUR157 million) to support local media, according to draft regulations that will implement the recently enacted Online News Act. The draft rules are open for public consultation until 3 October, meaning they can still change.

The formula for contributions takes into account the global revenues of tech companies earning more than CAD1 billion annually (a threshold that currently captures Google and Meta) and Canada’s share of global GDP. Companies that fail to meet their contribution threshold through voluntary agreements would be required to negotiate fair compensation with local media under the new law.
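
To make the arithmetic concrete, here’s a minimal sketch in Python of how such a contribution could be computed; the rate and GDP-share values below are illustrative placeholders, not figures from the draft regulations:

    # Illustrative sketch only: the contribution rate and Canada's share
    # of global GDP below are placeholder values, not official figures.
    def estimated_contribution(global_revenue_cad: float,
                               canada_gdp_share: float = 0.02,  # placeholder
                               rate: float = 0.04) -> float:    # placeholder
        # Contribution scales with a company's global revenue, weighted by
        # Canada's share of global GDP and a fixed contribution rate.
        return global_revenue_cad * canada_gdp_share * rate

    # Example: a company with CAD150 billion in global revenue would owe
    # estimated_contribution(150e9), i.e. CAD120 million, under these placeholders.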

Why is it relevant? As you’d expect, Meta was far from thrilled with these developments (Google was still evaluating the rules) and said it would continue to block news content (a block it started in reaction to the Online News Act). For its part, the government boycotted Meta by pulling $10 million in advertising from its platforms – prompting other Canadian news and telecom companies to do the same. Surprisingly, however, if Canadian users’ time spent on Facebook is anything to go by, users have been quite indifferent to Meta’s strong-arm tactics since the ban. This doesn’t give Meta much reason to change its mind, does it?


The week ahead (4–11 September)

1–4 September: The self-organised privacy and digital rights conference Freedom Not Fear in Brussels ends today.

5 September: The Tallinn Digital Summit will tackle democracy and technology, and how to chart a course for a more resilient, responsive and open future.

PLUS: What’s ahead on the AI front

9–10 September: In an apparent change of direction in India’s approach to AI governance, we can now expect AI to be on the agenda at this coming weekend’s G20 Summit.

13 September: US Senator Chuck Schumer will kickstart his series of AI-forum meetings in Washington with prominent figures from the technology industry and lawmakers. Their focus? Delving into the implications of AI and its future regulation. The meetings are closed-door sessions, alas.

8–12 October: AI discussions are likely to be a primary focus during this year’s Internet Governance Forum in Japan.

1–2 November: The UK’s AI Safety Summit, scheduled to take place in Bletchley Park, Milton Keynes, is expected to build consensus on international measures to tackle AI risks.

Expect also…

  • Freshly minted guidelines for AI companies, developed by Japan (this year’s G7 chair), which are expected to be discussed by the G7 later this year.
  • The launch of the UN Secretary-General’s High-Level Advisory Body on AI, which is expected to start its work by the end of the year.

#ReadingCorner


Microsoft’s Brad Smith calls for human control over AI

Microsoft’s president and vice-chairman, Brad Smith, has emphasised the need for humans to retain control of AI technology in an interview with CNBC. Concerned about the potential weaponisation of AI, he’s urging new rules to ensure human control, especially in critical infrastructure and military applications. As for AI’s impact on jobs, he thinks AI is augmenting human abilities, not displacing people. Read or watch.



A deep dive into the undersea cables ecosystem 

Undersea cables often fall into the out-of-sight, out-of-mind category, but they play a critical role in carrying over 97% of internet traffic. Have you ever wondered how subsea cables are regulated or what causes most cable cuts? (If not, you should). The latest report from the EU Agency for Cybersecurity (ENISA) delves into the subsea cable ecosystem and highlights the primary security challenges it faces. Worth a read.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation