
DW Weekly #112 – 22 May 2023


Dear readers,

The search for ways to govern AI reached the US Senate Judiciary halls last week, with a hearing involving OpenAI’s Sam Altman, among others. The G7 made negligible progress on tackling AI issues, but significant progress on operationalising the Data Free Flow with Trust approach.   

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

US Senate hearing: 10 key messages from OpenAI’s CEO Sam Altman 

If OpenAI CEO Sam Altman’s hearing before the US Congress last week reminded you of Mark Zuckerberg’s testimony a few years ago, you’re not alone. Both CEOs testified before the Senate Judiciary Committee (albeit before different subcommittees), and both called for regulation of their respective industries.

However, there’s a significant distinction between the two. Zuckerberg was asked to testify in 2018 primarily due to concerns surrounding data privacy and the Cambridge Analytica scandal. In Altman’s case, there was no scandal: lawmakers are trying to figure out how to navigate the uncharted territory of AI. And with Altman’s hearing coming several years later, lawmakers now have more familiarity with policies and approaches that proved effective, and those that failed. 

Here are ten key messages Altman delivered to lawmakers during last week’s subcommittee hearing.

1. We need regulations that employ a capabilities-based approach…

Amid discussions around the EU’s forthcoming AI Act, which will take on a risk-based approach (the higher the risk, the stricter the rules), Altman argued that US lawmakers should favour a power- or capabilities-based strategy (the stronger or more powerful the algorithm, the stricter the rules). 

He suggested that lawmakers consider ‘a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities’.

What would these capabilities look like? According to Altman, the benchmark would be determined by what the models can accomplish. So presumably, one would take AI’s abilities at the time of regulation as a starting point, and gradually raise the benchmarks as AI capabilities advance.

2. Regulations that will tackle more powerful models…

We know it takes time for legislation to be developed. But let’s say lawmakers were to introduce new legislation tomorrow: Altman thinks that the starting point should be more powerful models, rather than what exists right now.

‘Before we released GPT-4, our latest model, we spent over six months conducting extensive evaluations, external red teaming and dangerous capability testing. We are proud of the progress that we made. GPT-4 is more likely to respond helpfully and truthfully, and refuse harmful requests, than any other widely deployed model of similar capability… We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.’

3. Regulations that acknowledge that users are just as responsible…

Altman did not mince words: ‘Certainly companies like ours bear a lot of responsibility for the tools that we put out in the world, but tool users do as well.’

Hence the need for a new liability framework, Altman restated.

4. …And regulations that place the burden on larger companies.

Altman noted that regulation carries the risk of slowing down American industry ‘in such a way that China or somebody else makes faster progress.’

So how should lawmakers deal with this risk? Altman suggests that the regulatory pressure should be on the larger companies that have the resources to handle the burden, unlike smaller companies. ‘We don’t wanna slow down smaller startups. We don’t wanna slow down open source efforts.’ 

5. Independent scorecards are a great idea, as long as they recognise that the technology is in its ‘early stages’

When a Senator asked Altman whether there should be independent testing labs to provide scorecards that indicate ‘whether or not the content can be trusted, what the ingredients are, and what the garbage going in may be, because it could result in garbage going out’, Altman’s positive response was followed by a caveat.

‘These models are getting more accurate over time… (but) this technology is in its early stages. It definitely still makes mistakes… Users are pretty sophisticated and understand where the mistakes are… that they need to be responsible for verifying what the models say, that they go off and check it.’

The question is: when will the technology outgrow its early stages (or when will it be convenient to say that it has)?

6. Labels are another great idea for telling fact from fiction

Altman pointed out that labels indicating what people are looking at would help them understand what they’re reading and viewing. ‘People need to know if they’re talking to an AI, if content that they’re looking at might be generated or might not’.

Generated content will still be out there, but at least its creators can be transparent with their viewers, and viewers can make informed choices, he said.

7. It takes three to tango: the combined effort of government, the private sector, and users to tackle AI governance

Neither regulation nor scorecards nor labels will be sufficient on their own. Altman referred to the advent of photoshopped images, highlighting how quickly people learned that images might be photoshopped and that the tool could be misused.

The same applies to AI: ‘It’s going to require a combination of companies doing the right thing, regulation and public education.’

8. Generative AI won’t be the downfall of news organisations

The reason is simple, according to Altman: ‘The current version of GPT-4 ended training in 2021. It’s not a good way to find recent news.’

He acknowledged that other generative tools built on top of ChatGPT can pose a risk for news organisations (presumably referring to the ongoing battle in Canada, and previously in Australia, over media bargaining), but also argued that it was the internet that let news organisations down.

9. AI won’t be the downfall of jobs, either

Altman reassured lawmakers that ‘GPT-4 and other systems like it are good at doing tasks, not jobs’. We reckon jobs are made up of tasks, which may be why Altman chose different words later in his testimony.

‘GPT-4 will entirely automate away some jobs, and it will create new ones that we believe will be much better… This has been continually happening… So there will be an impact on jobs. We try to be very clear about that, and I think it will require partnership between the industry and government, but mostly action by the government to figure out how we want to mitigate that.’

10. Keep calm and carry on: GPT is ‘a tool, not a creature’

We had little doubt about that, but what Altman said next might have been aimed at reassuring those who said they’re worried about humanity’s future: GPT-4 is a tool ‘that people have a great deal of control over and how they use it.’

The question for Altman is: how far are we from losing control over AI? It’s a question no one asked him.


Digital policy roundup (15–22 May)
// AI & DATA //

G7 launches Hiroshima AI dialogue process

The G7 has agreed to launch a dialogue on generative AI – including issues such as governance, disinformation, and copyright – in cooperation with the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI). Sunday’s announcement, which came at the end of the three-day summit in Hiroshima, Japan, provides the details of what the G7 digital ministers agreed to in April. The working group tasked with the Hiroshima AI process is expected to start its work this year. 

The G7 also agreed to support the development of AI standards. (Refresher: Here’s the G7 digital ministers’ Action Plan on AI interoperability.)  

Why is this relevant? On the home front, with the exception of a few legislative hotspots working on AI rules, most governments are worrying about generative AI (including ChatGPT) but are not yet ready to take legislative action. On the global front, while the G7’s Hiroshima AI process is at the forefront of tackling generative AI, the group acknowledges that there’s a serious discrepancy among the G7 member states’ approaches to policy. The challenges are different, but the results are similar. 

G7 greenlights plans for Data Free Flow with Trust concept

The G7 had firmer plans in place for data flows. As anticipated, the G7 endorsed the plan for operationalising the Data Free Flow with Trust (DFFT) concept, outlined last month by the G7 digital ministers.

The leaders’ joint statement draws attention to the difference between unjustifiable data localisation regulations and those that serve the public interests of individual countries. The practical application of this disparate treatment remains uncertain; the new Institutional Arrangement for Partnership (IAP), which will be led by the OECD, has a lot of work ahead.

Why is this relevant? The IAP’s work won’t be easy. As the G7 digital ministers acknowledged, there are significant differences in how G7 states (read: the USA and EU countries) approach cross-border data flows. But as any good negotiator will say, identifying commonalities offers a solid foundation, so the G7 communique’s language (also found in previous G7 and G20 declarations) remains promising. Expect accelerated progress on this initiative in the months to come. 


Ex-Google CEO says AI regulation should be left to companies 

Former Google CEO Eric Schmidt believes that governments should leave AI regulation to companies since no one outside the tech industry has the necessary expertise. Watch the report or read the transcript (excerpt):

NBC: You’ve described the need for guardrails, and what I’ve heard from you is, we should not put restrictive regulations from the outside, certainly from policymakers who don’t understand it. I have to say I don’t hear a lot of guardrails around the industry in that. It really just, as I’m understanding it from you, comes down to what the industry decides for itself.

Eric Schmidt: When this technology becomes more broadly available, which it will and very quickly, the problem is going to be much worse. I would much rather have the current companies define reasonable boundaries. 

NBC: It shouldn’t be a regulatory framework. It maybe shouldn’t even be a sort of a democratic vote. It should be the expertise within the industry that helps to sort that out. 

Eric Schmidt: The industry will first do that because there’s no way a non-industry person can understand what is possible. It’s just too new, too hard, there’s not the expertise. There’s no one in the government who can get it right, but the industry can roughly get it right and then the government can put a regulatory structure around it.


// SECTION 230 //

Section 230 unaffected by two US Supreme Court judgements

As anticipated, the US Supreme Court left Section 230 untouched in two judgements involving families of people killed by Islamist extremists overseas. The families tried to hold social media platforms liable for allowing extremists on their platforms or recommending such content to users, arguing that Section 230 (a rule that protects internet platforms from liability for third-party content posted on the platforms) should not shield the platforms.

What the Twitter vs Taamneh (21-1496) judgement says: US Supreme Court justices agreed unanimously to reverse a lower court’s judgement against Twitter, in a case initiated by the US relatives of Nawras Alassaf, who was killed in Istanbul in 2017. The Supreme Court rejected claims that Twitter aided extremist groups: Twitter’s ‘algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users, therefore, does not convert defendants’ passive assistance into active abetting.’

What the Gonzalez vs Google (21-1333) judgement says: In its judgement in a parallel case, the US Supreme Court sent the lawsuit brought by the family of Nohemi Gonzalez, who was fatally shot in Paris in 2015, back to the lower court. The Supreme Court declined even to address the scope of Section 230, as the family’s claims were likely to fail in light of the Twitter case.


// TIKTOK //

EU unfazed by TikTok’s cultural diplomacy at Cannes

TikTok’s partnership with the Festival de Cannes was the talk of the French town last week. But TikTok’s cultural diplomacy efforts, which appeared at Cannes for the second year, failed to impress the European Commission.

Referring to TikTok’s appearance at Cannes in an interview on France TV’s Télématin (jump to 1’17’), European Commissioner Thierry Breton said that the company ‘still (has) a lot of room for improvement’, especially when it comes to safeguarding children’s data. Breton also confirmed that he was recently in talks with TikTok’s CEO, presumably about the Digital Services Act commitments, which very large platforms need to deliver on by 25 August.


The week ahead (22–28 May)

23–25 May: The 3rd edition of the Quantum Matter International Conference – QUANTUMatter 2023 – takes place in Madrid, Spain. Experts will tackle the latest in quantum technologies, emerging quantum materials and novel generations of quantum communication protocols, quantum sensing, and quantum simulations.

23–26 May: The Open-Ended Working Group (OEWG) will hold informal intersessional meetings that will comprise the Chair’s informal roundtable discussion on capacity-building and discussions on topics under the OEWG’s mandate.

24 May: If you haven’t yet proposed a session for this year’s Internet Governance Forum (IGF) (to be held on 8–12 October in Kyoto, Japan), you still have a couple of days left until the extended deadline.

24–25 May: The 69th meeting of TF-CSIRT, the task force that coordinates Computer Security and Incident Response Teams in Europe, takes place in Bucharest, Romania.

24–26 May: The 16th international Computers, Privacy, and Data Protection (CPDP) conference, taking place in Brussels and online, will deal with ‘Ideas that drive our digital world’, mostly related to AI governance, and, well, data protection.

25 May: There will be lots to talk about during the Global Digital Compact’s next thematic deep dive, on AI and emerging technologies (UTC afternoon) and digital trust and security (UTC evening). Register to participate.

For more events, bookmark the DW observatory’s calendar of global policy events.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation

Was this newsletter forwarded to you, and would you like to see more?