DW Weekly #145 – 2 February 2024


Dear readers,

The OpenAI vs Italy’s Garante saga continues, as the data protection authority notifies OpenAI that ChatGPT is violating data protection rules. China approved over 40 AI models in the last six months in its bid to rival the USA. The White House outlines key AI actions taken since Biden’s executive order, and the UK plans tests to determine the need for new AI laws. Elon Musk’s Neuralink implants a brain chip in a human, pushing the boundaries of tech-human integration.

Germany and Namibia, serving as co-facilitators of the Summit of the Future, unveiled the zero draft of the Pact for the Future. Cybersecurity highlights of the week include the announcement of an OEWG roundtable on capacity building and the FBI thwarting Chinese hacks on critical infrastructure in the USA. 

Content concerns rise with Taylor Swift’s deepfakes. India joined the fight against deepfakes, emphasising the need for accountability from platforms hosting such content. Big Tech CEOs testified at a US Senate hearing over accusations of failing to take effective measures to protect children from harmful content and child sexual abuse material (CSAM).

Amazon and Roomba maker iRobot cancelled their $1.4 billion deal due to EU antitrust concerns.

Let’s get started. 

Andrijana and the Digital Watch team


Highlights from the week of 26 January-2 February 2024

OpenAI’s ChatGPT faces scrutiny from Italian privacy watchdog
According to the authority, there are indications of violations of data privacy law. OpenAI has been granted a 30-day period to present its defence arguments in response to the allegations. Read more.
China approves over 40 AI models to narrow US development gap
Regulators granted approvals for 14 large language models (LLMs) last week, following three earlier batches of approvals the previous year. Read more.
White House releases fact sheet on key AI actions following Biden’s executive order
Three months after the release of President Biden’s highly anticipated executive order, the White House released a fact sheet with its key actions on AI. Read more.
UK’s government to publish tests to determine new AI laws
The results of these tests could trigger legislative action to ensure the UK can keep pace with the risks of AI. Read more.
Elon Musk’s Neuralink implants brain chip in human
Neuralink’s founder Elon Musk announced that the first human patient implanted with the brain-chip startup’s device on Sunday is recovering well, with promising initial results indicating neuron spike detection. Read more.

Germany and Namibia, co-facilitators of the Summit of the Future, publish zero draft of the Pact for the Future
The draft includes several provisions on science, technology and innovation, as well as new technologies and peace and security. Read more.
OECD releases the 2023 digital government index
The top 10 performers in the index include Korea, Denmark, the UK, Norway, Australia, Estonia, Colombia, Ireland, France, and Canada. Read more.

UN OEWG Chair invites ICT ministers to Global roundtable on ICT security capacity building
This is the first time the UN is convening a dedicated global event on ICT security capacity building. Read more.
FBI and DoJ use court order to thwart Chinese hacking of critical infrastructure
This move is part of a broader government initiative to counter the persistent threat Chinese hackers pose. Read more.
Big Tech CEOs testify at US Senate hearing over online child sexual abuse 
CEOs of Meta (Mark Zuckerberg), X (Linda Yaccarino), TikTok (Shou Chew), Snap (Evan Spiegel) and Discord (Jason Citron) testified before Congress over accusations of failing to protect children from sexual exploitation online. Read more.

Taylor Swift’s deepfakes spark calls for criminalisation of deepfake pornography
The rapid online spread of deepfake pornographic images of Taylor Swift has generated renewed calls to criminalise the practice. Read more.
India takes stand against deepfakes, platforms to be accountable
India is proactive in recognising and addressing the threat of deepfakes due to its large online population base, with approximately 870 million internet users. Read more.
Big Tech can earn enough in a week to cover 2023 fines, highlighting the need for stronger regulation
Proton, a Swiss-based privacy company, conducted a study indicating that if each company used its earnings to pay off its 2023 fines one after the other, covering them all would take a little over a week, with Meta requiring the longest, at five days and 13 hours. Read more.

Amazon and Roomba maker iRobot cancel $1.4 billion deal
Amazon and iRobot terminated their acquisition agreement due to EU antitrust concerns. The FTC has also investigated Amazon for market dominance and antitrust violations. Read more.


#ICYMI

As the curtains fall, it is time for reflection. The 9th Geneva Engage Awards on 1 February 2024 acknowledged and celebrated the efforts of International Geneva actors in digital outreach and in creating meaningful engagement with the rest of the world. Through three pre-ceremony interactive workshops, communication experts from different stakeholder groups and Diplo’s researchers deliberated on good practices for organising website information, social media management, and the future of event and conference hosting.

The evening then culminated in the award ceremony, with the International Committee of the Red Cross, the World Heart Federation, and the Permanent Mission of Pakistan to the UN in Geneva taking the crown in the main categories. The Effective and Innovative Events Award went to the United Nations Conference on Trade and Development eWeek 2023 for its adoption of an AI reporting system provided by DiploFoundation. The Accessibility Award went to the UK Mission to the WTO, UN and Other International Organisations (Geneva).

Congratulations to all the winners! To watch the recording or to learn about our methodology, please visit the dedicated website.


#ReadingCorner

Web accessibility is often misperceived as a cost rather than an investment, a challenge rather than an opportunity, and a legal hurdle rather than compliance with obligations that benefit all users. In her blog post, Diplo’s Senior Policy Editor Virginia (Ginger) Paque argues that we all need accessibility, to everything.


We watched the new Netflix documentary ‘Bitconned’ on cryptocurrency industry fraud. Diplo’s resident crypto expert Arvin Kamberi analyses the ingredients that created such a perfect storm for victims.


Follow the Ad Hoc Committee on Cybercrime with GIP reports

Will the UN Ad Hoc Committee on Cybercrime stick to its plan and deliver the first global treaty on cybercrime? Stay tuned for real-time updates and just-in-time reporting facilitated by The Digital Watch Observatory’s AI-driven App!

DW Weekly #144 – 26 January 2024


Dear readers,

In the realm of AI, the leaked consolidated text of the EU AI Act has stirred anticipation, with the European Commission set to establish the European AI Office for enforcement. Simultaneously, the US Federal Trade Commission’s inquiry into tech giants’ AI investments reflects growing concerns, while the launch of the National AI Research Resource demonstrates the nation’s commitment to advancing AI.

Cybersecurity concerns escalate with a massive database leak of over 26 billion records, the UK NCSC warning that AI will amplify cyberattacks, and researchers projecting that cybercrime will cost $12 trillion by 2025.

E-waste is addressed through a new EU methodology, and internet access proves pivotal for development in Nigeria and Tanzania. A Spanish court recognised the hidden trauma of content moderators, shedding light on the human toll. Economically, Apple will permit app downloads outside its App Store in the EU, with new fees, reshaping digital market dynamics.

Let’s get started.

The Digital Watch team


Highlights from the week of 19-26 January 2024

EU AI Act consolidated text leaked online
On 22 January 2024, two unofficial versions of the consolidated text on the proposed EU Artificial Intelligence Act were disclosed online. Read more.
European Commission to establish European AI Office for EU AI Act enforcement
The European Commission is set to launch the European Artificial Intelligence Office. The AI office will play a key role in developing and regulating AI in the EU. Read more.
US Federal Trade Commission launches inquiry into tech giants’ AI investments
The inquiry aims to understand how these investments may alter the competitive landscape and whether they enable dominant firms to exert undue influence or gain unfair competitive advantage. Read more.
US launches National AI Research Resource
The National AI Research Resource (NAIRR) has been established as a pilot program in the US with the aim of democratising AI research and providing public access to resources for aspiring AI scientists and engineers. Read more.
Meta joins the tech giants’ race for AGI
Meta CEO Mark Zuckerberg revealed that the company’s long-term vision is to develop AGI and make it open source to benefit all. Read more.

Supermassive database leak reveals over 26 billion data records
This is likely the biggest data leak to date. Read more.
UK NCSC: AI will escalate the frequency and impact of cyberattacks
The NCSC details the current use of AI in malicious activities and projects a substantial increase in the frequency and impact of cyberattacks, particularly ransomware, in the short term. Read more.
Cybercrime will cost $12 trillion by 2025, researchers claim
The researchers are particularly concerned about the escalating threat landscape surrounding AI. Read more.

New EU methodology developed to measure e-waste
The CBS method may become the standard and be applied to calculate e-waste quantities for all European countries and potentially in other contexts. Read more.

Internet access drives development in Nigeria and Tanzania
According to the World Bank, improved access to internet coverage in Nigeria and Tanzania has led to a 7% reduction in extreme poverty. Read more.

Meta internal documents reveal 100,000 daily instances of sexual harassment of children online
Meta faces allegations of exposing around 100,000 children to daily online sexual harassment on Facebook and Instagram, with internal documents revealing incidents, employee concerns, and a lawsuit by the New Mexico attorney general accusing the company of enabling child predators on its platforms. Read more.


#ReadingCorner

The second part of our three-part series on AI’s influence on intellectual property delves into the ramifications for trade secrets and trademarks within the legal frameworks of the EU and the USA.


The GIP Digital Watch Observatory provided coverage of the publicly broadcasted World Economic Forum 2024 sessions, focusing on discussions related to AI and digital technologies. We distilled all discussions into a cohesive form, encapsulating the essence through the lens of 7 key questions.

DW Weekly #143 – 19 January 2024


Dear readers,

This week at the annual meeting of the world’s richest and most powerful in Davos, Switzerland, one of the main themes is AI, with notable leaders engaging in discussions at the World Economic Forum. With businesses and governments rushing to grasp the tech, this year’s meetings will provide a key opportunity to get more globally aligned on adoption and safety. 

Some highlights: UN Secretary-General Antonio Guterres warned that big technology companies are recklessly pursuing profits from AI and that urgent action is needed to mitigate the risks from the rapidly growing sector. President of the European Commission Ursula von der Leyen called for European leadership in responsible AI adoption, focusing on leveraging talent and industrial data, and on providing startups with access to supercomputers while making data available in all EU languages. Microsoft’s Satya Nadella urged attendees to pursue a global regulatory approach to the technology but said he feels a ‘broad consensus’ is emerging on guardrails. OpenAI’s Sam Altman said it is essential to adopt an iterative deployment of AI systems, as this approach allows sufficient time for debate, regulation, and control over AI’s impact.

In other news, Australia plans to establish an advisory body for AI oversight and regulation, India enters the LLM race, and the IMF chief advocates for retraining amid AI-driven job changes. OpenAI addressed election misuse concerns, altering its policy. Baidu denied links to China’s military. China bypassed US restrictions on Nvidia chips, while South Korea boosted its industry with tax credits. ASEAN plans a region-wide digital economy agreement, and the UK regulator unveiled a new regime. Meta documents show 100,000 children daily facing harassment online. The French watchdog fined Yahoo! 10 million euros over cookie policy.


Highlights from the week of 15-19 January 2024

Australia to establish an advisory body for AI oversight and regulation
Australia’s government has announced plans to establish an advisory body to address the risks posed by AI. The government also intends to introduce guidelines for technology companies to label and watermark content created by AI. Read more.
India enters the LLM race with a Telugu model
Swecha, a movement committed to providing free, quality software across India, is on the verge of launching a chatbot created to tell Chandamama stories in the Telugu language. Read more.
IMF chief highlights the need for retraining and safety nets amid AI-driven job changes
AI could affect nearly 40% of jobs worldwide, leading to increased inequality, according to the International Monetary Fund. The IMF chief, Kristalina Georgieva, has called for governments to establish social safety nets and retraining programs to counter the negative impact of AI. Read more.
OpenAI addresses concerns over election misuse of AI
The company plans to make AI-generated images more obvious and is developing methods to identify modified content. Read more.
OpenAI alters usage policy, removes explicit ban on military use
The previous ban on ‘weapons development’ and ‘military and warfare’ applications has been replaced with a broader injunction not to ‘use our service to harm yourself or others.’ This change is part of a significant rewrite aimed at making the document ‘clearer’ and ‘more readable,’ according to OpenAI. Read more.
Baidu denies links with Chinese military
The denial follows the citation of an academic paper by the South China Morning Post, indicating that the People’s Liberation Army cyberwarfare division had tested its AI system on Ernie and another AI chatbot. Read more.

Chinese military manages to bypass US restrictions on Nvidia chips
Despite the US ban, China’s military and government have acquired Nvidia chips. The publicly available tender documents show that dozens of Chinese entities have purchased Nvidia’s A100 and H100 chips, which are widely used in AI applications. Read more.
South Korea to boost semiconductor industry with tax credit benefits
South Korean President Yoon Suk Yeol plans to extend tax credits on heavy investments in the domestic semiconductor industry to boost employment and attract more talent. The government aims to enhance the competitiveness of high-tech sectors, including chips, displays, and batteries. Read more.

ASEAN set to introduce region-wide digital economy agreement
Despite ambitious aims for harmonisation, challenges loom due to socio-economic differences and diverse regulatory frameworks. Read more.
UK competition regulator reveals plan to implement new digital markets regime
The new digital markets competition regime is outlined in the UK’s Digital Markets, Competition and Consumers (DMCC) Bill. Read more.

Meta internal documents reveal 100,000 daily instances of sexual harassment of children online
Meta faces allegations of exposing around 100,000 children to daily online sexual harassment on Facebook and Instagram, with internal documents revealing incidents, employee concerns, and a lawsuit by the New Mexico attorney general accusing the company of enabling child predators on its platforms. Read more.

French data protection authority imposes €10 million fine on Yahoo!
CNIL fined Yahoo €10 million for privacy breaches, citing failure to respect user cookie choices and the lack of a transparent withdrawal process. Investigations revealed non-compliance and the placement of advertising cookies without explicit consent. Read more.

#ReadingCorner

Deloitte’s AI Institute conducted a global survey involving 2,800 executives, revealing concerns about businesses’ readiness to handle generative AI more than a year after ChatGPT’s emergence. Executives with deeper AI knowledge express heightened worries about potential impacts, emphasising a need for greater organisational preparedness. The survey indicates gaps in skills readiness, governance, and risk management, with only a fraction feeling highly prepared. A significant knowledge gap exists in educating employees about AI, hindering successful integration.


#ICYMI: Reports from the WEF Annual meeting 2024
AI and Digital @ WEF 2024 in Davos
The GIP Digital Watch Observatory provided coverage of the publicly broadcasted World Economic Forum 2024 sessions, focusing on discussions related to AI and digital technologies. Access detailed session reports powered by Diplo’s AI app, and stay tuned for the release of the Final Report next week!

DW Weekly #142 – 15 January 2024


Dear readers,

On 11 January, Microsoft briefly surpassed Apple in market valuation for the first time in years. AI and digital dynamics are picking up. There are calls for the protection of the intellectual property of texts, videos, and sounds used for the development of AI foundation models such as those built by OpenAI.

In the Council of Europe negotiations on the AI convention, the EU stood against the USA’s request that the convention not apply to tech companies.

Content governance came into focus with Türkiye’s constitutional court decision and numerous other developments. The World Economic Forum (WEF) highlighted AI-driven misinformation as an immediate threat to democracy and the environment.

This week’s focus will be on the discussions at WEF 2024 in Davos. Follow our updates.

Based on the results of the survey we sent out last week, you will now receive the weekly digest on Friday afternoon instead of Monday evening.

Stay tuned!


Highlights from the week of 8-12 January 2024

Microsoft temporarily surpasses Apple in market valuation
Microsoft’s stocks have experienced an upward trend in recent months, attributed to significant announcements in the field of AI. On the other hand, concerns regarding Apple’s iPhone sales resulted in a shift in the market value. Read more.

EU challenges US-led bid to exclude private sector from potential international AI treaty
The EU Commission opposes the US proposal, citing concerns about human rights protection in the private sector. Read more.
Generative AI: concerns arise over peace, security, human rights, democracy, and climate action
Melissa Fleming, the UN’s Under-Secretary-General for Global Communications, has expressed worry over generative AI, as it makes it difficult to distinguish between real and AI-generated content. Read more.

US media executives call for legislation on AI content compensation
The coalition advocates implementing new laws mandating that AI developers compensate publishers for utilising their content. Read more.
UK Government publishes response to AI and intellectual property concerns
The UK government has responded to concerns about AI developers profiting from private intellectual property without sharing the benefits. Having abandoned broad copyright exceptions, the government has promised a detailed response alongside the AI Regulation white paper and the Cultural Education Plan. Read more.


What is driving ASML’s success in the semiconductor industry?
Firstly, the company has developed a network of suppliers and partners closely resembling Silicon Valley’s ecosystem. Read more.
Nvidia to roll out new AI chip for China amid US export restrictions
The H20 chip is the most advanced of three China-focused chips produced by Nvidia to comply with the current export rules announced by the US government in October. Read more.

Davos report marks AI misinformation as an immediate threat to democracy and environment
In its Global Risks Report, the World Economic Forum warns that technological advances are exacerbating the problem of misinformation and disinformation, highlighting the use of generative AI chatbots in creating manipulative synthetic content. Read more.
Violent videos emerge on X amid Ecuador’s ‘internal armed conflict’
Shortly after gunmen stormed a TV station, President Noboa issued a decree designating 20 drug trafficking gangs operating in the country as terrorist organisations. Read more.
Substack removes five newsletters amid criticism of Nazi content
The decision follows a review finding violations of Substack’s content rules, prompting a commitment to improve reporting tools. Read more.
Allegations against Meta: corporate ads on Facebook and Instagram linked to child exploitation content
The New Mexico attorney general’s lawsuit claims that Meta fails to prevent the exploitation of minors on its platforms, following The Guardian’s April investigation highlighting the company’s struggle in curbing child trafficking. Read more.
Türkiye’s constitutional court rules internet content-blocking provisions violate constitution
The Turkish Constitutional Court declared unconstitutional two amendments in the internet regulation law, stating they violated freedom of expression. Read more.

Chinese tech company cracks encryption of Apple’s AirDrop
A Chinese tech company called Wangshendongjian Technology has successfully cracked the encryption around Apple’s AirDrop wireless file-sharing function, enabling them to identify users of the feature. The company helped… Read more.
The US SEC account on X was hacked and spread fake news that crashed the crypto market
The official US Securities and Exchange Commission (US SEC) account on the X social network was hacked, and a fake message was posted that crashed the cryptocurrency market. Read more.

#ReadingCorner

On 11 January, the US Government Accountability Office (GAO) issued a report on cyber diplomacy, which examines the activities of the Department of State, including its use of international agreements and forums, and evaluates the effectiveness of organisational changes in achieving cyber diplomacy goals.

The GAO’s report identifies three major challenges for US cyber diplomacy: the lack of a universally agreed-upon definition of cyber diplomacy, the need to clarify the roles of different US agencies in this domain, and the need to build capacity with individuals skilled in both technical cybersecurity issues and diplomacy.


This week: WEF Annual meeting 2024
AI and Digital @ WEF 2024 in Davos
The GIP Digital Watch Observatory provided coverage of the publicly broadcasted World Economic Forum 2024 sessions, focusing on discussions related to AI and digital technologies. Access detailed session reports powered by Diplo’s AI app, and stay tuned for the release of the Final Report next week!

DW Weekly #141 – 10 January 2024


Dear readers,

Welcome back to the Digital Watch space! We wish you a prosperous and happy 2024!

In the coming weeks, we will fine-tune DW Digest to reflect the rapid changes in AI and the digital space. We’d love to know what you’d like to see in DW Digests. Would you prefer to receive it on Friday afternoon instead of Monday morning? Would you like to receive daily updates as well? Please take 2 minutes of your time to share your suggestions for the future of DW Digest.

The first weekly digest in 2024 is released at the beginning of yet another year of high uncertainties. As we discussed in our AI and digital predictions for 2024, broader geopolitical and societal tensions, ranging from China-USA relations to a series of major elections around the world, will have a significant impact on the tech sector. The main question is whether AI and digital technology will alleviate or exacerbate societal polycrisis.

The first week of the year provided a glimpse into AI governance, with OpenAI responding to the New York Times’ lawsuit and Italy’s G7 presidency prioritising AI and Africa.

Since the beginning of the year, new developments have included online safety for children and teenagers, semiconductor geopolitics, and cybersecurity.

We invite you to join the 2024 AI and digital predictions event on 11 January.

E-see you soon!

Digital Watch Team


Artificial Intelligence (AI)

OpenAI responded to The New York Times’ copyright lawsuit, defending its use of copyrighted material in AI development. The company argued that LLMs cannot be created without using copyrighted materials. OpenAI also intends to shift data of its European customers to Ireland as ‘a GDPR-friendly jurisdiction’.

The US Securities and Exchange Commission ruled against Apple’s and Disney’s attempts to avoid shareholder votes on the companies’ uses of AI, after a labour group requested reports on such uses. This decision represents a significant step towards increased corporate transparency and responsibility in AI applications.

Africa and AI will be top priorities for Italy’s G7 presidency. This initiative reflects a strategic approach to addressing global challenges, with a focus on the role of AI in international relations and development.


Semiconductors

On 1 January 2024, new restrictions on the export of semiconductor manufacturing equipment came into force in the Netherlands. The Dutch government partially revoked an export licence that had allowed ASML, a leading manufacturer of semiconductor equipment, to ship some of its equipment to China.


Cybersecurity

As the Russia-Ukraine war has escalated since the start of 2024, Russian hackers targeted Ukraine’s largest telecom provider.



AI is now available for judges in England and Wales to aid in crafting legal rulings, with the caveat that its use is restricted to assisting in writing legal opinions…


OpenAI stands by the crucial role of copyrighted material in advancing AI like ChatGPT. Facing legal heat, the company underscores the necessity of this content for modern AI models.


ChatGPT’s owner, OpenAI, has responded to a copyright lawsuit by The New York Times, stating that the case is ‘without merit’ and expressing its hope for a partnership with the…


The AI sector in 2024 is at a crossroads with existing copyright laws, particularly in the US. The legal system’s reaction to these challenges will be critical in striking the…


Governments across the globe continue to use internet censorship to address internet-related challenges.


The projects aim to address extreme heat-related deaths and battery waste management in Tamil Nadu. The Minister will highlight the strong UK-India trade partnership and explore opportunities for collaboration in…


The new directive issued by China’s National Development and Reform Commission (NDRC) on 4 January demands the development of preliminary technical standards to guide the integration of new energy vehicles…


The law, effective from 15 January under the Social Media Parental Notification Act, aims to protect youth mental health and provide parents with greater authority over their…


The attack, attributed to the Russian military intelligence cyberwarfare unit Sandworm, disrupted Kyivstar’s services, with over 24.3 million customers losing phone reception.


Prime Minister Giorgia Meloni outlined Italy’s G7 priorities, emphasising support for African development and addressing AI challenges.


Google has started testing changes to its Chrome browser that disable third-party cookies, initially made available to 1% of global users.


The US Securities and Exchange Commission (SEC) ruled against Apple and Disney’s attempts to exclude shareholder votes on AI proposed by the AFL-CIO labor group. AFL-CIO had requested transparency on…


History of Diplomacy and Technology: From Smoke Signals to Artificial Intelligence – Diplo Resource
‘History of Diplomacy and Technology’ reminds us that every ‘latest’ technology has promised to transform diplomacy. Some changes occurred, but the essence of diplomacy remained constant: the peaceful resolution of… Read more

Digital Watch newsletter – Issue 85 – December 2023


Observatory

At a glance: What’s new in digital policy?

AI governance

Google and Anthropic announced an expanded partnership involving joint work on AI safety standards, a commitment to the highest AI security standards, and the use of TPU chips for AI data processing.

Google unveiled ‘The AI Opportunity Agenda’, which offers guidelines for policymakers, businesses, and civil society to work together on adopting AI and harnessing its benefits.

The OECD launched its AI Policy Observatory, which offers comprehensive policy analysis and data on AI incidents, shedding light on AI’s impacts to help shape well-informed AI strategies. On the margins of the Asia-Pacific Economic Cooperation (APEC) Leaders’ Week, US President Joe Biden and Chinese President Xi Jinping agreed on the need ‘to address the risks of advanced AI systems and improve AI safety through US-China government talks’.

The Italian data protection authority (DPA) launched an investigation to determine whether online platforms have implemented sufficient measures to prevent AI platforms from scraping personal data to train AI algorithms.

The Swiss Federal Council tasked the Federal Department of the Environment, Transport, Energy and Communications (DETEC) with presenting, by the end of 2024, an overview of possible regulatory approaches to AI. The Council intends to use this analysis as the basis for a proposal to regulate AI in 2025.

Technologies

Yangtze Memory Technologies Co (YMTC), China’s largest memory chip maker, filed a lawsuit against Micron Technology and its subsidiary for infringement of eight patents. Under the EU-India Trade and Technology Council (TTC), the EU and India signed a memorandum of understanding on working arrangements in the semiconductor ecosystem, its supply chain, and innovation. Air taxi makers Joby Aviation and Volocopter showcased their electric aircraft in New York. Amazon introduced Q, an AI-powered chatbot designed for its Amazon Web Services, Inc. (AWS) customers, offering a versatile solution for business intelligence and programming needs.

Security

The UK, the USA, and 16 other partners published the first global guidelines for strengthening cybersecurity throughout the life cycle of an AI system. The guidelines cover four key areas of the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance.

The European Parliament and the Council of the EU reached a political agreement on the Cyber Resilience Act. The agreement will now be submitted for formal approval by the Parliament and the Council.

Infrastructure

The EU’s Gigabit Infrastructure Act (GIA) is undergoing a significant change, as the ‘tacit approval principle’, designed to speed up the rollout of broadband networks, has been excluded from the latest compromise text circulated by the Spanish presidency of the Council of the EU. ICANN launched its Registration Data Request Service (RDRS) to simplify requests for access to non-public registration data on generic top-level domains (gTLDs).

The International Telecommunication Union (ITU) adopted ITU-R Resolution 65, which aims to guide the development of a 6G standard. The resolution enables studies on the compatibility of current regulations with sixth-generation International Mobile Telecommunications (IMT) radio interface technologies for 2030 and beyond.

The Indian government launched its Global Digital Public Infrastructure Repository and created a Social Impact Fund to advance digital public infrastructure in the Global South, as part of its G20 initiatives.

Legal

The Council of the EU adopted the Data Act, which sets out principles for data access, portability, and sharing for users of IoT products. OpenAI launched Copyright Shield, a programme that covers the legal costs of its business customers facing copyright infringement claims arising from the use of OpenAI’s AI technology.

Internet economy

Apple, TikTok, and Meta filed appeals against their designation as ‘gatekeepers’ under the EU’s Digital Markets Act (DMA), which aims to enable users to move between rival services such as social media platforms and web browsers. Microsoft and Google, by contrast, chose not to challenge the gatekeeper label. The US Treasury Department reached a record $4.2 billion settlement with Binance, the world’s largest virtual currency exchange, for violations of anti-money-laundering and sanctions laws, imposing a five-year monitoring period and rigorous compliance measures. The Australian regulator called for a new competition law for digital platforms in light of their growing influence.

Digital rights

The Court of Justice of the EU (CJEU) ruled that data subjects have the right to appeal a national supervisory authority’s decision concerning the processing of their personal data.

Content policy

Nepal decided to ban TikTok, citing the erosion of social cohesion caused by misuse of the popular video app. YouTube introduced a new policy requiring creators to disclose the use of generative AI. OpenAI and Anthropic joined the Christchurch Call to Action, an initiative launched by French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern to remove terrorist content. X (formerly Twitter) is under scrutiny from the European Commission for having far fewer content moderators than its rivals.

Development

The ITU’s Facts and Figures 2023 report reveals uneven progress in internet connectivity worldwide, exacerbating digital divide disparities, particularly in low-income countries. Switzerland announced plans for a new state-run digital identity system, expected to launch in 2026, after voters rejected a private initiative in 2021 over data protection concerns. Indonesia’s Ministry of Communication and Informatics introduced a new digital identity policy that will eventually require all citizens to hold a digital ID card.

THE TALK OF THE TOWN – GENEVA

The World Economic Forum (WEF) held its Annual Meeting on Cybersecurity 2023 from 13 to 15 November, bringing together more than 150 leading cybersecurity experts. Building on the WEF’s Global Security Outlook 2023 report published in January 2023, the annual meeting allowed experts to address growing cyber-risks through strategic and systemic approaches and multistakeholder collaboration.

The 12th UN Forum on Business and Human Rights took place from 27 to 29 November and focused on the tangible changes made by states and businesses to implement the UN Guiding Principles on Business and Human Rights (UNGPs). Topics included improving the realisation of the rights of persons with disabilities through advances in assistive technologies, AI, and digitalisation, as well as other care and support systems.

Held in conjunction with the 12th UN Forum on Business and Human Rights, the B-Tech Generative AI Summit on 30 November examined how to implement human rights due diligence when putting AI into practice. The one-day summit presented the B-Tech project’s papers on human rights and generative AI and allowed all stakeholders to discuss the practical use of the UN Guiding Principles on Business and Human Rights (UNGPs) and other human-rights-based approaches in analysing the impacts of generative AI.

In brief

The four seasons of AI

ChatGPT, OpenAI's groundbreaking innovation launched on 30 November 2022, not only captivated the tech world but also made AI history. ChatGPT's first anniversary invites us to step back and reflect on how far we have come and what lies ahead.

A symbolic journey through the seasons has served as a backdrop for AI's trajectory since last November. The winter frenzy brought rapid user adoption, outpacing even the social media giants in speed. In 64 days, ChatGPT reached a staggering 100 million users, a feat that Instagram, for instance, took 75 days to achieve. The sudden surge of interest in generative AI caught big tech companies by surprise. Beyond ChatGPT, several other notable generative AI models, such as Midjourney, Stable Diffusion, and Google's Bard, were released.

The spring of metaphors that followed brought a wave of imaginative comparisons and discussions about AI governance. Anthropomorphic descriptions and doomsday scenarios emerged, reflecting society's attempts to come to grips with the implications of advanced AI.

As ChatGPT entered its summer of contemplative reflection, a period of introspection began. Drawing on ancient philosophies and diverse cultural contexts, the discourse broadened beyond mere technological advances. Explorations of the wisdom of ancient Greece, Confucius, India, and the Ubuntu concept in Africa sought answers to the complex challenges posed by AI, going beyond purely technological solutions.

Today, in this autumn of clarity, the initial hype has subsided, giving way to precise policy formulations. AI has found its place on the agendas of national parliaments and international organisations. In policy documents from groups such as the G7, the G20, the G77, and the UN, the balance between opportunities and risks has shifted towards a greater focus on risks. AI's long-term existential threats took centre stage at events such as the London AI Summit, with governance proposals drawing inspiration from bodies such as the International Atomic Energy Agency (IAEA), CERN, and the Intergovernmental Panel on Climate Change (IPCC).

What lies ahead? We should focus on the two main questions emerging: how should AI risks be addressed, and which aspects of AI should be governed?

To manage AI risks, a clear understanding of three categories is essential for designing effective regulation: short-term, medium-term, and long-term risks. While short-term risks, such as job losses and data protection, are familiar and can be addressed with existing tools, medium-term risks involve potential monopolies controlling AI knowledge, which demands careful attention to avoid dystopian scenarios. Long-term risks, encompassing existential threats, dominate public discourse and policymaking, as reflected in the Bletchley Declaration. Navigating the AI governance debate requires addressing risks transparently and prioritising decisions according to society's responses.

As for which aspects of AI should be governed, current discussions revolve around compute, data, algorithms, and applications. The compute aspect involves the race for powerful hardware, with geopolitical implications between the USA and China. Data, often called 'the oil of AI', demands greater transparency about how it is used. Algorithmic governance, driven by long-term risks, focuses on the relevance of measurement parameters in AI models. At the level of applications and tools, the current shift from regulating algorithms to regulating applications could significantly affect technological progress. Debates often overlook the governance of data and applications, areas covered in detail by regulation but not aligned with the interests of tech companies.

This text draws on the Recycling Ideas blog series by Dr J. Kurbalija, a series of texts, concepts, traditions, and reflections aimed at building a social contract fit for the AI era.

EU lawmakers clash over the AI Act

After more than 22 hours of EU trilogue negotiations on 6 and 7 December, covering a 23-point agenda, agreement on the AI Act remains elusive. Here is what reports indicate.

Foundation models. Negotiations suffered a serious setback when France, Germany, and Italy opposed the tiered approach initially envisaged in the EU's AI Act for foundation models (base models for developers). The tiered approach would classify AI into different risk categories, with stricter or lighter regulation depending on the level of risk. France, Germany, and Italy want to regulate only the use of AI rather than the technology itself, to ensure that AI innovation in the EU is not held back. They proposed 'mandatory self-regulation through codes of conduct' for foundation models. European Parliament officials walked out of a meeting to signal that excluding foundation models from the scope of the act was not politically acceptable.

According to a compromise document seen by Euractiv, the tiered approach has been kept in the text of the act. However, the legislation would not apply to general-purpose AI (GPAI) systems offered under free and open-source licences. This exemption can be overridden if the open-source model is used for commercial purposes. At the same time, lawmakers agreed that codes of conduct will serve as supplementary guidance until technical standards are harmonised.

Under the preliminary agreement, any model trained using computing power exceeding 10^25 floating-point operations (FLOPs) will automatically be considered to pose systemic risks. These models will be subject to numerous obligations, covering evaluation, risk assessment, cybersecurity, and energy consumption.
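For a rough sense of what that threshold means in practice, here is a minimal sketch using the widely cited rule of thumb that training compute is roughly 6 × parameters × training tokens. Both the approximation and the example model size are assumptions for illustration only; the act sets the 10^25 FLOP figure but prescribes no estimation formula.

```python
# Back-of-the-envelope check against the 10^25 FLOP systemic-risk threshold.
# Assumption: training compute ~ 6 * parameters * training tokens (a common
# rule of thumb, not something defined in the AI Act itself).
THRESHOLD_FLOP = 1e25

def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of total training compute in floating-point operations."""
    return 6.0 * n_parameters * n_training_tokens

# Hypothetical example: a 70-billion-parameter model trained on 2 trillion tokens
flop = estimated_training_flop(70e9, 2e12)
print(f"Estimated training compute: {flop:.2e} FLOP")
print("Above the draft systemic-risk threshold?", flop > THRESHOLD_FLOP)
```

On these assumed numbers the estimate comes out around 8.4 × 10^23 FLOP, below the threshold; a model roughly an order of magnitude larger in parameters or training data would cross it.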

A European AI Office will be set up within the Commission to enforce the foundation model rules, with national authorities supervising AI systems through the European Artificial Intelligence Board.

The European Artificial Intelligence Board (EAIB) will ensure the consistent application of the act. An advisory forum will gather feedback from stakeholders. A scientific panel of independent experts will advise on the act's enforcement, identify systemic risks, and contribute to the classification of AI models.

Contentious topics. While around ten issues remain open, the main sticking points concern prohibitions, remote biometric identification, and national security exemptions.

Prohibitions. So far, lawmakers have provisionally agreed on bans on manipulative techniques, systems exploiting vulnerabilities, social scoring, and the untargeted scraping of facial images. At the same time, the European Parliament has proposed a much longer list of prohibited practices and is meeting strong resistance from the Council.

Remote biometric identification. Members of the European Parliament (MEPs) are pushing for a blanket ban on biometric categorisation systems based on sensitive personal traits, including race, political opinions, and religious beliefs. Meanwhile, member states are pushing for exemptions allowing them to use biometric surveillance when national security is at stake.

National security exemptions. France, leading a group of EU countries, is advocating a broad national security exemption in the AI rules, stressing member states' discretion in defence, military, and national security matters. However, this proposal is likely to meet resistance from progressive lawmakers, who will probably push for an outright ban.

Update: After 36 hours of negotiations spread over three days (22 of them consecutive), a provisional agreement was finally reached. Find the details in our Weekly #139.

In focus

Higher stakes in the race to AGI?

The suspense surrounding OpenAI's November saga was nothing short of gripping, and we were in the thick of it, following every twist and turn.

In short, OpenAI CEO Sam Altman was removed from his post because he 'was not consistently candid in his communications' with the board. Most of OpenAI's employees, roughly 700 out of 750, signalled their intention to leave the company and join Altman at Microsoft, which led to his reinstatement as CEO. In addition, OpenAI's board replaced some of its members.

Reports of (and speculation about) Q* spread quickly. According to Reuters, Altman was dismissed partly because of Q*, an AI project allegedly so powerful that it could threaten humanity.

Q* is said to be able to solve certain mathematical problems. Although its prowess is reportedly at the level of primary school pupils (the first six to eight years of schooling), it could mark a potential breakthrough towards artificial general intelligence (AGI), as it suggests a greater capacity for reasoning. OpenAI defines AGI as AI that aims to surpass human capabilities in economically valuable tasks.

On his return as CEO, Altman said of Q*: 'No particular comment on that unfortunate leak.'

The news caused quite a stir, with many wondering what exactly Q* is, and whether it even exists. Some keen observers believe Q* may be related to an OpenAI project from May that touted 'process supervision', a technique that trains AI models to solve problems step by step.

Some think the Q* project could combine Q-learning (a type of reinforcement learning in which a model learns iteratively and improves over time by being rewarded for making the right decision) with an algorithm that computers can use to work out how to get from one point to another quickly (A* search).
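Since no technical details about Q* have been confirmed, the following is only a textbook illustration of the Q-learning half of that speculation: a tiny Python sketch in which an agent learns, by trial and reward, to walk to a goal state. All names and numbers are illustrative assumptions; nothing here reflects OpenAI's actual system.

```python
# Minimal Q-learning sketch on a one-dimensional "walk to the goal" task.
# Core update: Q(s,a) <- Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))
import random

N_STATES = 6           # states 0..5, where state 5 is the goal
ACTIONS = [-1, +1]     # move left or right
alpha, gamma, epsilon = 0.1, 0.9, 0.2

# Q-table: one row per state, one value per action
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    """Toy environment: reward 1 for reaching the goal state, 0 otherwise."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        next_state, reward, done = step(state, ACTIONS[a])
        # Q-learning update: improve the value estimate from the reward signal
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print([[round(v, 2) for v in row] for row in Q])
```

After a few hundred episodes the table converges so that 'move right' has the higher value in every state, which is the iterative, reward-driven improvement the paragraph above describes; A* search, by contrast, is a classic pathfinding algorithm and is not shown here.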

Others have suggested that the name Q* could refer to the Q* search algorithm, which was developed to control deductive searches in an experimental system.

Google enters the race. In early December, Google launched Gemini, an AI model that, according to the Mountain View company, outperformed human experts in massive multitask language understanding (MMLU), a benchmark designed to assess an AI's knowledge of subjects such as maths, history, law, and ethics. The model is also reportedly able to outperform GPT-4 in primary-school-level maths. Google, however, declined to comment on the number of parameters in Gemini.

Is all of this really about AGI? It is hard to say. On the one hand, AI surpassing human capabilities sounds like dystopia (why does no one ever think it could be a utopia?). On the other hand, experts say that even if an AI could solve mathematical equations, that would not necessarily translate into bigger AGI breakthroughs.

What is all this speculation really about? Transparency – and not only at OpenAI and Google. We need to understand who (or what) will shape our future. Are we the main actors, or mere spectators waiting to see what happens?


Balancing online discourse: tackling hate while preserving freedom

The ongoing struggle to prevent and counter online hate speech while safeguarding freedom of expression has led the EU Agency for Fundamental Rights (FRA) to call for 'appropriate and accurate content moderation'.

The FRA published a report on the challenges of detecting online hate speech against people of African descent, Jews, Roma, and others on digital platforms, including Telegram, X (formerly known as Twitter), Reddit, and YouTube. Data was collected from Bulgaria, Germany, Italy, and Sweden to provide a comparative analysis based on their current national policies. The FRA called on regulators and digital platforms to ensure a safer space for people of African descent, Jews, and Roma, who were found to experience very high levels of hate speech and cyberharassment. The FRA also drew attention to the need for effective content moderation rules for women, as levels of incitement to violence against them are higher than for other groups.


Is the DSA enough to ensure content moderation in the EU? Although the Digital Services Act (DSA) is seen as a major step forward in combating online hate speech, the FRA says its effects are not yet visible. According to the FRA, there is a need to clarify what counts as hate speech, notably by training law enforcement, content moderators, and flaggers on the legal standards for identifying it. Such training should also help ensure that platforms do not remove too much content.

UNESCO guidelines. UNESCO Director-General Audrey Azoulay sounded the alarm over the rise of online disinformation and hate speech, calling them a 'major threat to stability and social cohesion'. In response, UNESCO published guidelines for the governance of digital platforms to combat online disinformation and hate speech while protecting freedom of expression. The guidelines include establishing independent public regulators around the world, ensuring that digital platforms have moderators covering different languages, prioritising transparency in media funding, and promoting critical thinking.

The importance of civil society. Since the start of the Israeli–Palestinian war, posts about Palestine and content takedowns have reached an 'unprecedented scale', said Jillian York, director for international freedom of expression at the Electronic Frontier Foundation (EFF). Several Palestinian human rights groups have therefore launched the petition 'Meta: Let Palestine Speak', calling on the tech giant to address the unfair removal of Palestinian content.
And, of course, AI. As the FRA report notes, human-based content assessment often relies on biased and discriminatory parameters. However, that does not mean AI can be prevented from doing the same, as shown by Meta's machine translation, which applied the term 'terrorist' to Palestinian users whose bios contained a phrase in Arabic, for which Meta publicly apologised in October 2023.

Launch of the Geneva Manual on Responsible Behaviour in Cyberspace

The recently published Geneva Manual focuses on the roles and responsibilities of non-state stakeholders in implementing two UN cyber norms related to supply chain security and the responsible reporting of ICT vulnerabilities.

The manual was developed by the Geneva Dialogue on Responsible Behaviour in Cyberspace, an initiative established by the Swiss Federal Department of Foreign Affairs and led by DiploFoundation, with the support of the Republic and State of Geneva, the Center for Digital Trust (C4DT) of the École polytechnique fédérale de Lausanne (EPFL), Swisscom, and UBS.

The Geneva Dialogue plans to expand the manual by examining the implementation of additional norms in the years ahead.
The manual is a living document, open to participation and enrichment. Visit genevadialogue.ch to share your thoughts, ideas, and suggestions as we chart the path towards a safer and more stable cyberspace.

Cartoon: from contentious discussions on cyber norms, through research and analysis, to the Geneva Manual and a more secure cyberspace.

News from the Francophonie


The Francophone Network of Media Regulators (REFRAM) met in Nouakchott on 16 and 17 November 2023 on the theme 'Broadcasting in the digital age: achievements and challenges'

Mauritania's High Authority for the Press and Broadcasting (HAPA) organised a symposium on 'Broadcasting in the digital age: achievements and challenges' on 16 and 17 November 2023. The meeting brought together high-level representatives of REFRAM member regulators and experts from 11 countries.

The conference was opened by Houssein OULD MEDDOU, president of the HAPA and host of the event, followed by a video address from Roch-Olivier MAISTRE, president of REFRAM and of Arcom, who was held up in Paris and represented by Hervé Godechot, a member of Arcom's board.

The forum offered a space for reflection where actors in the broadcasting field could discuss the new approaches needed to adapt to the industry's rapid digital transformation. The work was organised over the two days in five sessions. Besides the traditional opening session, the first day, 16 November, was divided into three thematic sessions. The first addressed the broadcasting and distribution of audiovisual content; the second focused on the main technical, economic, and cultural challenges. A third session, devoted to feedback from several regulators on television matters, closed the first day of work.

The morning of 17 November opened with a session on relations between regulators and operators. It continued with a session on the strategies needed to maintain a pluralistic audiovisual space, notably through adapted and updated regulation, in the face of the evolving digital offer.

At the closing session, members welcomed the richness of the discussions and the excellent organisation and hospitality provided by the HAPA and the Mauritanian authorities.

On the sidelines of the conference, the heads of delegation of the network's member authorities had the honour of being received on Friday 17 November by His Excellency Mr Mohamed Ould Cheikh El Ghazouani, President of the Republic of Mauritania. During the audience, Ms Latifa Akharbach, president of Morocco's High Authority for Audiovisual Communication and chair of the Network of African Communication Regulation Authorities (RIARC), highlighted African regulators' mobilisation to guarantee citizens' right of access to information and freedom of expression while protecting societies against disinformation, conspiracy theories, and content harmful to social cohesion, whose prevalence in the public media space has been amplified by the digital transformation of media and communication.

Read more: https://www.refram.org/

Photo credit: REFRAM

The OIF contributes actively to the programme of the eWeek 2023 Conference (Geneva, 4–8 December 2023)

The UN Conference on Trade and Development (UNCTAD) held eWeek in Geneva from 4 to 8 December 2023 on the theme 'Shaping the future of the digital economy', with more than 150 sessions focused on pressing issues linked to digitalisation.

UNCTAD's eWeek is a platform for constructive and inclusive dialogue aimed at generating ideas and actions that contribute to global efforts, in particular the ongoing process of the UN Global Digital Compact, which seeks to promote an open, free, and secure digital future for all ahead of the UN Summit of the Future in 2024. The event brings together governments, business leaders, heads of international organisations, civil society representatives, and other stakeholders.

The main topics addressed included platform governance, the impact of artificial intelligence (AI) on development, environmentally friendly digital practices, the empowerment of women through digital entrepreneurship, and accelerating digital readiness in developing countries.

The Organisation internationale de la Francophonie (OIF) actively contributed to the organisation of three sessions on the programme of this international event, on the following themes: 'Towards a digital vulnerability index', 'The discoverability of digital content, an imperative for guaranteeing cultural diversity', and 'How to meet the need for digital skills in francophone Africa?'.

In this last session, the OIF, through its Director of the Economic and Digital Francophonie, Ms Florence BRILLOUIN, presented the approach of the flagship D-CLIC project to meeting the challenge of building digital skills adapted to the specific needs of francophone African countries. Building on a mapping of digital professions in the francophone space, this OIF initiative develops training that responds to the needs of francophone African states, such as short professional courses of three to nine months in the skills most in demand on the labour market, as well as support for entrepreneurship. D-CLIC's monitoring and evaluation mechanisms, including support schemes for professional integration and a steering tool (logical framework), were also discussed, the aim being to measure the project's quality, achievement of objectives, and impact, as a guarantee of effectiveness and accountability.

The OIF trains experts from francophone diplomatic missions to the UN in New York ahead of negotiations on the Global Digital Compact (GDC)

As part of the development of the Global Digital Compact, and to build on the Francophonie's Contribution to the GDC, the Organisation internationale de la Francophonie has carried out various awareness-raising and ownership activities around this Contribution for the diplomats in charge of negotiating the future Compact.

The OIF thus organised an in-person 'training of negotiators on the issues of the Global Digital Compact' on 4 and 5 December in New York. This training, held over two half-days, was aimed at francophone experts in charge of digital issues within the Permanent Missions to the UN in New York. Designed to strengthen their negotiating capacity on the GDC and provide keys to understanding the issues at the heart of the intergovernmental consultations on the GDC, so as to maximise the involvement and coordination of francophone countries in the process, the initiative was developed in partnership with the Internet Society (ISOC) and with the contribution of the Office of the UN Secretary-General's Envoy on Technology.

Two side events were also organised on 4 and 5 December. The first was a lunch debate, at the initiative of the Permanent Mission of Canada in partnership with the Permanent Representations of Tunisia and the OIF, on the theme 'The governance of the internet and artificial intelligence'. The event featured a lively debate among eminent experts on the future of AI governance in light of the models that have made internet governance a success. The following day, a lunch discussion was held at the initiative of AFFIN (Association des Français Fonctionnaires Internationaux de New York) on the theme 'Developing national regulation for the digital sector: issues, strategies, and perspectives'. It brought together the perspectives of a French parliamentarian and an adviser from the UN Office of Legal Affairs on the implications and challenges of regulating the digital sector at the national, regional, and global levels.

These activities fall under objective 3 of the 'D-CLIC, Formez-vous au numérique avec l'OIF' initiative, which includes strengthening the capacities of public authorities, including public officials, on issues related to digital governance.

Photo credit: OIF


DW Weekly #140 – 18 December 2023


Dear all,

The OEWG wrapped up its sixth substantive session, marking the midway point of this process. COP28 addressed the climate crisis with Green Digital Action, and Epic Games secured an antitrust victory against Google. In the AI sphere, global leaders pledged support for responsible AI, balancing innovation and ethics at the 2023 GPAI Summit in New Delhi, while OpenAI partnered with Axel Springer to deliver news through ChatGPT, merging AI with real-time updates. China's online censors targeted digital pessimism, and Ukraine suffered a cyberattack on the country's largest telecom operator.

This will be the last weekly digest in 2023 – we will take a short break for the holidays and be back in your inbox on 8 January 2024.

Let’s get started.

Andrijana and the Digital Watch team

// HIGHLIGHT //

OEWG wraps up its sixth substantive session

The sixth substantive session of the UN Open-Ended Working Group (OEWG) on security of and the use of information and communications technologies 2021–2025 was held last week. The OEWG is tasked with the study of existing and potential threats to information security, as well as possible confidence-building measures and capacity building. It should also further develop rules, norms, and principles of responsible behaviour of states, discuss ways of implementing them, and explore the possibility of establishing regular open-ended institutional dialogue under the auspices of the UN. 

Here is a quick snapshot of the discussions. A more detailed follow-up will be published this week: Keep an eye out for it on our dedicated OEWG page.


Threats. The risks and challenges associated with emerging technologies, such as AI, quantum computing, and the internet of things (IoT), were highlighted by several countries. Numerous nations expressed concerns about the increasing frequency and impact of ransomware attacks on various entities, including critical infrastructure, local governments, health institutions, and democratic institutions. Many countries emphasised the importance of international cooperation and information sharing to effectively address cybersecurity challenges. The idea of a global repository of cyber threats, as advanced by Kenya, enjoys much support in this regard.


Rules, norms and principles. Many countries mentioned that they have already begun implementing norms at the national and regional levels through their own national and regional strategies. At the same time, many of them also signalled that clarifying the norms and providing implementation guidance is necessary. This includes norms implementation checklists, a concept that received widespread acknowledgement and support. There was also interest in delving deeper into discussions surrounding norms related to critical infrastructure (CI) and critical information infrastructure (CII). Yet again, delegations expressed different views on whether new norms are needed: while some states favoured this proposal, others strongly opposed the creation of new norms and instead called on delegates to focus on implementing existing ones.


International law. There is general agreement that the discussion on the application of international law must be deepened. There is also a difference of views on whether the ICT domain is so unique as to warrant different treatment. The elephant in the room is the question of whether a new treaty and new binding norms are needed. The law on state responsibility, the principle of due diligence, international humanitarian law, and international human rights law also remain areas without consensus.


Confidence-building measures (CBMs). There’s widespread support for the global Points of Contact (PoC) directory as a valuable CBM. The OEWG members will focus on the implementation and operationalisation of the directory. Many countries prefer an incremental approach to its operationalisation, considering the diversity of regional practices. 

The next steps include: a notification from the secretariat (UNODA), as the manager of the global PoC directory, will go out very early in the year to all member states, asking them to nominate a point of contact for inclusion in the directory. An informal online information session on the PoC directory will likely be held sometime in February. The chair noted a need for a space to continue sharing national approaches and strategies for implementing CBMs. The OEWG will also discuss potential new global CBMs that can be added to the list.


Capacity building. Consensus exists that capacity building is a cross-cutting and urgent issue, enabling countries to identify and address threats while implementing international law and norms for responsible behaviour in cyberspace. Foundational capacities were consistently highlighted as crucial elements in ensuring cybersecurity. This includes legal frameworks, the establishment of dedicated agencies, and mechanisms for incident response, with a special focus on computer emergency response teams (CERTs) and CERT cooperation. However, delegations also stressed the importance of national contexts and noted that there is no one-size-fits-all answer to building foundational capacities. Efforts should be tailored to the specific needs, legal landscape, and infrastructure of individual countries.

Delegations expressed support for the voluntary cybersecurity capacity-building checklist proposed by Singapore. The checklist aims to guide countries in enhancing their cyber capabilities, fostering international collaboration, and ensuring a comprehensive approach to cybersecurity. Multiple delegations expressed support for the Accra Call for Cyber Resilience Development set forth during the Global Conference on Cyber Capacity Building (GC3B), which seeks to strengthen cyber resilience as a vital enabler for sustainable development.

A mapping exercise in March 2024 will comprehensively survey global cybersecurity capacity building initiatives, aiming to identify gaps and avoid the duplication of efforts. It is anticipated that the results of the exercise will inform the global roundtable on capacity building scheduled for May 2024. The roundtable will serve as an opportunity to involve a range of non-state cybersecurity stakeholders to showcase ongoing initiatives, create partnerships, and facilitate a dynamic exchange of needs and solutions. 


Regular institutional dialogue. The discussions on what the future regular institutional dialogue will look like can be summarised as Programme of Action (PoA) vs OEWG. There have been some novel approaches expressed, though. 

Since the initial proposal of the PoA, there have been several changes. Supporters of the PoA suggest using the review mechanism to identify gaps in existing international law and recognise that such gaps can be filled with new norms. States underlined the action-oriented nature of the PoA, highlighting its capacity building focus. Regarding inclusivity, the PoA should allow multistakeholder participation, especially of the private sector. However, the PoA would be led by states, while stakeholders would be responsible for implementation. Another novelty is the inclusion of other initiatives, such as a PoC directory, a threat repository, and a UNIDIR implementation survey, within the future PoA architecture.

On the other hand, a group of countries submitted a working paper on a permanent OEWG, which they believe should be established right after the end of the current OEWG’s mandate. The permanent OEWG’s focus would be on the development of legally binding rules as elements of a future universal treaty on information security. The working paper suggests several principles, proposing that all decisions of the permanent OEWG should be made by consensus (a crucial difference from a PoA) and stricter rules for stakeholder participation. 

OEWG in session, with civil society speaker Vladimir Radunović on screen. Credit: Pavlina Ittelson

The midway point. The OEWG’s mandate spans 2021-2025, with 11 substantive sessions planned during this period. However, the discussions on international security at the UN span 25 years, and some of the disagreements we are seeing today are just as old. Can the OEWG 2021-2025 agree on everything (or anything)? And should it, in order to be deemed successful? We leave you with a quote from the chair himself, Amb. Burhan Gafoor: ‘Because we are midway in this process we also have to think about what is success for the OEWG and for our work. If we define our success in a New York-centric way, then I think we will not have succeeded at all. Our success as a working group will depend on whether we are able to make a difference to the situation on the ground, in capitals in different countries, small countries, developing countries, countries that need help, to deal with the challenge of ICT security.’


// DIGITAL POLICY ROUNDUP (11–18 DECEMBER) //
COP28 tackles the climate crisis through Green Digital Action
The outcomes of the Green Digital Action track include corporate agreements on reducing greenhouse gas emissions, collaboration on e-waste regulation, and strengthening industry and state collaboration on environmental sustainability standards. Read more.
Epic Games wins antitrust case against Google
A US jury ruled in favour of Epic Games in an antitrust case against the Google Play app store, finding that Google has illegal monopoly power. Read more.
Global leaders pledge for responsible AI at the 2023 GPAI Summit in New Delhi
Leaders reaffirmed commitments to responsible AI aligned with democratic values and human rights. Read more.
OpenAI partners with global news publisher Axel Springer to offer news in ChatGPT
News publisher Axel Springer has partnered with OpenAI, the owner of ChatGPT, to provide AI-generated summaries of news articles. Read more.
China’s online censors target ‘pessimism’ on digital platforms
Content policy moderation in China aims to root out pessimistic content on digital platforms as criticism of the country’s political economy grows. Read more.
Cyberattack cripples Ukraine’s biggest telecom operator
There is no indication that the incident has resulted in the compromise of subscribers’ personal data. Two hacker groups, KillNet and Solntsepyok, claimed responsibility for the attack. Read more.

// READING CORNER //

MIT’s group of leaders and scholars, representing various disciplines, has presented a set of policy briefs with the goal of assisting policymakers in effectively managing AI in society.



The OECD Digital Education Outlook for 2023 report assesses the current status of countries and potential future directions in leveraging digital transformation in education. It highlights opportunities, guidelines, and precautions for the effective and fair integration of AI in education. It includes data from a broad range of OECD countries and select partner nations.


Andrijana Gavrilovic – Author
Editor – Digital Watch; Head of Diplomatic & Policy Reporting, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

2023 UNCTAD eWeek | Post-event message


UNCTAD eWeek (4-8 December) was a knowledge fair where thousands of participants from all continents gathered to exchange information, share new ideas, and gain fresh perspectives on AI, data, and e-trade.

During eWeek, Diplo and UNCTAD (with the support of the Omidyar Foundation) made history by providing hybrid reporting of rich knowledge exchange in Geneva and online.

AI and experts summarised and preserved knowledge from eWeek for future policymaking and academic research on digital developments.

The knowledge graph below depicts the breadth and depth of eWeek discussions.

Knowledge graph of UNCTAD eWeek

Each line in the graph represents knowledge linkages among topics, arguments, and insights in a corpus of 1,440,706 words, roughly the length of 2.45 copies of War and Peace.


You can unpack the above graph and dive deep into the eWeek knowledge ecology by consulting the session reports, which feature speakers delivering statements and making arguments, visualised as a ‘knowledge traffic light’ below.


DW Weekly #139 – 11 December 2023


Dear readers,

You’ve noticed we didn’t publish an issue last week, so in this issue we rounded up developments covering the last two weeks.

We’re also changing the format a bit, to include more links to our Observatory. Do you love it or hate it? Drop us a line at digitalwatch@diplomacy.edu.

Let’s get started.

Andrijana and the Digital Watch team

// HIGHLIGHT //

EU lawmakers reach a deal on AI Act

We have covered the contentious EU discussions over the AI Act. After 36 hours of negotiations over three days (22 of which were consecutive), a provisional agreement was finally reached.

Definition. The EU definition of AI is borrowed from the OECD, and reads: ‘An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.’

National security exemption. The proposed regulation won’t impede member states’ national security authority and excludes AI systems for military or defence purposes.

Another exemption. It also exempts AI solely for research, innovation, or non-professional use.

General purpose AI systems and foundation models. General AI systems, especially general-purpose AI (GPAI) models, must adhere to transparency requirements, including technical documentation, compliance with the EU copyright law, and detailed summaries about training content. Stringent obligations for high-impact GPAI models with systemic risks include evaluations, risk assessments, adversarial testing, incident reporting, cybersecurity, and energy efficiency considerations. 

Foundation models. We don’t have many details here. What we know now is that the provisional agreement outlines specific transparency obligations for foundation models – large systems capable of performing a wide range of tasks – before they can enter the market. A more rigorous regime is introduced for high-impact foundation models characterised by advanced complexity, capabilities, and performance, addressing potential systemic risks along the value chain.

High-risk use cases. AI systems presenting only limited risk would be subject to very light transparency obligations, for example, disclosing that content was AI-generated. AI systems classified as high-risk (due to their significant potential to harm health, safety, fundamental rights, the environment, democracy, and the rule of law) face obligations addressing issues such as data quality and technical documentation, including measures to demonstrate compliance. Citizens will be able to file complaints about high-risk AI systems that affect their rights.

Banned applications of AI. Some applications of AI will be banned because they carry too high a risk, i.e. they pose a potential threat to citizens’ rights and democracy. These include:

  • biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race)
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
  • emotion recognition in the workplace and educational institutions
  • social scoring based on social behaviour or personal characteristics
  • AI systems that manipulate human behaviour to circumvent their free will
  • AI used to exploit the vulnerabilities of people due to their age, disability, social or economic situation

Law enforcement exceptions. Negotiators reached an agreement on the use of remote biometric identification systems (RBI) in publicly accessible spaces for law enforcement, allowing post-remote RBI for targeted searches of persons convicted or suspected of serious crimes and real-time RBI with strict conditions for purposes like

  • targeted searches of victims (abduction, trafficking, sexual exploitation),
  • prevention of a specific and present terrorist threat, or
  • the localisation or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime)

Additionally, changes were made to the commission proposal to accommodate law enforcement’s use of AI, introducing an emergency procedure for deploying high-risk AI tools and ensuring fundamental rights protection, with specific objectives outlined for the use of real-time remote biometric identification systems in public spaces for law enforcement purposes.

Governance. A new AI office within the Commission will oversee advanced AI models and enforce common rules across EU member states, advised by a scientific panel on evaluating foundation models and monitoring safety risks. The AI Board, representing member states, will coordinate and advise the Commission, involving member states in the implementation of the regulation, while an advisory forum for stakeholders will offer technical expertise to the board.

Measures to support innovation. Regulatory sandboxes and real-world testing are promoted to enable businesses, particularly SMEs, to develop AI solutions without undue pressure from industry giants and support innovation.

Penalties. Sanctions for non-compliance range from EUR 35 million or 7% of the company’s global annual turnover for violations of the banned AI applications, to EUR 15 million or 3% for breaches of the act’s obligations, and EUR 7.5 million or 1.5% for the supply of incorrect information. The agreement also provides for more proportionate caps on administrative fines for SMEs and start-ups.
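As a reading aid, here is a small sketch of those reported tiers. It assumes, as in comparable EU rules such as the GDPR, that the applicable maximum is the higher of the fixed amount and the turnover percentage; the provisional agreement summarised above does not spell that out, so treat the ‘whichever is higher’ logic and the example figures as illustrative only.

```python
# Reported penalty tiers from the provisional AI Act agreement (illustrative sketch).
# Assumption: the higher of the fixed amount and the turnover share applies.
PENALTY_TIERS = {
    "banned_ai_application": (35_000_000, 0.07),    # EUR 35 million or 7%
    "obligation_breach":     (15_000_000, 0.03),    # EUR 15 million or 3%
    "incorrect_information": (7_500_000, 0.015),    # EUR 7.5 million or 1.5%
}

def max_fine_eur(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the assumed maximum fine for a violation category."""
    fixed_amount, turnover_share = PENALTY_TIERS[violation]
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in global annual turnover
print(f"EUR {max_fine_eur('banned_ai_application', 2_000_000_000):,.0f}")  # EUR 140,000,000
```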

EU lawmakers during the last negotiation meeting. Credit: Euractiv.

Why it matters. The EU can now say it has drafted the very first Western AI law. The Spanish presidency has scored a diplomatic win, and, well, it’s good PR for everyone involved. Some countries are reportedly already reaching out to the EU for assistance in their future processes. 

The draft legislation still needs to go through a few last steps for final endorsement, but the political agreement means its key elements have been approved – at least in theory. There’s quite a lot of technical work ahead, and the act still has to go through the EU Council, where any unsatisfied countries could throw a wrench into the works. The act will enter into force two years after its adoption, which will likely be in 2026. The biggest question is: will technology move so fast that by 2026, the AI Act will no longer be revolutionary or even effective? Will we see another development like ChatGPT that renders this regulation essentially obsolete?


// DIGITAL POLICY ROUNDUP (27 NOVEMBER–13 DECEMBER) //
Microsoft’s partnership with OpenAI faces antitrust scrutiny in the USA and the UK
The US Federal Trade Commission (FTC) is conducting preliminary examinations of Microsoft’s investment in OpenAI to determine if it violates antitrust laws. Read more.
ECB study warns, rapid AI adoption could impact wages, not jobs
The adoption of AI could impact wages, but would not be a concern for job security, according to research by the European Central Bank (ECB). Read more.
EU Council adopts Data Act
The act sets principles of data access, portability, and sharing for users of IoT products. Read more.
ITU report: uneven progress in bridging the global digital divide
The International Telecommunication Union’s (ITU) Facts and Figures 2023 report reveals that global internet connectivity is progressing steadily but unevenly, highlighting the disparities of the digital divide. Read more.
India launches global repository for digital public infrastructures post G20
This launch comes after all G20 member states expressed their support for digital public infrastructure policy initiatives at the 2023 G20 summit. Read more.
European Commission launches Chips Joint Undertaking under the European Chips Act
The European Commission has enacted the Chips JU plan and formed the European Semiconductor Board to advise on implementing the European Chips Act and fostering international collaboration. Read more.
Meta’s Oversight Board to review handling of violent content in Israel-Hamas conflict cases
Meta’s Oversight Board will focus on a video depicting the aftermath of a Gaza hospital explosion and another featuring a kidnapped woman. Read more.

// IN CASE YOU MISSED IT //

UNCTAD eWeek 2023 reports

Last week, we had the honour of being the official reporting partner of UNCTAD for the 2023 edition of eWeek. We reported from 127 sessions, spanning 7 days, 2 hours, 17 minutes, and 56 seconds, with a whopping 1,440,706 words. Visit the DW’s dedicated UNCTAD page to read the session reports. You can also register to receive a personalised AI report from the event!


Call for Applications: C4DT Digital Trust Policy Fellowship

The Center for Digital Trust (C4DT) is launching the second round of its Digital Trust Policy Fellowship Program, seeking recent MSc. or PhD graduates, global thinkers, and tech enthusiasts with backgrounds in computer science or engineering. The program looks for individuals with innovative minds, ambitious self-starters ready to tackle challenges in privacy, cybersecurity, AI, and machine learning, and aspiring policy writers with excellent analytical and communication skills. The deadline for applications is 31 January 2024.


// THE WEEK AHEAD //

20 November–15 December. The ITU World Radiocommunication Conference, which aims to review and revise the international treaty governing the use of the radio-frequency spectrum and the geostationary-satellite and non-geostationary-satellite orbits, will conclude on 15 December. 

11–12 December. The 12th edition of the Global Blockchain Congress in Dubai, UAE, under the theme ‘Will the Next Bull Market Be Different?’

11–15 December. The second UN Open-Ended Working Group (OEWG) on developments in the field of ICTs in the context of international security will hold its sixth substantive session 11–15 December. The OEWG is tasked with studying existing and potential threats to information security, possible confidence-building measures, and capacity building. It should also further develop rules, norms, and principles of responsible behaviour of states, discuss ways of implementing them, and explore the possibility of establishing regular open-ended institutional dialogue under the auspices of the UN.

12–14 December. The Global Partnership on AI Summit 2023 will bring together experts to foster international cooperation on various AI issues. GPAI working groups will also showcase their work around responsible AI, data governance, the future of work, and innovation and commercialisation.

12–14 December. Jointly organised by ITU and the European Commission with the co-organisational support of the Accessible Europe Resource Centre, Accessible Europe: ICT 4 All – 2023 aims to explore the areas where accessibility gaps persist and identify what best practices can be replicated for broader impact.

13–15 December. The Council of Europe’s Octopus Conference 2023 will focus on securing and sharing electronic evidence and on capacity building on cybercrime and electronic evidence, in particular the impact the Cybercrime Programme Office has made over the last ten years and the next steps.

14 December. The launch of the UN Institute for Disarmament Research (UNIDIR) report on ‘International Security in 2045: Exploring Futures for Peace, Security and Disarmament’ will be held in a hybrid format on 14 December 2023 at the Palais des Nations in Geneva, Switzerland.


// READING CORNER //

ChatGPT: A year in review
ChatGPT recently turned one – delve into the trends it brought forward, which have shaped both industries and regulatory frameworks.



Geneva Manual on Responsible Behaviour in Cyberspace

The manual, which focuses on the roles and responsibilities of non-state stakeholders in implementing two UN cyber norms related to supply chain security and responsible reporting of ICT vulnerabilities, was launched by the Geneva Dialogue on Responsible Behaviour in Cyberspace last week.



Digital Watch Monthly December issue

In the December issue of the Digital Watch Monthly, we describe the four seasons of AI, summarise EU lawmakers’ negotiations on the AI Act, examine what Q* and Gemini mean for AGI, and delve into the delicate balance between combating online hate and preserving freedom of speech.


Andrijana Gavrilovic – Author
Editor – Digital Watch; Head of Diplomatic & Policy Reporting, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

Digital Watch newsletter – Issue 85 – December 2023


Snapshot: What’s making waves in digital policy?

AI governance

Google and Anthropic have announced an expanded partnership encompassing joint work on AI safety standards, a commitment to the highest standards of AI security, and Anthropic’s use of Google’s TPU chips for AI inference.

Google has unveiled ‘The AI Opportunity Agenda,’ offering policy guidelines for policymakers, companies, and civil societies to collaborate in embracing AI and capitalising on its benefits.

The OECD launched the AI Incidents Monitor, which offers comprehensive policy analysis and data on AI incidents, shedding light on AI’s impacts to help shape informed AI policies.

US President Joe Biden and Chinese President Xi Jinping, meeting on the sidelines of the Asia-Pacific Economic Cooperation (APEC) Leaders’ Week, agreed on the need ‘to address the risks of advanced AI systems and improve AI safety through USA-China government talks’.

The Italian Data Protection Authority (DPA) initiated a fact-finding inquiry to assess whether online platforms have implemented sufficient measures to stop AI platforms from scraping personal data for training AI algorithms. 

Switzerland’s Federal Council has tasked the Department of the Environment, Transport, Energy, and Communications (DETEC) with providing an overview of potential regulatory approaches for AI by the end of 2024. The council aims to use the analysis as a foundation for an AI regulatory proposal in 2025.

Technologies

Yangtze Memory Technologies Co (YMTC), China’s largest memory chip manufacturer, filed a lawsuit against Micron Technology and its subsidiary for violating eight patents. Under the EU-India Trade and Technology Council (TTC) framework, the EU and India signed a Memorandum of Understanding on working arrangements in the semiconductor ecosystem, its supply chain, and innovation. Air taxi manufacturers Joby Aviation and Volocopter showcased their electric aircraft in New York. Amazon introduced Q, an AI-driven chatbot for its Amazon Web Services (AWS) customers, designed to support business intelligence and programming needs.

Security

The UK, the USA, and 16 other partners have released the first global guidelines to enhance cybersecurity throughout the life cycle of an AI system. The guidelines span four key areas within the life cycle of the development of an AI system: secure design, secure development, secure deployment, and secure operation and maintenance.

The EU Parliament and the EU Council reached a political agreement on the Cyber Resilience Act. The agreement will now be subject to formal approval by the parliament and the council.

Infrastructure

The EU’s Gigabit Infrastructure Act (GIA) is undergoing significant alteration as the ‘tacit approval principle,’ designed to expedite the deployment of broadband networks, has been excluded from the latest compromise text circulated by the Spanish presidency of the EU Council. ICANN launched its Registration Data Request Service (RDRS) to simplify requests for access to nonpublic registration data related to generic top-level domains (gTLDs). 

The International Telecommunication Union (ITU) has adopted ITU-R Resolution 65, which aims to guide the development of a 6G standard. This resolution enables studies on the compatibility of current regulations with potential 6th-generation international mobile telecommunications (IMT) radio interface technologies for 2030 and beyond. 

The Indian government has launched its Global Digital Public Infrastructure Repository and created the Social Impact Fund to advance digital public infrastructure in the Global South as part of its G20 initiatives.

The EU Council adopted the Data Act, setting principles of data access, portability, and sharing for users of IoT products. OpenAI has initiated the Copyright Shield, a program specifically covering legal expenses for its business customers who face copyright infringement claims stemming from using OpenAI’s AI technology.

Internet economy

Apple, Meta, and TikTok’s parent company ByteDance appealed against their gatekeeper classification under the EU Digital Markets Act (DMA), which aims to enable user mobility between rival services such as social media platforms and web browsers. Conversely, Microsoft and Google have opted not to contest the gatekeeper label. The US Treasury reached a record $4.2 billion settlement with Binance, the world’s largest virtual currency exchange, for violating anti-money laundering and sanctions laws, mandating a five-year monitoring period and rigorous compliance measures. Australia’s regulator called for a new competition law for digital platforms due to their growing influence.

Digital rights

The Court of Justice of the EU (CJEU) ruled that data subjects have the right to appeal the decision of the national supervisory authority regarding the processing of their personal data.

Content policy

Nepal decided to ban TikTok, citing the disruption of social harmony caused by the misuse of the popular video app. YouTube introduced a new policy that requires creators to disclose the use of generative AI. OpenAI and Anthropic have joined the Christchurch Call to Action, an initiative started by French President Emmanuel Macron and then-New Zealand Prime Minister Jacinda Ardern to suppress terrorist content online. X (formerly Twitter) is on the European Commission’s radar for having significantly fewer content moderators than its rivals.

Development

ITU’s Facts and Figures 2023 report reveals uneven progress in global internet connectivity, which exacerbates the disparities of the digital divide, particularly in low-income countries. Switzerland announced plans for a new state-run digital identity system, slated for launch in 2026, after voters rejected a private initiative in 2021 due to personal data protection concerns. Indonesia’s Ministry of Communication and Information introduced a new policy on digital identity, which will later require all citizens to have a digital ID.

THE TALK OF THE TOWN – GENEVA

The World Economic Forum (WEF) held its Annual Meeting on Cybersecurity 2023 on 13–15 November, assembling over 150 leading cybersecurity experts. Building on the WEF’s Global Cybersecurity Outlook 2023 report released in January 2023, the annual meeting provided a space for experts to address growing cyber risks with strategic, systemic approaches and multistakeholder collaboration.

The 12th UN Forum on Business and Human Rights took place from 27 to 29 November, focusing on the concrete changes states and businesses have made to implement the UN Guiding Principles on Business and Human Rights (UNGPs). Among the topics discussed was improving the implementation of disability rights through advancements in assistive technologies, AI and digitalisation, and other care and support systems.

Held in conjunction with the 12th UN Forum on Business and Human Rights, the UN B-Tech Generative AI Summit on 30 November explored how to conduct human rights due diligence when putting AI into practice. The full-day summit presented the B-Tech Project’s papers on human rights and generative AI and provided a platform for all stakeholders to discuss the practical use of the UNGPs and other human-rights-based approaches in analysing the impacts of generative AI.


The four seasons of AI

ChatGPT, a revolutionary creation by OpenAI launched on 30 November 2022, has not only captivated the tech world but also shaped the narrative around AI. As ChatGPT marks its first anniversary, it prompts a collective step back to reflect on the journey so far and consider what lies ahead. 

A symbolic journey through the seasons has been a compelling backdrop to AI’s trajectory since last November. The winter of excitement saw rapid user adoption, surpassing even social media giants with its pace. Within 64 days, ChatGPT amassed an astounding 100 million users, a feat that Instagram, for instance, took 75 days to achieve. The sudden surge in interest in generative AI has taken major tech companies by surprise. In addition to ChatGPT, several other notable generative AI models, such as Midjourney, Stable Diffusion, and Google’s Bard, have been released.

The subsequent spring of metaphors ushered in a wave of imaginative comparisons and discussions on AI governance. Anthropomorphic descriptions and doomsday scenarios emerged, reflecting society’s attempts to grapple with the implications of advanced AI.

As ChatGPT entered its contemplative summer of reflection, a period of introspection ensued. Drawing inspiration from ancient philosophies and cultural contexts, the discourse broadened beyond mere technological advancements. The exploration of wisdom from Ancient Greece to Confucius, India, and the Ubuntu concept in Africa sought answers to the complex challenges posed by AI, extending beyond simple technological solutions.

Now, in the autumn of clarity, the initial hype has subsided, making room for precise policy formulations. AI has secured its place on the agendas of national parliaments and international organisations. In policy documents from groups such as the G7, G20, G77, and the UN, the balance between opportunities and risks has shifted towards a greater focus on risks. The long-term existential threats of AI took centre stage at gatherings such as the UK’s AI Safety Summit at Bletchley Park, with governance proposals drawing inspiration from bodies like the International Atomic Energy Agency (IAEA), CERN, and the Intergovernmental Panel on Climate Change (IPCC).

What lies ahead? We should focus on the two main issues at hand: how to address AI risks and what aspects of AI should be governed.

In managing AI risks, a comprehensive understanding of three categories – immediate knowns, looming unknowns, and long-term unknowns – is crucial for shaping effective regulations. While short-term risks like job loss and data protection are familiar and addressable with existing tools, mid-term risks involve potential monopolies controlling AI knowledge, demanding attention to avoid dystopian scenarios. Long-term risks encompassing existential threats dominate public discourse and policymaking, as seen in the Bletchley Declaration. Navigating the AI governance debate requires transparently addressing risks and prioritising decisions based on societal responses.

Regarding the governance of AI aspects, current discussions revolve around computation, data, algorithms, and applications. Computation aspects involve the race for powerful hardware, with geopolitical implications between the USA and China. The data, often called the oil of AI, demands increased transparency regarding its usage. Algorithmic governance, which is focused on long-term risks, centres on the relevance of weights in AI models. At the apps and tools level, the current shift from algorithmic to application-focused regulations may significantly impact technological progress. Debates often overlook data and app governance, areas detailed in regulation but not aligned with tech companies’ interests.


This text is inspired by Dr Jovan Kurbalija’s Recycling Ideas blog series. It’s a collection of concepts, traditions, and thoughts aimed at constructing a social contract suitable for the AI era.


EU lawmakers warring over the bloc’s AI Act

After more than 22 hours of the initial trilogue negotiations in the EU on 6 and 7 December, encompassing an agenda of 23 items, agreement on the AI Act remains elusive. Here’s what reports point to. 

Foundation models. The negotiations hit a significant snag when France, Germany, and Italy spoke out against the tiered approach initially envisioned in the EU AI Act for foundation models (base models for developers). The tiered approach would mean categorising AI into different risk bands, with more or less regulation depending on the risk level. France, Germany, and Italy want to regulate only the use of AI rather than the technology itself, to ensure that AI innovation in the EU is not stifled. They proposed ‘mandatory self-regulation through codes of conduct’ for foundation models. European Parliament officials walked out of a meeting to signal that leaving foundation models out of the law was not politically acceptable.

According to a compromise document seen by Euractiv, the tiered approach was retained in the text of the act. However, the legislation would not apply to general-purpose AI (GPAI) systems offered under free and open-source licenses. This exemption can be nullified if the open-source model is put into commercial use. At the same time, lawmakers agreed that the codes of conduct would serve as supplementary guidelines until technical standards are harmonised.

According to the preliminary agreement, any model that was trained using computing power greater than 10^25 floating point operations (FLOPs) will be automatically categorised as having systemic risks.

These models would face extensive obligations, including evaluation, risk assessment, cybersecurity, and energy consumption reporting. 
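To see how simple the headline criterion is, here is a minimal Python sketch of such a compute-based presumption. The function name, variable names, and example values are purely illustrative assumptions on our part and are not drawn from the legislative text.

```python
# Minimal, illustrative sketch of the preliminary AI Act rule described above:
# models trained with more than 10^25 FLOPs of compute are presumed to carry
# systemic risk. Names and example values are assumptions, not legal text.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # training-compute threshold from the preliminary agreement


def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a model's training compute exceeds the threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


# Hypothetical models, used only to illustrate the check
print(presumed_systemic_risk(3e25))  # True  -> the extra obligations would apply
print(presumed_systemic_risk(8e24))  # False -> below the presumption threshold
```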

An EU AI office will be established within the commission to enforce foundational model rules, with national authorities overseeing AI systems through the European Artificial Intelligence Board (EAIB) for consistent application of the law. An advisory forum will gather feedback from stakeholders. A scientific panel of independent experts will advise on enforcement, identify systemic risks, and contribute to the classification of AI models.

Contentious issues. While approximately ten issues remain unresolved on the agenda, the primary obstacles revolve around prohibitions, remote biometric identification, and the national security exemption.

Prohibitions. So far, lawmakers have tentatively agreed on prohibiting manipulative techniques, systems exploiting vulnerabilities, social scoring, and indiscriminate scraping of facial images. At the same time, the European Parliament has proposed a much longer list of prohibited practices and is facing a strong pushback from the council.

Remote biometric identification. On the issue of remote biometric identification, including facial recognition in public spaces, members of the European Parliament (MEPs) are pushing for a blanket ban on biometric categorisation systems based on sensitive personal traits, including race, political opinions, and religious beliefs. At the same time, member states are pushing for exemptions to use biometric surveillance when there is a threat to national security. 

National security exemption. France, leading the EU countries, advocates for a broad national security exemption in AI regulations, emphasising member states’ discretion in military, defence, and national security matters. However, this will likely face resistance from progressive lawmakers, who are expected to advocate for an outright ban.

What now? If the EU doesn’t pass the AI Act in 2023, it might lose its chance to establish the gold standard of AI rules. Spain, in particular, is eager to achieve this diplomatic win under its presidency of the EU Council. The Spanish presidency offered MEPs a package deal close to the council’s position, but despite tremendous pressure, the centre-to-left MEPs did not accept it. Negotiations are still ongoing, though. Now we wait.



Higher stakes in the race for AGI? 

The buzz around OpenAI’s November saga has been nothing short of gripping, and we’ve been right in the thick of it, following every twist and turn. 

In summary, OpenAI CEO Sam Altman was ousted from the company because he ‘was not consistently candid in his communications’ with the board. Most of OpenAI’s workforce, approximately 700 of its 750 employees, threatened to resign and join Altman at Microsoft, prompting his reinstatement as CEO. The composition of OpenAI’s board was also changed.

Reports (and speculation) of Q* swiftly broke through. Reuters reported that Altman was dismissed partly because of Q*, an AI project allegedly so powerful that it could threaten humanity. 

Q* can supposedly solve certain math problems. Although its mathematical prowess is reportedly at the level of grade-school students (roughly the first six to eight grades), this could be a potential breakthrough in artificial general intelligence (AGI), as it suggests a higher reasoning capacity. OpenAI defines AGI as highly autonomous systems that outperform humans at most economically valuable work.

Upon his return as CEO, Altman’s comment about Q* was: ‘No particular comment on that unfortunate leak.’  

The news has caused quite a stir, with many wondering what exactly Q* is, if it even exists. Some savvy observers think Q* might be tied to a project OpenAI announced in May, which touted ‘process supervision’ – a technique that trains AI models to solve problems step by step.

Some theorise that the Q* project might blend Q-learning (a type of reinforcement learning in which a model iteratively learns and improves over time by being rewarded for taking the correct action) with A* search, an algorithm computers use to find the shortest path between two points.

Sidenote: how do you reward AI? Through the reward function, which assigns numerical values to an AI agent as a reward or punishment for its actions, Diplo’s AI team explained. For example, if you want an AI agent to learn how to get from point A to point B, you can give it +1 for each step in the right direction, -1 for each step in the wrong direction, and +10 for reaching point B. Since the AI agent tries to maximise the value of the reward function, it will learn to take steps in the right direction.
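As a purely illustrative aside, here is a short Python sketch of these two ideas – a reward function and a tabular Q-learning update – on a toy ‘line world’ where an agent must walk from point A to point B. The environment, reward values, and hyperparameters are our own assumptions for illustration and have nothing to do with OpenAI’s actual Q* project.

```python
import random

# Toy 'line world': the agent starts at position 0 (point A) and must reach
# position 4 (point B). Everything here is illustrative, not OpenAI's Q*.
GOAL = 4
ACTIONS = [-1, 1]  # step left (wrong direction) or right (towards the goal)


def reward(position: int, new_position: int) -> int:
    """Numerical feedback for the agent, as described in the sidenote above."""
    if new_position == GOAL:
        return 10                                # reaching point B
    return 1 if new_position > position else -1  # right vs wrong direction


# Q-table: the agent's estimate of how good each action is in each position
q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount factor, exploration rate

for _ in range(500):  # training episodes
    pos = 0
    while pos != GOAL:
        # Epsilon-greedy choice: mostly exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            act = random.choice(ACTIONS)
        else:
            act = max(ACTIONS, key=lambda a: q[(pos, a)])
        new_pos = min(max(pos + act, 0), GOAL)
        r = reward(pos, new_pos)
        # Q-learning update: nudge the estimate towards reward + discounted best future value
        best_next = max(q[(new_pos, a)] for a in ACTIONS)
        q[(pos, act)] += alpha * (r + gamma * best_next - q[(pos, act)])
        pos = new_pos

# After training, the learned policy is to step towards point B from every position
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)])  # typically [1, 1, 1, 1]
```

Because the agent is rewarded for moving towards point B and punished for moving away, simply maximising the reward function is enough for it to learn the right behaviour, which is exactly the intuition in the sidenote above.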

Others posited that the name Q* might reference the Q* search algorithm, which was developed to control deductive searches in an experimental system. 

Google joins the race. The beginning of December saw the launch of Google’s Gemini, an AI model that, according to Google, has outperformed human experts on massive multitask language understanding (MMLU), a benchmark designed to measure AI’s knowledge of math, history, law, and ethics. The model reportedly outperforms GPT-4 in grade-school math. However, Google has declined to comment on Gemini’s parameter count.

Is this all really about AGI? Well, it’s hard to tell. On the one hand, AI surpassing human capabilities sounds like a dystopia in the making (why does no one ever think it might be a utopia?). On the other hand, experts say that even if an AI could solve math equations, it wouldn’t necessarily translate into broader AGI breakthroughs.

What are all these speculations really about? Transparency – and not only at OpenAI and Google. We need to understand who (or what) will shape our future. Are we the leading actors or just audience members waiting to see what happens next?


Balancing online speech: Combating hate while preserving freedom

The ongoing struggle to prevent and combat online hate speech while protecting freedom of expression has prompted the EU Agency for Fundamental Rights (FRA) to call for ‘appropriate and accurate content moderation’.

FRA has published a report on the challenges of detecting online hate speech against people of African descent, Jewish people, Roma, and other groups on digital platforms, including Telegram, X (formerly known as Twitter), Reddit, and YouTube. Data were collected in Bulgaria, Germany, Italy, and Sweden to provide a comparative analysis based on current national policies. FRA called on regulators and digital platforms to ensure a safer space for people of African descent, Jewish people, and Roma, who were found to experience very high levels of hate speech and cyber harassment. FRA also drew attention to the need for effective content moderation rules to protect women, who face higher levels of incitement to violence than other groups.


Is the DSA enough to ensure content moderation in the EU? While the Digital Services Act (DSA) is considered a big step towards moderating online hate speech, FRA notes that its effect remains to be seen. According to FRA, clarification is needed on what counts as hate speech, along with training for law enforcement, content moderators, and flaggers on the legal thresholds for identifying it. Such training should also ensure that platforms do not over-remove content.

UNESCO’s guidelines. UNESCO’s Director-General, Audrey Azoulay, sounded an alarm about the surge in online disinformation and hate speech, labelling them a ‘major threat to stability and social cohesion’. In response, UNESCO published guidelines for the governance of digital platforms to combat online disinformation and hate speech while protecting freedom of expression. The guidelines include establishing independent public regulators in countries worldwide, ensuring linguistically diverse moderators on digital platforms, prioritising transparency in media financing, and promoting critical thinking.

The importance of civil society. Since the Israeli-Palestinian war began, posts about Palestine and content removals have reached an ‘unprecedented scale’, said Jillian York, director for international freedom of expression at the Electronic Frontier Foundation (EFF). Several Palestinian human rights advocacy groups have therefore initiated the ‘Meta: Let Palestine Speak’ petition, calling on the tech giant to address the unfair removal of Palestinian content.

And, of course, AI. As FRA’s report found, human-based content assessment often relies on biased and discriminatory parameters. AI-based moderation is not immune to this either, as seen in Meta’s auto-translation, which applied the term ‘terrorist’ to Palestinian users who had an Arabic phrase in their bios; Meta publicly apologised for this in October 2023.


Launch of the Geneva Manual on Responsible Behaviour in Cyberspace

The recently launched Geneva Manual focuses on the roles and responsibilities of non-state stakeholders in implementing two UN cyber norms related to supply chain security and responsible reporting of ICT vulnerabilities. 

The manual was drafted by the Geneva Dialogue on Responsible Behaviour in Cyberspace, an initiative established by the Swiss Federal Department of Foreign Affairs and led by DiploFoundation with the support of the Republic and State of Geneva, the Center for Digital Trust (C4DT) at the Swiss Federal Institute of Technology in Lausanne (EPFL), Swisscom, and UBS.

The Geneva Dialogue plans to expand the manual by discussing the implementation of additional norms in the coming years.

The manual is a living document, open for engagement and enrichment. Visit genevadialogue.ch to contribute your thoughts, ideas, and suggestions, as we chart a course toward a more secure and stable cyberspace. 

A cartoon schematic shows the journey from a contentious oral discussion of cyber norms, through detailed research and analysis in which exclamation marks give way to realisation and agreement, to a summary superimposed over a turtle, alongside a report on cyber norms and a book with a bookmark labelled ‘Geneva Manual’, ending with a drawing of a turtle carrying a globe on its back and the word ‘Secure!’