
Digital Watch newsletter – Issue 82 – September 2023

Cover: A caricature of a human hand and an AI hand drawing the same cartoon of Zarya of the Dawn, under the heading ‘AI and copyright: USA, UK eyeing new rules’.

Snapshot: What’s making waves in digital policy?

Geopolitics

The US government announced plans, through an executive order, to prohibit or restrict US investments in China across three industry sectors – semiconductors, quantum technologies, and certain AI systems – while Chinese regulators failed to approve Intel’s plans to acquire Israeli chipmaker Tower Semiconductor. New York City banned TikTok on government-owned devices.

AI governance

Four companies developing AI – Anthropic, Google, Microsoft, and OpenAI – launched the Frontier Model Forum, a new industry body focused on the safe and responsible development of frontier AI models. Meanwhile, dozens of large companies rushed to block GPTBot, OpenAI’s new web crawler, which scrapes data to feed ChatGPT.

UK members of parliament are urging the government to introduce new AI rules by the end of the year or risk being left behind. BRICS countries – Brazil, Russia, India, China, and South Africa – established an AI study group to research AI governance frameworks and standards, and help make AI technologies ‘more secure, reliable, controllable, and equitable’. Canada’s draft code of practice for regulating generative AI is available for public input.

Data protection authorities expressed concern about tech companies’ data (or web) scraping practices and the implications for personal data. ‘Just because information is publicly available on the internet does not mean that privacy protections no longer apply,’ their joint statement said.

Security

The sixth round of UN negotiations on a new cybercrime treaty concluded in New York without making significant headway.

The Qakbot malware, which had infected over 700,000 devices, was disrupted by a law enforcement operation involving the USA, France, Germany, the Netherlands, the UK, Romania, and Latvia. Meta took down thousands of accounts and pages linked to Spamouflage, which it described as the world’s largest known covert influence operation. Security company NCC Group reported a record-high number of ransomware attacks in July, which it attributed to the exploitation of a vulnerability in the file transfer software MOVEit by a hacker group known as CLOP (or Cl0p).

In the UK, videos shared on TikTok and Snapchat encouraging people to steal from shops led to chaotic scenes and several arrests on Oxford Street in London.

Security and standards authorities in the USA are urging organisations, especially those supporting critical infrastructure, to start planning their migration to post-quantum cryptographic standards in anticipation of quantum-powered cyberattacks.

Infrastructure

It will take some time for Africa’s internet connectivity to be fully restored after an underwater landslide in the Congo Canyon damaged two major submarine cables running along the continent’s western coast.

Internet economy

On 25 August, stricter rules for very large online platforms and search engines came into effect under the EU’s new Digital Services Act. The European Commission launched formal proceedings against Microsoft for bundling its communication software Teams with Office 365; a few weeks later, Microsoft announced it would unbundle the software for European and Swiss customers as of October. The French competition authority is investigating Apple for potential self-preferencing: Advertisers say the company imposed its App Tracking Transparency (ATT) policy on them but exempted itself from the same rules.

Microsoft agreed to transfer the cloud streaming rights for Activision Blizzard games to Ubisoft in order to win approval from the UK’s competition authority for its acquisition of Activision. The European Commission will need to reevaluate its earlier approval.

Digital rights

Sam Altman’s relaunched Worldcoin project, featuring a cryptocurrency and an identity network, captured the attention of privacy regulators over possible irregularities linked to its biometric data collection methods. Zoom’s revised Terms of Service sparked controversy due to the company’s intention to use customer data for machine learning and AI. It later clarified its position.

The Norwegian Data Protection Authority imposed daily fines of 1 million kroner (USD 98,500) on Meta over non-compliance with a ban on behaviour-based marketing on Facebook and Instagram. OpenAI is being investigated in Poland: A researcher claimed the company processed his data ‘unlawfully, unfairly, and in a non-transparent manner’.

Content policy

China’s Cyberspace Administration released draft guidelines for the introduction of screen time software to curb the problem of smartphone addiction among minors. 

Canada criticised Meta for banning domestic news from its platforms as wildfires ravaged parts of the country. It wants Google and Meta to contribute a minimum of CAD 230 million (EUR 157 million) to support local media.

Development

Digital identity projects are picking up around the world. Australia is planning new rules for its federally backed digital ID by next year. The US government wants to work with the private sector to develop mobile phone standards for digital identification – similar to what the Philippines plans to do. Nigeria is getting help from the World Bank to implement nationwide digital IDs.

THE TALK OF THE TOWN – GENEVA

This year’s World Telecommunication/ICT Indicators Symposium (WTIS) (3–4 July) tackled ways of measuring data to advance universal internet connectivity and discussed the outcomes of two expert groups that reaffirmed the importance of internationally comparable data in monitoring ICT-related developments. ITU, in collaboration with the EU, launched the Dashboard for Universal and Meaningful Connectivity for tracking country progress and performance.

At the AI for Good Global Summit (6–7 July), over 280 projects showcased the capabilities of AI in advancing the SDGs and addressing the pressing needs of the world, amid discussions on AI policy and regulations, and future AI developments. 


The annual ITU Council gathered its 48 member countries to discuss ITU’s strategic plans. At this year’s council (11–21 July), Secretary-General Doreen Bogdan-Martin highlighted two primary goals for ITU: universal connectivity and sustainable digital transformation. The council noted that digital issues have become more prominent on global agendas, such as the upcoming 2023 SDG Summit and the 2024 Summit of the Future.


AI and copyright: USA, UK eyeing new measures

If you’re using someone else’s work, you need permission. This sums up how the world has primarily approached the rights of authors – until now.

The arrival of generative AI models, like ChatGPT, has wreaked havoc on copyright rules. For starters, the models powering generative AI are trained on whatever data they can lay their hands on, regardless of whether that data is copyrighted. Disgruntled authors and artists want this practice to stop. For them, the notion of fair use doesn’t cut it, especially if companies are making money off the system.


But there’s another issue: Users co-authoring new content with the help of AI are seeking copyright protection for their works. Since copyright attaches to human authorship, IP regulators are in a quandary: Which parts should be copyrighted, and where should the line be drawn?

Faced with these issues, IP offices in the UK and the USA have initiated consultation processes to help inform their next steps. Both have acknowledged that new rules might be needed.

What the UK is doing. In June, the UK’s IP office formed a working group to develop a voluntary code of practice. Microsoft, DeepMind, and Stability AI are among the working group members, together with representatives from art and research groups.

The government aims for the group to develop a code of practice to ‘… support AI firms to access copyrighted work as an input to their models, whilst ensuring there are protections (e.g. labelling) on generated output to support right holders of copyrighted work’. The government made it quite clear that if agreement is not reached or the code not adopted, it may legislate.

Copyright issues are also among the top challenges that the UK government is dealing with in its quest to tackle AI governance. 

What the USA is doing. The US Copyright Office has issued a call for public comments to inform possible new measures or rules for regulating these practices. This is typically the last step before new measures or rules are proposed, so we might be looking at proposals for new legislation before the year’s end.

The issues that the copyright office is looking at are well defined. First, it wants to understand how AI models are using, and should use, copyrighted data in their training processes. Second, it wants to hear proposals on how AI-generated material could be copyrighted. Third, it wants to determine how copyright liability would work in the context of AI-generated content. Fourth, it is seeking comments on the potential violation of publicity rights, that is, the rights of individuals to control the commercial use of their likeness or personal information.

Elephants in the room. And yet, there’s little to suggest that these consultations will tackle – let alone solve – how to reverse the damage that’s already been done. Copyrighted content is now part of the enormous hoard of data on which the models were trained, and part of the content that’s being generated by AI chatbots. In addition, if human intervention is required to trigger copyright protection (as the US’s latest guidance states, for instance), where does this leave AI outputs that incorporate copyrighted content to a problematic extent? 

Interim solutions. In the meantime, companies behind large language models (the models powering generative AI tools) might need to do more to ensure copyrighted content isn’t being used. One solution could be to implement automated mechanisms that detect copyrighted work in material slated for training or generation, and drop that part of the data before the training process begins. Websites, for their part, can limit or disable web crawlers – for instance, through rules in their robots.txt files, which compliant crawlers such as OpenAI’s GPTBot are expected to honour, as the sketch below illustrates.
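To make the opt-out concrete, here is a minimal sketch – our own illustration, not any company’s actual setup – of the robots.txt rule a publisher might serve and a check of what it permits, using Python’s standard library (example.com is a placeholder domain):

```python
# Sketch: a publisher disallows OpenAI's GPTBot site-wide via robots.txt,
# and urllib.robotparser (Python standard library) checks what a
# rule-abiding crawler would be allowed to fetch.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.parse([
    "User-agent: GPTBot",   # OpenAI's documented crawler user agent
    "Disallow: /",          # keep that agent out of the entire site
])

# A crawler that honours robots.txt should skip every page:
print(parser.can_fetch("GPTBot", "https://example.com/article"))    # False
# Agents without a matching rule are allowed by default:
print(parser.can_fetch("OtherBot", "https://example.com/article"))  # True
```

Of course, robots.txt is a voluntary convention: It only works if the crawler chooses to respect it, which is one reason publishers are also pushing for licensing arrangements and technical enforcement.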

Another solution – probably more attractive for companies – is to find new ways, such as licensing, to monetise the process so that both authors and the AI sector benefit. Now that would be a win-win.


Who is Zarya of the Dawn, the character gracing our front cover?

Zarya is the protagonist of a short comic book written by Kris Kashtanova and illustrated by Midjourney, an AI-based image generator. In September 2022, Kashtanova sought copyright protection for the comic from the US Copyright Office without disclosing that Midjourney was involved in creating the illustrations. The copyright was initially granted, but later the copyright office revoked the artwork’s protection. The copyright office explained that only human-authored works can be protected. In this case, the book’s layout, text, and storyline were eligible for protection, but the images themselves weren’t.

This case sets an important precedent for how copyright law applies to works generated by AI. The copyright office’s decision confirms that humans must be in control of the output, even when a computer is involved in the creative process. In the office’s words: ‘rather than a tool that Ms Kashtanova controlled and guided to reach her desired image, Midjourney generates images in an unpredictable way. Accordingly, Midjourney users are not the “authors” for copyright purposes of the images the technology generates’.


The Brussels effect: DSA and Trans-Atlantic Data Privacy Framework kick in

When a city becomes synonymous with its rule-making prowess, its lawmakers know they must be doing something right. Such is the worldwide fame of Brussels, home to the EU’s institutions.

In the past few weeks, two new sets of rules kicked in which, together with the GDPR, are setting new standards for upholding users’ rights and regulating digital markets. Both are likely to shape practices and measures in other countries – testament to the influence of the Brussels effect (a concept coined by Columbia Law professor Anu Bradford).

The first. The Digital Services Act (DSA) has just started imposing strict obligations on 19 very large online platforms and search engines. These range from labelling all adverts and informing users who’s behind the ads, to allowing users to turn off personalised content recommendations. As with the GDPR, the DSA’s impact extends beyond the boundaries of the EU: Any company serving European users, no matter where it’s based, is subject to the new rules. Interestingly, of those 19 giant companies, only two are based in Europe – Booking.com, headquartered in the Netherlands, and Zalando, headquartered in Germany. Fifteen are from the USA, predominantly from California and Washington, and the remaining two (Alibaba and TikTok) are Chinese companies.

The second. The newly adopted EU-US Trans-Atlantic Data Privacy Framework (TADPF) is meant to ensure that European citizens’ personal data crossing the Atlantic is afforded the same level of protection in the USA as within the EU. Even the EU’s Court of Justice contributes to the Brussels effect: Before the TADPF, the court invalidated two earlier transatlantic frameworks – the Safe Harbour arrangement and the Privacy Shield – each time sending policymakers back to the drawing board to see how they could bring US law in line with EU standards.

Respected. The GDPR, the first law to earn the Belgian capital its regulatory renown, has been emulated in other countries (the so-called de jure Brussels effect). China’s Personal Information Protection Law (PIPL), for instance, was heavily influenced by the GDPR, featuring provisions on data collection, storage, and use that mirror those in the EU legislation.

FOMO. But the Brussels effect is also feared. In the race to regulate emerging technologies, such as AI, countries vie to get there first, lest their own approaches be displaced. UK members of parliament have been particularly concerned, and have urged the government to speed things up: ‘We see a danger that if the UK does not bring in any new statutory regulation for three years, it risks the government’s good intentions being left behind by other legislation – like the EU AI Act – that could become the de facto standard and be hard to displace.’

The EU has had to pay a price for its influential rulemaking: Industries are often very critical of the EU’s comparatively stringent regulations, accusing the bloc of lacking technological prowess and a competitive edge. But it’s a strategically calculated risk on the EU’s part: Brussels knows all too well that its regulatory power can’t be easily restrained or displaced.


Driverless: The future of robotaxis

The driverless car revolution is taking hold in San Francisco. Hundreds of autonomous cars, owned mainly by Alphabet’s Waymo and General Motors’ Cruise, can now routinely be seen on the city’s streets.

The surge in driverless vehicles comes after the California Public Utilities Commission, a state agency, voted on 11 August to allow Waymo and Cruise to take paying passengers day or night throughout San Francisco.

Unorthodox methods: A Cruise robotaxi with an orange-and-white traffic cone on its hood. Protestors in San Francisco have been stopping robotaxis and placing cones on their hoods to trigger safety alarms; the cars remain stuck until a technician resets them. Credit: Safe Street Rebel

Strong opposition. In the lead-up to the vote, the California Public Utilities Commission faced vigorous opposition from residents and city agencies. Transportation and safety agencies, such as the police and fire departments, as well as California residents, opposed expanding paid robotaxi services over safety concerns. Protestors took to the streets not only to highlight safety challenges, but also over concerns that the cars were disrupting public transport – blocking busy thoroughfares, for instance, or causing congestion with unpredictable manoeuvres.

Car accidents. A few days later, reports of multiple crashes in the city forced the California Department of Motor Vehicles (DMV) to order General Motors to reduce the number of active Cruise vehicles. Residents and city agencies were proven right to worry about safety, but the DMV’s decision hasn’t been enough to assuage concerns. The protests continue.

Teething problems? Every emerging technology experiences teething problems; they become critical when they threaten human life. Luckily, the passengers directly involved in these accidents suffered only non-life-threatening injuries (the claim that two Cruise vehicles inadvertently blocking an ambulance contributed to a fatal delay in getting a pedestrian to hospital has been disputed by the company).

However, this raises a sobering question: What if autonomous vehicle accidents are more serious? The spectre of fatal accidents, reminiscent of Tesla’s 2016 incident, looms as a haunting reminder of the challenges and responsibilities associated with the development of self-driving technology – and the fact that there’s no guarantee that fatal car crashes won’t happen again. Most probably, they will. 

No viral effect. Until robotaxis earn people’s trust, their take-up will be relatively slow. It’s not a matter of buying an appliance after watching rave reviews or signing up on a new social media platform because half the world’s already on it.

Safety concerns form a strong barrier that could hold people back and make potential riders think twice (or even three times) before putting their lives in the hands of a driverless car. The question is: What will it take for robotaxis to earn – or definitively lose – the public’s trust?


PayPal goes where it was feared Libra would tread

It’s been four years since Facebook (now Meta) announced the launch of its digital currency, Libra. At the time, the company was mired in data privacy scandals, sealing the project’s fate before it ever got off the ground.

Fast forward. PayPal has just announced a new project: PayPal USD, a stablecoin (a digital token pegged to a fiat currency such as the US dollar or the euro), which is very similar to what Facebook had in mind with Libra.


How it works. PayPal’s plans for its new stablecoin date back to 2020. Created by Paxos, a private tech company specialising in stablecoins, PayPal USD (PYUSD) was launched a few weeks ago on the Ethereum blockchain. A stablecoin’s value is pegged to an underlying fiat currency, usually the US dollar; in this case, each PYUSD token is backed 1:1 by one US dollar held in reserve accounts managed by Paxos and other custodians.
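The 1:1 mechanics can be captured in a few lines. The sketch below is a deliberately simplified toy model of how any fully reserved stablecoin issuer works – not PayPal’s or Paxos’s actual implementation: Tokens are minted only against dollars added to the reserve and burned on redemption, so supply and reserve always match.

```python
# Toy model of a fully reserved (1:1 backed) stablecoin issuer.
# Simplified illustration only -- real issuers add custody, audits,
# compliance checks, and on-chain token contracts.

class StablecoinIssuer:
    def __init__(self) -> None:
        self.reserve_usd = 0.0  # fiat held in reserve accounts
        self.supply = 0.0       # tokens in circulation

    def mint(self, usd_deposited: float) -> float:
        """Issue tokens 1:1 against a fiat deposit."""
        self.reserve_usd += usd_deposited
        self.supply += usd_deposited
        return usd_deposited    # tokens handed to the depositor

    def redeem(self, tokens: float) -> float:
        """Burn tokens and pay out fiat 1:1 from the reserve."""
        if tokens > self.supply:
            raise ValueError("cannot redeem more tokens than exist")
        self.supply -= tokens
        self.reserve_usd -= tokens
        return tokens           # dollars paid back out

issuer = StablecoinIssuer()
issuer.mint(100.0)    # deposit USD 100, receive 100 tokens
issuer.redeem(40.0)   # burn 40 tokens, get USD 40 back
assert issuer.supply == issuer.reserve_usd == 60.0  # peg holds
```

Because every token is matched by a reserved dollar, the coin’s price has little reason to drift from USD 1 – the property that sets stablecoins apart from free-floating cryptocurrencies.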

Amid tighter scrutiny. Despite its standing, PayPal is operating in a market under tighter regulatory scrutiny. In November 2022, FTX, then one of the world’s biggest crypto exchanges, went bankrupt. In a related development, Paxos was ordered to stop issuing BUSD, the stablecoin it issued for Binance, the world’s largest cryptocurrency exchange. In many ways, PYUSD works very much like BUSD did (instant payments, low fees).

Stronger outlook. A few fundamental differences set PayPal and its stablecoin apart. First, PayPal has a better standing in the financial sector than Facebook or Binance could ever have hoped for. Second, policymakers today are more aware of how stablecoins work and what their benefits (and challenges) are: The fact that stablecoins are not as volatile as other cryptocurrencies, for instance, makes them a much safer option. PayPal is up to date with know-your-customer requirements, and its open-source code allows anyone to inspect it. The odds are in PayPal’s favour.

And yet, PayPal mustn’t take its watershed moment for granted. It can either feed regulators’ growing mistrust of cryptocurrencies, or show that stablecoins – the most popular form of cryptocurrency – are the future of digital payments.