South Korea and UK to host global AI summit in Seoul

South Korea and the UK are set to co-host the second global AI summit in Seoul this week, a response to the rapid advancements in AI since the first summit in November. UK Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol will lead a virtual summit on Tuesday, emphasising the urgent need for improved AI regulation amid growing concerns over the technology’s impact on society.

In a joint article, leaders of the UK and South Korea highlighted the necessity for global AI standards to prevent a ‘race to the bottom’. The summit, now called the AI Seoul Summit, will address AI safety, innovation, and inclusion. A recent global AI safety report underlined potential risks such as labour market disruptions, AI-enabled cyber attacks, and the loss of control over AI, stressing that societal and governmental decisions will shape the future of AI.

Why does it matter?

Initially focused on AI safety, the November summit saw prominent figures like Elon Musk and Sam Altman engage in discussions, with China signing the ‘Bletchley Declaration’ on AI risk management alongside the US and others. This week’s events will include a virtual summit on Tuesday and an in-person session on Wednesday featuring key industry players from companies like Anthropic, OpenAI, Google DeepMind, Microsoft, Meta, and IBM.

US voice actors claim AI firm illegally copied their voices

Two voice actors have filed a lawsuit against AI startup Lovo in Manhattan federal court, alleging that the company illegally copied their voices for use in its AI voiceover technology without permission. Paul Skye Lehrman and Linnea Sage claim Lovo tricked them into providing voice samples under false pretences and is now selling AI versions of their voices. They seek at least $5 million in damages for the proposed class-action suit, accusing Lovo of fraud, false advertising, and violating their publicity rights.

The actors were approached via the freelance platform Fiverr for voiceover work, with Lehrman being told his voice would be used for a research project and Sage for test scripts for radio ads. However, Lehrman later discovered AI versions of his voice in YouTube videos and podcasts, while Sage found her voice in Lovo’s promotional materials. It was revealed that their Fiverr clients were actually Lovo employees, and the company was selling their voices under pseudonyms.

The lawsuit adds to the growing list of legal actions against tech companies for allegedly misusing content to train AI systems. Lehrman and Sage seek to prevent similar misuse of voices by Lovo and other companies, emphasising the need for accountability in the AI industry. Lovo has not yet responded to the allegations.

Microsoft offers relocation to AI employees in China amidst US-China tech tensions

Microsoft is offering its China-based employees working in AI the opportunity to relocate to overseas locations such as the US, Australia, and Ireland, according to sources familiar with the matter. The offer extends to Azure cloud computing team employees, who were notified earlier this week and have until 7 June to decide. Those who opt not to relocate can remain with the China team, although Microsoft has halted new hiring in China and removed its open positions there.

The relocation program affects approximately 700 to 800 people, primarily those engaged in machine learning. Microsoft has offices in Beijing, Shanghai, and Suzhou but has not responded to requests for comment regarding the relocation offer. Last year, Microsoft relocated some of its top AI researchers from China to a new research lab in Vancouver, Canada, as part of its broader AI strategy.

Why does it matter?

The offer to the employees comes amidst escalating geopolitical tensions between the US and China, which have increasingly impacted corporate decisions. At a bilateral meeting in Geneva, US officials expressed concerns about the misuse of AI, particularly by China. The Biden administration is considering new restrictions on exporting proprietary AI models to China, reflecting growing scrutiny over technology transfer.

Despite these tensions, Microsoft remains committed to its AI services in mainland China and Hong Kong, distinguishing itself from competitors like OpenAI and Google, which have restricted access to their AI products in these regions. The potential restrictions on AI software exports would add to existing limitations on Chinese firms’ access to advanced semiconductor technology, further complicating US-China relations in the tech sector.

US lawmakers introduce AI export bill

A bipartisan group of lawmakers introduced a bill to strengthen the Biden administration’s ability to regulate the export of AI models, focusing on protecting US technology from potential misuse by foreign competitors. Sponsored by both Republicans and Democrats, the bill proposes granting the Commerce Department explicit authority to control AI exports deemed risky to national security and to prohibit collaboration between Americans and foreigners on such systems.

The bill’s sponsors argue that stronger legal oversight is urgently needed to protect US AI technology from hostile exploitation. Of particular concern are advanced AI models, which can process vast amounts of data and generate content that adversaries could exploit for cyber attacks or even the development of biological weapons.

While the Commerce Department and the White House have yet to comment on the bill, reports suggest that the US is gearing up to implement export controls on proprietary AI models to counter threats posed by China and Russia. Current US laws make it challenging to regulate the export of open-source AI models, which are freely accessible. If approved, the measure would therefore streamline regulations, particularly regarding open-source AI, and grant the Commerce Department enhanced oversight over AI systems.

Why does it matter?

The introduction of this bill is set against the backdrop of intensifying global competition in AI development. China, for instance, relies heavily on open-source models like Meta Platforms’ ‘Llama’ series. Recent revelations that Chinese AI firms have used these models have raised concerns about intellectual property and security risks. Furthermore, Microsoft’s significant investment in the UAE-based AI firm G42 has sparked a debate over the implications of deepening ties between Gulf states and China, leading to security agreements between the US, the UAE, and Microsoft.

Meta Platforms faces heavy fine in Turkey over data-sharing

Turkey’s competition board has levied a substantial fine of 1.2 billion lira ($37.20 million) against Meta Platforms following investigations into data-sharing practices across its social media platforms, including Facebook, Instagram, WhatsApp, and Threads. The board launched an inquiry last December, focusing in particular on potential competition law violations related to the integration of Threads and Instagram.

As part of its findings, the competition board imposed an interim measure in March to restrict data sharing between Threads and Instagram. In response, Meta announced the temporary shutdown of Threads in Turkey to comply with the interim order, reflecting the company’s efforts to adhere to regulatory directives.

The fine encompasses two separate investigations, with 898 million lira attributed to the compliance process and investigations related to Facebook, Instagram, and WhatsApp, and an additional 336 million lira for the inquiry into Threads. The board’s decision emphasises the importance of user consent and notification regarding data usage, ensuring transparency and control over personal data across Meta’s platforms.

Previously, the competition board had imposed fines on Meta, including daily penalties for insufficient documentation and notifications about data-sharing. While these penalties concluded on 3 May 2024, the recent fine extends the ongoing regulatory scrutiny over Meta’s business practices, echoing similar actions taken by regulatory authorities globally to ensure compliance with competition and data protection laws.

TikTok sues US government over law mandating ban or divestment

TikTok has filed a lawsuit against the US government, challenging a new law that requires the app to sever ties with its Chinese parent company, ByteDance, or face a ban in the US. The company argues that the law is unconstitutional and that divesting the app from ByteDance is impossible, so the law would instead force a shutdown by 19 January 2025.

The law, signed by President Joe Biden last month, grants ByteDance nine months to divest TikTok or cease its operations in the US, citing national security concerns. TikTok’s complaint counters that the government has not presented sufficient evidence of the Chinese government misusing the app: the concerns expressed by individual members of Congress and in a congressional committee report speculate about potential future misuse of TikTok without citing specific instances of misconduct. TikTok also notes that it has operated prominently in the US since its launch in 2017.

TikTok contends that a divestiture would be unfeasible due to the complex task of transferring millions of lines of software code from ByteDance to a new owner. Additionally, restrictions imposed by the Chinese government would prevent the sale of TikTok along with its algorithm. TikTok argues that a ban would effectively isolate American users and undermine its business, and points to its previous efforts to address US government concerns.

During the Trump administration, discussions were held regarding partnerships with American companies such as Walmart, Microsoft, and Oracle to separate TikTok’s US operations. However, these potential deals have yet to materialise. TikTok also attempted to appease the government by storing US user data in Oracle’s servers, although a recent report suggests that this action was primarily cosmetic.

In response to the new law, TikTok seeks a court judgement declaring the legislation unconstitutional, as well as an order preventing the attorney general from enforcing it.

China suspected of massive cyberattack on UK’s Ministry of Defence

According to reports, a significant cyberattack targeted the UK Ministry of Defence, exposing the sensitive details of tens of thousands of armed forces personnel. The breach, believed to have occurred multiple times on a third-party payroll system, prompted the MoD to assess the extent of the hack over three days. While the Ministry has not confirmed any data theft, it reassured service members about their safety amid the incident.

The attack follows earlier attributions of cyberattacks to Chinese ‘state-affiliated actors’ in the UK between 2021 and 2022. In March, Deputy Prime Minister Oliver Dowden disclosed sanctions against individuals and a company linked to the Chinese state for alleged malicious cyber activities, including attacks on the Electoral Commission. These actions underscore a growing concern over cyber threats originating from China.

The allegations persisted even as Chinese President Xi Jinping embarked on a European tour, with French lawmakers targeted by similar incidents urging an official investigation. Despite mounting accusations, French authorities refrained from directly attributing the attacks to China, in contrast with formal accusations made by the US, UK, and New Zealand. As President Xi continues his diplomatic engagements in Europe, with planned visits to Serbia and Hungary, cybersecurity remains a pressing issue as nations navigate the complexities of state-sponsored cyber activities.

US wireless carriers fined millions for sharing customers’ personal data

The US government has issued multimillion-dollar fines against major wireless carriers AT&T, Sprint, T-Mobile, and Verizon following an investigation revealing the unauthorised sharing of customers’ personal data. The sanctions stem from 2020 allegations by the Federal Communications Commission (FCC) that the carriers had unlawfully shared users’ geolocation histories with third parties, including prisons, as part of their commercial programs. The fines target the sharing of user location information with data resellers, known as ‘location aggregators’, who then distribute the data to third-party customers.

AT&T faces a fine of $57 million, while Verizon was fined nearly $47 million. Sprint received a $12 million fine, and T-Mobile was fined $80 million. Despite promises to cease the practice after the issue came to light in 2018, the carriers continued it for roughly another year or longer, according to the FCC. The investigation, initiated during the Trump administration, revealed that carriers attempted to shift responsibility for obtaining customer consent onto downstream recipients of location information, often resulting in no valid customer consent.

Responding to the fines, all wireless carriers intend to appeal the FCC’s decision. AT&T, Verizon, and T-Mobile assert that the FCC’s order lacks legal and factual merit, with each carrier highlighting its efforts to address the situation and emphasising its commitment to customer privacy. T-Mobile, in particular, discontinued its location data-sharing program five years ago and plans to challenge the decision, stating that the fine is excessive.

The investigation into unauthorised data sharing gained momentum in 2018, when a probe by Oregon Democratic Senator Ron Wyden revealed that cellphone location information had made its way to Securus, a provider of prison phone services. Wyden commended the FCC for holding the companies accountable and stressed the importance of protecting customer privacy and safety.

NOYB files a privacy complaint against OpenAI’s ChatGPT

OpenAI, a Microsoft-backed startup, faces a privacy complaint from the European Center for Digital Rights (NOYB), an advocacy group, for allegedly failing to correct inaccurate information produced by its AI chatbot, ChatGPT, which could violate EU privacy regulations. ChatGPT, renowned for its ability to mimic human conversation and perform various tasks, including summarising texts and generating ideas, came under scrutiny after reportedly providing inaccurate responses to queries about a public figure’s birthday.

NOYB claims that despite the complainant’s requests, OpenAI refused to rectify or erase the erroneous data, citing technical limitations. Additionally, the group alleges that OpenAI did not disclose crucial information regarding data processing, sources, or recipients, prompting NOYB to file a complaint with the data protection authority in Austria.

According to NOYB’s data protection lawyer, Maartje de Graaf, the incident underscores the challenge of ensuring compliance with the EU law when processing individuals’ data using chatbots like ChatGPT. She emphasised the necessity for technology to adhere to legal requirements rather than vice versa.

OpenAI has previously acknowledged ChatGPT’s tendency to provide plausible yet incorrect responses, citing it as a complex issue. However, NOYB’s complaint highlights the urgency for companies to ensure the accuracy and transparency of personal data processed by large language models like ChatGPT.

WhatsApp threatens shutdown over encryption demands in India

WhatsApp and Facebook are challenging India’s amended IT Rules, claiming they infringe on privacy rights and are unconstitutional. At a Delhi High Court hearing, WhatsApp argued that being forced to decrypt messages could shut down their service. A key issue is Rule 4(2), which mandates social media companies to trace the original source of messages under certain conditions. WhatsApp contends this would require them to store messages for years, a demand not made in any other country, including Brazil.

The Indian government argues that these companies, which profit from user data, have no basis to claim they protect user privacy. The government insists the rules are vital for law enforcement to track false messages and uphold public order. The Ministry of Electronics and Information Technology supports the rules, stating they meet global standards, ensure the accountability of digital platforms, keep the internet secure, and respect citizens’ rights. The case has been adjourned to 14 August for further consideration.

Why does it matter?

Since adopting end-to-end encryption in 2016, WhatsApp has prioritised privacy and security. In India, where it is the leading messaging app with over 900 million users, it has become a key tool for government communications. Over the years, WhatsApp has expanded its reach to include various government bodies that use it to disseminate vital information. With such a vast user base and an important role in public communication, the outcome of this situation could have dramatic consequences for India’s informational ecosystem.