The UK’s Electoral Commission has faced criticism for failing to safeguard the personal data of 40 million voters following an extensive breach that occurred in August 2021 but was only discovered in October 2022. The Information Commissioner’s Office (ICO) attributed the breach to the Electoral Commission’s outdated security systems, including unpatched servers and inadequate password management.
The Conservative government previously attributed the breach to Chinese hackers, leading to diplomatic tensions and sanctions from the US and its allies, including the UK and New Zealand. Despite these allegations, no confirmed evidence exists that the stolen data has been misused.
In response to the incident, the Electoral Commission has overhauled its security measures, updating its infrastructure and implementing stricter password controls and multi-factor authentication. The Commission says that cybersecurity experts have validated these new measures.
China has consistently denied any wrongdoing, and the UK’s Labour Party has vowed to take a stronger stance on cyber threats and interference in British democracy. Labour plans to audit UK-China relations and introduce new cybersecurity legislation to enhance national resilience against future attacks.
Charter Communications has agreed to pay a $15 million civil penalty to settle a Federal Communications Commission (FCC) investigation into its compliance with network and 911 outage notification rules. The FCC found that Charter violated its rules by failing to notify public safety officials and the commission about three unplanned network outages and numerous scheduled maintenance-related outages in 2023. One significant incident, in February 2023, was attributed to a minor denial-of-service attack on Charter’s network.
Charter said it was pleased to resolve the issues, stating that the penalty relates primarily to administrative notification failures rather than cybersecurity violations. The company must now report certain planned maintenance activities to the FCC. Charter had also failed to notify over 1,000 emergency call centres about a service disruption affecting 911 services, a further violation of the FCC’s outage reporting rules.
Why does this matter?
The settlement marks the first application of specific cybersecurity measures, including network segmentation and vulnerability mitigation management, to 911 communications services and network outage reporting. FCC regulations mandate that providers like Charter inform 911 call centres as quickly as possible of outages exceeding 30 minutes that could affect those centres.
The settlement follows other significant fines in the industry, including Verizon’s $1.05 million penalty for a December 2022 outage and T-Mobile’s $19.5 million settlement in 2021 for a massive 2020 outage. Recently, the FCC reported a nationwide AT&T outage in February that blocked over 92 million voice calls and hindered more than 25,000 attempts to reach 911.
The US Justice Department has raised the alarm over TikTok’s potential influence on American politics, arguing that the app’s continued operation under ByteDance, its Chinese parent company, could enable covert interference by the Chinese government in US elections. In a recent federal court filing, prosecutors suggested that TikTok’s algorithm might be manipulated to sway public opinion and influence political discourse, posing a significant threat to national security.
The filing is part of a broader legal battle as TikTok challenges a new US law that could force a ban on the app unless its ownership is transferred by January 2025. The law, signed by President Joe Biden in April, addresses concerns over TikTok’s ties to China and its potential to compromise US security. TikTok argues that the law infringes on free speech and restricts access to information, as it targets a specific platform and its extensive global user base.
The Justice Department contends that the law aims not to suppress free speech but to address unique national security risks posed by TikTok’s connection to a foreign power. They suggest a possible solution could involve selling TikTok to an American company, allowing the app to continue operating in the US without interruption.
Why does this matter?
Concerns about TikTok’s data practices have been a focal point, with officials warning that the app collects extensive personal information from users, including location data and private messages. The department also pointed to technologies in China that could influence the app’s content, raising further worries about TikTok’s role in data collection and content manipulation.
The debate highlights a clash between national security concerns and the protection of digital freedoms, as the outcome of the lawsuit could set a significant precedent for how the US handles foreign tech influence.
Meta Platforms is facing its first EU antitrust fine for linking its Marketplace service with Facebook. The European Commission is expected to issue the fine within a few weeks, following an accusation over a year and a half ago that the company gave its classified ads service an unfair advantage by bundling it with Facebook.
The allegations include that Meta abused its dominance by imposing unfair trading conditions on competing classified ad services that advertise on Facebook and Instagram. The potential fine could reach as much as $13.4 billion, or 10% of Meta’s 2023 global revenue, although such high fines are rarely imposed.
A decision is likely to come in September or October, before EU antitrust chief Margrethe Vestager leaves office in November. Meta has reiterated its stance, claiming the European Commission’s allegations are baseless and stating its product innovation is pro-consumer and pro-competitive.
In a separate development, the Commission has charged Meta with failing to comply with new tech rules over its ‘pay or consent’ advertising model launched last November. Meta’s earlier offer to settle the investigation by limiting the use of competitors’ advertising data for Marketplace was rejected by the EU but accepted by the UK regulator.
Grindr, the LGBTQ+ dating app, has deactivated some of its location-sharing features during the Olympics in Paris to protect athletes from harassment or prosecution. The ‘Explore’ feature, which allows users to change their location and view profiles, has been turned off in the Olympic Village to prevent athletes from being outed by curious individuals. The move aims to safeguard athletes, especially those from countries with strict anti-LGBTQ+ laws, from potential risks.
Approximately 155 LGBTQ+ athletes are attending the Paris Olympics, a small fraction of the over 10,000 participants. Grindr has also turned off the ‘show distance’ feature by default in the Village, allowing athletes to connect without revealing their whereabouts. Additional temporary measures include free unlimited disappearing messages and the ability to unsend messages, while private video sharing and screenshot functions have been turned off within the Village radius.
These changes follow a precedent set after the 2016 Rio Olympics, where a journalist’s report on using Grindr to meet athletes led to accusations of outing gay athletes. Grindr’s adjustments aim to ensure privacy and safety for athletes while still allowing them to connect during the games. Meanwhile, Grindr is expanding its services to promote long-term relationships and in-person events, with its stock seeing significant growth this year.
Sam Altman, co-founder and CEO of OpenAI, raises a critical question: ‘Who will control the future of AI?’ He frames it as a choice between a democratic vision, led by the US and its allies to disseminate AI benefits widely, and an authoritarian one, led by nations like Russia and China, aiming to consolidate power through AI. Altman underscores the urgency of this decision, given the rapid advancements in AI technology and the high stakes involved.
Altman warns that while the United States currently leads in AI development, this advantage is precarious due to substantial investments by authoritarian governments. He highlights the risks if these regimes take the lead, such as restricted AI benefits, enhanced surveillance, and advanced cyber weapons. To prevent this, Altman proposes a four-pronged strategy: robust security measures to protect intellectual property, significant investments in physical and human infrastructure, a coherent commercial diplomacy policy, and the establishment of international norms and safety protocols.
He emphasises close collaboration between the US government and the private sector to implement these measures swiftly. Altman believes that proactive efforts today in security, infrastructure, talent development, and global governance can secure a competitive advantage and broad societal benefits. Ultimately, Altman advocates for a democratic vision for AI, underpinned by strategic, timely, and globally inclusive actions to maximise the technology’s benefits while minimising risks.
Two European Parliament committees have formed a joint working group to oversee the implementation of the AI Act, according to sources familiar with the matter. The committees involved, Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE), are concerned about the transparency of the AI Office’s staffing and the role of civil society in the implementation process.
The European Commission’s AI Office is responsible for coordinating the implementation of the AI Act, which will come into force on 1 August. The Act’s prohibitions on certain AI applications, such as real-time biometric identification, will be enforced six months later, while full implementation is set for two years after the Act’s commencement, by which time the Commission must clarify key provisions.
Traditionally, the European Parliament has had a limited role in regulatory implementation, but MEPs focused on tech policy are pushing for greater involvement, especially with recent digital regulations. The Parliament already monitors the implementation of the Digital Services and Digital Markets Acts, aiming to ensure effective oversight and transparency in these critical areas.
The Cyberspace Administration of China (CAC), China’s internet regulator, has publicly named agents facilitating local access to ChatGPT. The latest crackdown comes against the backdrop of OpenAI’s decision to restrict access to its API in ‘unsupported countries and territories’ like mainland China, Hong Kong, and Macau.
Alongside the CAC, other local authorities have penalised several website operators this year for providing unauthorised access to generative AI services like ChatGPT. These measures reflect the CAC’s commitment to enforcing China’s AI regulations, which mandate rigorous screening and registration of all AI services before they can be made publicly available. Even with these stringent rules, some developers and businesses have managed to sidestep the regulations by using virtual private networks.
Why does this matter?
Despite Beijing’s ambition to lead the global AI race, it strictly requires GenAI providers to uphold core socialist values and avoid generating content that threatens national security or the socialist system. As of January, about 117 GenAI products had been registered with the CAC, and 14 large language models and enterprise applications had received formal approval for commercial use.
Meta’s Oversight Board has criticised the company’s rules on sexually explicit AI-generated depictions of real people, stating they are ‘not sufficiently clear’. The criticism follows the board’s review of two pornographic deepfakes of famous women posted on Meta’s Facebook and Instagram platforms. The board found that both images violated Meta’s policy against ‘derogatory sexualised photoshop’, which falls under bullying and harassment, and should have been promptly removed.
In one case involving an Indian public figure, Meta failed to act on a user report within 48 hours, leading to an automatic ticket closure. The image was only removed after the board intervened. In contrast, Meta’s systems automatically took down the image of an American celebrity. The board recommended that Meta clarify its rules to cover a broader range of editing techniques, including generative AI. It criticised the company for not adding the Indian woman’s image to a database for automatic removals.
Meta has stated it will review the board’s recommendations and update its policies accordingly. The board emphasised the importance of removing harmful content to protect those impacted, noting that many victims of deepfake intimate images are not public figures and struggle to manage the spread of non-consensual depictions.
The US Senate has unanimously passed the DEFIANCE Act, allowing victims of nonconsensual intimate images created by AI, known as deepfakes, to sue the creators of those images for damages. The bill enables victims to pursue civil remedies against those who produced or distributed sexually explicit deepfakes with malicious intent. Victims identifiable in these deepfakes can receive up to $150,000 in damages, rising to $250,000 if the deepfake is linked to sexual assault, stalking, or harassment.
The legislative move follows high-profile incidents, such as AI-generated explicit images of Taylor Swift appearing on social media and similar cases affecting high school girls across the country. Senate Majority Leader Chuck Schumer emphasised the widespread impact of malicious deepfakes, highlighting the urgent need for protective measures.
Schumer described the DEFIANCE Act as part of broader efforts to implement AI safeguards to prevent significant harm. He called on the House to pass the bill, which has a companion bill awaiting consideration. Schumer assured victims that the government is committed to addressing the issue and protecting individuals from the abuses of AI technology.