EU scrutinises Google over AI model data use

Ireland’s Data Protection Commission (DPC), the lead privacy watchdog for many US tech firms in the EU, is investigating Google’s handling of user data. The inquiry will examine whether Google sufficiently protected the personal information of EU citizens before using it to develop its advanced AI model, Pathways Language Model 2 (PaLM 2). The investigation is part of a broader effort by the DPC, working alongside other EU regulators, to ensure compliance with data protection laws, especially in the development of AI technologies.

Why does this matter?

The investigation reflects growing concerns in the EU over how tech giants handle personal data, particularly in the context of AI, which relies heavily on large datasets. The DPC’s inquiry into Google’s data practices follows a recent agreement by social media platform X (formerly known as Twitter) not to use personal data from EU users for AI training without first offering them the option to withdraw consent.

Major data centre investment by Amazon in the UK

Amazon has announced plans to invest £8 billion in the UK to expand its data centre operations. The investment will be made by Amazon Web Services (AWS) over the next five years, aiming to meet growing demand for cloud computing, largely driven by AI advancements.

This new investment will add to AWS’s previous contributions of £3 billion since 2022, with facilities already in London and Manchester. The company expects the project to contribute £14 billion to the UK economy and support more than 14,000 jobs by the end of 2028.

AWS’s investment follows significant European cloud computing expansions, including substantial projects in Spain and Germany. After a pause last year, many corporate clients have resumed cloud spending, driven by a renewed interest in AI.

The announcement has been welcomed by the UK government, with Finance Minister Rachel Reeves highlighting its importance ahead of an upcoming investment summit. The exact locations of the new data centres will not be disclosed for security reasons, but they will help meet growing demand around London.

Russia to invest $660 million in modernising internet censorship

Russia is ramping up its efforts to control the internet by allocating nearly 60 billion roubles ($660 million) over the next five years to upgrade its web censorship system, known as TSPU. The system, developed by state regulator Roskomnadzor, is designed to filter and block content deemed harmful or illegal by the government. The funding, part of a broader ‘Cybersecurity Infrastructure’ project, will be used to acquire new software and hardware and to expand the system’s capabilities.

The initiative is seen as part of Moscow’s broader crackdown on online freedoms, which has intensified since Russia’s invasion of Ukraine in 2022. The government has been targeting independent media and social media platforms, blocking websites, and cracking down on the use of Virtual Private Networks (VPNs), which many Russians use to bypass government restrictions. Roskomnadzor has played an increasingly influential role in blocking access to these tools, with officials planning to further enhance the system’s efficiency.

The TSPU system was introduced under a 2019 law that requires internet service providers to install government-controlled equipment to monitor and manage web traffic. As of late 2022, over 6,000 TSPU devices had been deployed across Russian networks. The new funding will modernise this infrastructure and improve the system’s ability to detect and block VPN services, making it harder for Russians to access uncensored content.

Why does this matter?

While the Kremlin continues to position these measures as necessary for national security, critics see them as a blatant attack on free speech. Digital rights activists, including those from Roskomsvoboda, warn that while new investments in censorship technology will tighten government control, they are unlikely to eliminate access to independent information. Developers of VPNs and other circumvention tools remain determined, stating that innovation and motivation are essential in the ongoing struggle between censorship and free access.

Russia’s battle with VPNs and independent media is part of a broader campaign against what it calls Western information warfare. Despite the government’s efforts to clamp down, demand for alternative ways to access the internet remains high. Developers are working on more resilient tools, even as the state pours resources into strengthening its censorship apparatus. This tug-of-war between government control and free access to information seems set to continue, with both sides ramping up their efforts.

US proposes mandatory reporting for advanced AI and cloud providers

The US Commerce Department has proposed new rules that would require developers of advanced AI and cloud computing providers to report their activities to the government. The proposal aims to ensure that cutting-edge AI technologies are safe and secure, particularly against cyberattacks.

The proposal also mandates detailed reporting on cybersecurity measures and the results of ‘red-teaming’ efforts, in which systems are tested for vulnerabilities, including potential misuse for cyberattacks or the development of dangerous weapons.

The move comes as AI, especially generative models, has sparked excitement and concern, with fears over job displacement, election interference, and catastrophic risks. Under the proposal, the collected data would help the US government enforce safety standards and protect against threats from foreign adversaries.

Why does this matter?

The regulatory push follows President Biden’s 2023 executive order requiring AI developers to share safety test results with the government before releasing certain systems to the public. The new rules come amid stalled legislative action on AI and are part of broader efforts to limit the use of US technology by foreign powers, particularly China.

NIST releases new digital identity and AI guidelines for contractors

The US National Institute of Standards and Technology (NIST) has released a new draft of its Digital Identity Guidelines, introducing updates for government contractors in cybersecurity, identity verification, and AI use. The guidelines propose expanded identity proofing methods, including remote and onsite verification options. These enhancements aim to improve the reliability of identity systems used by government contractors to access federally controlled facilities and information. By providing different assurance levels for identity verification, NIST ensures that contractors can implement secure and appropriate measures based on the context and location of the verification process.

A significant focus of the guidelines is on continuous evaluation and monitoring. Organisations are now required to implement ongoing programmes that track the performance of identity management systems and evaluate their effectiveness against emerging threats. The guidelines also emphasise the importance of proactive fraud detection. Contractors and credential service providers (CSPs) must continuously assess and update their fraud detection methods to align with the evolving threat landscape.

One of the notable updates in the guidelines is the introduction of syncable authenticators and digital wallets. This allows contractors to manage their digital credentials more efficiently by storing them securely in digital wallets. These wallets provide flexibility in how contractors present their identity attributes when accessing different federal systems.

The guidelines also introduce a risk-based approach to authentication, where authentication levels are tailored to the sensitivity of the system or information being accessed. This gives government agencies the flexibility to assign different authentication methods depending on the security needs of the transaction. For example, accessing highly sensitive systems would require stronger multi-factor authentication (MFA) measures, including biometrics, while less critical systems may have less stringent requirements.
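The idea of tailoring authentication strength to system sensitivity can be sketched in a few lines of code. The tier names, factor counts, and biometric requirements below are illustrative assumptions for this sketch, not NIST’s actual assurance-level scheme:

```python
# Hypothetical sketch of risk-based authentication tiering.
# Tiers and requirements are illustrative, not taken from the NIST guidelines.
SENSITIVITY_POLICY = {
    "low": {"factors": 1, "biometric_required": False},       # e.g. a public portal
    "moderate": {"factors": 2, "biometric_required": False},  # e.g. internal tools
    "high": {"factors": 2, "biometric_required": True},       # e.g. sensitive records
}

def required_auth(sensitivity: str) -> dict:
    """Look up the authentication requirements for a sensitivity tier."""
    try:
        return SENSITIVITY_POLICY[sensitivity]
    except KeyError:
        raise ValueError(f"unknown sensitivity tier: {sensitivity!r}")

def is_satisfied(sensitivity: str, factors_presented: int, biometric: bool) -> bool:
    """Check whether the presented credentials meet the tier's policy."""
    policy = required_auth(sensitivity)
    if factors_presented < policy["factors"]:
        return False
    if policy["biometric_required"] and not biometric:
        return False
    return True

# A single password satisfies the low tier, but two factors without
# a biometric fall short of the high tier.
print(is_satisfied("low", factors_presented=1, biometric=False))   # True
print(is_satisfied("high", factors_presented=2, biometric=False))  # False
```

The point of the sketch is the shape of the policy: one lookup table mapping sensitivity to requirements, so agencies can tighten or relax a tier without touching the enforcement logic.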

Why does this matter?

The use of AI and ML in identity systems is another key aspect of the Draft Guidelines. NIST emphasises transparency and accountability in integrating AI and ML into these systems. Organisations must document how AI is used, disclose the datasets used for training models, and ensure that AI systems are evaluated for risks like bias and inequitable outcomes. The guidelines address the concern that AI technologies could exacerbate existing inequities or produce biased results in identity verification processes. Organisations are encouraged to adopt NIST’s AI Risk Management Framework to mitigate these risks and consult its guidance on managing bias in AI.

Lastly, the guidelines highlight the importance of privacy, equity, and usability in digital identity systems. Ensuring broad participation and access to digital services, especially for individuals with disabilities, is a core requirement. NIST stresses that digital identity systems must be designed to be inclusive and accessible to all contractors, addressing any potential usability challenges while maintaining security.

Telegram founder criticises French detention

Telegram founder Pavel Durov has criticised French authorities for detaining him during an investigation into the app, suggesting they could have contacted his company through established channels instead. Durov, now a French national, made his first public statement following his detention last month, denying claims that Telegram is an ‘anarchic paradise’ and defending the app’s moderation efforts.

He expressed surprise at the investigation, pointing out that French authorities had access to a hotline specifically set up for communication with Telegram’s EU representative. Durov argued that it would have been more appropriate for legal action to target the platform rather than holding him personally responsible for third-party activities.

The investigation involves allegations of crimes such as child pornography, drug trafficking, and fraudulent transactions linked to the app. Durov emphasised that Telegram works diligently to remove harmful content, taking down millions of posts and channels daily.

Global AI framework signed to safeguard human rights

The UK has become one of the first signatories of an international treaty designed to regulate AI and prevent its misuse. This legally binding agreement, drafted by the Council of Europe and signed by parties including the EU, the US, and Israel, mandates safeguards to protect human rights, democracy, and the rule of law from potential AI threats. Governments are expected to tackle risks such as AI-generated misinformation and the use of biased data in decision-making processes.

The treaty outlines several key principles, including ensuring data protection, non-discrimination, and the responsible development of AI. Both public and private sector AI users will be required to assess the impact of AI systems on human rights and provide transparency to the public. Individuals will also have the right to challenge AI-made decisions and file complaints with relevant authorities, ensuring accountability and fairness in AI applications.

In the UK, the government is reviewing how to implement the treaty’s provisions within existing legal frameworks, such as human rights laws. A consultation on a new AI bill is underway, which could further strengthen these safeguards. Once ratified, the treaty will allow authorities to impose sanctions, including bans on certain AI uses, like systems utilising facial recognition from unauthorised data sources.

India introduces draft rules to enhance telecommunications cybersecurity

The Indian government has introduced significant draft rules on telecommunications cybersecurity, marking a substantial advancement in the regulatory framework for telecommunications. Central to these rules is the government’s authority to request traffic data from telecom providers, aimed at enhancing cybersecurity and protecting users from online fraud, particularly concerning over-the-top (OTT) services like WhatsApp and Telegram. By monitoring this data, the government seeks to identify patterns and potential threats, thereby strengthening the security of telecom networks.

Telecom companies in India must adopt comprehensive cybersecurity policies, conduct regular audits, and establish Security Operations Centers (SOCs) for real-time incident monitoring and response. Additionally, they must appoint a Chief Telecommunication Security Officer (CTSO) to ensure compliance and report any security incidents to the government within six hours. This proactive approach facilitates swift government intervention and bolsters network resilience against cyber threats.

The draft rules also provide a framework for lawful interception of communications and temporary suspension of services for national security or public order reasons, emphasising the balance between security and individual privacy rights. Currently open for public consultation for 30 days, these rules invite feedback from stakeholders to ensure a balanced and inclusive regulatory approach.

Furthermore, the draft rules stress the protection of critical telecom infrastructure, requiring detailed record-keeping and compliance with national security directives, including the registration of telecommunications equipment identifiers.

Meta complies with Brazil’s data protection demands

Meta Platforms, the parent company of Facebook and Instagram, announced on Tuesday that it will inform Brazilian users about how their data is used to train generative AI. The move comes in response to pressure from Brazil’s National Data Protection Authority (ANPD), which had previously suspended Meta’s new privacy policy over concerns about the use of personal data for AI training.

Starting this week, Meta users in Brazil will receive email and social media notifications, providing details on how their data might be used for AI development. Users will also have the option to opt out of this data usage. The ANPD had initially halted Meta’s privacy policy in July, but it lifted the suspension last Friday after Meta agreed to make these disclosures.

In response to the ANPD’s concerns, Meta had also temporarily suspended the use of generative AI tools in Brazil, including popular AI-generated stickers on WhatsApp, a platform with a significant user base. This suspension was enacted while Meta engaged in discussions with the ANPD to address the agency’s concerns.

Despite the ANPD lifting the suspension, Meta has yet to confirm whether it will immediately reinstate the AI tools in Brazil. When asked, the company reiterated that the suspension had been a measure taken during its ongoing talks with the data protection authority.

The development marks an important step in Brazil’s efforts to ensure transparency and user control over personal data in the age of AI.

TRAI and Google to enhance user security and combat spam in India

The Telecom Regulatory Authority of India (TRAI) and Google have introduced new measures to enhance user security and reduce spam. These changes are particularly significant for mobile users in India, focusing on improving the safety of online transactions and the quality of applications available for download. By implementing these measures, TRAI and Google are taking proactive steps to safeguard digital interactions, ensuring users can navigate their smartphones with greater confidence and security.

A key component of this initiative is TRAI’s new directive to combat spam calls and fraudulent messages. The regulation requires telecom operators to block unregistered numbers immediately, which is intended to protect users from scams. However, this measure may delay the delivery of one-time passwords (OTPs) during online transactions, as institutions such as banks must register their numbers to continue sending OTPs without interruption. While this could cause minor inconveniences, it is a crucial step toward preventing fraudulent activities and enhancing overall security for users.

In conjunction with TRAI’s efforts, Google has ramped up its policies to remove low-quality and potentially harmful apps from its Play Store. The initiative aims to mitigate risks associated with malware and ensure that only trustworthy applications are accessible to users. By eliminating these problematic apps, Google creates a safer environment for users to download and use applications without compromising their personal information. The crackdown on low-quality apps is expected to significantly reduce the risk of malware, providing a more secure digital experience for all users.