UK user data pulled from LinkedIn’s AI development

LinkedIn has paused the use of UK user data to train its AI models after concerns were raised by the Information Commissioner’s Office (ICO). The Microsoft-owned social network had quietly opted users worldwide into data collection for AI purposes but has now responded to the UK regulator’s scrutiny. LinkedIn acknowledged the concerns and expressed willingness to engage with the ICO further.

The decision to halt AI training with UK data follows growing privacy regulations in the UK and the European Union. These rules limit how tech companies, including LinkedIn, can use personal data to develop generative AI tools like chatbots and writing assistants. Like other platforms, LinkedIn had been leveraging user-generated content to enhance these AI models but has now introduced an opt-out mechanism for UK users to regain control over their data.

Regulatory bodies like the ICO continue to monitor big tech companies, emphasising the importance of privacy rights in the development of AI. As a result, LinkedIn and other platforms may face extended reviews before resuming AI-related activities that involve user data in the UK.

US moves to ban Chinese tech in smart vehicles

The US Commerce Department is set to introduce a new regulation to ban Chinese software and hardware in autonomous and connected vehicles in the country, citing national security concerns. The proposal, expected to be announced soon, reflects growing worries from the Biden administration about the potential risks posed by Chinese companies collecting sensitive data on US drivers and infrastructure. Additionally, there are fears that foreign actors could manipulate connected vehicles, potentially creating significant security threats.

The proposed restrictions would apply to Chinese-made vehicles with communication or autonomous driving systems, escalating trade tensions between the US and China. The proposal follows last week’s move by the Biden administration to impose steep tariffs on Chinese imports, including electric vehicles and key components like batteries. Commerce Secretary Gina Raimondo has been vocal about the potential dangers of Chinese technology in US vehicles, stressing the catastrophic risks if critical software were turned off in large numbers of cars.

President Joe Biden had already initiated a review of whether Chinese vehicle imports posed security threats due to their integration with connected-car technology. The new rules could come into effect gradually, with software restrictions starting as early as the 2027 model year and hardware prohibitions beginning in 2029 or 2030. These measures would cover vehicles equipped with specific Bluetooth, satellite, and wireless features, as well as fully autonomous cars capable of operating without drivers.

Why does this matter?

US lawmakers have raised concerns about Chinese companies gathering sensitive data, and the proposed restrictions would also extend to other foreign adversaries like Russia. However, automakers, including major companies like General Motors and Toyota, have expressed worries about the time and complexity required to replace existing systems, noting that vehicle components undergo extensive testing and cannot easily be swapped.

Although Chinese-made vehicles currently make up a small fraction of US imports, the new rule aims to ensure the long-term security of connected cars on US roads. The White House recently approved the final proposal, which would not apply to specialised vehicles like those used in agriculture or mining but will impact all other sectors. The move appears to be a clear effort to protect the US supply chain in an increasingly connected world where cars function as ‘smartphones on wheels.’

Open Rights Group slams LinkedIn for data use in AI without consent

LinkedIn has come under scrutiny for using user data to train AI models without updating its privacy terms in advance. While LinkedIn has since revised its terms, United States users were not informed beforehand, a step that would usually give them time to make decisions about their accounts. LinkedIn offers an opt-out feature for data used in generative AI, but this was not initially reflected in its privacy policy.

LinkedIn clarified that its AI models, including content creation tools, use user data. Some models on its platform may also be trained by external providers like Microsoft. LinkedIn assures users that privacy-enhancing techniques, such as redacting personal information, are employed during the process.

The Open Rights Group has criticised LinkedIn for not seeking consent from users before collecting data, calling the opt-out method inadequate for protecting privacy rights. Regulatory bodies, including Ireland’s Data Protection Commission, have been involved in monitoring the situation, especially within regions under GDPR protection, where user data is not used for AI training.

LinkedIn is one of several platforms reusing user-generated content for AI training. Others, like Meta and Stack Overflow, have also begun similar practices, with some users protesting the reuse of their data without explicit consent.

Australia to enhance privacy laws with new legislation

Australia has introduced the Privacy and Other Legislation Amendment Bill 2024, marking a pivotal advancement in addressing privacy concerns within the digital landscape. The landmark legislation establishes stringent penalties for privacy breaches, imposing sentences of up to six years in prison for general offences and up to seven years for doxxing incidents that target protected characteristics.

Furthermore, the bill enhances the enforcement powers of the Australian Information Commissioner, enabling swift action against non-compliance with privacy laws. Restoring the Australian Privacy Commissioner as a standalone position further strengthens the oversight needed to uphold privacy standards nationwide.

In its commitment to modernising privacy laws for the digital age, Australia views the Privacy and Other Legislation Amendment Bill 2024 as the initial phase of a comprehensive strategy to safeguard citizens’ privacy. The government demonstrates its resolve to hold companies and individuals accountable by significantly increasing maximum penalties for serious privacy breaches.

Additionally, recognising the importance of collaboration, the government will continue to engage with key stakeholders—including industry representatives, small businesses, consumer groups, and the media—to ensure that the approach to privacy protection is equitable and beneficial for both individuals and society.

India poised to introduce flexible consent framework and protections for children’s data

India is set to introduce an umbrella framework for consent management under the Digital Personal Data Protection (DPDP) Act, focusing on broad guidelines rather than specific rules. That approach is designed to provide flexibility for companies while ensuring they adhere to the overarching principles of data protection. Initially, organisations will be required to use government-issued identity cards for age and consent verification. However, they will eventually have the option to develop and implement their own systems, tailored to their needs.

Moreover, India is expected to offer certain exemptions to educational institutions, including schools, colleges, and universities, concerning the processing of children’s data and the obtaining of parental consent. That measure aims to alleviate the compliance burden on educational entities. In contrast, edtech companies will not benefit from these exemptions and must adhere to the full consent management rules outlined by the DPDP Act.

Furthermore, India is reinforcing its commitment to protecting children’s data by prohibiting behavioural tracking and targeted advertising for users under 18. This provision of the DPDP Act highlights the government’s focus on safeguarding young users from intrusive digital practices. It ensures that their online activities are not subject to targeted marketing strategies.

Brazil introduces comprehensive regulations for international data transfers

The Brazilian Data Protection Authority (ANPD) has introduced Resolution 19/2024, which establishes new regulations for international data transfers under the Brazilian General Data Protection Law (LGPD). Effective 23 August 2024, the regulation provides a structured framework for transferring personal data from Brazil to other countries to ensure that data protection standards are upheld.

The framework outlines Standard Contractual Clauses (SCCs), adequacy decisions for third countries, and the approval of binding corporate rules for intra-group data transfers. The ANPD has approved SCCs as a key instrument for international data transfers.

These SCCs cover controller-to-controller and controller-to-processor transfers, ensuring legal protection without needing prior ANPD authorisation. Similar to the EU’s SCCs, they are non-modifiable, and companies in Brazil must adopt these new clauses by 22 August 2025, replacing any existing contractual arrangements. The ANPD may also recognise equivalent SCCs from other jurisdictions, though no decision has been made yet regarding the EU SCCs.

The Brazilian Data Protection Authority also provides procedures for adequacy decisions, bespoke contractual clauses, and binding corporate rules (BCRs). Adequacy decisions will assess whether a third country offers sufficient data protection. Companies can use bespoke contractual clauses in exceptional cases with ANPD approval, while BCRs allow data transfers within corporate groups if approved by the ANPD.

China releases sensitive data guidelines

China’s National Information Security Standardization Technical Committee (TC260) introduced new guidelines titled ‘Cybersecurity Standard Practice Guidelines – Sensitive Personal Information Identification.’ These guidelines establish clear criteria for what constitutes sensitive personal information. Specifically, personal data is deemed sensitive if its unauthorised disclosure or misuse could harm an individual’s dignity, jeopardise their safety, or threaten their property.

In addition, the guidelines outline several key categories of sensitive personal information, such as biometric data, religious beliefs, specific identity details, medical and health information, financial account details, movement tracking data, and personal information of minors. Each category is illustrated with examples to assist organisations in effectively identifying and managing sensitive data.

Furthermore, the TC260 emphasises the necessity of evaluating individual data points and their combined effects when determining the sensitivity of personal information. That comprehensive approach ensures a nuanced understanding of the potential impacts of data breaches or misuse. By considering both isolated pieces of information and their possible cumulative effects, the guidelines provide a robust framework for assessing the risk levels associated with different data types.

Moreover, the TC260 underscores existing laws and regulations in China that may also define sensitive personal information. This reinforces the importance of organisations remaining informed about legal requirements and adhering to all relevant standards for safeguarding sensitive data.

Somerset to introduce AI cameras for road safety

Authorities in South West England are set to introduce new AI cameras on the A361 near Frome, Somerset, in a bid to reduce road deaths after a rise in serious crashes. The technology will be used to detect speeding, mobile phone use, and seatbelt violations. Nine people have died on this road in less than two years.

The Avon and Somerset Police have already taken action, positioning unmarked cars and using speed detection equipment on the A361. Since the start of 2023, there have been 22 serious or fatal accidents along the route. Officials aim to improve public confidence in road safety measures.

The parents of two sisters killed in a high-speed crash on the A361 have criticised the lack of action. They believe better speed controls could have prevented the deaths of Madison and Liberty North, aged 21 and 17, who died in July 2022.

Local authorities, led by MP Anna Sabine, are also planning further safety measures. These include improving road signage, enhancing visibility, and urging drivers to adopt safer behaviours when navigating these fast A-roads.

Palworld faces lawsuit from Nintendo and Pokémon Co

Nintendo and The Pokémon Company have sued Pocketpair Inc., the maker of ‘Palworld’, for patent infringement. The lawsuit was filed in the Tokyo District Court and aims to halt the game’s distribution, claiming multiple patent violations. Nintendo and Pokémon Co are seeking damages from the Tokyo-based game studio.

‘Palworld’ gained attention as a survival adventure game where players capture and train creatures using guns, a concept many fans dubbed ‘Pokémon with guns’. Pocketpair expressed surprise at the lawsuit, stating they had not yet been informed of the specific patents in question.

The company confirmed it would begin legal proceedings and investigations in response to the claims. It expressed frustration over being forced to divert time from game development due to the legal battle.

Earlier this year, The Pokémon Company had already warned it would pursue any intellectual property violations. Meanwhile, Pocketpair had partnered with Sony in July to promote the global licensing of ‘Palworld’.

US Senate scrutinises tech giants’ preparedness against foreign disinformation ahead of elections

On 18 September 2024, US Senate Intelligence Committee members questioned top tech executives from Google, Microsoft, and Meta about their plans to combat foreign disinformation ahead of the November elections. The executives, including Microsoft’s Brad Smith and Meta’s Nick Clegg, acknowledged the heightened risk during the 48 hours surrounding Election Day, with Smith emphasising the period as particularly vulnerable. Senator Mark Warner echoed this concern, noting that the time immediately after the polls close could also be crucial, especially in a tight race.

During the hearing, lawmakers discussed recent tactics used by foreign actors, including fake news websites mimicking reputable US media outlets and fabricated recordings from elections in other countries. Senators pressed the tech companies for detailed data on how many people were exposed to such content and the extent of its promotion. While the companies have adopted measures like labelling and watermarking to address deepfakes and misinformation, they were urged to enhance their efforts to prevent the spread of harmful content during this sensitive period.