Hollywood figures back anti-AI campaign

More than 800 creatives in the US have signed an anti-AI statement accusing big technology companies of exploiting human work. High-profile figures from film and television have backed the initiative, which argues that training AI on creative content without consent amounts to theft.

The campaign was launched by the Human Artistry Campaign, a coalition representing creators, unions and industry groups. Supporters say AI systems should not be allowed to use artistic work without permission and fair compensation.

Actors and filmmakers warned that unchecked AI adoption threatens livelihoods across film, television and music. Campaign organisers said innovation should not come at the expense of creators’ rights or ownership of their work.

The statement adds to growing pressure on lawmakers and technology firms in the US. Creative workers are calling for clearer rules on how AI can be developed and deployed across the entertainment industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

LinkedIn phishing campaign exposes dangerous DLL sideloading attack

A multi-faceted phishing campaign is abusing LinkedIn private messages to deliver malware via DLL sideloading, security researchers have warned. The activity relies on PDFs and archive files that appear trustworthy to bypass conventional security controls.

Attackers contact targets on LinkedIn and send self-extracting archives disguised as legitimate documents. When a victim opens the archive, a malicious DLL is sideloaded into a trusted PDF reader, launching memory-resident malware that establishes encrypted command-and-control channels.

Using LinkedIn messages increases engagement by exploiting professional trust and bypassing email-focused defences. DLL sideloading allows malicious code to run inside legitimate applications, complicating detection.

The campaign enables credential theft, data exfiltration and lateral movement through in-memory backdoors. Encrypted command-and-control traffic makes containment more difficult.

Organisations using common PDF software or Python tooling face elevated risk. Defenders are advised to strengthen social media phishing awareness, monitor DLL loading behaviour and rotate credentials where compromise is suspected.
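As one illustration of the DLL-load monitoring defenders are advised to perform, the sketch below flags DLLs loaded from user-writable locations rather than system or application directories, a common sideloading tell. The directory lists and function name are hypothetical simplifications; real detection rules live in EDR tooling and must be tuned to each environment.

```python
from pathlib import PureWindowsPath

# Illustrative watchlists -- real allow/deny lists are environment-specific.
SUSPICIOUS_ROOTS = [
    PureWindowsPath(r"C:\Users"),        # user-writable: Downloads, Temp, Desktop
    PureWindowsPath(r"C:\ProgramData"),
]
TRUSTED_ROOTS = [
    PureWindowsPath(r"C:\Windows\System32"),
    PureWindowsPath(r"C:\Program Files"),
    PureWindowsPath(r"C:\Program Files (x86)"),
]

def is_suspicious_dll_load(dll_path: str) -> bool:
    """Flag a DLL loaded from a user-writable location rather than a
    system or application directory -- a simple sideloading heuristic."""
    p = PureWindowsPath(dll_path)
    if any(p.is_relative_to(root) for root in TRUSTED_ROOTS):
        return False
    return any(p.is_relative_to(root) for root in SUSPICIOUS_ROOTS)

print(is_suspicious_dll_load(r"C:\Users\alice\Downloads\msvcp140.dll"))  # True
print(is_suspicious_dll_load(r"C:\Windows\System32\kernel32.dll"))       # False
```

A heuristic like this only narrows the haystack; in the campaign described above, the sideloaded DLL runs inside a trusted PDF reader, which is exactly why load-path anomalies, not the host process, are the useful signal.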

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Analysis reveals Grok generated 3 million sexualised images

A new analysis found that Grok generated an estimated three million sexualised images in 11 days, including around 23,000 that appeared to depict children. The findings raise serious concerns over safeguards, content moderation, and platform responsibility.

The surge followed the late-December launch of Grok’s one-click image editing feature, which quickly gained traction among users. Restrictions were later introduced, including paid access limits and technical measures to prevent the ‘undressing’ of images.

Researchers based their estimates on a random sample of 20,000 images, extrapolating the sample’s proportions to the more than 4.6 million images generated during the study period. Automated tools and manual review identified sexualised content and confirmed cases involving individuals appearing under 18.
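The extrapolation behind these estimates is simple proportional scaling, sketched below. The sample-level counts are hypothetical, chosen only to be consistent with the reported totals; the researchers’ actual sample counts were not published in this summary.

```python
# Back-of-envelope extrapolation using figures from the article.
TOTAL_IMAGES = 4_600_000   # images generated during the study period
SAMPLE_SIZE = 20_000       # randomly sampled images that were reviewed

# Hypothetical sample counts, chosen to match the reported estimates
sexualised_in_sample = 13_043
minors_in_sample = 100

def extrapolate(sample_count: int) -> int:
    """Scale a count observed in the sample up to the full population."""
    return round(sample_count * TOTAL_IMAGES / SAMPLE_SIZE)

print(extrapolate(sexualised_in_sample))  # 2999890, i.e. roughly 3 million
print(extrapolate(minors_in_sample))      # 23000
```

The headline figures therefore carry the usual sampling uncertainty: each extrapolated total inherits the margin of error of a 20,000-image random sample.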

Campaigners have warned that the findings expose significant gaps in AI safety controls, particularly in protecting children. Calls are growing for stricter oversight, stronger accountability, and more robust safeguards before large-scale AI image deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Japan arrests suspect over AI deepfake pornography

Police in Japan have arrested a man accused of creating and selling non-consensual deepfake pornography using AI tools. The Tokyo Metropolitan Police Department said thousands of manipulated images of female celebrities were distributed through paid websites.

Investigators in Japan allege the suspect generated hundreds of thousands of images over two years using freely available generative AI software. Authorities say the content was promoted on social media before being sold via subscription platforms.

The arrest follows earlier cases in Japan and reflects growing concern among police worldwide. In South Korea, law enforcement has reported hundreds of arrests linked to deepfake sexual crimes, while cases have also emerged in the UK.

European agencies, including Europol, have also coordinated arrests tied to AI-generated abuse material. Law enforcement bodies say the spread of accessible AI tools is forcing rapid changes in forensic investigation and in the handling of digital evidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Stanford and Swiss institutes unite on open AI models

Stanford University, ETH Zurich, and EPFL have launched a transatlantic partnership to develop open-source AI models prioritising societal values over commercial interests.

The partnership was formalised through a memorandum of understanding signed during the World Economic Forum meeting in Davos.

The agreement establishes long-term cooperation in AI research, education, and innovation, with a focus on large-scale multimodal models. The initiative aims to strengthen academia’s influence over global AI by promoting transparency, accountability, and inclusive access.

Joint projects will develop open datasets, evaluation benchmarks, and responsible deployment frameworks, alongside researcher exchanges and workshops. The effort aims to embed human-centred principles into technical progress while supporting interdisciplinary discovery.

Academic leaders said the alliance reinforces open science and cultural diversity amid growing corporate influence over foundation models. The collaboration positions universities as central drivers of ethical, trustworthy, and socially grounded AI development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google adds Personal Intelligence to AI Search

Google has expanded AI Search with Personal Intelligence, enabling more personalised responses using Gmail and Google Photos data. The feature aims to combine global information with individual context to deliver search results tailored to each user.

Eligible Google AI Pro and AI Ultra subscribers can opt in to securely connect their Gmail and Photos accounts, allowing Search to draw on personal preferences, travel plans, purchases, and memories.

The system uses contextual insights to generate recommendations that reflect users’ habits, interests, and upcoming activities.

Personal Intelligence enhances shopping, travel planning, and lifestyle discovery by anticipating needs and offering customised suggestions. Privacy controls remain central, with users able to manage data connections and turn off personal context at any time.

The feature is launching as an experimental Labs release for English-language users in the United States, with broader availability expected following testing. Google said ongoing feedback will guide refinements as the system continues to evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

TikTok restructures operations for US market

TikTok has finalised a deal allowing the app to continue operating in America by separating its US business from its global operations. The agreement follows years of political pressure in the US over national security concerns.

Under the arrangement, a new entity will manage TikTok’s US operations, with user data and algorithms handled inside the US. The recommendation algorithm has been licensed and will now be trained only on US user data to meet American regulatory requirements.

Ownership of TikTok’s US business is shared among American and international investors, while China-based ByteDance retains a minority stake. Oracle will oversee data security and cloud infrastructure for users in the US.

Analysts say the changes could alter how the app functions for the roughly 200 million users in the US. Questions remain over whether a US-trained algorithm will perform as effectively as the global version.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU cyber rules target global tech dependence

The European Union has proposed new cybersecurity rules aimed at reducing reliance on high-risk technology suppliers, particularly from China. Policymakers argue that existing voluntary measures have failed to curb dependence on vendors such as Huawei and ZTE.

The proposal would introduce binding obligations for telecom operators across the European Union to phase out Chinese equipment. At the same time, officials have warned that reliance on US cloud and satellite services also poses security risks for Europe.

Despite increased funding and expanded certification plans, divisions remain among member states. Countries including Germany and France support stricter sovereignty rules, while others favour continued partnerships with US technology firms.

Analysts say the lack of consensus could weaken the impact of the reforms. Without clear enforcement and investment in European alternatives, Europe may struggle to reduce dependence on both China and the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI ads in ChatGPT signal a shift in conversational advertising

OpenAI plans to introduce advertising within ChatGPT for logged-in adult users, marking a structural shift in how brands engage audiences through conversational interfaces.

Ads would be clearly labelled and positioned alongside responses, aiming to replace interruption-driven formats with context-aware brand suggestions delivered during moments of active user intent.

Industry executives describe conversational AI advertising as a shift from exposure to earned presence, in which brands must provide clarity or utility to justify inclusion.

Experts warn that trust remains fragile, as AI recommendations carry the weight of personal consultation, and undisclosed commercial influence could prompt rapid user disengagement instead of passive ad avoidance.

Regulators and marketers alike highlight risks linked to dark patterns, algorithmic framing and subtle manipulation within AI-mediated conversations.

As conversational systems begin to shape discovery and decision-making, media planning is expected to shift toward intent-led engagement, authority-building, and transparency, reshaping digital advertising economics beyond search rankings and impression-based buying.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The House of Lords backs social media ban for under-16s

The House of Lords, the upper house of the UK Parliament, has voted in favour of banning under-16s from social media platforms, backing an amendment to the government’s schools bill by 261 votes to 150. The proposal would require ministers to define restricted platforms and enforce robust age verification within a year.

Political momentum for tighter youth protections has grown after Australia’s similar move, with cross-party support emerging at Westminster. More than 60 Labour MPs have joined Conservatives in urging a UK ban, increasing pressure ahead of a Commons vote.

Supporters argue that excessive social media use contributes to declining mental health, online radicalisation, and classroom disruption. Critics warn that a blanket ban could push teenagers toward less regulated platforms and limit positive benefits, urging more vigorous enforcement of existing safety rules.

The government has rejected the amendment and launched a three-month consultation on age checks, curfews, and curbing compulsive online behaviour. Ministers maintain that further evidence is needed before introducing new legal restrictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!