Brussels leak signals GDPR and AI Act adjustments

The European Commission is preparing a Digital Package on simplification for 19 November. A leaked draft outlines instruments covering GDPR, ePrivacy, Data Act and AI Act reforms.

Plans include a single breach portal and a higher reporting threshold. Authorities would receive notifications within 96 hours, with standardised forms and narrower triggers. Controllers could reject or charge for data subject access requests used to pursue disputes.

Cookie rules would shift toward browser-level preference signals respected across services. Aggregated measurement and security uses would not require popups, while GDPR lawful bases expand. News publishers could receive limited exemptions recognising reliance on advertising revenues.

Drafting recognises legitimate interest for training AI models on personal data. Narrow allowances are provided for sensitive data during development, along with EU-wide data protection impact assessment templates. Critics warn proposals dilute safeguards and may soften the AI Act.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Vision AI Companion turns Samsung TVs into conversational AI platforms

Samsung has unveiled the Vision AI Companion, an advanced conversational AI platform designed to transform the television into a connected household hub.

Unlike voice assistants meant for personal devices, the Vision AI Companion operates on the communal screen, enabling families to ask questions, plan activities, and receive visualised, contextual answers through natural dialogue.

Built into Samsung’s 2025 TV lineup, the system integrates an upgraded Bixby and supports multiple large language models, including Microsoft Copilot and Perplexity.

With its multi-AI agent platform, Vision AI Companion allows users to access personalised recommendations, real-time information, and multimedia responses without leaving their current programme.

It supports 10 languages and includes features such as Live Translate, AI Gaming Mode, Generative Wallpaper, and AI Upscaling Pro. The platform runs on One UI Tizen, offering seven years of software upgrades to ensure longevity and security.

By embedding generative AI into televisions, Samsung aims to redefine how households interact with technology, turning the TV into an intelligent companion that informs, entertains, and connects families across languages and experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK strengthens AI safeguards to protect children online

The UK government is introducing landmark legislation to prevent AI from being exploited to generate child sexual abuse material. The new law empowers authorised bodies, such as the Internet Watch Foundation, to test AI models and ensure safeguards prevent misuse.

Reports of AI-generated child abuse imagery have surged, with the IWF recording 426 cases in 2025, more than double the 199 cases reported in 2024. The data also reveals a sharp rise in images depicting infants, increasing from five in 2024 to 92 in 2025.

Officials say the measures will enable experts to identify vulnerabilities within AI systems, making it more difficult for offenders to exploit the technology.

The legislation will also require AI developers to build protections against non-consensual intimate images and extreme content. A group of experts in AI and child safety will be established to oversee secure testing and ensure the well-being of researchers.

Ministers emphasised that child safety must be built into AI systems from the start, not added as an afterthought.

By collaborating with the AI sector and child protection groups, the government aims to make the UK the safest place for children to be online. The approach strikes a balance between innovation and strong protections, thereby reinforcing public trust in AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN calls for safeguards around emerging neuro-technologies

In a recent statement, the UN highlighted the growing field of neuro-technology, which encompasses devices and software that can measure, access, or manipulate the nervous system, as posing new risks to human rights.

The UN highlighted how such technologies could challenge fundamental concepts like ‘mental integrity’, autonomy and personal identity by enabling unprecedented access to brain data.

It warned that without robust regulation, the benefits of neuro-technology may come with costs such as privacy violations, unequal access and intrusive commercial uses.

The concerns align with broader debates about how advanced technologies, such as AI, are reshaping society, ethics, and international governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google flags adaptive malware that rewrites itself with AI

Hackers are experimenting with malware that taps large language models to morph in real time, according to Google’s Threat Intelligence Group. An experimental family dubbed PROMPTFLUX can rewrite and obfuscate its own code as it executes, aiming to sidestep static, signature-based detection.

PROMPTFLUX interacts with Gemini’s API to request on-demand functions and ‘just-in-time’ evasion techniques, rather than hard-coding behaviours. GTIG describes the approach as a step toward more adaptive, partially autonomous malware that dynamically generates scripts and changes its footprint.

Investigators say the current samples appear to be in development or testing, with incomplete features and limited Gemini API access. Google says it has disabled associated assets and has not observed a successful compromise, yet warns that financially motivated actors are exploring such tooling.

Researchers point to a maturing underground market for illicit AI utilities that lowers barriers for less-skilled offenders. State-linked operators in North Korea, Iran, and China are reportedly experimenting with AI to enhance reconnaissance, influence, and intrusion workflows.

Defenders are turning to AI, using security frameworks and agents like ‘Big Sleep’ to find flaws. Teams should expect AI-assisted obfuscation, emphasise behaviour-based detection, watch model-API abuse, and lock down developer and automation credentials.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Central Bank warns of new financial scams in Ireland

The Central Bank of Ireland has launched a new campaign to alert consumers to increasingly sophisticated scams targeting financial services users. Officials warned that scammers are adapting, making caution essential with online offers and investments.

Scammers are now using tactics such as fake comparison websites that appear legitimate but collect personal information for fraudulent products or services. Fraud recovery schemes are also common, promising to recover lost funds for an upfront fee, which often leads to further financial loss.

Advanced techniques include AI-generated social media profiles and ads, or ‘deepfakes’, impersonating public figures to promote fake investment platforms.

Deputy Governor Colm Kincaid warned that scams now offer slightly above-market returns, making them harder to spot. Consumers are encouraged to verify information, use regulated service providers, and seek regulated advice before making financial decisions.

The Central Bank advises using trusted comparison sites, checking ads and investment platforms, ignoring unsolicited recovery offers, and following the SAFE test: Stop, Assess, Factcheck, Expose. Reporting suspected scams to the Central Bank or An Garda Síochána remains crucial to protecting personal finances.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Denmark’s new chat control plan raises fresh privacy concerns

Denmark has proposed an updated version of the EU’s controversial ‘chat control’ regulation, shifting from mandatory to voluntary scanning of private messages. Former MEP Patrick Breyer has warned, however, that the revision still threatens Europeans’ right to private communication.

Under the new plan, messaging providers could choose to scan chats for illegal material, but without a clear requirement for court orders. Breyer argued that this sidesteps the European Parliament’s position, which insists on judicial authorisation before any access to communications.

He also criticised the proposal for banning under-16s from using messaging apps like WhatsApp and Telegram, claiming such restrictions would prove ineffective and easily bypassed. In addition, the plan would effectively outlaw anonymous communication, requiring users to verify their identities through IDs.

Privacy advocates say the Danish proposal could set a dangerous precedent by eroding fundamental digital rights. Civil society groups have urged EU lawmakers to reject measures that compromise secure, anonymous communication essential for journalists and whistleblowers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Inside OpenAI’s battle to protect AI from prompt injection attacks

OpenAI has identified prompt injection as one of the most pressing new challenges in AI security. As AI systems gain the ability to browse the web, handle personal data and act on users’ behalf, they become targets for malicious instructions hidden within online content.

These attacks, known as prompt injections, can trick AI models into taking unintended actions or revealing sensitive information.

To counter the issue, OpenAI has adopted a multi-layered defence strategy that combines safety training, automated monitoring and system-level security protections. The company’s research into ‘Instruction Hierarchy’ aims to help models distinguish between trusted and untrusted commands.

Continuous red-teaming and automated detection systems further strengthen resilience against evolving threats.

OpenAI also provides users with greater control, featuring built-in safeguards such as approval prompts before sensitive actions, sandboxing for code execution, and ‘Watch Mode’ when operating on financial or confidential sites.

These measures ensure that users remain aware of what actions AI agents perform on their behalf.

While prompt injection remains a developing risk, OpenAI expects adversaries to devote significant resources to exploiting it. The company continues to invest in research and transparency, aiming to make AI systems as secure and trustworthy as a cautious, well-informed human colleague.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ACCC lawsuit triggers Microsoft’s rethink and apology on Copilot subscription communications

Microsoft apologised after Australia’s regulator said it steered Microsoft 365 users to pricier Copilot plans while downplaying cheaper Classic tiers. The move follows APAC price-rise emails and confusion over Personal and Family increases.

ACCC officials said communications may have denied customers informed choices by omitting equivalent non-AI plans. Microsoft acknowledged it could have been clearer and accepted that Classic alternatives might have saved some subscribers money under the October 2024 changes.

Redmond is offering affected customers refunds for the difference between Copilot and Classic tiers and has begun contacting subscribers in Australia and New Zealand. The company also re-sent its apology email after discovering a broken link to the Classic plans page.

Questions remain over whether similar remediation will extend to Malaysia, Singapore, Taiwan, and Thailand, which also saw price hikes earlier this year. Consumer groups are watching for consistent remedies and plain-English disclosures across all impacted markets.

Regulators have sharpened scrutiny of dark patterns, bundling, and AI-linked upsells as digital subscriptions proliferate. Clear side-by-side plan comparisons and functional disclosures about AI features are likely to become baseline expectations for compliance and customer trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI unveils Teen Safety Blueprint for responsible AI

OpenAI has launched the Teen Safety Blueprint to guide responsible AI use for young people. The roadmap guides policymakers and developers on age-appropriate design, safeguards, and research to protect teen well-being and promote opportunities.

The company is implementing these principles across its products without waiting for formal regulation. Recent measures include stronger safeguards, parental controls, and an age-prediction system to customise AI experiences for under-18 users.

OpenAI emphasises that protecting teens is an ongoing effort. Collaboration with parents, experts, and young people will help improve AI safety continuously while shaping how technology can support teens responsibly over the long term.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot