Hackers are experimenting with malware that taps large language models to morph in real time, according to Google’s Threat Intelligence Group. An experimental family dubbed PROMPTFLUX can rewrite and obfuscate its own code as it executes, aiming to sidestep static, signature-based detection.
PROMPTFLUX interacts with Gemini’s API to request on-demand functions and ‘just-in-time’ evasion techniques, rather than hard-coding behaviours. GTIG describes the approach as a step toward more adaptive, partially autonomous malware that dynamically generates scripts and changes its footprint.
Researchers point to a maturing underground market for illicit AI utilities that lowers barriers for less-skilled offenders. State-linked operators in North Korea, Iran, and China are reportedly experimenting with AI to enhance reconnaissance, influence, and intrusion workflows.
Defenders are turning to AI, using security frameworks and agents like ‘Big Sleep’ to find flaws. Teams should expect AI-assisted obfuscation, emphasise behaviour-based detection, watch model-API abuse, and lock down developer and automation credentials.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The Central Bank of Ireland has launched a new campaign to alert consumers to increasingly sophisticated scams targeting financial services users. Officials warned that scammers are adapting, making caution essential with online offers and investments.
Scammers are now using tactics such as fake comparison websites that appear legitimate but collect personal information for fraudulent products or services. Fraud recovery schemes are also common, promising to recover lost funds for an upfront fee, which often leads to further financial loss.
Advanced techniques include AI-generated social media profiles and ads, or ‘deepfakes’, impersonating public figures to promote fake investment platforms.
Deputy Governor Colm Kincaid warned that scams now offer slightly above-market returns, making them harder to spot. Consumers are encouraged to verify information, use regulated service providers, and seek regulated advice before making financial decisions.
The Central Bank advises using trusted comparison sites, checking ads and investment platforms, ignoring unsolicited recovery offers, and following the SAFE test: Stop, Assess, Factcheck, Expose. Reporting suspected scams to the Central Bank or An Garda Síochána remains crucial to protecting personal finances.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Denmark has proposed an updated version of the EU’s controversial ‘chat control’ regulation, shifting from mandatory to voluntary scanning of private messages. Former MEP Patrick Breyer has warned, however, that the revision still threatens Europeans’ right to private communication.
Under the new plan, messaging providers could choose to scan chats for illegal material, but without a clear requirement for court orders. Breyer argued that this sidesteps the European Parliament’s position, which insists on judicial authorisation before any access to communications.
He also criticised the proposal for banning under-16s from using messaging apps like WhatsApp and Telegram, claiming such restrictions would prove ineffective and easily bypassed. In addition, the plan would effectively outlaw anonymous communication, requiring users to verify their identities through IDs.
Privacy advocates say the Danish proposal could set a dangerous precedent by eroding fundamental digital rights. Civil society groups have urged EU lawmakers to reject measures that compromise secure, anonymous communication essential for journalists and whistleblowers.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has identified prompt injection as one of the most pressing new challenges in AI security. As AI systems gain the ability to browse the web, handle personal data and act on users’ behalf, they become targets for malicious instructions hidden within online content.
These attacks, known as prompt injections, can trick AI models into taking unintended actions or revealing sensitive information.
To counter the issue, OpenAI has adopted a multi-layered defence strategy that combines safety training, automated monitoring and system-level security protections. The company’s research into ‘Instruction Hierarchy’ aims to help models distinguish between trusted and untrusted commands.
Continuous red-teaming and automated detection systems further strengthen resilience against evolving threats.
OpenAI also provides users with greater control, featuring built-in safeguards such as approval prompts before sensitive actions, sandboxing for code execution, and ‘Watch Mode’ when operating on financial or confidential sites.
These measures ensure that users remain aware of what actions AI agents perform on their behalf.
While prompt injection remains a developing risk, OpenAI expects adversaries to devote significant resources to exploiting it. The company continues to invest in research and transparency, aiming to make AI systems as secure and trustworthy as a cautious, well-informed human colleague.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Microsoft apologised after Australia’s regulator said it steered Microsoft 365 users to pricier Copilot plans while downplaying cheaper Classic tiers. The move follows APAC price-rise emails and confusion over Personal and Family increases.
ACCC officials said communications may have denied customers informed choices by omitting equivalent non-AI plans. Microsoft acknowledged it could have been clearer and accepted that Classic alternatives might have saved some subscribers money under the October 2024 changes.
Redmond is offering affected customers refunds for the difference between Copilot and Classic tiers and has begun contacting subscribers in Australia and New Zealand. The company also re-sent its apology email after discovering a broken link to the Classic plans page.
Questions remain over whether similar remediation will extend to Malaysia, Singapore, Taiwan, and Thailand, which also saw price hikes earlier this year. Consumer groups are watching for consistent remedies and plain-English disclosures across all impacted markets.
Regulators have sharpened scrutiny of dark patterns, bundling, and AI-linked upsells as digital subscriptions proliferate. Clear side-by-side plan comparisons and functional disclosures about AI features are likely to become baseline expectations for compliance and customer trust.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has launched the Teen Safety Blueprint to guide responsible AI use for young people. The roadmap guides policymakers and developers on age-appropriate design, safeguards, and research to protect teen well-being and promote opportunities.
The company is implementing these principles across its products without waiting for formal regulation. Recent measures include stronger safeguards, parental controls, and an age-prediction system to customise AI experiences for under-18 users.
OpenAI emphasises that protecting teens is an ongoing effort. Collaboration with parents, experts, and young people will help improve AI safety continuously while shaping how technology can support teens responsibly over the long term.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Circle has submitted its comments to the US Department of the Treasury, outlining its support for the GENIUS Act and calling for clear, consistent rules to govern payment stablecoin issuers.
The company emphasised that effective rulemaking could create a unified national framework for both domestic and foreign issuers, providing consumers with safer and more transparent financial products.
The firm urged Treasury to adopt a cooperative supervisory approach that promotes uniform compliance and risk management standards across jurisdictions. Circle warned against excessive restrictions that could harm liquidity, cross-border payments, or interoperability.
It also called for closing potential loopholes that might allow unregulated entities to avoid oversight while benefiting from the US dollar’s trust and stability.
Circle proposed safeguards requiring stablecoins to be fully backed, independently audited, and supported by transparent public reports. The firm stressed recognising foreign regimes, applying equal rules to all issuers, and enforcing consistent penalties.
Circle described the GENIUS Act as a chance to strengthen the stability of digital finance in the US. The company believes transparent, fully backed stablecoins and recognised foreign issuers could strengthen US leadership in secure, innovative finance.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
France’s competition authority has fined Doctolib €4.67 million for abusing its dominant position in online medical appointment booking and teleconsultation services. The regulator found that Doctolib used exclusivity clauses and tied selling to restrict competition and strengthen its market control.
Doctolib required healthcare professionals to subscribe to its appointment booking service to use its teleconsultation platform, effectively preventing them from using rival providers. Contracts also included clauses discouraging professionals from signing with competing services.
The French authority also sanctioned Doctolib for its 2018 acquisition of MonDocteur, describing it as a strategy to eliminate its main competitor. Internal documents revealed that the merger aimed to remove MonDocteur’s product from the market and reduce pricing pressure.
The decision marks the first application of the EU’s Towercast precedent to penalise a below-threshold merger as an abuse of dominance. Doctolib has been ordered to publish the ruling summary in Le Quotidien du Médecin and online.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Coca-Cola has released an improved AI-generated Christmas commercial after last year’s debut campaign drew criticism for its unsettling visuals.
The latest ‘Holidays Are Coming’ ads, developed in part by San Francisco-based Silverside, showcase more natural animation and a wider range of festive creatures, instead of the overly lifelike characters that previously unsettled audiences.
The new version avoids the ‘uncanny valley’ effect that plagued 2024’s ads. The use of generative AI by Coca-Cola reflects a wider advertising trend focused on speed and cost efficiency, even as creative professionals warn about its potential impact on traditional jobs.
Despite the efficiency gains, AI-assisted advertising remains labour-intensive. Teams of digital artists refine the content frame by frame to ensure realistic and emotionally engaging visuals.
Industry data show that 30% of commercials and online videos in 2025 were created or enhanced using generative AI, compared with 22% in 2023.
Coca-Cola’s move follows similar initiatives by major firms, including Google’s first fully AI-generated ad spot launched last month, signalling that generative AI is now becoming a mainstream creative tool across global marketing.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
New recommendations have been published by OpenAI for managing rapid advances in AI, stressing the need for shared safety standards, public accountability, and resilience frameworks.
The company warned that while AI systems are increasingly capable of solving complex problems and accelerating discovery, they also pose significant risks that must be addressed collaboratively.
According to OpenAI, the next few years could bring systems capable of discoveries once thought centuries away.
The firm expects AI to transform health, materials science, drug development and education, while acknowledging that economic transitions may be disruptive and could require a rethinking of social contracts.
To ensure safe development, OpenAI proposed shared safety principles among frontier labs, new public oversight mechanisms proportional to AI capabilities, and the creation of a resilience ecosystem similar to cybersecurity.
It also called for regular reporting on AI’s societal impact to guide evidence-based policymaking.
OpenAI reiterated that the goal should be to empower individuals by making advanced AI broadly accessible, within limits defined by society, and to treat access to AI as a foundational public utility in the years ahead.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!