Google flags adaptive malware that rewrites itself with AI

Hackers are experimenting with malware that taps large language models to morph in real time, according to Google’s Threat Intelligence Group. An experimental family dubbed PROMPTFLUX can rewrite and obfuscate its own code as it executes, aiming to sidestep static, signature-based detection.

PROMPTFLUX interacts with Gemini’s API to request on-demand functions and ‘just-in-time’ evasion techniques, rather than hard-coding behaviours. GTIG describes the approach as a step toward more adaptive, partially autonomous malware that dynamically generates scripts and changes its footprint.

Investigators say the current samples appear to be in development or testing, with incomplete features and limited Gemini API access. Google says it has disabled associated assets and has not observed a successful compromise, yet warns that financially motivated actors are exploring such tooling.

Researchers point to a maturing underground market for illicit AI utilities that lowers barriers for less-skilled offenders. State-linked operators in North Korea, Iran, and China are reportedly experimenting with AI to enhance reconnaissance, influence, and intrusion workflows.

Defenders are turning to AI, using security frameworks and agents like ‘Big Sleep’ to find flaws. Teams should expect AI-assisted obfuscation, emphasise behaviour-based detection, watch model-API abuse, and lock down developer and automation credentials.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Researchers urge governance after LLMs display source-driven bias

Large language models (LLMs) are increasingly used to grade, hire, and moderate text. UZH research shows that evaluations shift when participants are told who wrote identical text, revealing source bias. Agreement stayed high only when authorship was hidden.

When told a human or another AI wrote it, agreement fell, and biases surfaced. The strongest was anti-Chinese across all models, including a model from China, with sharp drops even for well-reasoned arguments.

AI models also preferred ‘human-written’ over ‘AI-written’, showing scepticism toward machine-authored text. Such identity-triggered bias risks unfair outcomes in moderation, reviewing, hiring, and newsroom workflows.

Researchers recommend identity-blind prompts, A/B checks with and without source cues, structured rubrics focused on evidence and logic, and human oversight for consequential decisions.

They call for governance standards: disclose evaluation settings, test for bias across demographics and nationalities, and set guardrails before sensitive deployments. Transparency on prompts, model versions, and calibration is essential.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

University of Athens partners with Google to boost AI education

The National and Kapodistrian University of Athens has announced a new partnership with Google to enhance university-level education in AI. The collaboration grants all students free 12-month access to Google’s AI Pro programme, a suite of advanced learning and research tools.

Through the initiative, students can use Gemini 2.5 Pro, Google’s latest AI model, along with Deep Research and NotebookLM for academic exploration and study organisation. The offer also includes 2 TB of cloud storage and access to Veo 3 for video creation and Jules for coding support.

The programme aims to expand digital literacy and increase hands-on engagement with generative and research-driven AI tools. By integrating these technologies into everyday study, the university hopes to cultivate a new generation of AI-experienced graduates.

University officials view the collaboration as a milestone in Greek AI-driven education, following recent national initiatives to introduce AI programmes in schools and healthcare. The partnership marks a significant step in aligning higher education with the global digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Denmark’s new chat control plan raises fresh privacy concerns

Denmark has proposed an updated version of the EU’s controversial ‘chat control’ regulation, shifting from mandatory to voluntary scanning of private messages. Former MEP Patrick Breyer has warned, however, that the revision still threatens Europeans’ right to private communication.

Under the new plan, messaging providers could choose to scan chats for illegal material, but without a clear requirement for court orders. Breyer argued that this sidesteps the European Parliament’s position, which insists on judicial authorisation before any access to communications.

He also criticised the proposal for banning under-16s from using messaging apps like WhatsApp and Telegram, claiming such restrictions would prove ineffective and easily bypassed. In addition, the plan would effectively outlaw anonymous communication, requiring users to verify their identities through IDs.

Privacy advocates say the Danish proposal could set a dangerous precedent by eroding fundamental digital rights. Civil society groups have urged EU lawmakers to reject measures that compromise secure, anonymous communication essential for journalists and whistleblowers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Inside OpenAI’s battle to protect AI from prompt injection attacks

OpenAI has identified prompt injection as one of the most pressing new challenges in AI security. As AI systems gain the ability to browse the web, handle personal data and act on users’ behalf, they become targets for malicious instructions hidden within online content.

These attacks, known as prompt injections, can trick AI models into taking unintended actions or revealing sensitive information.

To counter the issue, OpenAI has adopted a multi-layered defence strategy that combines safety training, automated monitoring and system-level security protections. The company’s research into ‘Instruction Hierarchy’ aims to help models distinguish between trusted and untrusted commands.

Continuous red-teaming and automated detection systems further strengthen resilience against evolving threats.

OpenAI also provides users with greater control, featuring built-in safeguards such as approval prompts before sensitive actions, sandboxing for code execution, and ‘Watch Mode’ when operating on financial or confidential sites.

These measures ensure that users remain aware of what actions AI agents perform on their behalf.

While prompt injection remains a developing risk, OpenAI expects adversaries to devote significant resources to exploiting it. The company continues to invest in research and transparency, aiming to make AI systems as secure and trustworthy as a cautious, well-informed human colleague.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cars.com launches Carson AI to transform online car shopping

The US tech company, Cars.com, has unveiled Carson, a multilingual AI search engine designed to revolutionise the online car shopping experience.

Instead of relying on complex filters, Carson interprets natural language queries such as ‘a reliable car for a family of five’ or ‘a used truck under $30,000’, instantly producing targeted results tailored to each shopper’s needs.

A new AI feature that already powers around 15% of all web and mobile searches on Cars.com, with early data showing that users engaging with Carson return to the site twice as often and save three times more vehicles.

They also generate twice as many leads and convert 30% more frequently from search to vehicle detail pages.

Cars.com aims to simplify decision-making for its 25 million monthly shoppers, 70% of whom begin their search without knowing which brand or model to choose.

Carson helps these undecided users explore lifestyle, emotional and practical preferences while guiding them through Cars.com’s award-winning listings.

Further updates will introduce AI-generated summaries, personalised comparisons and search refinement suggestions.

Cars.com’s parent company, Cars Commerce, plans to expand its use of AI-driven tools to strengthen its role at the forefront of automotive retail innovation, offering a more efficient and intelligent marketplace for both consumers and dealerships.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta launches AI app in Europe with new Vibes video feed

Meta has launched its new AI app across Europe, featuring Vibes, an interactive feed dedicated to creating and sharing short AI-generated videos. The platform brings together media generation, remixing and collaboration tools designed to encourage creativity and social expression.

Vibes first debuted in the US, where Meta reported a tenfold rise in AI media creation since launch. European users can now use text prompts to generate, edit and animate videos, or remix existing clips by adding music, visuals and personalised styles.

The app also serves as a central hub for users’ Meta AI assistants and connected AI glasses. People can chat with the assistant, receive creative ideas, or enhance their photos and animations using advanced AI-powered editing tools integrated within the same experience.

Meta said the rollout marks a new stage in its effort to make AI-driven creativity more accessible. The company plans to expand the app’s capabilities further, promising additional features that combine entertainment, collaboration and real-time content generation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UNESCO launches Beruniy Prize to promote ethical AI innovation

UNESCO and the Uzbekistan Arts and Culture Development Foundation have introduced the UNESCO–Uzbekistan Beruniy Prize for Scientific Research on the Ethics of Artificial Intelligence.

The award, presented at the 43rd General Conference in Samarkand, recognises global leaders whose research and policy efforts promote responsible and human-centred AI innovation. Each laureate received $30,000, a Beruniy medal, and a certificate.

Professor Virgilio Almeida was honoured for advancing ethical, inclusive AI and democratic digital governance. Human rights expert Susan Perry and computer scientist Claudia Roda were recognised for promoting youth-centred AI ethics that protect privacy, inclusion, and fairness.

The Institute for AI International Governance at Tsinghua University in China also received the award for promoting international cooperation and responsible AI policy.

UNESCO’s Audrey Azoulay and Gayane Uemerova emphasised that ethics should guide technology to serve humanity, not restrict it. Laureates echoed the need for shared moral responsibility and global cooperation in shaping AI’s future.

The new Beruniy Prize reaffirms that ethics form the cornerstone of progress. By celebrating innovation grounded in empathy, inclusivity, and accountability, UNESCO aims to ensure AI remains a force for peace, justice, and sustainable development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI unveils Teen Safety Blueprint for responsible AI

OpenAI has launched the Teen Safety Blueprint to guide responsible AI use for young people. The roadmap guides policymakers and developers on age-appropriate design, safeguards, and research to protect teen well-being and promote opportunities.

The company is implementing these principles across its products without waiting for formal regulation. Recent measures include stronger safeguards, parental controls, and an age-prediction system to customise AI experiences for under-18 users.

OpenAI emphasises that protecting teens is an ongoing effort. Collaboration with parents, experts, and young people will help improve AI safety continuously while shaping how technology can support teens responsibly over the long term.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UNESCO adopts first global ethical framework for neurotechnology

UNESCO has approved the world’s first global framework on the ethics of neurotechnology, setting new standards to ensure that advances in brain science respect human rights and dignity. The Recommendation, adopted by member states and entering into force on 12 November, establishes safeguards to ensure neurotechnological innovation benefits those in need without compromising mental privacy.

Launched in 2019 under Director-General Audrey Azoulay, the initiative builds on UNESCO’s earlier work on AI ethics. Azoulay described neurotechnology as a ‘new frontier of human progress’ that demands strict ethical boundaries to protect the inviolability of the human mind. The framework reflects UNESCO’s belief that technology should serve humanity responsibly and inclusively.

Neurotechnology, which enables direct interaction with the nervous system, is rapidly expanding, with investment in the sector rising by 700% between 2014 and 2021. While medical uses, such as deep brain stimulation and brain–computer interfaces, offer hope for people with Parkinson’s disease or disabilities, consumer devices that read neural data pose serious privacy concerns. Many users unknowingly share sensitive information about their emotions or mental states through everyday gadgets.

The Recommendation calls on governments to regulate these technologies, ensure they remain accessible, and protect vulnerable groups, especially children and workers. It urges bans on non-therapeutic use in young people and warns against monitoring employees’ mental activity or productivity without explicit consent.

UNESCO also stresses the need for transparency and better regulation of products that may alter behaviour or foster addiction.

Developed after consultations with over 8,000 contributors from academia, industry, and civil society, the framework was drafted by an international group of experts led by scientists Hervé Chneiweiss and Nita Farahany. UNESCO will now help countries translate the principles into national laws, as it has done with its 2021 AI ethics framework.

The Recommendation’s adoption, finalised at the General Conference in Samarkand, marks a new milestone in the global governance of emerging technologies.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!