Google flags adaptive malware that rewrites itself with AI

Hackers are experimenting with malware that taps large language models to morph in real time, according to Google’s Threat Intelligence Group (GTIG). An experimental family dubbed PROMPTFLUX can rewrite and obfuscate its own code as it executes, aiming to sidestep static, signature-based detection.

PROMPTFLUX interacts with Gemini’s API to request on-demand functions and ‘just-in-time’ evasion techniques, rather than hard-coding behaviours. GTIG describes the approach as a step toward more adaptive, partially autonomous malware that dynamically generates scripts and changes its footprint.

Investigators say the current samples appear to be in development or testing, with incomplete features and limited Gemini API access. Google says it has disabled associated assets and has not observed a successful compromise, yet warns that financially motivated actors are exploring such tooling.

Researchers point to a maturing underground market for illicit AI utilities that lowers barriers for less-skilled offenders. State-linked operators in North Korea, Iran, and China are reportedly experimenting with AI to enhance reconnaissance, influence, and intrusion workflows.

Defenders are turning to AI as well, using security frameworks and agents such as ‘Big Sleep’ to find flaws. Teams should expect AI-assisted obfuscation, emphasise behaviour-based detection, monitor for model-API abuse, and lock down developer and automation credentials.
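One way to act on the ‘monitor for model-API abuse’ advice, offered as a minimal sketch rather than anything from GTIG’s report, is to flag egress to known LLM API endpoints from machines or service accounts with no sanctioned reason to call them. The log format, endpoint list, and allowlist below are assumptions for illustration.

```python
# Minimal, hypothetical sketch of behaviour-based monitoring for model-API
# abuse. Assumptions (not from GTIG's report): proxy logs are tab-separated
# as "timestamp<TAB>source<TAB>destination_host", and the allowlist is an
# inventory of sanctioned AI integrations.

LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "api.anthropic.com",
}

SANCTIONED_SOURCES = {"chatbot-prod", "ci-runner-01"}  # hypothetical names

def flag_unsanctioned_llm_calls(log_lines):
    """Yield (timestamp, source, host) for LLM API calls from unsanctioned sources."""
    for line in log_lines:
        timestamp, source, host = line.rstrip("\n").split("\t")
        if host in LLM_API_HOSTS and source not in SANCTIONED_SOURCES:
            yield timestamp, source, host

if __name__ == "__main__":
    sample = [
        "2025-11-05T09:14:02\tchatbot-prod\tapi.openai.com",
        "2025-11-05T09:14:07\tworkstation-33\tgenerativelanguage.googleapis.com",
    ]
    for ts, src, host in flag_unsanctioned_llm_calls(sample):
        print(f"ALERT {ts}: {src} -> {host} (unsanctioned model-API call)")
```

The same idea extends to API-key telemetry: alerting on model-API keys used from unexpected hosts complements behaviour-based detection where signatures fail.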

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Courts signal limits on AI in legal proceedings

A High Court judge has warned that a solicitor who pushes an expert witness to accept an AI-generated draft opinion breaches their duty. Mr Justice Waksman called such conduct a gross breach, citing an instance reported in the latest expert witness survey, and noted that 14% of experts said they would accept such terms, a figure he described as unacceptable.

Updated guidance clarifies what limited judicial use of AI is permissible. Judges may use a private ChatGPT 365 account for summaries, with prompts kept confidential. There is no duty to disclose such use, but the judgment must remain the judge’s own.

Waksman cautioned against delegating legal research or analysis to AI, noting that hallucinated authorities and fake citations have already appeared in proceedings. Experts must not let AI answer the questions they are retained to decide.

Survey findings show widening use of AI for drafting and summaries. Waksman drew a bright line between such back-office aids and core professional duties: convenience cannot trump independence, accuracy, and accountability.

For practitioners, two rules follow. Solicitors must not foist AI-drafted opinions on experts, and experts should refuse them. Within the courts, limited, non-determinative use of AI may assist, but the outcome must remain a human judgment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Oracle and Ci4CC join forces to advance AI in cancer research

Oracle Health and Life Sciences has announced a strategic collaboration with the Cancer Center Informatics Society (Ci4CC) to accelerate AI innovation in oncology. The partnership unites Oracle’s healthcare technology with Ci4CC’s national network of cancer research institutions.

The two organisations plan to co-develop an electronic health record system tailored to oncology, integrating clinical and genomic data for more effective personalised medicine. They also aim to explore AI-driven drug development to enhance research and patient outcomes.

Oracle executives said the collaboration represents an opportunity to use advanced AI applications to transform cancer research. The Ci4CC President highlighted the importance of collective innovation, noting that progress in oncology relies on shared data and cross-institution collaboration.

The agreement, announced at Ci4CC’s annual symposium in Miami Beach in the US, remains non-binding but signals growing momentum in AI-driven precision medicine. Both organisations see the initiative as a step towards turning medical data into actionable insights that could redefine oncology care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers urge governance after LLMs display source-driven bias

Large language models (LLMs) are increasingly used to grade, hire, and moderate text. Research from the University of Zurich (UZH) shows that a model’s evaluation of identical text shifts once it is told who wrote it, revealing source bias. Agreement stayed high only when authorship was hidden.

When told a human or another AI wrote the text, agreement fell and biases surfaced. The strongest was an anti-Chinese bias, present across all models tested, including a model developed in China, with agreement dropping sharply even for well-reasoned arguments.

The models also preferred text labelled ‘human-written’ over ‘AI-written’, showing scepticism toward machine-authored content. Such identity-triggered bias risks unfair outcomes in moderation, reviewing, hiring, and newsroom workflows.

Researchers recommend identity-blind prompts, A/B checks with and without source cues, structured rubrics focused on evidence and logic, and human oversight for consequential decisions.
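As a rough illustration of that A/B check, the sketch below scores identical text with and without a source cue and reports the shift per label. The rate_agreement callable, prompt wording, and 0–100 scale are assumptions for illustration, not the UZH team’s protocol.

```python
# Illustrative A/B check for source-cue bias: score identical text with and
# without an authorship label, then compare. `rate_agreement` is a placeholder
# for the caller's own model call; prompts and the 0-100 scale are assumptions.

from statistics import mean
from typing import Callable

def source_bias_gaps(
    text: str,
    labels: list[str],
    rate_agreement: Callable[[str], float],
    runs: int = 5,
) -> dict[str, float]:
    """Return the mean score shift per source label versus a source-blind baseline."""
    blind = f"Rate your agreement with this argument from 0 to 100:\n\n{text}"
    baseline = mean(rate_agreement(blind) for _ in range(runs))
    gaps = {}
    for label in labels:
        cued = (
            f"The following argument was written by {label}. "
            f"Rate your agreement with it from 0 to 100:\n\n{text}"
        )
        gaps[label] = mean(rate_agreement(cued) for _ in range(runs)) - baseline
    return gaps  # large negative gaps flag identity-triggered penalties

# Hypothetical usage, with `my_scorer` wrapping the model under audit:
#   gaps = source_bias_gaps(essay, ["a human expert", "an AI system"], my_scorer)
```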

They call for governance standards: disclose evaluation settings, test for bias across demographics and nationalities, and set guardrails before sensitive deployments. Transparency on prompts, model versions, and calibration is essential.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

‘Wooing and suing’ defines News Corp’s AI strategy

News Corp chief executive Robert Thomson warned AI companies against using unlicensed publisher content, calling recipients of ‘stolen goods’ fair game for pursuit. He said ‘wooing and suing’ would proceed in parallel, with more licensing deals expected after the OpenAI pact.

Thomson argued that high-quality data must be paid for and that ingesting material without permission undermines incentives to produce journalism. He insisted that ‘content crime does not and will not pay,’ signalling stricter enforcement ahead.

While criticising bad actors, he praised partners that recognise publisher IP and are negotiating usage rights. The company is positioning itself to monetise archives and live reporting through structured licences.

He also pointed to a major author settlement with another AI firm as a watershed for compensation over past training uses. The message: legal and commercial paths are both accelerating.

Against this backdrop, News Corp said AI-related revenues are gaining traction alongside digital subscriptions and B2B data services. Further licensing announcements are likely in the coming months.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Suleyman sets limits for safer superintelligence at Microsoft

Microsoft AI says its work toward superintelligence will be explicitly ‘humanist’, designed to keep people at the top of the food chain. In a new blog post, Microsoft AI head Mustafa Suleyman announced a team focused on building systems that are subordinate, controllable, and designed to serve human interests.

Suleyman says superintelligence should not be unbounded; models will be calibrated, contextualised, and limited to align with human goals. He joined Microsoft last year as CEO of Microsoft AI, which has begun rolling out its first in-house models for text, voice, and images.

The move lands amid intensifying competition in advanced AI. Under a revised agreement with OpenAI, Microsoft can now independently pursue AGI or partner elsewhere. Suleyman says Microsoft AI will reject race narratives while acknowledging the need to advance capability and governance together.

Microsoft’s initial use cases emphasise an AI companion to help people learn, act, and feel supported; healthcare assistance to augment clinicians; and tools for scientific discovery in areas such as clean energy. The intent is to combine productivity gains with stronger safety controls from the outset.

‘Humans matter more than AI,’ Suleyman writes, casting ‘humanist superintelligence’ as technology that stays on humanity’s team. He frames the programme as a guard against Pandora’s box risks by binding robust systems to explicit constraints, oversight, and application contexts.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ACCC lawsuit triggers Microsoft’s rethink and apology on Copilot subscription communications

Microsoft apologised after Australia’s regulator said it steered Microsoft 365 users to pricier Copilot plans while downplaying cheaper Classic tiers. The apology follows price-rise emails across the Asia-Pacific region and confusion over increases to Personal and Family plans.

ACCC officials said communications may have denied customers informed choices by omitting equivalent non-AI plans. Microsoft acknowledged it could have been clearer and accepted that Classic alternatives might have saved some subscribers money under the October 2024 changes.

Redmond is offering affected customers refunds for the difference between Copilot and Classic tiers and has begun contacting subscribers in Australia and New Zealand. The company also re-sent its apology email after discovering a broken link to the Classic plans page.

Questions remain over whether similar remediation will extend to Malaysia, Singapore, Taiwan, and Thailand, which also saw price hikes earlier this year. Consumer groups are watching for consistent remedies and plain-English disclosures across all impacted markets.

Regulators have sharpened scrutiny of dark patterns, bundling, and AI-linked upsells as digital subscriptions proliferate. Clear side-by-side plan comparisons and functional disclosures about AI features are likely to become baseline expectations for compliance and customer trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO launches Beruniy Prize to promote ethical AI innovation

UNESCO and the Uzbekistan Arts and Culture Development Foundation have introduced the UNESCO–Uzbekistan Beruniy Prize for Scientific Research on the Ethics of Artificial Intelligence.

The award, presented at the 43rd General Conference in Samarkand, recognises global leaders whose research and policy efforts promote responsible and human-centred AI innovation. Each laureate received $30,000, a Beruniy medal, and a certificate.

Professor Virgilio Almeida was honoured for advancing ethical, inclusive AI and democratic digital governance. Human rights expert Susan Perry and computer scientist Claudia Roda were recognised for promoting youth-centred AI ethics that protect privacy, inclusion, and fairness.

The Institute for AI International Governance at Tsinghua University in China also received the award for promoting international cooperation and responsible AI policy.

UNESCO’s Audrey Azoulay and Gayane Uemerova emphasised that ethics should guide technology to serve humanity, not restrict it. Laureates echoed the need for shared moral responsibility and global cooperation in shaping AI’s future.

The new Beruniy Prize reaffirms that ethics form the cornerstone of progress. By celebrating innovation grounded in empathy, inclusivity, and accountability, UNESCO aims to ensure AI remains a force for peace, justice, and sustainable development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Jensen Huang of Nvidia rules out China Blackwell talks for now

Nvidia CEO Jensen Huang said the company is not in active discussions to sell Blackwell-family AI chips to Chinese firms and has no current plans to ship them. He also clarified remarks about the US-China AI race, saying he intended to acknowledge China’s technical strength rather than predict an outcome.

Huang spoke in Taiwan ahead of meetings with TSMC, as Nvidia expands partnerships and pitches its platforms across regions and industries. The company has added roughly a trillion dollars in value this year and remains the world’s most valuable business despite recent share volatility.

US controls still bar sales of Nvidia’s most advanced data-centre AI chips into China, and a recent bilateral accord did not change that. Officials have indicated approvals for Blackwell remain off the table, keeping a potentially large market out of reach for now.

Analysts say uncertainty around China’s access to the technology feeds broader questions about the durability of hyperscale AI spending. Rivals, including AMD and Broadcom, are racing to win share as customers weigh long-term returns on data-centre buildouts.

Huang is promoting Nvidia’s end-to-end stack to reassure buyers that massive investments will yield productivity gains across sectors. He said he hopes policy environments eventually allow Nvidia to serve China again, but reiterated there are no active talks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO adopts first global ethical framework for neurotechnology

UNESCO has approved the world’s first global framework on the ethics of neurotechnology, setting new standards to ensure that advances in brain science respect human rights and dignity. The Recommendation, adopted by member states and entering into force on 12 November, establishes safeguards to ensure neurotechnological innovation benefits those in need without compromising mental privacy.

Launched in 2019 under Director-General Audrey Azoulay, the initiative builds on UNESCO’s earlier work on AI ethics. Azoulay described neurotechnology as a ‘new frontier of human progress’ that demands strict ethical boundaries to protect the inviolability of the human mind. The framework reflects UNESCO’s belief that technology should serve humanity responsibly and inclusively.

Neurotechnology, which enables direct interaction with the nervous system, is rapidly expanding, with investment in the sector rising by 700% between 2014 and 2021. While medical uses, such as deep brain stimulation and brain–computer interfaces, offer hope for people with Parkinson’s disease or disabilities, consumer devices that read neural data pose serious privacy concerns. Many users unknowingly share sensitive information about their emotions or mental states through everyday gadgets.

The Recommendation calls on governments to regulate these technologies, ensure they remain accessible, and protect vulnerable groups, especially children and workers. It urges bans on non-therapeutic use in young people and warns against monitoring employees’ mental activity or productivity without explicit consent.

UNESCO also stresses the need for transparency and better regulation of products that may alter behaviour or foster addiction.

Developed after consultations with over 8,000 contributors from academia, industry, and civil society, the framework was drafted by an international group of experts led by scientists Hervé Chneiweiss and Nita Farahany. UNESCO will now help countries translate the principles into national laws, as it has done with its 2021 AI ethics framework.

The Recommendation’s adoption, finalised at the General Conference in Samarkand, marks a new milestone in the global governance of emerging technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!