Apple brings AI tools to apps and Siri

Apple is rolling out Apple Intelligence, its generative AI platform, across popular apps including Messages, Mail, and Notes. Introduced in late 2024 and expanded in 2025, the platform blends text and image generation, redesigned Siri features, and integrations with ChatGPT.

The AI-enhanced Siri can now edit photos, summarise content, and interact across apps with contextual awareness. Writing tools offer grammar suggestions, tone adjustments, and content generation, while image tools allow for Genmoji creation and prompt-based visuals via the Image Playground app.

Unlike competitors, Apple uses on-device processing for many tasks, prioritising privacy. More complex queries are sent to its Private Cloud Compute system running on Apple Silicon, with a visible fallback if offline. Additional features like Visual Intelligence and Live Translation are expected later in 2025.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India unveils AI incident reporting guidelines for critical infrastructure

India is developing AI incident reporting guidelines for companies, developers, and public institutions to report AI-related issues affecting critical infrastructure sectors such as telecommunications, power, and energy. The government aims to create a centralised database to record and classify incidents like system failures, unexpected results, or harmful impacts caused by AI.

The initiative will help policymakers and stakeholders better understand and manage the risks AI poses to vital services, ensuring transparency and accountability. The proposed guidelines will require detailed reporting of incidents, including the AI application involved, cause, location, affected sector, and severity of harm.

The Telecommunications Engineering Centre (TEC) is spearheading the effort, focusing initially on telecom and digital infrastructure, with plans to extend the standard across other sectors and pitch it globally through the International Telecommunication Union. The framework aligns with international initiatives such as the OECD’s AI Incident Monitor and builds on government recommendations to improve oversight while fostering innovation.

Why does it matter?

The draft emphasises learning from incidents rather than penalising reporters, encouraging self-regulation to avoid excessive compliance burdens. This approach complements India's broader AI safety goals, including the recent launch of the IndiaAI Safety Institute, which works on risk management, ethical frameworks, and detection tools.

AI tools are not enough without basic cybersecurity

At London Tech Week, Darktrace and UK officials warned that many firms are over-relying on AI tools while failing to implement basic cybersecurity practices.

Despite the hype around AI, essential measures like user access control and system segmentation remain missing in many organisations.

Cybercriminals are already exploiting AI to automate phishing and accelerate intrusions in the UK, while outdated infrastructure and short-term thinking leave companies vulnerable.

Boards often struggle to assess AI tools properly, buying into trends rather than addressing real threats.

Experts stressed that AI is not a silver bullet and must be used alongside human expertise and solid security foundations.

Domain-specific AI models, built with transparency and interpretability, are needed to avoid the dangers of overconfidence and misapplication in high-risk areas.

AI must protect dignity, say US bishops

The US Conference of Catholic Bishops has urged Congress to centre AI policy on human dignity and the common good.

Their message outlines moral principles rather than technical guidance, warning against misuse of technology that may erode truth, justice, or the protection of the vulnerable.

The bishops caution against letting AI replace human moral judgement, especially in sensitive areas like family life, work, and warfare. They express concern about AI deepening inequality and harming those already marginalised without strict oversight.

Their call includes demands for greater transparency, regulation of autonomous weapons, and stronger protections for children and workers in the US.

Rooted in Catholic social teaching, the letter frames AI not as a neutral innovation but as a force that must serve people, not displace them.

AI startup faces lawsuit from Disney and Universal

Two of Hollywood’s most powerful studios, Disney and Universal, have launched a copyright infringement lawsuit against the AI firm Midjourney, accusing it of illegally replicating iconic characters.

The studios claim the San Francisco-based company copied their creative works without permission, describing it as a ‘bottomless pit of plagiarism’.

Characters such as Darth Vader, Elsa, and the Minions were cited in the 143-page complaint, which alleges Midjourney used these images to train its AI system and generate similar content.

Disney and Universal argue that the AI firm failed to invest in the creative process, yet profited heavily from the output — reportedly earning US$300 million in paid subscriptions last year.

Despite early attempts by the studios to raise concerns and propose safeguards already adopted by other AI developers, Midjourney allegedly ignored them and pressed ahead with further product releases. The company, which calls itself a small, self-funded team of 11, has declined to comment on the lawsuit directly but insists it has a long future ahead.

Disney’s legal chief, Horacio Gutierrez, stressed the importance of protecting creative works that result from decades of investment. While supporting AI as a tool for innovation, he maintained that ‘piracy is piracy’, regardless of whether humans or machines carry it out.

The studios are seeking damages and a court order to stop the AI firm from continuing its alleged copyright violations.

Wikipedia halts AI summaries test after backlash

Wikipedia has paused a controversial trial of AI-generated article summaries following intense backlash from its community of volunteer editors.

The Wikimedia Foundation had planned a two-week opt-in test for mobile users using summaries produced by Aya, an open-weight AI model developed by Cohere.

However, the reaction from editors was swift and overwhelmingly negative. The discussion page became flooded with objections, with contributors arguing that such summaries risked undermining the site’s reputation for neutrality and accuracy.

Some expressed concerns that inserting AI content would override Wikipedia’s long-standing collaborative approach by effectively installing a single, unverifiable voice atop articles.

Editors warned that AI-generated summaries lacked proper sourcing and could compromise the site’s credibility. Recent AI blunders by other tech giants, including Google’s glue-on-pizza mishap and Apple’s false death alert, were cited as cautionary examples of reputational risk.

For many, the possibility of similar errors appearing on Wikipedia was unacceptable.

Marshall Miller of the Wikimedia Foundation acknowledged the misstep in communication and confirmed the project’s suspension.

While the Foundation remains interested in exploring AI to improve accessibility, it has committed to ensuring any future implementation involves direct participation from the Wikipedia community.

Nvidia announces new AI lab in UK and supercomputing wins in Europe

What began as a company powering 3D games in the 1990s has evolved into the backbone of the global AI revolution. Nvidia, once best known for its Riva TNT2 chips in consumer graphics cards like the Elsa Erazor III, now sits at the centre of scientific computing, defence, and national-scale innovation.

While gaming remains part of its identity—with record revenue of $3.8 billion in Q1 FY2026—it now accounts for less than 9% of Nvidia’s $44.1 billion total revenue. The company’s trajectory reflects its founder Jensen Huang’s ambition to lead beyond the gaming space, targeting AI, supercomputing, and global infrastructure.

Recent announcements reinforce this shift. Huang joined UK Prime Minister Sir Keir Starmer to open London Tech Week, affirming Nvidia’s commitment to launch an AI lab in the UK, as the government commits £1 billion to AI compute by 2030.

Nvidia also revealed its Vera Rubin superchip will power Germany's 'Blue Lion' supercomputer, and its Grace Hopper platform is at the heart of Jupiter — Europe's first exascale AI system, located at the Jülich Supercomputing Centre.

Nvidia’s presence now spans continents and disciplines, from powering national research to driving breakthroughs in climate modelling, quantum computing, and structural biology.

‘AI will supercharge scientific discovery and industrial innovation,’ said Huang. And with systems like Jupiter poised to run a quintillion operations per second, the company’s growth story is far from over.

TechNext launches forecasting system to guide R&D strategy

Global R&D spending now exceeds $2 trillion a year, yet many companies still rely on intuition rather than evidence to shape innovation strategies—often at great cost.

TechNext, co-founded by Anuraag Singh and MIT’s Prof. Christopher L. Magee, aims to change that with a newly patented system that delivers data-driven forecasts for technology performance.

Built on large-scale empirical datasets and proprietary algorithms, the system enables organisations to anticipate which technologies are likely to improve most rapidly.

‘R&D has become one of the fastest-growing expenses for companies, yet most decisions still rely on intuition rather than data,’ said Singh. ‘We have been flying blind.’

The tool has already drawn attention from major stakeholders, including the United States Air Force, multinational firms, VCs, and think tanks.

By quantifying the future of technologies—from autonomous vehicle perception systems to clean energy infrastructure—TechNext promises to help decision-makers avoid expensive dead ends and focus on long-term winners.

UK government backs AI to help teachers and reduce admin

The UK government has unveiled new guidance for schools that promotes the use of AI to reduce teacher workloads and increase face-to-face time with pupils.

The Department for Education (DfE) says AI could take over time-consuming administrative tasks such as lesson planning, report writing, and email drafting—allowing educators to focus more on classroom teaching.

The guidance, aimed at schools and colleges in the UK, highlights how AI can assist with formative assessments like quizzes and low-stakes feedback, while stressing that teachers must verify outputs for accuracy and data safety.

It also recommends using only school-approved tools and limits AI use to tasks that support rather than replace teaching expertise.

Education unions welcomed the move but said investment is needed to make it work. Leaders from the NAHT and ASCL praised AI’s potential to ease pressure on staff and help address recruitment issues, but warned that schools require proper infrastructure and training.

The government has pledged £1 million to support AI tool development for marking and feedback.

Education Secretary Bridget Phillipson said the plan will free teachers to deliver more personalised support, adding: ‘We’re putting cutting-edge AI tools into the hands of our brilliant teachers to enhance how our children learn and develop.’

Sam Altman predicts AI will discover new ideas

In a new blog post titled The Gentle Singularity, OpenAI CEO Sam Altman predicted that AI systems capable of producing ‘novel insights’ may arrive as early as 2026.

While Altman's essay blends optimism with caution, it subtly signals the company's next central ambition — creating AI that goes beyond repeating existing knowledge to generate genuinely original ideas.

Altman’s comments echo a broader industry trend. Researchers are already using OpenAI’s recent o3 and o4-mini models to generate new hypotheses. Competitors like Google, Anthropic and FutureHouse are also shifting their focus towards scientific discovery.

Google’s AlphaEvolve has reportedly devised novel solutions to complex maths problems, while FutureHouse claims to have built AI capable of genuine scientific breakthroughs.

Despite the optimism, experts remain sceptical. Critics argue that AI still struggles to ask meaningful questions, a key ingredient for genuine insight.

Former OpenAI researcher Kenneth Stanley, now leading Lila Sciences, says generating creative hypotheses is a more formidable challenge than agentic behaviour. Whether OpenAI achieves the leap remains uncertain, but Altman’s essay may hint at the company’s next bold step.