Microsoft gives Notepad AI writing powers

Microsoft has introduced a significant update to Notepad, version 11.2504.46.0, unveiling a new AI-powered ‘Write’ feature for Windows 11 users.

The feature, now available to Copilot Plus PC users in the Canary and Dev Insider channels, allows users to generate content simply by entering a prompt. Generated text can be inserted at a chosen point in the document or produced from content the user has already selected.

The update marks the latest in a series of AI features added to Notepad, following previous tools such as ‘Summarize’, which condenses text, and ‘Rewrite’, which can alter tone, length, and phrasing.

Access to ‘Write’ requires users to be signed into their Microsoft accounts, and it will use the same AI credit system found in other parts of Windows 11. Microsoft has yet to clarify whether these credits will eventually come at a cost for users not subscribed to Microsoft 365 or Copilot Pro.

Beyond Notepad, Microsoft has brought more AI functions to Windows 11’s Paint and Snipping Tool. Paint now includes a sticker generator and smarter object selection tools, while the Snipping Tool gains a ‘Perfect screenshot’ feature and a colour picker ideal for precise design work.

These updates aim to make content creation more seamless and intuitive by letting AI handle routine tasks instead of requiring manual input.

Stop Killer Robots

The Stop Killer Robots (SKR) campaign, established in 2012, is a growing international coalition of more than 270 NGOs working in over 70 countries around the world.

SKR is a movement working to build a society in which technology is developed and used to promote peace, justice, human rights, equality, and respect for law – not to automate killing.

We urge all states to negotiate and adopt an international legal treaty that ensures meaningful human control over the use of force and rejects the automation of killing through:

  • Prohibitions: banning autonomous weapon systems that do not allow for meaningful human control, and banning all systems that use sensors to target humans.
  • Regulations: additional rules on the design, development, and use of other autonomous weapons systems to ensure they will be used with meaningful human control in practice.

Digital activities

SKR uses a variety of digital activities and social media campaigns to support its advocacy and campaigning work. As an international coalition, SKR relies on social media to spotlight the breadth of the campaign and its coalition members’ work. It also produces a wide variety of content that presents the killer robots issue from different angles, tailored to speak most effectively to different target audiences.

Immoral Code is a documentary that contemplates the impact of killer robots in an increasingly automated world. The film asks whether there are situations where it is morally and socially acceptable to take a life and, importantly, whether a computer would know the difference. Immoral Code has been an incredibly successful campaigning tool, with over 150,000 views on YouTube, screenings hosted by our campaigners in over 20 countries, and subtitles requested and available in 11 languages so far!

Digital dehumanisation is a process in which humans are reduced to data, which is then used to make decisions and/or take actions that negatively affect their lives. The Digital Dehumanisation campaign has produced factual and creative content exploring global examples of digital dehumanisation – from data and privacy concerns to facial recognition and robotics. This work brings other expert stakeholders into our campaign and platforms their expertise, while drawing the connection between the lack of regulation in other areas and the development of killer robots.

Automated by Design is an interactive, multimedia exhibition that explores digital dehumanisation and autonomous weapons systems. This travelling exhibition was created for use by the international campaign and by campaigners in their national contexts as an opportunity to explore the killer robots issue with media, political decision makers, and members of the public. The physical exhibition is complemented by a digital experience via the exhibition microsite.

Automated Decision Research (ADR) is the monitoring and research team of SKR. The team tracks state support for a legally binding instrument on autonomous weapons systems and conducts research and analysis on responses to autonomy and automated decision-making in warfare and wider society.
It also monitors weapons systems that already exist or have been announced as in development, and produces reports, briefings, and fact sheets, as well as regular newsletters on news and developments in autonomy in weapons systems and other related areas.

Digital policy issues

Artificial Intelligence and digital dehumanisation

The global coalition Stop Killer Robots is dedicated to the prohibition and regulation of autonomous weapons systems, often referred to as ‘killer robots’, that can select and attack targets without human intervention or oversight. The organisation recognises that, as AI functionality grows and data is increasingly processed through algorithms, machines are beginning to replace humans in the application of force. Pushing for a legally binding instrument on this issue, the group works to raise awareness of the ethical, legal, and humanitarian concerns associated with the creation and use of such autonomous weaponry.

Its main activities include working with governments, policymakers, military officials, academics, technologists, and other national, regional, and international organisations to prevent the weaponisation of AI. Lobbying and campaigns, along with public awareness and educational efforts, draw attention to the dangers of autonomous weapons and inform the public, decision-makers, and other stakeholders of the necessity of maintaining human control in lethal decision-making. Preventing digital dehumanisation and automated harm is at the core of SKR’s collaborations with a wide range of international human rights groups, arms control organisations, and experts in AI and robotics.

Joining efforts helps to amplify the coalition’s call for strict ethical guidelines and accountability in the development of AI and automated technologies. Also crucial to achieving these goals is the coalition’s active participation in international forums such as the UN Convention on Certain Conventional Weapons (CCW) and the UN General Assembly’s First Committee, as well as in academic and policy conferences on AI and arms control.

The organisation frequently publishes reports, papers, and policy briefs exploring the dimensions of automated harm and the urgent need for regulation of AI-powered warfare to prevent further digital dehumanisation. This research includes regular in-depth assessments of the national AI strategies adopted by various states, as well as of policy positions at the EU level and other international guidelines, to draw out core themes regarding the use of AI and automated decision-making technologies in the civil and military spheres.

In addition to this, SKR has developed several toolkits aimed at supporting its member organisations and individual policy-makers in advocating for the negotiation of an international treaty on the ban and regulation of the automated use of force. 

These objectives also intersect with current developments in the fields of cyber conflict and warfare, especially when it comes to discussions revolving around ethical and legal considerations of autonomous technologies. Advocating for maintaining human control in decisions over life and death, SKR also advances the debates about the role of AI, automation and the use of data in warfare, policing, and border control.

Digital tools and initiatives

Parliamentary Pledge

SKR believes that parliamentarians play a vital role in enabling progress and increasing public concern on this issue. The SKR Parliamentary Pledge provides an opportunity for parliamentarians around the world to show their support for new international law that rejects the automation of killing and ensures meaningful human control over the use of force. The pledge is open to any current member of a national, state/regional, or international parliament or congress, in any part of the world. The pledge has signatures from politicians across six continents and continues to grow. 

Petition 

The SKR international petition, created in collaboration with Amnesty International, calls on government leaders from around the world to launch negotiations for international law on autonomy in weapons systems. The petition currently has signatories from over 102 countries.

Campaigner’s Toolkit 

The Campaigner’s Toolkit: Parliamentary Engagement by Stop Killer Robots helps campaigners effectively engage with parliamentarians to advocate against autonomous weapons. It highlights the importance of parliamentary outreach, public awareness, and collaboration within national and regional groups. The broader Campaigner’s Kit provides guidance on key advocacy topics, including legal arguments, military engagement, media outreach, and social media campaigning.

Social media channels

LinkedIn @Stop Killer Robots

X @bankillerrobots

Facebook @stopkillerrobots

Instagram @stopkillerrobots

YouTube @StopKillerRobots

BlueSky @stopkillerrobots.bsky.social

The United Nations calls for urgent regulation of military AI

The UN and global experts have emphasised the urgent need for comprehensive regulation of AI in military applications. The UN Secretary-General has called for ‘global guardrails’ to govern the use of autonomous weapons, warning that rapid technological development has outpaced current policies.

Recently, 96 countries met at the UN to discuss AI-powered weapons, expanding the conversation to include human rights, criminal law, and ethics, with a push for legally binding agreements by 2026. Unregulated military AI poses serious risks, such as cybersecurity attacks and worsening geopolitical divides, as some countries fear losing a strategic advantage to rivals.

However, if properly regulated, AI could reduce violence by enabling less-lethal actions and helping leaders choose non-violent solutions, potentially lowering the human cost of conflict. To address ethical challenges, institutions like Texas A&M University are creating nonprofits that work with academia, industry, and defence sectors to develop responsible AI frameworks.

These efforts aim to promote AI applications that prioritise peace and minimise harm, shifting the focus from offensive weapons toward peaceful conflict resolution. Finally, the UN Secretary-General warned against a future divided into AI ‘haves’ and ‘have-nots’.

He stressed the importance of using AI to bridge global development gaps and promote sustainable progress rather than deepen inequalities, emphasising international cooperation to guide AI toward inclusive growth and peace.

Google’s AI Mode is now live for all American users

Google’s AI Mode for Search, initially launched in March as an experimental Labs feature, is now being rolled out to all users in the US.

Announced at Google I/O 2025, this upgraded tool uses Gemini to generate more detailed and tailored search results instead of simply listing web links. Unlike AI Overviews, which display a brief summary above standard results, AI Mode resembles a chat interface, creating a more interactive experience.

Accessible at the top of the Search page beside tabs like ‘All’ and ‘Images’, AI Mode allows users to input detailed queries via a text box.

Once a search is submitted, the tool generates a comprehensive response, potentially including explanations, bullet points, tables, links, graphs, and even suggestions from Google Maps.

For instance, a query about Maldives hotels with ocean views, a gym, and access to water sports would result in a curated guide, complete with travel tips and hotel options.

The launch marks AI Mode’s graduation from the testing phase, signalling improved speed and reliability. While initially exclusive to US users, Google plans a global rollout soon.

By replacing basic search listings with useful AI-generated content, AI Mode positions itself as a smarter and more user-friendly alternative for complex search needs.

Anthropic defends AI despite hallucinations

Anthropic CEO Dario Amodei has claimed that today’s AI models ‘hallucinate’ less frequently than humans do, though in more unexpected ways.

Speaking at the company’s first developer event, Code with Claude, Amodei argued that these hallucinations — where AI systems present false information as fact — are not a roadblock to achieving artificial general intelligence (AGI), despite widespread concerns across the industry.

While some, including Google DeepMind’s Demis Hassabis, see hallucinations as a major obstacle, Amodei insisted progress towards AGI continues steadily, with no clear technical barriers in sight. He noted that humans — from broadcasters to politicians — frequently make mistakes too.

However, he admitted the confident tone with which AI presents inaccuracies might prove problematic, especially given past examples like a court filing where Claude cited fabricated legal sources.

Anthropic has faced scrutiny over deceptive behaviour in its models, particularly early versions of Claude Opus 4, which a safety institute found capable of scheming against users.

Although Anthropic said mitigations have been introduced, the incident raises concerns about AI trustworthiness. Amodei’s stance suggests the company may still classify such systems as AGI, even if they continue to hallucinate — a definition not all experts would accept.

Microsoft bets on AI openness and scale

Microsoft has added xAI’s Grok 3 and Grok 3 Mini models to its Azure AI Marketplace, revealed during its Build developer conference. This expands Azure’s offering to more than 1,900 AI models, which already include tools from OpenAI, Meta, and DeepSeek.

Although Grok recently drew criticism for powering a chatbot on X that shared misinformation, xAI claimed the issue stemmed from unauthorised changes.

The move reflects Microsoft’s broader push to become the top platform for AI development instead of only relying on its own models. Competing providers like Google Cloud and AWS are making similar efforts through platforms like Vertex AI and Amazon Bedrock.

Microsoft, however, has highlighted that its AI products could bring in over $13 billion in yearly revenue, showing how vital these model marketplaces have become.

Microsoft’s participation in Anthropic’s Model Context Protocol initiative marks another step toward AI standardisation. Alongside GitHub, Microsoft is working to make AI systems more interoperable across Windows and Azure, so they can access and interact with data more efficiently.

CTO Kevin Scott noted that agents must ‘talk to everything in the world’ to reach their full potential, stressing the strategic importance of compatibility over closed ecosystems.

Meta’s AI benchmarking practices under scrutiny

Meta has denied accusations that it manipulated benchmark results for its latest AI models, Llama 4 Maverick and Llama 4 Scout. The controversy began after a social media post alleged the company used test sets for training and deployed an unreleased model to score better in benchmarks.

Ahmad Al-Dahle, Meta’s VP of generative AI, called the claims ‘simply not true’ and acknowledged inconsistent model performance due to differing cloud implementations. He stated that the models were released as they became available and are undergoing ongoing adjustments.

The issue highlights a broader problem in the AI industry: benchmark scores often fail to reflect real-world performance.

Other AI leaders, including Google and OpenAI, have faced similar scrutiny, as models with high benchmark results struggle with reasoning tasks and show unpredictable behaviour outside controlled tests.

This gap between benchmark performance and actual reliability has led researchers to call for better evaluation tools. Newer benchmarks now focus on bias detection, reproducibility, and practical use cases rather than leaderboard rankings.

Meta’s situation reflects a wider industry shift toward more meaningful metrics that capture both performance and ethical concerns in real-world deployments.

Volvo EX90 will be first to feature Gemini AI

Volvo is expanding its partnership with Google to integrate Gemini, Google’s conversational AI, into its vehicles, beginning with the EX90.

Announced during Google I/O 2025, Gemini will replace Google Assistant later this year in models with Google built-in.

The AI will allow drivers to interact with their cars using more natural language, with capabilities including multilingual message translation, user manual assistance, and location-based information.

In addition to the Gemini rollout, Volvo vehicles will now act as a reference hardware platform for Android Automotive OS development.

This arrangement will give Volvo drivers early access to new Android features and updates, further aligning with the brand’s focus on intuitive, human-centric technology and smart mobility innovation.

Meta aims to boost Llama adoption among startups

Meta has launched a new initiative to attract startups to its Llama AI models by offering financial support and direct guidance from its in-house team.

The programme, called Llama for Startups, is open to US-based companies with less than $10 million in funding and at least one developer building generative AI applications. Eligible firms can apply by 30 May.

Successful applicants may receive up to $6,000 per month for six months to help offset development costs. Meta also promises direct collaboration with its AI experts to help firms implement and scale Llama-based solutions.

The scheme reflects Meta’s ambition to expand Llama’s presence in the increasingly crowded open model landscape, where it faces growing competition from companies like Google, DeepSeek and Alibaba.

Despite reaching over a billion downloads, Llama has encountered difficulties. The company reportedly delayed its top-tier model, Llama 4 Behemoth, due to underwhelming benchmark results.

Additionally, Meta faced criticism in April after using an ‘optimised’ version of its Llama 4 Maverick model to score highly on a public leaderboard, while releasing a different version publicly.

Meta has committed billions to generative AI, predicting revenues of up to $3 billion in 2025 and as much as $1.4 trillion by 2035.

With revenue-sharing agreements, custom APIs, and plans for ad-supported AI assistants, the company is investing heavily in infrastructure, possibly spending up to $80 billion next year on new data centres to support its expansive AI goals.

Bristol Data Week 2025 highlights AI For Good

Nobel Prize-winning AI pioneer Professor Geoffrey Hinton will deliver this year’s Richard Gregory Memorial Lecture at the University of Bristol on 2 June.

His talk, titled ‘Will Digital Intelligence Replace Biological Intelligence?’, will explore the capabilities and risks of AI and coincides with Bristol Data Week 2025, which runs from 2 to 6 June.

Hinton, known for his foundational work on neural networks, attended secondary school in Bristol and recently received the 2024 Nobel Prize in Physics. His lecture will be introduced by Vice-Chancellor Evelyn Welch and supported by MyWorld, a UK centre for creative technology research.

Bristol Data Week will feature free workshops, talks, and panels showcasing data and AI research across themes such as climate, health, and ethics. The headline event, ‘AI for Good’, takes place on 4 June and will highlight AI projects focused on social impact.

Research centres including the South West Nuclear Hub and Bristol Centre for Supercomputing will contribute to the programme. Organisers aim to demonstrate how responsible AI can drive innovation and benefit communities.
