Gemini Canvas reaches millions as Google expands AI Search tools

Google has expanded access to the Canvas feature in Google Search’s AI Mode, making it available to all US users.

Canvas allows users to organise research, draft documents and develop small applications directly inside search.

Prompts can generate code, transform reports into webpages or quizzes, and produce audio summaries from uploaded material. The tool was previously introduced as part of experimental projects in Google Labs.

The feature builds on capabilities already available in Google Gemini and partly overlaps with NotebookLM, which supports research analysis and document processing.

Within Canvas, users can gather information from the web and the Google Knowledge Graph while refining projects through interaction with the Gemini model.

Competition is intensifying across AI development platforms. OpenAI and Anthropic offer similar tools, though their design approaches differ in how collaborative workspaces are triggered and used.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI upgrades ChatGPT conversations with GPT-5.3 Instant

OpenAI has updated its most widely used ChatGPT model, introducing GPT-5.3 Instant to make everyday conversations more coherent, useful, and natural.

The upgrade focuses on improving tone, contextual understanding, and the flow of dialogue rather than benchmark performance alone.

One of the main improvements concerns how the model handles refusals and safety responses. Earlier versions sometimes declined questions that could have been answered safely or delivered overly cautious explanations before responding.

GPT-5.3 Instant instead gives more direct answers while still maintaining safety constraints, reducing interruptions that previously slowed conversations.

The update also improves the way ChatGPT uses information from the web. Instead of simply summarising search results or presenting long lists of links, the model now integrates online information with its own reasoning.

Such an approach aims to produce more relevant answers that highlight key insights at the beginning of responses.

Reliability has also improved. Internal evaluations conducted by OpenAI show reductions in hallucination rates across multiple domains.

When using web sources, hallucinations dropped by roughly 26.8 percent in higher-risk fields such as medicine, law, and finance. Improvements were also recorded when the model relied only on its internal knowledge.

Beyond factual accuracy, the model is designed to feel more natural in conversation. OpenAI says the system now avoids overly preachy language, unnecessary disclaimers, and intrusive remarks that previously disrupted dialogue.

The goal is a more consistent conversational personality across updates, while maintaining the familiar user experience of ChatGPT.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU citizens propose public social media network under new initiative

The European Commission has registered a European Citizens’ Initiative proposing the creation of a public social media platform operating at the European level, rather than relying exclusively on private technology companies.

The initiative, titled the European Public Social Network, calls for legislation establishing a publicly funded digital platform designed to serve societal interests.

Organisers argue that a publicly owned network could function independently from commercial incentives and political pressure while guaranteeing equal rights for users across the EU. The proposed platform would operate as a public service overseen by society rather than private corporations.

Registration confirms that the proposal meets the legal requirements of the European Citizens’ Initiative framework. The Commission has not yet assessed the substance of the idea, and registration does not imply support for the proposal.

Supporters must now gather 1 million signatures from citizens across at least 7 EU member states within 12 months. If the threshold is reached, the Commission will be required to formally examine the initiative and decide whether legislative action is appropriate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Guterres convenes global UN panel of 40 experts to assess AI risks

UN Secretary-General António Guterres told the inaugural meeting of a new independent group of experts on AI convened by the UN that they have a huge responsibility to help shape how the technology is used ‘for the benefit of humanity’.

‘Individually, you come from diverse regions and disciplines, bringing outstanding expertise in AI and related fields. Collectively, you represent something the world has never seen before,’ the UN chief told scientists on Tuesday at the first meeting of the Independent International Scientific Panel on AI.

The panel brings together 40 experts who aim to help close ‘the AI knowledge gap’ and assess the real impact the frontier technology will have across economies and societies so that countries can act with the same ‘clarity’ on a level playing field.

The experts will provide scientific assessments independent of any government, company or institution – including the UN itself. ‘AI is advancing at lightning speed… no country, no company, and no field of research can see the full picture alone,’ Guterres said. ‘The world urgently needs a shared, global understanding of artificial intelligence; grounded not in ideology, but in science.’

Warning about the stakes involved as AI evolves rapidly, Guterres said the technology will shape peace and security, human rights, and sustainable development for decades to come. ‘I have seen how quickly fear can take hold when facts are missing or distorted – how trust breaks down, and division deepens,’ he said. At a time when ‘geopolitical tensions are rising, and conflicts are raging,’ he stressed that the need for shared understanding and ‘safe and responsible AI could not be greater.’

As AI development accelerates, the Secretary-General also warned the panel that it is ‘in a race against time.’ Addressing concerns about the pace of technological change, he said: ‘Never in the future will we move as slowly as we are moving now. We are indeed in a high level of acceleration.’

Guterres also pointed to earlier work through the UN High-Level Advisory Body on AI, noting that the new scientific panel does not ‘start from zero’. Concluding his remarks, Guterres told the experts: ‘I can think of no more important assignment for our world today.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia reviews children’s social media ban

Australia has begun reviewing its ban on social media accounts for children under 16, introduced in December 2025. Australia’s eSafety Commissioner is tracking more than 4,000 children and families to assess how the policy works in practice.

Researchers will analyse surveys, interviews and voluntary smartphone data to measure how young people interact with apps. Officials aim to understand how the ban affects children, parents and everyday online behaviour.

Early reactions have been mixed, with some teenagers telling media outlets they bypass age verification systems, and platforms reportedly remain accessible to some minors.

Meanwhile, the UK government has launched a public consultation on potential social media restrictions for children. Policymakers in the UK are seeking views on bans, stronger age verification and limits on addictive platform features.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers placing Roblox under strict Digital Services Act rules

European regulators are examining whether Roblox should be brought under the Digital Services Act’s most stringent obligations, the rules reserved for the bloc’s largest online platforms.

The European Commission began analysing the gaming platform’s reported user figures after the company disclosed roughly 48 million monthly users across the EU.

Figures above the DSA’s threshold of 45 million monthly active users could qualify Roblox as a Very Large Online Platform. Such a designation would mark the first time a gaming platform enters the category alongside social media services already subject to heightened oversight.

Platforms receiving the label must conduct regular risk assessments, submit mitigation reports and demonstrate stronger safeguards for minors.

Regulatory pressure has already begun at the national level. The Dutch Authority for Consumers and Markets launched an investigation in January after concerns that children could encounter violent or sexually explicit content within Roblox games or interact with harmful actors through online features.

Designation at the EU level would transfer supervisory authority to the European Commission, enabling wider investigations and potential fines if violations occur. Officials are still verifying user data before making a formal decision, and no deadline has been announced for the process.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI Readiness Assessment Report highlights India’s progress and gaps in ethical AI

UNESCO and India’s Ministry of Electronics and Information Technology (MeitY) have launched the India AI Readiness Assessment Report during the India AI Impact Summit 2026. The report evaluates the country’s progress in building an ethical and human-centred AI ecosystem.

Developed by UNESCO with the IndiaAI Mission and Ikigai Law as implementing partner, the report draws on consultations with more than 600 stakeholders from government, academia, industry, and civil society. The assessment examined governance, workforce readiness, and infrastructure development.

Principal Scientific Adviser to the Government of India, Dr Ajay Kumar Sood, emphasised the importance of embedding ethics throughout the technology lifecycle. ‘AI is here to make an impact. The question is not how fast we adopt AI, but how thoughtfully we shape it,’ he said.

The report highlights the country’s growing role in global AI development, noting that it accounts for around 16% of the world’s AI talent and has filed more than 86,000 related patents since 2010. It also points to progress in multilingual AI systems and digital public services.

The assessment also identifies policy priorities, including stronger legal frameworks, inclusive workforce transitions, and better access to high-quality datasets. UNESCO officials said the recommendations aim to support responsible AI governance and strengthen public trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic introduces voice mode for Claude Code

Anthropic has introduced a voice mode capability for Claude Code, its AI coding assistant for developers. The feature enables users to interact with the system through spoken commands, marking a step toward more conversational and hands-free coding workflows.

Voice interaction allows developers to execute programming tasks using natural language. By activating voice mode, users can verbally request actions, reflecting a broader shift toward intuitive human-AI collaboration in software development.

The rollout is currently limited, with voice mode available to a small percentage of users before wider deployment. Technical details remain unclear, including potential usage limits and whether external voice AI providers contributed to the feature’s development.

The update builds on Anthropic’s earlier integration of voice interaction in its Claude chatbot. This expansion suggests a wider strategy to embed voice interfaces across AI tools and enhance multimodal interaction experiences.

Competition in AI coding assistants continues to intensify, with multiple technology companies developing similar tools. Within this environment, Claude Code has gained strong adoption and a growing market presence among developers.

User growth and revenue indicators highlight the growing momentum of Anthropic’s AI ecosystem. The company also experienced heightened public visibility following its decision to restrict certain military uses of its AI systems, contributing to a surge in app popularity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI helps Stanford researchers map schistosomiasis risk in Senegal

Stanford researchers have developed an AI-powered system that combines field surveys, drones, and satellite imagery to identify schistosomiasis risk areas across Senegal.

The project began with fieldwork in Senegal, where researchers collected aquatic vegetation and snails from more than 30 river and estuary sites. The samples helped identify environmental conditions linked to schistosomiasis, which affects about 250 million people worldwide, mostly children in sub-Saharan Africa.

Professor Giulio De Leo of Stanford’s Doerr School of Sustainability said the research required scaling beyond local sampling. ‘The work was necessary to discover these risks, but we can only do so much locally.’

Early support from the Stanford Institute for Human-Centred AI enabled the development of machine learning tools capable of identifying disease-related snails and vegetation in imagery. The system now integrates field observations with drone and satellite data to detect potential infection hotspots.

Researchers say the approach can support public health monitoring and environmental analysis. The machine learning methods developed for the project are also being applied to agriculture, forest monitoring, and mosquito-borne disease research.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cisco report highlights cybersecurity risks and benefits of industrial AI

AI is becoming central to industrial networking strategies, but it is also creating new security challenges, according to Cisco’s 2026 State of Industrial AI Report.

Based on a survey of 1,000 professionals across 19 countries and 21 sectors, the report shows organisations view cybersecurity as both a barrier and an opportunity for AI adoption. About 40% cited cybersecurity concerns as a major obstacle, while 48% named security their biggest networking challenge.

At the same time, many organisations believe AI will strengthen their cyber resilience. Cisco noted that ‘while security gaps are limiting AI scale today, organisations view AI as a tool to strengthen detection, monitoring and resilience’.

The report also highlights organisational challenges, particularly collaboration between IT and operational technology teams. Only 20% of organisations report fully collaborative IT and OT cybersecurity operations, despite the growing importance of coordination for AI deployment.

Cisco said industrial AI adoption is accelerating, with 61% of organisations already deploying AI in industrial environments. However, only one in five reports mature, scaled adoption, suggesting many deployments remain in early stages.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!