Russia moves forward with a nationwide plan for generative AI

A broad plan to integrate generative AI across public administration and key sectors of the economy is being prepared by Russia.

Prime Minister Mikhail Mishustin explained that the new framework seeks to extend modern AI tools across regions and major industries in order to strengthen national technological capacity.

The president has already underlined the need for fully domestic AI products as an essential element of national sovereignty. Moscow intends to rely on locally developed systems instead of foreign platforms, an approach aimed at securing long-term independence and resilience.

A proposal created by the government and the Presidential Administration has been submitted for approval to establish a central headquarters that will guide the entire deployment effort.

The new body will set objectives, track progress and coordinate work across ministries and agencies while supporting broader access to advanced capabilities.

Officials in Russia view the plan as a strategic investment intended to reinforce national competitiveness in a rapidly changing technological environment.

Greater use of generative systems is expected to improve administrative efficiency, support regional development and encourage innovation across multiple sectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada-EU digital partnership expands cooperation on AI and security

The European Union and Canada have strengthened their digital partnership during the first Digital Partnership Council in Montreal. Both sides outlined a joint plan to enhance competitiveness and innovation, while supporting smaller firms through targeted regulation.

Senior representatives reconfirmed that cooperation with like-minded partners will be essential for economic resilience.

A new Memorandum of Understanding on AI placed a strong emphasis on trustworthy systems, shared standards and wider adoption across strategic sectors.

The two partners will exchange best practices to support sectors such as healthcare, manufacturing, energy, culture and public services.

They also agreed to collaborate on large-scale AI infrastructures and access to computing capacity, while encouraging scientific collaboration on advanced AI models and climate-related research.

A meeting that also led to an agreement on a structured dialogue on data spaces.

A second Memorandum of Understanding covered digital credentials and trust services. The plan includes joint testing of digital identity wallets, pilot projects and new use cases aimed at interoperability.

The EU and Canada also intend to work more closely on the protection of independent media, the promotion of reliable information online and the management of risks created by generative AI.

Both sides underlined their commitment to secure connectivity, with cooperation on 5G, subsea cables and potential new Arctic routes to strengthen global network resilience. Further plans aim to deepen collaboration on quantum technologies, semiconductors and high-performance computing.

A renewed partnership that reflects a shared commitment to resilient supply chains and secure cloud infrastructure as both regions prepare for future technological demands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Survey reveals split views on AI in academic peer review

Growing use of generative AI within peer review is creating a sharp divide among physicists, according to a new survey by the Institute of Physics Publishing.

Researchers appear more informed and more willing to express firm views, with a notable rise in those who see a positive effect and a large group voicing strong reservations. Many believe AI tools accelerate early reading and help reviewers concentrate on novelty instead of routine work.

Others fear that reviewers might replace careful evaluation with automated text generation, undermining the value of expert judgement.

A sizeable proportion of researchers would be unhappy if AI-shaped assessments of their own papers, even though many quietly rely on such tools when reviewing for journals. Publishers are now revisiting their policies, yet they aim to respect authors who expect human-led scrutiny.

Editors also report that AI-generated reports often lack depth and fail to reflect domain expertise. Concerns extend to confidentiality, with organisations such as the American Physical Society warning that uploading manuscripts to chatbots can breach author trust.

Legal disputes about training data add further uncertainty, pushing publishers to approach policy changes with caution.

Despite disagreements, many researchers accept that AI will remain part of peer review as workloads increase and scientific output grows. The debate now centres on how to integrate new tools in a way that supports researchers instead of weakening the foundations of scholarly communication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Instacart deepens partnership with OpenAI for real-time AI shopping

OpenAI and Instacart are expanding their longstanding collaboration by introducing a fully integrated grocery shopping experience inside ChatGPT.

Users can receive meal inspiration, browse products and place orders in one continuous conversation instead of switching across separate platforms.

A service that brings together Instacart’s real-time retail network with OpenAI’s most advanced models to produce an experience that feels like a direct link between a simple request and completed delivery.

The Instacart app becomes the first service to offer a full checkout flow inside ChatGPT by using the Agentic Commerce Protocol. When users mention food, ingredients or recipe ideas, ChatGPT can surface the app immediately.

Once the user connects an Instacart account, the system selects suitable items from nearby retailers and builds a complete cart that can be reviewed before payment. Users then pay securely inside the chat while Instacart manages collection and delivery through its established network.

The update also reflects broader cooperation between the two companies. Instacart continues to rely on OpenAI APIs to support personalised suggestions and real time guidance across its customer experience.

ChatGPT Enterprise assists internal teams, while Codex powers an internal coding agent that shortens development cycles instead of slowing them down with manual tasks. The partnership builds on Instacart’s early involvement in the Operator research preview, where it helped refine emerging agentic technologies.

A renewed partnership that strengthens OpenAI’s growing enterprise ecosystem. The company already works with major global brands across sectors such as retail, financial services and telecommunications.

The Instacart integration offers a view of how conversational agents may act as a bridge between everyday intent and immediate real-world action.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New spyware threat alerts issued by Apple and Google

Apple and Google have issued a fresh round of cyber threat notifications, warning users worldwide they may have been targeted by sophisticated surveillance operations linked to state-backed actors.

Apple said it sent alerts on 2 December, confirming it has now notified users in more than 150 countries, though it declined to disclose how many people were affected or who was responsible.

Google followed on 3 December, announcing warnings for several hundred accounts targeted by Intellexa spyware across multiple countries in Africa, Central Asia, and the Middle East.

The Alphabet-owned company said Intellexa continues to evade restrictions despite US sanctions, highlighting persistent challenges in limiting the spread of commercial surveillance tools.

Researchers say such alerts raise costs for cyber spies by exposing victims, often triggering investigations that can lead to public scrutiny and accountability over spyware misuse.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Growing app restrictions hit ByteDance’s AI smartphone rollout

ByteDance is facing mounting pushback after major Chinese apps restricted how its agentic AI smartphone can operate across their platforms. Developers moved to block or limit Doubao, the device’s voice-driven assistant, following concerns about automation, security and transactional risks.

Growing reports from early adopters describe locked accounts, interrupted payments and app instability when Doubao performs actions autonomously. ByteDance has responded by disabling the assistant’s access to financial services, rewards features and competitive games while collaborating with app providers to establish clearer guidelines.

The Nubia M153, marketed as an experimental device, continues to attract interest for its hands-free interface, even as privacy worries persist over its device-wide memory system. Its long-term success hinges on whether China’s platforms and regulators can align with ByteDance’s ambitions for seamless, AI-powered smartphone interaction.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

NITDA warns of prompt injection risks in ChatGPT models

Nigeria’s National Information Technology Development Agency (NITDA) has issued an urgent advisory on security weaknesses in OpenAI’s ChatGPT models. The agency warned that flaws affecting GPT-4o and GPT-5 could expose users to data leakage through indirect prompt injection.

According to NITDA’s Computer Emergency Readiness and Response Team, seven critical flaws were identified that allow hidden instructions to be embedded in web content. Malicious prompts can be triggered during routine browsing, search or summarisation without user interaction.

The advisory warned that attackers can bypass safety filters, exploit rendering bugs and manipulate conversation context. Some techniques allow injected instructions to persist across future interactions by interfering with the models’ memory functions.

While OpenAI has addressed parts of the issue, NITDA said large language models still struggle to reliably distinguish malicious data from legitimate input. Risks include unintended actions, information leakage and long-term behavioural influence.

NITDA urged users and organisations in Nigeria to apply updates promptly and limit browsing or memory features when not required. The agency said that exposing AI systems to external tools increases their attack surface and demands stronger safeguards.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK lawmakers push for binding rules on advanced AI

Growing political pressure is building in Westminster as more than 100 parliamentarians call for binding regulation on the most advanced AI systems, arguing that current safeguards lag far behind industry progress.

A cross-party group, supported by former defence and AI ministers, warns that unregulated superintelligent models could threaten national and global security.

The campaign, coordinated by Control AI and backed by tech figures including Skype co-founder Jaan Tallinn, urges Prime Minister Keir Starmer to distance the UK from the US stance against strict federal AI rules.

Experts such as Yoshua Bengio and senior peers argue that governments remain far behind AI developers, leaving companies to set the pace with minimal oversight.

Calls for action come after warnings from frontier AI scientists that the world must decide by 2030 whether to allow highly advanced systems to self-train.

Campaigners want the UK to champion global agreements limiting superintelligence development, establish mandatory testing standards and introduce an independent watchdog to scrutinise AI use in the public sector.

Government officials maintain that AI is already regulated through existing frameworks, though critics say the approach lacks urgency.

Pressure is growing for new, binding rules on the most powerful models, with advocates arguing that rapid advances mean strong safeguards may be needed within the next two years.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Polish parliament upholds presidential veto on crypto bill

Poland’s Sejm has upheld President Karol Nawrocki’s veto of the cryptoassets bill, blocking plans to place the digital asset market under the Financial Supervision Authority in line with EU MiCA rules. The attempt to override the veto failed to reach the required three-fifths majority.

Prime Minister Donald Tusk condemned the decision, warning that gaps in regulation leave parts of the cryptocurrency sector exposed to influence from Russian and Belarusian actors, organised crime groups and foreign intelligence networks.

He argued that the bill would have strengthened national security by giving authorities better tools to oversee risky segments of the market.

The president’s advisers defended the veto as protection against excessive, unclear regulation and accused the government of framing the vote as a false choice involving criminal groups.

President Nawrocki later disputed the government’s claims of foreign intelligence threats, saying no such warnings were raised during earlier consultations.

Tusk vowed to submit the bill again, insisting that swift regulation is essential to safeguard Poland’s financial system. He stated that further delays pose unnecessary risks and urged the opposition and the president to reconsider their stance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Starlink gains ground in South Korea’s telecom market

South Korea has gained nationwide satellite coverage as Starlink enters the market and expands the country’s already advanced connectivity landscape.

The service offers high-speed access through a dense LEO network and arrives with subscription options for households, mobile users and businesses.

Analysts see meaningful benefits for regions that are difficult to serve through fixed networks, particularly in mountainous areas and offshore locations.

Enterprise interest has grown quickly. Maritime operators moved first, with SK Telink and KT SAT securing contracts as Starlink went live. Large fleets will now adopt satellite links for navigation support, remote management and stronger emergency communication.

The technology has also reached the aviation sector as carriers under Hanjin Group plan to install Starlink across all aircraft, aiming to introduce stable in-flight Wi-Fi from 2026.

Although South Korea’s fibre and 5G networks offer far higher peak speeds, Starlink provides reliability where terrestrial networks cannot operate. Industry observers expect limited uptake from mainstream households but anticipate significant momentum in maritime transport, aviation, construction and energy.

An expansion in South Korea that marks one of Starlink’s most strategic Asia-Pacific moves, driven by industrial demand and early partnerships.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!