Data sovereignty becomes an infrastructure strategy in the AI era

For most of the past decade, data governance was treated as a legal issue. IT built networks and bought tools, while regulators were someone else’s problem. That division no longer holds. Cloud adoption and AI have turned data sovereignty into a core infrastructure and strategy question.

Regulatory frameworks such as GDPR, NIS2, and DORA are expanding and being enforced more strictly. Governments are also scrutinising foreign cloud providers and cross-border access. Local data storage no longer ensures absolute data sovereignty if critical control layers remain outside national jurisdiction.

Traditional SASE and SSE models were not built for this environment. Many still separate outbound cloud traffic from inbound controls. That split creates blind spots in distributed architectures and complicates consistent policy enforcement.

AI workloads intensify the pressure. Retailers, banks, and manufacturers are deploying models locally, not just in hyperscale clouds. Securing east-west traffic across systems and APIs without undermining data sovereignty is becoming a central architectural challenge.

Managed sovereign infrastructure is one response. It reduces reliance on external cloud paths while preserving operational scale. Ultimately, organisations must align security, AI deployment, and governance with long-term resilience goals.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Nano Banana 2 brings Flash speed to Gemini image generation

Google has introduced Nano Banana 2, branded Gemini 3.1 Flash Image, combining Flash speed with advanced reasoning. The update narrows the gap between rapid generation and visual quality, enabling faster edits. Improved instruction-following enhances the handling of complex prompts.

Nano Banana 2 integrates real-time web grounding to improve subject accuracy and contextual awareness. The model supports more precise text rendering and in-image translation for marketing and localisation tasks. It can also assist with diagrams, infographics, and data visualisations.

Upgrades include stronger subject consistency across multiple characters and objects within a single workflow. Users can create assets in a range of aspect ratios, at resolutions from 512px to 4K. Google highlighted improvements in lighting, textures, and photorealism while maintaining Flash-level speed.

The model is rolling out across the Gemini app, Search, Lens, AI Studio, Vertex AI, Flow, and Google Ads. In Gemini, Nano Banana 2 replaces Nano Banana Pro by default, though Pro remains available for specialised tasks. Availability is expanding to additional countries and languages.

Google also reinforced its provenance strategy by combining SynthID watermarking with C2PA Content Credentials. The company said verification tools in Gemini have been used millions of times to identify AI-generated media. C2PA verification will be added to the app in a future update.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia begins a landmark study on social media minimum age

The eSafety Commissioner has launched a major evaluation of Australia’s Social Media Minimum Age to understand how platforms are applying the requirement and what effects it is having on children, young people and families.

The study aims to deliver robust evidence about both intended and unintended impacts as the national debate on youth, wellbeing and digital environments intensifies.

The research will run for more than two years, following over four thousand children and families in Australia and combining surveys, interviews, group discussions and privacy-protected smartphone tracking.

Administrative data from national literacy assessments and health systems will be linked to deepen understanding of online behaviour, wellbeing and exposure to risk. All research materials are publicly available through the Open Science Framework to maintain transparency.

The project is led by eSafety’s Research and Evaluation team in partnership with the Stanford University Social Media Lab and an Academic Advisory Group of specialists in mental health, youth development and digital technologies.

Young people themselves are shaping the study through the eSafety Youth Council, ensuring that the interpretation reflects lived experience rather than external assumptions. Full ethics approval underpins the methodology, which meets strict standards of integrity and privacy.

Findings will be released from late 2026 onward, with early reports analysing the experiences of children under sixteen.

The results will inform a legislative review conducted by Australia’s Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts.

eSafety expects the evaluation to become a major evidence source for policymakers, researchers and communities as the global conversation on minors and social media regulation continues.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Pakistan’s digital transformation highlighted as UNESCO advances AI ethics

UNESCO used the Pakistan Governance Forum 2026 to highlight the need for a structured Ethical AI and Data Governance Framework as the country accelerates its digital transformation.

Federal leaders, provincial authorities and civil society convened to examine governance reforms, with UNESCO urging Pakistan to align its expanding digital public infrastructure with coherent standards that protect rights while enabling innovation.

Speaking at the Forum, Fuad Pashayev underlined that Pakistan’s reform priority should centre on the Recommendation on the Ethics of Artificial Intelligence, adopted unanimously by all 193 Member States.

Anchoring national systems in transparency, accountability and meaningful human oversight was framed as essential for maintaining public trust as digital services reshape access to benefits and interactions between citizens and the state.

To support the shift, UNESCO promoted its AI Readiness Assessment Methodology (RAM), which is already deployed in more than 50 countries. The tool helps governments identify regulatory gaps, strengthen institutional coordination and design safeguards against discrimination and algorithmic bias.

UNESCO has already contributed to Pakistan’s draft National AI Policy, ensuring alignment with international ethical frameworks while accommodating national development needs.

Capacity building formed a major pillar of UNESCO’s engagement. In partnership with the University of Oxford, the organisation launched a global course on AI and Digital Transformation in Government in 2025, attracting over nineteen thousand enrolments worldwide.

Pakistan leads participation globally, reflecting both the country’s momentum and growing demand for structured training.

UNESCO’s ongoing work aims to reinforce data governance, improve AI readiness and embed ethical safeguards across Pakistan’s digital transformation strategy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google API keys exposed after Gemini privilege expansion

Security researchers warn that exposed Google API keys in public client-side code could be used to authenticate with the Gemini AI assistant and access private data. The issue arose after developers enabled the Generative Language API in existing projects without updating key permissions.

Truffle Security scanned the November 2025 Common Crawl dataset and identified more than 2,800 live Google API keys publicly exposed in website source code. Some belonged to financial institutions, security firms, recruitment companies, and Google infrastructure.
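As a rough illustration of the kind of scan described above, the sketch below searches client-side source text for strings matching the commonly documented Google API key shape (keys typically begin with `AIza` followed by 35 URL-safe characters). Both the pattern and the sample snippet are assumptions for demonstration, not Truffle Security’s actual tooling.

```python
import re

# Google Cloud API keys conventionally start with "AIza" followed by 35
# characters drawn from letters, digits, '-' and '_'. This is the widely
# used heuristic pattern, not an official specification.
GOOGLE_KEY_PATTERN = re.compile(r"AIza[0-9A-Za-z_\-]{35}")


def find_candidate_keys(source: str) -> list[str]:
    """Return de-duplicated key-shaped strings found in page source."""
    seen: dict[str, None] = {}
    for match in GOOGLE_KEY_PATTERN.findall(source):
        seen.setdefault(match, None)  # preserve first-seen order
    return list(seen)


# Hypothetical embedded snippet, as a key might appear in shipped client code.
page_source = '<script>const cfg = {apiKey: "AIza' + "B" * 35 + '"};</script>'
```

At Common Crawl scale the same idea would be applied across archived page bodies, with each candidate then checked for liveness against a Google endpoint before being reported.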

Before Gemini’s launch, Google Cloud API keys were widely treated as non-sensitive identifiers for services such as Maps, YouTube embeds, analytics, and Firebase. After Gemini was introduced, those same keys also acted as authentication credentials for the AI assistant, expanding their privileges.

Researchers demonstrated the risk by using one exposed key to query the Gemini API models endpoint and list available models. They warned that attackers could exploit such access to extract private data or generate substantial API charges on victim accounts.

Google was notified in November 2025 and later classified the issue as a single-service privilege escalation. The company said it has introduced controls to block leaked keys, limit new AI Studio keys to Gemini-only scope, and notify developers of detected exposure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI use among students surges as chatbots reshape schoolwork

More than half of US teenagers use AI tools to help with schoolwork, according to a new Pew Research Center study. The survey found that 54% of students aged 13 to 17 have used chatbots such as OpenAI’s ChatGPT or Microsoft’s Copilot to research assignments or solve maths problems.

Usage has risen in recent years. In 2024, 26% of US teens reported using ChatGPT for schoolwork, up from 13% in 2023. The latest survey of 1,458 teens and parents found 44% use AI for some schoolwork, while 10% rely on chatbots for most tasks.

Researchers say AI assistance is becoming routine in classrooms. Colleen McClain, a senior researcher at Pew and co-author of the report, said chatbot use for schoolwork is now a common practice among teens.

Findings come amid an intensifying debate over generative AI in education. Supporters argue that schools should teach students to use and evaluate AI tools, while critics warn of misinformation, reduced critical thinking, and increased cheating.

Recent research has raised questions about learning outcomes. One study by Cambridge University Press & Assessment and Microsoft Research found that students who took notes without chatbot support showed stronger reading comprehension than those using AI assistance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU faces renewed pressure to ease industrial AI rules

European governments are renewing pressure to scale back industrial AI rules rather than expand regulatory demands.

Ten countries, including Germany, France, Italy, Spain and Poland, have urged the EU to clarify how the AI Act overlaps with machinery law and to adopt more realistic implementation deadlines. Their position is notable given that the legislation already outlines its relationship with existing industrial frameworks.

Parliament’s centre and centre-right groups are pushing for deeper cuts. The European People’s Party wants all industrial sectors to move to a lighter regime, while Renew is advocating broad exemptions for industrial and business-to-business AI.

The European Conservatives and Reformers are also seeking reductions for non-safety-related systems. Together, the three groups edge close to a parliamentary majority, signalling momentum for a broader deregulation push.

No sweeping changes have been added to the AI omnibus so far, yet policymakers expect more adjustments ahead. The package must be finalised by August, so legislators are focused on meeting the deadline rather than reopening fundamental debates.

Broader revisions to industrial AI rules are likely to reappear in the Commission’s forthcoming Digital Fitness Check, which will reassess how multiple EU tech laws interact.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Origin Pilot launch expands access to China’s quantum computing technology

China has made its self-developed quantum computer operating system, Origin Pilot, available for public download, marking a significant step toward expanding access to quantum computing technology. Officials expect the move to lower barriers to development and accelerate the growth of the national quantum ecosystem.

Developed by Hefei-based Origin Quantum Computing Technology, the system was first introduced in 2021 and has undergone several upgrades. The platform now supports multiple technological approaches, including superconducting, ion-trap, and neutral-atom quantum processors.

Origin Pilot manages key computing functions, including resource scheduling and coordination between software and hardware systems. Features such as parallel task processing and automatic qubit calibration aim to improve the efficiency and stability of quantum operations.

By opening unified programming interfaces, the platform allows research institutions, universities and developers worldwide to connect to Chinese quantum chips and program them through independent frameworks. Project leaders say users can download the system directly from the company’s official website and begin quantum development work.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT Health under fire after study finds major failures in emergency detection

A new evaluation of ChatGPT Health has raised major safety concerns after researchers found it frequently failed to recognise urgent medical emergencies.

The independent study, published in Nature Medicine, reported that the system under-triaged more than half of the clinical scenarios tested, giving advice that could have delayed life-saving treatment.

The research team, led by Ashwin Ramaswamy, created sixty patient simulations ranging from minor illnesses to life-threatening conditions.

Three doctors agreed on the appropriate urgency for each case before comparing their judgement with the model’s responses. The AI performed adequately in straightforward emergencies such as strokes, yet frequently minimised danger in more complex presentations, including severe asthma and diabetic crises.

Experts also warned that ChatGPT Health struggled to detect suicidal ideation reliably. Minor changes to scenario details, such as adding normal lab results, caused safeguards to disappear entirely.

Critics, including health-misinformation researcher Alex Ruani, described the behaviour as dangerously inconsistent and capable of creating a false sense of security.

OpenAI said the study did not reflect typical real-world use but acknowledged the need for continued research and improvement.

Policy specialists argue that the findings underline the need for clear safety standards, external audits and stronger transparency requirements for AI systems operating in sensitive medical contexts.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenClaw creator Peter Steinberger urges playful approach to AI coding

Peter Steinberger, creator of the viral AI agent OpenClaw and now at OpenAI, urged developers to approach AI experimentation with curiosity rather than rigid plans. On the Builders Unscripted podcast, he said progress often comes from exploration rather than expertise.

He said OpenClaw began without a roadmap. Early tests included a WhatsApp integration he paused, expecting major labs to build similar tools. When that did not happen, he developed his own prototype and refined it through real-world use.

Using the tool in low-connectivity environments helped clarify its value. Through trial and iteration, he observed how modern AI models can generate workable solutions without explicit programming, reshaping how developers think about problem-solving and workflows.

He cautioned that coding with AI is a skill that requires practice. Comparing it to learning guitar, Steinberger said early frustration is common, but persistence leads to improved intuition and efficiency over time.

Steinberger argued that developers who focus on solving problems and creating useful tools will remain in demand. Treating AI as a collaborative instrument rather than a shortcut, he said, is essential in a rapidly shifting technology landscape.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!