Microsoft Family Safety blocks Google Chrome on Windows 11

Windows 11 users have reported that Google Chrome crashes and fails to reopen when Microsoft Family Safety parental controls are active.

The issue appears to be linked to a recent Chrome update, version 137.0.7151.68, and does not affect Microsoft Edge users under the same settings.

Google acknowledged the problem and provided a workaround involving changes to Family Safety settings, such as unblocking Chrome or adjusting content filters.

Microsoft has not issued a formal statement, but its Family Safety FAQ confirms that the web filtering feature blocks browsers other than Edge.

Users are encouraged to update Google Chrome to version 138.0.7204.50 to address other security concerns recently disclosed by Google.

The update aims to patch vulnerabilities that could let attackers bypass security policies and run malicious code.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New ranking shows which AI respects your data

A new report comparing leading AI chatbots on privacy grounds has named Le Chat by Mistral AI as the most respectful of user data.

The study, conducted by data removal service Incogni, assessed nine generative AI services using eleven criteria related to data usage, transparency and user control.

Le Chat emerged as the top performer thanks to limited data collection and clear privacy practices, although it lost some points for incomplete transparency.

ChatGPT followed in second place, earning praise for its clear privacy policies and the tools it offers users to limit data use, despite concerns about how training data is handled. Grok, xAI’s chatbot, took third place, though its privacy policy was harder to read.

At the other end of the spectrum, Meta AI ranked lowest. Its data collection and sharing practices were flagged as the most invasive, with prompts reportedly shared within its corporate group and with research collaborators.

Microsoft’s Copilot and Google’s Gemini also performed poorly in terms of user control and data transparency.

Incogni’s report found that some services, such as ChatGPT, Grok and Le Chat, allow users to prevent their input from being used to train models. In contrast, others, including Gemini, Pi AI, DeepSeek and Meta AI, offered no clear way to opt out.

The report emphasised that simple, well-maintained privacy support pages can significantly improve user trust and understanding.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI governance debated at IGF 2025: Global cooperation meets local needs

At the Internet Governance Forum (IGF) 2025 in Norway, an expert panel convened to examine the growing complexity of artificial intelligence governance. The discussion, moderated by Kathleen Ziemann from the German development agency GIZ and Guilherme Canela of UNESCO, featured a rich exchange between government officials, private sector leaders, civil society voices, and multilateral organisations.

The session highlighted how AI governance is becoming a crowded yet fragmented space, shaped by overlapping frameworks such as the OECD AI Principles, the EU AI Act, UNESCO’s recommendations on AI ethics, and various national and regional strategies. While these efforts reflect progress, they also pose challenges in terms of coordination, coherence, and inclusivity.

Melinda Claybaugh, Director of Privacy Policy at Meta, noted the abundance of governance initiatives but warned of disagreements over how AI risks should be measured. ‘We’re at an inflection point,’ she said, calling for more balanced conversations that include not just safety concerns but also the benefits and opportunities AI brings. She argued for transparency in risk assessments and suggested that existing regulatory structures could be adapted to new technologies rather than replaced.

In response, Jhalak Kakkar, Executive Director at India’s Centre for Communication Governance, urged caution against what she termed a ‘false dichotomy’ between innovation and regulation. ‘We need to start building governance from the beginning, not after harms appear,’ she stressed, calling for socio-technical impact assessments and meaningful civil society participation. Kakkar advocated for multi-stakeholder governance that moves beyond formality to real influence.

Mlindi Mashologu, Deputy Director-General at South Africa’s Ministry of Communications and Digital Technology, highlighted the importance of context-aware regulation. ‘There is no one-size-fits-all when it comes to AI,’ he said. Mashologu outlined South Africa’s efforts through its G20 presidency to reduce AI-driven inequality via a new policy toolkit, stressing human rights, data justice, and environmental sustainability as core principles. He also called for capacity-building to enable the Global South to shape its own AI future.

Jovan Kurbalija, Executive Director of the Diplo Foundation, brought a philosophical lens to the discussion, questioning the dominance of ‘data’ in governance frameworks. ‘AI is fundamentally about knowledge, not just data,’ he argued. Kurbalija warned against the monopolisation of human knowledge and advocated for stronger safeguards to ensure fair attribution and decentralisation.

The need for transparency, explainability, and inclusive governance remained central themes. Participants explored whether traditional laws—on privacy, competition, and intellectual property—are sufficient or whether new instruments are needed to address AI’s novel challenges.

Audience members added urgency to the discussion. Anna from Mexican digital rights group R3D raised concerns about AI’s environmental toll and extractive infrastructure practices in the Global South. Pilar Rodriguez, youth coordinator for the IGF in Spain, questioned how AI governance could avoid fragmentation while still respecting regional sovereignty.

The session concluded with a call for common-sense, human-centric AI governance. ‘Let’s demystify AI—but still enjoy its magic,’ said Kurbalija, reflecting the spirit of hopeful realism that permeated the discussion. Panellists agreed that while many AI risks remain unclear, global collaboration rooted in human rights, transparency, and local empowerment offers the most promising path forward.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

IGF panel urges rethinking internet governance amid rising geopolitical tensions

At the 2025 Internet Governance Forum in Lillestrøm, Norway, a session led by the German Federal Ministry for Digital Transformation spotlighted a bold foresight exercise imagining how global internet governance could evolve by 2040. Co-led by researcher Julia Pohler, the initiative involved a diverse 15-member German task force and interviews with international experts, including Anriette Esterhuysen and Gbenga Sesan.

Their work yielded four starkly different future scenarios, ranging from intensified geopolitical rivalry and internet fragmentation to overregulation and a transformative turn toward treating the internet as a public good. A central takeaway was the resurgence of state power as a dominant force shaping digital futures.

According to Pohler, geopolitical dynamics—especially the actions of the US, China, Russia, and the EU—emerged as the primary drivers across nearly all scenarios. That marked a shift from previous foresight efforts that had emphasised civil society or corporate actors.

The panellists underscored that today’s real-world developments are already outpacing the scenarios’ predictions, with multistakeholder models appearing increasingly hollow or overly institutionalised. While the scenarios themselves might not predict the exact future, the process of creating them was widely praised.

Panellists described the interviews and collaborative exercises as intellectually enriching and essential for thinking beyond conventional governance paradigms. Yet, they also acknowledged practical concerns: the abstract nature of such exercises, the lack of direct implementation, and the need to involve government actors more directly to bridge analysis and policy action.

Looking ahead, participants called for bolder and more inclusive approaches to internet governance. They urged forums like the IGF to embrace participatory methods—such as scenario games—and to address complex issues without requiring full consensus.

The session concluded with a sense of urgency: the internet we want may still be possible, but only if we confront uncomfortable realities and make space for more courageous, creative policymaking.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Advancing digital identity in Africa while safeguarding sovereignty

A pivotal discussion on digital identity and sovereignty in developing countries unfolded at the Internet Governance Forum 2025 in Norway.

The session, co-hosted by CityHub and AFICTA (Africa ICT Alliance), brought together experts from Africa, Asia, and Europe to explore how digital identity systems can foster inclusion, support cross-border services, and remain anchored in national sovereignty.

Speakers emphasised that digital identity is foundational for bridging the digital divide and fostering economic development. Dr Jimson Olufuye, Chair of AFICTA, stressed the existential nature of identity in the digital age, noting, ‘If you cannot identify anybody, it means the person does not exist.’ He linked identity inclusion directly to the World Summit on the Information Society (WSIS) action lines and the Global Digital Compact goals.

Several national examples were presented. From Nigeria, Abisoye Coker-Adusote, Director General of the National Identity Management Commission (NIMC), shared how the country’s National Identification Number (NIN) has been integrated into banking, education, telecoms, and census services. ‘We’ve linked NINs from birth to ensure lifelong digital access,’ she noted, adding that biometric verification now underpins school enrolments, student loans, and credit programmes.

Representing Benin, Dr Kossi Amessinou highlighted the country’s ‘It’s Me’ card, a digital ID facilitating visa-free travel within ECOWAS. He underscored the importance of data localisation, asserting, ‘Data centres should be located within Africa to maintain sovereignty.’

Technical insights came from Debora Comparin, co-founder of CityHub, and Naohiro Fujie, Chair of the OpenID Foundation Japan. Comparin called for preserving the privacy characteristics of physical documents in digital form and stressed the need for legal harmonisation to build trust across borders.

‘No digital identity system can work without mutual trust and clarity on issuance procedures,’ she said. Fujie shared Japan’s experience transitioning to digital credentials, including the country’s recent rollout of national ID cards via Apple Wallet, noting that domestic standards should evolve with global interoperability in mind.

Tor Alvik, from Norway’s Digitisation Agency, explained how cross-border digital identity remains a challenge even among closely aligned Nordic countries. ‘The linkage of a person’s identity between two systems is one of the hardest problems,’ he admitted, describing Norway’s regional interoperability efforts through the EU’s eIDAS framework.
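To make that linkage problem concrete, here is a toy sketch of matching records across two national registries. It is purely illustrative and assumes simplified records; real eIDAS matching involves unique identifiers, assurance levels, and transliteration rules far beyond this.

```python
# Toy record-linkage sketch: do two registry entries describe the same person?
# Attributes and normalisation rules here are illustrative assumptions only.
import unicodedata

def normalise(name: str) -> str:
    # Strip accents and case so 'Åse' and 'Ase' compare equal across borders.
    decomposed = unicodedata.normalize("NFKD", name)
    return "".join(c for c in decomposed if not unicodedata.combining(c)).lower()

def same_person(record_a: dict, record_b: dict) -> bool:
    # Name plus birth date is a weak join key: spelling variants, shared
    # birthdays, and name changes are exactly why linkage is so hard.
    return (
        normalise(record_a["name"]) == normalise(record_b["name"])
        and record_a["birth_date"] == record_b["birth_date"]
    )

norwegian_record = {"name": "Åse Nordli", "birth_date": "1984-02-11"}
foreign_record = {"name": "Ase Nordli", "birth_date": "1984-02-11"}
print(same_person(norwegian_record, foreign_record))  # True after normalisation
```

Even this trivial matcher shows the trade-off Alvik alludes to: loosen the rules and different people get merged; tighten them and the same person goes unrecognised across systems.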

Panellists agreed on key themes: digital identities must be secure, inclusive, and flexible enough to accommodate countries at varying levels of digital readiness. They also advocated for federated data systems that protect sovereignty while enabling cooperation. Dr Olufuye proposed forming regional working groups to assess interoperability frameworks and track progress between IGF sessions.

As a forward step, several pilot programmes were proposed—pairing countries like Nigeria with neighbours Cameroon or Niger—to test cross-border digital ID systems. These initiatives, supported by tools and frameworks from CityHub, aim to lay the groundwork for a truly interoperable digital identity landscape across Africa and beyond.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

SpaceX rocket carries first quantum satellite into space

A groundbreaking quantum leap has taken place in space exploration. The world’s first photonic quantum computer has successfully entered orbit aboard SpaceX’s Transporter 14 mission.

Launched from Vandenberg Space Force Base in California on 23 June, the quantum device was developed by an international research team led by physicist Philip Walther of the University of Vienna.

The miniature quantum computer, designed to withstand harsh space conditions, is now orbiting 550 kilometres above Earth. It flew as one of roughly 70 payloads on the mission, alongside microsatellites and re-entry capsules.

Uniquely, the system performs ‘edge computing’, processing data for tasks such as wildfire detection directly on board rather than transmitting raw information to Earth. The innovation drastically reduces energy use and improves response times.
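As a rough illustration of that pattern, the sketch below classifies each sensor frame locally and downlinks only a compact alert; the detector heuristic and telemetry function are hypothetical stand-ins, not the mission’s actual software.

```python
# Illustrative edge-computing pattern: process frames on board, downlink only
# small alerts instead of raw imagery. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int
    pixels: list[list[float]]  # simplified thermal-intensity grid

def looks_like_wildfire(frame: Frame, threshold: float = 0.9) -> bool:
    # Placeholder heuristic: flag frames containing very hot pixels.
    return any(p > threshold for row in frame.pixels for p in row)

def downlink(alert: dict) -> None:
    # Stand-in for the satellite's telemetry channel.
    print(f"Alert sent to ground station: {alert}")

def process_on_board(frames: list[Frame]) -> None:
    for frame in frames:
        if looks_like_wildfire(frame):
            # Only a few bytes leave the satellite, not the full frame.
            downlink({"frame_id": frame.frame_id, "event": "possible_wildfire"})

process_on_board([Frame(1, [[0.2, 0.3]]), Frame(2, [[0.95, 0.1]])])
```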

Assembled in just 11 working days by a 12-person team at the German Aerospace Center in Trauen, the quantum processor is expected to transmit its first results within a week of reaching orbit.

The project’s success marks a significant milestone in quantum space technology, opening the door to further experiments in fundamental physics and applied sciences.

The Transporter 14 mission also deployed satellites from Capella Space, Starfish Space, and Varda Space, among others. The Falcon 9 booster, completing its 26th flight, landed safely on a platform in the Pacific Ocean, while the full satellite deployment sequence lasted nearly two hours.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

North Korea-linked hackers deploy fake Zoom malware to steal crypto

North Korean hackers have reportedly used deepfake technology to impersonate executives during a fake Zoom call in an attempt to install malware and steal cryptocurrency from a targeted employee.

Cybersecurity firm Huntress identified the scheme, which involved a convincingly staged meeting and a custom-built AppleScript targeting macOS systems—an unusual move that signals the rising sophistication of state-sponsored cyberattacks.

The incident began with a fraudulent Calendly invitation, which redirected the employee to a fake Zoom link controlled by the attackers. Weeks later, the employee joined what appeared to be a routine video call with company leadership. In reality, the participants were AI-generated deepfakes.

When audio issues arose, the hackers convinced the user to install what was supposedly a Zoom extension but was, in fact, malware designed to hijack cryptocurrency wallets and steal clipboard data.
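As a defensive illustration of the clipboard angle (not Huntress’s tooling), the sketch below shows how a user-side monitor might flag the classic ‘clipper’ symptom, where copied text suddenly becomes an address-shaped string. It assumes the third-party pyperclip package, and the regex is a rough pattern rather than a full address validator.

```python
# Hedged sketch: warn when the clipboard changes to something address-shaped,
# a common symptom of wallet-hijacking 'clipper' malware.
import re
import time

import pyperclip  # third-party: pip install pyperclip

# Rough patterns for Ethereum-style and legacy Bitcoin-style addresses.
ADDRESS_RE = re.compile(r"^(0x[0-9a-fA-F]{40}|[13][1-9A-HJ-NP-Za-km-z]{25,34})$")

def monitor(poll_seconds: float = 0.5) -> None:
    last = pyperclip.paste()
    while True:
        current = pyperclip.paste()
        if current != last and ADDRESS_RE.match(current or ""):
            # Surface the change so the user can confirm it is the address
            # they actually copied before pasting it into a transaction.
            print(f"Clipboard now holds an address-like string: {current!r}")
        last = current
        time.sleep(poll_seconds)

if __name__ == "__main__":
    monitor()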

Huntress traced the attack to TA444, a North Korean group also known by names like BlueNoroff and STARDUST CHOLLIMA. Their malware was built to extract sensitive financial data while disguising its presence and erasing traces once the job was done.

Security experts warn that remote workers and companies must be especially cautious. Unfamiliar calendar links, sudden platform changes, or requests to install new software should all be treated as warning signs.

Verifying suspicious meeting invites through an alternative contact method, such as a direct phone call, is a simple but effective way to prevent damage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New SparkKitty malware targets crypto wallets

A new Trojan dubbed SparkKitty is stealing sensitive data from mobile phones, potentially giving hackers access to cryptocurrency wallets.

Cybersecurity firm Kaspersky says the malware hides in fake crypto apps, gambling platforms, and TikTok clones, spreading through deceptive app installs.

Once installed, SparkKitty accesses photo galleries and uploads images to a remote server, likely searching for screenshots of wallet seed phrases. Though mainly active in China and Southeast Asia, experts warn it could spread globally.
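By way of defence, users can audit their own galleries for exactly what such malware hunts: screenshots of seed phrases. The sketch below is a hypothetical local audit, not Kaspersky’s detection logic; it assumes the third-party pytesseract and Pillow packages plus a local Tesseract install, and its 12-word heuristic is only a rough proxy for BIP-39 phrases.

```python
# Hedged sketch: flag screenshots whose OCR text resembles a wallet seed
# phrase so they can be reviewed and deleted before malware finds them.
import re
from pathlib import Path

import pytesseract  # third-party: pip install pytesseract
from PIL import Image  # third-party: pip install Pillow

def might_be_seed_phrase(text: str, min_words: int = 12) -> bool:
    # Seed phrases are runs of short lowercase dictionary words; this crude
    # count will also flag other text-heavy screenshots (false positives).
    words = re.findall(r"\b[a-z]{3,8}\b", text.lower())
    return len(words) >= min_words

def audit_folder(folder: Path) -> list[Path]:
    suspects = []
    for image_path in sorted(folder.glob("*.png")):
        text = pytesseract.image_to_string(Image.open(image_path))
        if might_be_seed_phrase(text):
            suspects.append(image_path)
    return suspects

if __name__ == "__main__":
    for path in audit_folder(Path("~/Pictures/screenshots").expanduser()):
        print(f"Review and consider deleting: {path}")
```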

SparkKitty appears linked to the SparkCat spyware campaign, which also targeted seed phrase images.

The malware is found on iOS and Android platforms, joining other crypto-focused threats like Noodlophile and LummaC2.

TRM Labs recently reported that nearly 70% of last year’s $2.2 billion in stolen crypto (roughly $1.5 billion) came from infrastructure attacks involving seed phrase theft.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI governance efforts centre on human rights

At the Internet Governance Forum 2025 in Lillestrøm, Norway, a key session spotlighted the launch of the Freedom Online Coalition’s (FOC) updated Joint Statement on Artificial Intelligence and Human Rights. Backed by 21 countries and counting, the statement outlines a vision for human-centric AI governance rooted in international human rights law.

Representatives from governments, civil society, and the tech industry—most notably the Netherlands, Germany, Ghana, Estonia, and Microsoft—gathered to emphasise the urgent need for a collective, multistakeholder approach to tackle the real and present risks AI poses to rights such as privacy, freedom of expression, and democratic participation.

Ambassador Ernst Noorman of the Netherlands warned that human rights and security must be viewed as interconnected, stressing that unregulated AI use can destabilise societies rather than protect them. His remarks echoed the Netherlands’ own hard lessons from biased welfare algorithms.

Other panellists, including Germany’s Cyber Ambassador Maria Adebahr, underlined how AI is being weaponised for transnational repression and emphasised Germany’s commitment by doubling funding for the FOC. Ghana’s cybersecurity chief, Divine Salese Agbeti, added that AI misuse is not exclusive to governments—citizens, too, have exploited the technology for manipulation and deception.

From the private sector, Microsoft’s Dr Erika Moret showcased the company’s multi-layered approach to embedding human rights in AI, from ethical design and impact assessments to rejecting high-risk applications like facial recognition in authoritarian contexts. She stressed the company’s alignment with UN guiding principles and the need for transparency, fairness, and inclusivity.

The discussion also highlighted binding global frameworks like the EU AI Act and the Council of Europe’s Framework Convention, calling for their widespread adoption as vital tools in managing AI’s global impact. The session concluded with a shared call to action: governments must use regulatory tools and procurement power to enforce human rights standards in AI, while the private sector and civil society must push for accountability and inclusion.

The FOC’s statement remains open for new endorsements, standing as a foundational text in the ongoing effort to align the future of AI with the fundamental rights of all people.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

AI data risks prompt new global cybersecurity guidance

A coalition of cybersecurity agencies, including the NSA, FBI, and CISA, has issued joint guidance to help organisations protect AI systems from emerging data security threats. The guidance explains how AI systems can be compromised by data supply chain flaws, poisoning, and drift.

Organisations are urged to adopt security measures throughout all four phases of the AI life cycle: planning, data collection, model building, and operational monitoring.

The recommendations include verifying third-party datasets, using secure ingestion protocols, and regularly auditing AI system behaviour. Particular emphasis is placed on preventing model poisoning and tracking data lineage to ensure integrity.
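As one concrete instance of the dataset-verification advice, the sketch below checks a third-party dataset against a publisher-supplied SHA-256 manifest before ingestion. The manifest format and paths are assumptions for illustration; the joint guidance prescribes the practice, not this exact script.

```python
# Hedged sketch: verify third-party dataset files against a SHA-256 manifest
# (lines of the form '<hex digest> <relative filename>') before ingestion.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # Stream the file so large dataset shards need not fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(dataset_dir: Path, manifest_path: Path) -> list[str]:
    # Return the files whose on-disk hashes do not match the manifest.
    mismatches = []
    for line in manifest_path.read_text().splitlines():
        expected_hash, filename = line.split(maxsplit=1)
        if sha256_of(dataset_dir / filename) != expected_hash:
            mismatches.append(filename)
    return mismatches

if __name__ == "__main__":
    bad = verify_dataset(Path("data/third_party"), Path("data/manifest.sha256"))
    if bad:
        raise SystemExit(f"Integrity check failed for: {bad}")
    print("All dataset files match the manifest.")
```

The same hash records can double as simple data-lineage entries, tying each training run to the exact bytes it consumed.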

The guidance encourages firms to update their incident response plans to address AI-specific risks, conduct audits of ongoing projects, and establish cross-functional teams involving legal, cybersecurity, and data science experts.

With AI models increasingly central to critical infrastructure, treating data security as a core governance issue is essential.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!