Meta forms AI powerhouse by appointing Shengjia Zhao as chief scientist

Meta has appointed former OpenAI researcher Shengjia Zhao as Chief Scientist of its newly formed AI division, Meta Superintelligence Labs (MSL).

Zhao, known for his pivotal role in developing ChatGPT, GPT-4, and OpenAI’s first reasoning model, o1, will lead MSL’s research agenda under Alexandr Wang, the former CEO of Scale AI.

Mark Zuckerberg confirmed Zhao’s appointment, saying Zhao had co-founded the lab and led its scientific efforts from the start.

Meta has aggressively recruited top AI talent to build out MSL, including senior researchers from OpenAI, DeepMind, Apple, Anthropic, and Meta’s own FAIR lab. Zhao’s presence helps balance the leadership team, as Wang lacks a formal research background.

Meta has reportedly offered massive compensation packages to lure experts, with Zuckerberg even contacting candidates personally and hosting them at his Lake Tahoe estate. MSL will focus on frontier AI, especially reasoning models, in which Meta currently trails competitors.

By 2026, MSL will gain access to Prometheus, Meta’s massive 1-gigawatt compute cluster in Ohio, designed to power large-scale AI training.

The investment and Meta’s parallel FAIR lab, led by Yann LeCun, signal the company’s multi-pronged strategy to catch up with OpenAI and Google in advanced AI research.

The collaboration dynamics between MSL, FAIR, and Meta’s generative AI unit remain unclear, but the company now boasts one of the strongest AI research teams in the industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN urges global rules for AI to prevent inequality

According to Doreen Bogdan-Martin, head of the UN’s International Telecommunication Union (ITU), the world must urgently adopt a unified approach to AI regulation.

She warned that fragmented national strategies could deepen global inequalities and risk leaving billions excluded from the AI revolution.

Bogdan-Martin stressed that only a global framework can ensure AI benefits all of humanity instead of worsening digital divides.

With 85% of countries lacking national AI strategies and 2.6 billion people still offline, she argued that a coordinated effort is essential to bridge access gaps and prevent AI from becoming a tool that advances inequality rather than opportunity.

The ITU chief highlighted the growing divide between regulatory models, from the EU’s strict governance and China’s centralised control to the US’s new deregulatory push under Donald Trump.

She avoided direct criticism of the US strategy but called for dialogue between all regions instead of fragmented policymaking.

Despite the rapid advances of AI in sectors like healthcare, agriculture and education, Bogdan-Martin warned that progress must be inclusive. She also urged more substantial efforts to bring women into AI and tech leadership, pointing to the continued gender imbalance in the sector.

As the first woman to lead ITU, she said her role was not just about achievement but setting a precedent for future generations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK enforces age checks to block harmful online content for children

The United Kingdom has introduced new age verification laws to prevent children from accessing harmful online content, marking a significant shift in digital child protection.

The measures, enforced by media regulator Ofcom, require websites and apps to implement strict age checks such as facial age estimation and credit card verification.

Around 6,000 pornography websites have already agreed to the new regulations, which stem from the 2023 Online Safety Act. The rules also target content related to suicide, self-harm, eating disorders and online violence, instead of just focusing on pornography.

Companies failing to comply risk fines of up to £18 million or 10% of global revenue, whichever is greater, and senior executives could face criminal charges if they ignore Ofcom’s directives.

Technology Secretary Peter Kyle described the move as a turning point, saying children will now experience a ‘different internet for the first time’.

Ofcom data shows that around 500,000 children aged eight to fourteen encountered online pornography in just one month, highlighting the urgency of the reforms. Campaigners, including the NSPCC, called the new rules a ‘milestone’, though they warned loopholes could remain.

The UK government is also exploring further restrictions, including a potential daily two-hour time limit on social media use for under-16s. Kyle has promised more announcements soon, as Britain moves to hold tech platforms accountable instead of leaving children exposed to harmful content online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI forces rethink of cloud infrastructure

Cybersecurity experts warn that relying on traditional firewalls and legacy VPNs may create more risk than protection. These outdated tools often lack timely updates, making them prime entry points for cyber attackers exploiting AI-powered techniques.

Many businesses depend on ageing infrastructure, unaware that unpatched VPNs and web servers expose them to significant cybersecurity threats. Experts urge companies to abandon these legacy systems and modernise their defences with more adaptive, zero-trust models.
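
As a rough illustration of the zero-trust model the experts point to, here is a minimal sketch in Python. It assumes the PyJWT library and hypothetical claim names; the point is simply that every request is authenticated and authorised individually, rather than being trusted because it arrived over a corporate VPN or from an internal address.

```python
# Minimal zero-trust-style authorisation check: a conceptual sketch, not a production design.
# Assumes PyJWT (pip install pyjwt); the signing key and claim names are hypothetical.
import jwt

SIGNING_KEY = "replace-with-a-managed-secret"  # in practice issued and rotated by an identity provider

def authorise_request(token: str, resource: str) -> bool:
    """Verify identity and entitlements on every request, regardless of network origin."""
    try:
        # Reject anything without a valid signed identity token,
        # even if the request came over the VPN or from an internal IP.
        claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False
    # Authorisation is evaluated per request and per resource,
    # rather than granting broad access to an entire network segment.
    return resource in claims.get("allowed_resources", [])
```

In a real deployment the token would come from an identity provider and the check would usually also weigh device posture and context, but the underlying shift is the same: verify each request and trust no network location by default.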

Meanwhile, OpenAI’s reported plans for a productivity suite challenge Microsoft’s dominance, promising simpler interfaces powered by generative AI. The shift could reshape daily workflows by integrating document creation directly with AI tools.

Agentic AI, which performs autonomous tasks without human oversight, also redefines enterprise IT demands. Experts believe traditional cloud tools cannot support such complex systems, prompting calls to rethink cloud strategies for more tailored, resilient platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The US push for AI dominance through openness

In a bold move to maintain its edge in the global AI race—especially against China—the United States has unveiled a sweeping AI Action Plan with 103 recommendations. At its core lies an intriguing paradox: the push for open-source AI, typically associated with collaboration and transparency, is now being positioned as a strategic weapon.

As Jovan Kurbalija points out, this plan marks a turning point where open-weight models are framed not just as tools of innovation, but as instruments of geopolitical influence, with the US aiming to seed the global AI ecosystem with American-built systems rooted in ‘national values.’

The plan champions Silicon Valley by curbing regulations, limiting federal scrutiny, and shielding tech giants from legal liability—potentially reinforcing monopolies. It also underlines a national security-first mentality, urging aggressive safeguards against foreign misuse of AI, cyber threats, and misinformation. Notably, it proposes DARPA-led initiatives to unravel the inner workings of large language models, acknowledging that even their creators often can’t fully explain how these systems function.

Internationally, the plan takes a competitive, rather than cooperative, stance. Allies are expected to align with US export controls and values, while multilateral forums like the UN and OECD are dismissed as bureaucratic and misaligned. That bifurcation risks alienating global partners—particularly the EU, which favours heavy AI regulation—while increasing pressure on countries like India and Japan to choose sides in the US–China tech rivalry.

Despite its combative framing, the strategy also nods to inclusion and workforce development, calling for tax-free employer-sponsored AI training, investment in apprenticeships, and growing military academic hubs. Still, as Kurbalija warns, the promise of AI openness may clash with the plan’s underlying nationalistic thrust—raising questions about whether it truly aims to democratise AI, or merely dominate it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google brings Gemini AI shortcut to Android home screens

Google has launched a new AI Mode shortcut in Android Search, offering direct home-screen access to its Gemini-powered tools. The upgrade brings conversational AI to everyday mobile searches, enabling users to ask complex questions and receive context-rich responses without leaving the home screen.

AI Mode, introduced in Google Labs and now available on a wide range of Android devices, marks a leap in integrating AI across Android’s ecosystem. Its rise from a limited beta to mass adoption follows enhancements powered by Gemini 2.5 Pro and Deep Search, and the feature now reportedly reaches 100 million monthly users.

Key functions include multimodal inputs, advanced planning tools, and even the ability for AI to call businesses to verify local information. These capabilities are already live for paid subscribers, while core features remain free, drawing comparisons with rivals such as ChatGPT and Bing AI.

Privacy concerns surfaced as real-time interactions expand, but Google claims strong data protection controls are in place. As AI-powered results blend into traditional search, SEO strategies and user trust will be tested, signalling a new era in mobile discovery and digital engagement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Women-only dating safety app Tea suffers catastrophic data leak

Tea, a women-only dating safety app, has suffered a massive data breach after its backend was found to be completely unsecured. Over 72,000 private images and more than 13,000 government-issued IDs were leaked online.

Some documents were dated as recently as 2025, contradicting the company’s claim that only ‘old data’ was affected. The data, totalling 59.3 GB, included verification selfies, DMs, and public posts. It spread rapidly through 4chan and decentralised platforms like BitTorrent.

Critics have blamed Tea’s use of ‘vibe coding’, the practice of shipping AI-generated code without proper review, which reportedly left its Firebase database open with no authentication.

Experts warn that relying on AI tools to build apps without security checks is becoming increasingly risky. Research shows nearly half of AI-generated code contains vulnerabilities, yet many startups still use it for core features. Tea users are now urged to monitor their identity and financial data.
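
To make the reported misconfiguration concrete, the sketch below (Python with the requests library; the project URL is hypothetical) shows the kind of check that reveals a Firebase Realtime Database left readable without authentication. It illustrates the general failure mode critics describe, not Tea’s actual backend.

```python
# Sketch: does a Firebase Realtime Database answer unauthenticated REST reads?
# The URL below is a placeholder; this illustrates the class of misconfiguration
# described above, not Tea's actual configuration.
import requests

DB_URL = "https://example-project-default-rtdb.firebaseio.com"  # hypothetical project

def is_publicly_readable(path: str = "/") -> bool:
    """Return True if the database serves data to an unauthenticated request."""
    resp = requests.get(f"{DB_URL}{path}.json", timeout=10)
    # With secure rules, Firebase rejects this request (HTTP 401, 'Permission denied');
    # with rules left wide open, it returns the stored data to anyone who asks.
    return resp.status_code == 200

if __name__ == "__main__":
    print("Publicly readable:", is_publicly_readable())
```

Closing that gap is typically a small change in the database’s security rules, for example requiring an authenticated user (`auth != null`) before any read or write, which is precisely the kind of review step that ‘vibe-coded’ projects reportedly skip.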

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches AI feature to reshape how search results appear

Google has introduced a new experimental feature named Web Guide, aimed at reorganising search results by using AI to group information based on the query’s different aspects.

Available through Search Labs, the tool helps users explore topics in a more structured way instead of relying on the standard, linear results page.

Powered by Google’s Gemini AI, Web Guide works particularly well for open-ended or complex queries. For example, searches such as ‘how to solo travel in Japan’ would return results neatly arranged into guides, safety advice, or personal experiences instead of a simple list.

The feature handles multi-sentence questions, offering relevant answers broken into themed sections.

Users who opt in can access Web Guide via the Web tab and toggle it off without exiting the entire experiment. While it works only on that tab, Google plans to expand it to the broader ‘All’ tab in time.

The move follows Google’s broader push to incorporate Gemini into tools like AI Mode, Flow, and other experimental products.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Finnish researchers extend quantum coherence beyond one millisecond

Aalto University researchers have achieved a milestone in quantum computing by extending the coherence time of a superconducting transmon qubit beyond one millisecond. The breakthrough significantly improves how long quantum states remain stable, enabling more reliable operations.

The Finnish team used ultra-pure materials and precision engineering techniques to fabricate the qubit, achieving a median coherence time of 541 microseconds and a peak of 1,057 microseconds, just over the one-millisecond mark.

Unlike earlier coherence records set by more exotic qubit types, Aalto’s success came within the widely used transmon framework, making it easier for others to replicate. The researchers published detailed fabrication methods to help advance consistency across labs globally.

Crossing the millisecond mark opens new possibilities for scalable quantum systems, reducing the burden of error correction. Finland’s growing leadership in the field is further solidified as other research teams explore Aalto’s reproducible approach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft replaces the blue screen of death with a sleek black version in Windows 11

Microsoft has officially removed the infamous Blue Screen of Death (BSOD) from Windows 11 and replaced it with a sleeker, black version.

As part of update KB5062660, the Black Screen of Death now appears briefly, for around two seconds, before a restart, showing only a short error message without the sad face or QR code that became symbolic of Windows crashes.

The update, which brings systems to Build 26100.4770, is optional and must be installed manually through Windows Update or the Microsoft Update Catalog.

It is available for both x64 and arm64 platforms. Microsoft plans to roll out the update more broadly in August 2025 as part of its Windows 11 24H2 feature preview.

In addition to the screen change, the update introduces ‘Recall’ for EU users, a tool designed to operate locally and allow users to block or turn off tracking across apps and websites. The feature aims to comply with European privacy rules while enhancing user control.

Also included is Quick Machine Recovery, which can identify and fix system-wide failures using the Windows Recovery Environment. If a device becomes unbootable, it can download a repair patch automatically to restore functionality instead of requiring manual intervention.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!