China says the US used a Microsoft server vulnerability to launch cyberattacks

China has accused the US of exploiting long-known vulnerabilities in Microsoft Exchange servers to launch cyberattacks on its defence sector, escalating tensions in the ongoing digital arms race between the two superpowers.

In a statement released on Friday, the Cyber Security Association of China claimed that US hackers compromised servers belonging to a significant Chinese military contractor, allegedly maintaining access for nearly a year.

The group did not disclose the name of the affected company.

The accusation is a sharp counterpunch to long-standing US claims that Beijing has orchestrated repeated cyber intrusions using the same Microsoft software. In 2021, Microsoft attributed a wide-scale hack affecting tens of thousands of Exchange servers to Chinese threat actors.

Two years later, another incident compromised the email accounts of senior US officials, prompting a federal review that criticised Microsoft for what it called a ‘cascade of security failures.’

Microsoft, based in Redmond, Washington, has recently disclosed additional intrusions by China-backed groups, including attacks exploiting flaws in its SharePoint platform.

Jon Clay of Trend Micro commented on the tit-for-tat cyber blame game: ‘Every nation carries out offensive cybersecurity operations. Given the latest SharePoint disclosure, this may be China’s way of retaliating publicly.’

Cybersecurity researchers note that Beijing has recently increased its use of public attribution as a geopolitical tactic. Ben Read of Wiz.io pointed out that China now uses cyber accusations to pressure Taiwan and shape global narratives around cybersecurity.

In April, China accused US National Security Agency (NSA) employees of hacking into the Asian Winter Games in Harbin, targeting personal data of athletes and organisers.

While the US frequently names alleged Chinese hackers and pursues legal action against them, China has historically avoided levelling public allegations against American intelligence agencies, until now.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
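A behavioural age-estimation model of this kind can be sketched as a simple probabilistic classifier over usage signals. Google has not published its feature set, model architecture, or weights, so everything below — the signal names, weights, and threshold — is purely illustrative:

```python
import math

# Hypothetical behavioural signals and weights; illustrative only.
# Positive weights push towards "likely under 18", negative away from it.
FEATURE_WEIGHTS = {
    "teen_video_share": 2.0,        # fraction of views in teen-skewing categories
    "homework_query_share": 1.5,    # fraction of searches resembling schoolwork
    "finance_query_share": -2.0,    # fraction of searches about mortgages, tax, etc.
}
BIAS = -1.0

def likely_under_18(signals: dict, threshold: float = 0.5) -> bool:
    """Score a user with a toy logistic model; flag if P(under 18) > threshold."""
    z = BIAS + sum(weight * signals.get(name, 0.0)
                   for name, weight in FEATURE_WEIGHTS.items())
    probability = 1 / (1 + math.exp(-z))  # logistic (sigmoid) function
    return probability > threshold

# A profile dominated by teen-skewing signals gets flagged...
print(likely_under_18({"teen_video_share": 0.8, "homework_query_share": 0.7}))  # True
# ...while an adult-skewing profile does not.
print(likely_under_18({"finance_query_share": 0.6}))  # False
```

In a production system the weights would be learned from labelled data and the signals would be far richer, but the basic shape — a score over behavioural features compared against a threshold, with a human-facing verification fallback for misclassifications — matches what the blog post describes.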

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

China demands Nvidia explain security flaws in H20 chips

China’s top internet regulator has summoned Nvidia to explain alleged security concerns linked to its H20 computing chips.

The Cyberspace Administration of China stated that the chips, which are sold domestically, may contain backdoor vulnerabilities that could pose risks to users and systems.

Instead of ignoring the issue, Nvidia has been asked to submit technical documents and provide a formal response addressing these potential flaws.

The chips are part of Nvidia’s tailored product line for the Chinese market following US export restrictions on advanced AI processors.

The investigation signals tighter scrutiny from Chinese authorities on foreign technology amid ongoing geopolitical tensions and a global race for semiconductor dominance.

Apple’s $20B Google deal under threat as AI lags behind rivals

Apple is set to release Q3 earnings on Thursday amid scrutiny over its Google search deal dependencies and ongoing struggles with AI progress.

Typically, Apple’s fiscal Q3 garners less investor attention, with anticipation focused instead on the upcoming iPhone launch in Q4. However, this quarter is proving to be anything but ordinary.

Analysts and shareholders alike are increasingly concerned about two looming threats: a potential $20 billion hit to Apple’s Services revenue tied to the US Department of Justice’s (DOJ) antitrust case against Google, and ongoing delays in Apple’s AI efforts.

Ahead of the earnings report, Apple shares were mostly unchanged, reflecting investor caution rather than enthusiasm. Apple’s most pressing challenge stems from its lucrative partnership with Google.

In 2022, Google paid Apple approximately $20 billion to remain the default search engine in the Safari browser and across Siri.

The exclusivity deal forms a significant portion of Apple’s Services segment, which generated $78.1 billion in revenue that year; Google’s payment alone accounted for more than 25% of that figure.
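The share is easy to sanity-check from the article's own figures (both numbers are taken from the article, not independently verified here):

```python
# Article figures, in billions of US dollars.
google_payment_bn = 20.0     # Google's 2022 payment to Apple
services_revenue_bn = 78.1   # Apple Services revenue that year

share = google_payment_bn / services_revenue_bn
print(f"Google's share of Services revenue: {share:.1%}")  # about 25.6%
```

At roughly 25.6%, the payment is indeed "more than 25%" of Services revenue, which is why analysts treat the ruling as a material risk to the segment.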

However, a ruling expected next month from Judge Amit Mehta in the US District Court for the District of Columbia could threaten the entire arrangement. Mehta previously ruled that Google operates an illegal monopoly in the search market.

The forthcoming ‘remedies’ ruling could force Google to end exclusive search deals, divest its Chrome browser, and provide data access to rivals. Should the DOJ’s proposed remedies stand and Google fails to overturn the ruling, Apple could lose a critical source of Services revenue.

According to Morgan Stanley’s Erik Woodring, Apple could see a 12% decline in its full-year 2027 earnings per share (EPS) if it pivots to less lucrative partnerships with alternative search engines.

The user experience may also deteriorate if customers can no longer set Google as their default option. A more radical scenario, in which Apple launches its own search engine, could dent its 2024 EPS by as much as 20%, though analysts believe this outcome is the least likely.

Alongside regulatory threats, Apple is also facing growing doubts about its ability to compete in AI. Apple has not yet set a clear timeline for releasing an upgraded version of Siri, while rivals accelerate AI hiring and unveil new capabilities.

Bank of America analyst Wamsi Mohan noted this week that persistent delays undermine confidence in Apple’s ability to deliver innovation at the required pace. ‘Apple’s ability to drive future growth depends on delivering new capabilities and products on time,’ he wrote to investors.

‘If deadlines keep slipping, that potentially delays revenue opportunities and gives competitors a larger window to attract customers.’

While Apple has teased upcoming AI features for future software updates, the lack of a commercial rollout or product roadmap has made investors uneasy, particularly as rivals like Microsoft, Google, and OpenAI continue to set the AI agenda.

Although Apple’s stock remained stable before Thursday’s earnings release, any indication of slowing services growth or missed AI milestones could shake investor confidence.

Analysts will be watching closely for commentary from CEO Tim Cook on how Apple plans to navigate regulatory risks and revive momentum in emerging technologies.

The company’s current crossroads is pivotal for the tech sector more broadly. Regulators are intensifying scrutiny on platform dominance, and AI innovation is fast becoming the new battleground for long-term growth.

As Apple attempts to defend its business model and rekindle its innovation edge, Thursday’s earnings update could serve as a bellwether for its direction in the post-iPhone era.

UAE partnership boosts NeOnc’s clinical trial programme

Biotech firm NeOnc Technologies has gained rapid attention after going public in March 2025 and joining the Russell Microcap Index just months later. The company focuses on intranasal drug delivery for brain cancer, allowing patients to administer treatment at home and bypass the blood-brain barrier.

NeOnc’s lead treatment is in Phase 2A trials for glioblastoma patients and is already showing extended survival times with minimal side effects. Backed by a partnership with USC’s Keck Medical School, the company is also expanding clinical trials to the Middle East and North Africa under US FDA standards.

A $50 million investment deal with a UAE-based firm is helping fund this expansion, including trials run by Cleveland Clinic through a regional partnership. The trials are expected to be fully enrolled by September, with positive preliminary data already being reported.

AI and quantum computing are central to NeOnc’s strategy, particularly in reducing risk and cost in trial design and drug development. As a pre-revenue biotech, the company is betting that innovation and global collaboration will carry it to the next stage of growth.

Allianz breach affects most US customers

Allianz Life has confirmed a major cyber breach that exposed sensitive data from most of its 1.4 million customers in North America.

The attack was traced back to 16 July, when a threat actor accessed a third-party cloud system using social engineering tactics.

The cybersecurity breach affected a customer relationship management platform but did not compromise the company’s core network or policy systems.

Allianz Life acted swiftly by notifying the FBI and other regulators, including the attorney general’s office in Maine.

Those affected are being offered two years of credit monitoring and identity theft protection. The company has begun contacting affected individuals but declined to reveal the full number involved, citing an ongoing investigation.

No other Allianz subsidiaries were affected by the breach. Allianz Life employs around 2,000 staff in the US and remains a key player within the global insurer’s North American operations.

Huawei challenges Nvidia with AI super server

Huawei has unveiled its most powerful AI server, the CloudMatrix 384, to challenge Nvidia’s grip on the high-performance AI infrastructure market.

The system, launched at the World AI Conference in Shanghai, uses 384 Ascend 910C chips, significantly outnumbering Nvidia’s 72 B200 GPUs in the GB200 NVL72.

Although Nvidia’s GPUs remain more powerful individually, Huawei’s design relies on stacking and high-speed chip interconnection to boost overall performance.

The company claims the CloudMatrix 384 can deliver 300 petaflops of computing power, well above Nvidia’s 180 petaflops, though it consumes nearly four times more energy.
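The article's figures imply a rough performance-per-watt comparison. The "nearly four times more energy" multiple is the article's approximation, so this is only an order-of-magnitude sketch:

```python
# Figures from the article: claimed system-level compute for Huawei's
# CloudMatrix 384 vs Nvidia's GB200 NVL72, and the approximate power ratio.
huawei_pflops = 300.0
nvidia_pflops = 180.0
relative_power = 4.0  # Huawei system draws "nearly four times" the energy

perf_ratio = huawei_pflops / nvidia_pflops        # raw throughput advantage
efficiency_ratio = perf_ratio / relative_power    # perf-per-watt vs Nvidia

print(f"Throughput advantage: {perf_ratio:.2f}x")        # about 1.67x
print(f"Perf-per-watt vs Nvidia: {efficiency_ratio:.2f}x")  # about 0.42x
```

On these numbers, Huawei's design trades energy efficiency for aggregate throughput: roughly 1.67x the compute at around 0.42x the performance per watt, consistent with the scale-out, many-chip approach described above.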

The US recently reversed its ban on Nvidia’s H20 chip exports to China, seeking to curb Huawei’s momentum. However, ongoing reports of smuggled Nvidia GPUs raise doubts over the effectiveness of these restrictions.

The US push for AI dominance through openness

In a bold move to maintain its edge in the global AI race—especially against China—the United States has unveiled a sweeping AI Action Plan with 103 recommendations. At its core lies an intriguing paradox: the push for open-source AI, typically associated with collaboration and transparency, is now being positioned as a strategic weapon.

As Jovan Kurbalija points out, this plan marks a turning point where open-weight models are framed not just as tools of innovation, but as instruments of geopolitical influence, with the US aiming to seed the global AI ecosystem with American-built systems rooted in ‘national values.’

The plan champions Silicon Valley by curbing regulations, limiting federal scrutiny, and shielding tech giants from legal liability—potentially reinforcing monopolies. It also underlines a national security-first mentality, urging aggressive safeguards against foreign misuse of AI, cyber threats, and misinformation. Notably, it proposes DARPA-led initiatives to unravel the inner workings of large language models, acknowledging that even their creators often can’t fully explain how these systems function.

Internationally, the plan takes a competitive, rather than cooperative, stance. Allies are expected to align with US export controls and values, while multilateral forums like the UN and OECD are dismissed as bureaucratic and misaligned. That bifurcation risks alienating global partners—particularly the EU, which favours heavy AI regulation—while increasing pressure on countries like India and Japan to choose sides in the US–China tech rivalry.

Despite its combative framing, the strategy also nods to inclusion and workforce development, calling for tax-free employer-sponsored AI training, investment in apprenticeships, and growing military academic hubs. Still, as Kurbalija warns, the promise of AI openness may clash with the plan’s underlying nationalistic thrust—raising questions about whether it truly aims to democratise AI, or merely dominate it.

LegalOn raises $50 million to expand AI legal tools

LegalOn Technologies has secured $50 million in Series E funding to expand its AI-powered contract review platform.

The Japanese startup, backed by SoftBank and Goldman Sachs, aims to streamline legal work by reducing the time spent reviewing and managing documents.

Its core product, Review, identifies contract risks and suggests edits using expert-built legal playbooks. The company says it improves accuracy while cutting review time by up to 85 percent across 7,000 client organisations in Japan, the US and the UK.

LegalOn plans to develop AI agents to handle tasks before and after the review process, including contract tracking and workflow integration. A new tool, Matter Management, enables teams to efficiently assign contract responsibilities, collaborate, and link documents.

While legal AI adoption grows, CEO Daniel Lewis insists the technology will support rather than replace lawyers. He believes professionals who embrace AI will gain the most leverage, as human oversight remains vital to legal judgement.

Trump pushes for ‘anti-woke’ AI in US government contracts

Tech firms aiming to sell AI systems to the US government will now need to prove their chatbots are free of ideological bias, following a new executive order signed by Donald Trump.

The measure, part of a broader plan to counter China’s influence in AI development, marks the first official attempt by the US to shape the political behaviour of AI systems used in government services.

It places a new emphasis on ensuring AI reflects so-called ‘American values’ and avoids content tied to diversity, equity and inclusion (DEI) frameworks in publicly funded models.

The order, titled ‘Preventing Woke AI in the Federal Government’, does not outright ban AI that promotes DEI ideas, but requires companies to disclose if partisan perspectives are embedded.

Major providers like Google, Microsoft and Meta have yet to comment. Meanwhile, firms face pressure to comply or risk losing valuable public sector contracts and funding.

Critics argue the move forces tech companies into a political culture war and could undermine years of work addressing AI bias, harming fair and inclusive model design.

Civil rights groups warn the directive may sideline tools meant to support vulnerable groups, favouring models that ignore systemic issues like discrimination and inequality.

Policy analysts have compared the approach to China’s use of state power to shape AI behaviour, though Trump’s order stops short of requiring pre-approval or censorship.

Supporters, including influential Trump-aligned venture capitalists, say the order restores transparency. Marc Andreessen and David Sacks were reportedly involved in shaping the language.

The move follows backlash to an AI image tool released by Google, which depicted racially diverse figures when asked to generate the US Founding Fathers, triggering debate.

Developers claimed the outcome resulted from attempts to counter bias in training data, though critics labelled it ideological overreach embedded by design teams.

Under the directive, companies must disclose model guidelines and explain how neutrality is preserved during training. Intentional encoding of ideology is discouraged.

Former FTC technologist Neil Chilson described the order as light-touch. It does not ban political outputs; it only calls for transparency about how outputs are generated.

OpenAI said its objectivity measures align with the order, while Microsoft declined to comment. xAI praised Trump’s AI policy but did not mention specifics.

The firm, founded by Elon Musk, recently won a $200 million defence contract shortly after its Grok chatbot drew criticism for generating antisemitic and pro-Hitler messages.

Trump’s broader AI orders seek to strengthen American leadership and reduce regulatory burdens to keep pace with China in the development of emerging technologies.

Some experts caution that ideological mandates could set a precedent for future governments to impose their political views on critical AI infrastructure.
