As Meta AI grows smarter on its own, critics warn of regulatory gaps

While OpenAI’s ChatGPT and Google’s Gemini dominate headlines, Meta’s AI is making quieter, but arguably more unsettling, progress. According to CEO Mark Zuckerberg, Meta’s AI is advancing rapidly and, crucially, learning to improve without external input.

In a blog post titled ‘Personal Superintelligence’, Zuckerberg claimed that Meta AI is becoming increasingly powerful through self-directed development. While he described current gains as modest, he emphasised that the trend is both real and significant.

Zuckerberg framed this as part of a broader mission to build AI that acts as a ‘personal superintelligence’, a tool that empowers individuals and becomes widely accessible. However, critics argue this narrative masks a deeper concern: AI systems that can evolve autonomously, outside human guidance or scrutiny.

The concept of self-improving AI is not new. Researchers have previously built systems capable of learning from other models or user interactions. What’s different now is the speed, scale and opacity of these developments, particularly within big tech companies operating with minimal public oversight.

The progress comes amid weak regulation. While governments, including the US under the Biden administration, have issued AI action plans, experts say these measures lack the enforcement power to keep pace. Meanwhile, AI is rapidly spreading across everyday services, from healthcare and education to biometric verification.

Recent examples include Google’s behavioural age-estimation tools for teens, illustrating how AI is already making high-stakes decisions. As AI systems become more capable, questions arise: How much data will they access? Who controls them? And can the public meaningfully influence their design?

Zuckerberg struck an optimistic tone, framing Meta’s AI as democratic and empowering. However, that may obscure the risks of AI outpacing oversight, as some tech leaders warn of existential threats while others focus on commercial gains.

The lack of transparency worsens the problem. If Meta’s AI is already showing signs of self-improvement, are similar developments happening in other frontier models, such as GPT or Gemini? Without independent oversight, the public has no clear way to know—and even less ability to intervene.

Until enforceable global regulations are in place, society is left to trust that private firms will self-regulate, even as they compete in a high-stakes race for dominance. That’s a risky gamble when the technology itself is changing faster than we can respond.

As Meta AI evolves with little fanfare, the silence may be more ominous than reassuring. AI’s future may arrive before we are prepared to manage its consequences, and by then, it might be too late to shape it on our terms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon reports $18.2B profit boost as AI strategy takes off

Amazon has reported a 35% increase in quarterly profit, driven by rapid growth in its AI-powered services and cloud computing arm, Amazon Web Services (AWS).

The tech and e-commerce giant posted net income of $18.2 billion for Q2 2025, up from $13.5 billion a year earlier, while net sales rose 13% to $167.7 billion and exceeded analyst expectations.
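The headline percentages follow directly from those figures; a quick arithmetic check, using only the numbers reported above:

```python
# Quick check of the year-on-year figures quoted above (billions of USD).
net_income_q2_2025 = 18.2
net_income_q2_2024 = 13.5
net_sales_q2_2025 = 167.7

profit_growth = (net_income_q2_2025 - net_income_q2_2024) / net_income_q2_2024
print(f"Profit growth: {profit_growth:.1%}")  # ~34.8%, rounded to the reported 35%

# Last year's net sales implied by the stated 13% growth rate.
implied_sales_q2_2024 = net_sales_q2_2025 / 1.13
print(f"Implied Q2 2024 net sales: ${implied_sales_q2_2024:.1f}B")  # ~$148.4B
```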

CEO Andy Jassy attributed the strong performance to the company’s growing reliance on AI. ‘Our conviction that AI will change every customer experience is starting to play out,’ Jassy said, referencing Amazon’s AI-powered Alexa+ upgrades and new generative AI shopping tools.

AWS remained the company’s growth engine, with revenue climbing 17.5% to $30.9 billion and operating profit rising to $10.2 billion. The surge reflects the increasing demand for cloud infrastructure to support AI deployment across industries.

Despite the solid earnings, Amazon’s share price dipped more than 3% in after-hours trading. Analysts pointed to concerns over the company’s heavy capital spending, particularly its aggressive $100 billion AI investment strategy.

Free cash flow over the past year fell to $18.2 billion, down from $53 billion a year earlier. In Q2 alone, Amazon spent $32.2 billion on infrastructure, nearly double the previous year’s figure, much of it aimed at expanding its data centre and logistics capabilities to support AI workloads.

For the current quarter, Amazon projected revenue of $174.0 to $179.5 billion and operating income between $15.5 and $20.5 billion, slightly below investor hopes but still reflecting double-digit year-on-year growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gulf states reframe AI as the ‘new oil’ in post‑petroleum push

Gulf states are actively redefining national strategy by embracing AI as a cornerstone of post-oil modernisation. Saudi Arabia, through its AI platform Humain, a subsidiary of the Public Investment Fund, has committed state resources to build core infrastructure and develop Arabic multimodal models. Concurrently, the UAE is funding its $100 billion MGX initiative and supporting projects like G42 and the Falcon open-source model from Abu Dhabi’s Technology Innovation Institute.

Economic rationale underpins this ambition. Observers suggest that broad AI adoption across GCC sectors, including energy, healthcare, aviation, and government services, could add as much as $150 billion to regional GDP. Yet, concerns persist around workforce limitations, regulatory maturation, and geopolitical complications tied to supply chain dependencies.

Interest in AI has also reached geopolitical levels. Gulf leaders have struck partnerships with US firms to secure advanced AI chips and infrastructure, as seen during high-profile agreements with Nvidia, AMD, and Amazon. Critics caution that hosting major data centres in geopolitically volatile zones introduces physical and strategic risks, especially in contexts of rising regional tension.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act oversight and fines begin this August

A new phase of the EU AI Act takes effect on 2 August, requiring member states to appoint oversight authorities and enforce penalties. While the legislation has been in force for a year, this marks the beginning of real scrutiny for AI providers across Europe.

Under the new provisions, countries must notify the European Commission of which market surveillance authorities will monitor compliance. But many are expected to miss the deadline. Experts warn that without well-resourced and competent regulators, the risks to rights and safety could grow.

The complexity is significant. Member states must align enforcement with other regulations, such as the GDPR and Digital Services Act, raising concerns regarding legal fragmentation and inconsistent application. Some fear a repeat of the patchy enforcement seen under data protection laws.

Companies that violate the EU AI Act could face fines of up to €35 million or 7% of global annual turnover, whichever is higher. Smaller firms may face reduced penalties, but enforcement will vary by country.
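As a rough illustration of how that headline ceiling works for the most serious violations (a minimal sketch based only on the thresholds above; it ignores the reduced regimes for smaller firms and lower-tier breaches):

```python
def max_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Headline ceiling for the most serious EU AI Act violations:
    €35 million or 7% of global annual turnover, whichever is higher.
    Illustrative only; lower tiers and SME reductions are not modelled."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A company with €10 billion in turnover faces a ceiling of €700 million;
# one with €50 million in turnover is capped at the €35 million floor.
print(max_ai_act_fine(10_000_000_000))  # 700000000.0
print(max_ai_act_fine(50_000_000))      # 35000000.0
```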

Rules regarding general-purpose AI models such as ChatGPT, Gemini, and Grok also take effect. A voluntary Code of Practice introduced in July aims to guide compliance, but only some firms, such as Google and OpenAI, have agreed to sign. Meta has refused, arguing the rules stifle innovation.

Existing AI tools have until 2027 to comply fully, but any launched after 2 August must meet the new requirements immediately. With implementation now underway, the AI Act is shifting from legislation to enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China says the US used a Microsoft server vulnerability to launch cyberattacks

China has accused the US of exploiting long-known vulnerabilities in Microsoft Exchange servers to launch cyberattacks on its defence sector, escalating tensions in the ongoing digital arms race between the two superpowers.

In a statement released on Friday, the Cyber Security Association of China claimed that US hackers compromised servers belonging to a major Chinese military contractor, allegedly maintaining access for nearly a year.

The group did not disclose the name of the affected company.

The accusation is a sharp counterpunch to long-standing US claims that Beijing has orchestrated repeated cyber intrusions using the same Microsoft software. In 2021, Microsoft attributed a wide-scale hack affecting tens of thousands of Exchange servers to Chinese threat actors.

Two years later, another incident compromised the email accounts of senior US officials, prompting a federal review that criticised Microsoft for what it called a ‘cascade of security failures.’

Microsoft, based in Redmond, Washington, has recently disclosed additional intrusions by China-backed groups, including attacks exploiting flaws in its SharePoint platform.

Jon Clay of Trend Micro commented on the tit-for-tat cyber blame game: ‘Every nation carries out offensive cybersecurity operations. Given the latest SharePoint disclosure, this may be China’s way of retaliating publicly.’

Cybersecurity researchers note that Beijing has recently increased its use of public attribution as a geopolitical tactic. Ben Read of Wiz.io pointed out that China now uses cyber accusations to pressure Taiwan and shape global narratives around cybersecurity.

In April, China accused US National Security Agency (NSA) employees of hacking into the Asian Winter Games in Harbin, targeting personal data of athletes and organisers.

While the US frequently names alleged Chinese hackers and pursues legal action against them, China has historically avoided levelling public allegations against American intelligence agencies, until now.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China’s Silk Typhoon hackers filed patents for advanced spyware tools

A Chinese state-backed hacking group known as Silk Typhoon has filed more than ten patents for intrusive cyberespionage tools, shedding light on the vast scope and sophistication of its operations.

These patents, registered by firms linked to China’s Ministry of State Security, detail covert data collection software far exceeding the group’s previously known attack methods.

The revelations surfaced following a July 2025 US Department of Justice indictment against two alleged members of Silk Typhoon, Xu Zewei and Zhang Yu.

Both are associated with companies tied to the Shanghai State Security Bureau and connected to the Hafnium group, which Microsoft rebranded as Silk Typhoon in 2022.

Instead of targeting only Windows environments, the patent filings reveal a sweeping set of surveillance tools designed for Apple devices, routers, mobile phones, and even smart home appliances.

Submissions include software for bypassing FileVault encryption, extracting remote cellphone data, decrypting hard drives, and analysing smart devices. Analysts from SentinelLabs suggest these filings offer an unprecedented glimpse into the architecture of China’s cyberwarfare ecosystem.

Silk Typhoon gained global attention in 2021 with its Microsoft Exchange ProxyLogon campaign, which prompted a rare coordinated condemnation by the US, UK, and EU. The newly revealed capabilities show the group’s operations are far more advanced and diversified than previously believed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and Nscale to build an AI super hub in Norway

OpenAI has revealed its first European data centre project in partnership with British startup Nscale, selecting Norway as the location for what is being called ‘Stargate Norway’.

The initiative mirrors the company’s ambitious $500 billion US ‘Stargate’ infrastructure plan and reflects Europe’s growing demand for large-scale AI computing capacity.

Nscale will lead the development of a $1 billion AI gigafactory in Norway, with engineering firm Aker matching the investment. These advanced data centres are designed to meet the heavy processing requirements of cutting-edge AI models.

OpenAI expects the facility to deliver 230 MW of computing power by the end of 2026, making it a significant strategic foothold for the company on the continent.

Sam Altman, CEO of OpenAI, stated that Europe needs significantly more computing to unlock AI’s full potential for researchers, startups, and developers. He said Stargate Norway will serve as a cornerstone for driving innovation and economic growth in the region.

Nscale confirmed that Norway’s AI ecosystem will receive priority access to the facility, while remaining capacity will be offered to users across the UK, Nordics and Northern Europe.

The data centre will support 100,000 of NVIDIA’s most advanced GPUs, with long-term plans to scale as demand grows.
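Taken together, the 230 MW target and the 100,000-GPU figure imply a rough power budget per accelerator; a back-of-envelope sketch, assuming the stated capacity covers the whole facility, including cooling and networking overhead:

```python
# Rough back-of-envelope from the figures quoted above; not an official specification.
facility_power_mw = 230      # capacity expected by the end of 2026
gpu_count = 100_000          # NVIDIA GPUs the site is designed to support

kw_per_gpu = facility_power_mw * 1_000 / gpu_count
print(f"~{kw_per_gpu:.1f} kW of facility power per GPU")  # ~2.3 kW, all overheads included
```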

The move follows broader European efforts to strengthen AI infrastructure, with the UK and France pushing for major regulatory and funding reforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scattered Spider cyberattacks set to intensify, warn FBI and CISA

The cybercriminal group known as Scattered Spider is expected to intensify its attacks in the coming weeks, according to a joint warning issued by the FBI, CISA, and cybersecurity agencies in Canada, the UK and Australia.

These warnings highlight the group’s increasingly sophisticated methods, including impersonating employees to bypass IT support and hijack multi-factor authentication processes.

Instead of relying on old techniques, the hackers now deploy stealthy tools like RattyRAT and DragonForce ransomware, particularly targeting VMware ESXi servers.

Their attacks combine social engineering with SIM swapping and phishing, enabling them to exfiltrate sensitive data before locking systems and demanding payment — a tactic known as double extortion.

Scattered Spider, also referred to as Octo Tempest, is reportedly creating fake online identities and infiltrating internal communication channels like Slack and Microsoft Teams. In some cases, they have even joined incident response calls to gain insight into how companies are reacting.

Security agencies urge organisations to adopt phishing-resistant multi-factor authentication, audit remote access software, monitor unusual logins and behaviours, and ensure offline encrypted backups are maintained.
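As a purely illustrative take on the 'monitor unusual logins' recommendation, the sketch below flags sign-ins from previously unseen country and device combinations; the field names are hypothetical and it is not a drop-in detection rule:

```python
# Minimal illustration of flagging logins from unseen country/device pairs.
# The attributes ("user", "country", "device_id") are hypothetical log fields.
from collections import defaultdict

seen: dict[str, set[tuple[str, str]]] = defaultdict(set)

def is_unusual_login(user: str, country: str, device_id: str) -> bool:
    key = (country, device_id)
    unusual = key not in seen[user]
    seen[user].add(key)   # remember the pair for future checks
    return unusual

# The first login sets a baseline; a later login from a new location or device is flagged.
print(is_unusual_login("alice", "US", "laptop-1"))  # True (no baseline yet)
print(is_unusual_login("alice", "US", "laptop-1"))  # False
print(is_unusual_login("alice", "RO", "vm-77"))     # True, worth investigating
```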

More incidents are expected, as the group continues refining its strategies instead of slowing down.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
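Google has not disclosed how the model works beyond naming these signals, but behavioural age estimation is typically framed as a supervised classification problem. The sketch below is purely illustrative, with made-up feature names and weights; it is not Google's implementation:

```python
# Purely illustrative sketch of behavioural age classification; NOT Google's system.
# Features and weights are hypothetical stand-ins for the kinds of signals described above.
from dataclasses import dataclass
import math

@dataclass
class BehaviouralSignals:
    kids_content_share: float   # fraction of watch time on content popular with minors
    school_hours_share: float   # fraction of activity during school hours
    account_age_years: float

def likely_under_18(s: BehaviouralSignals, threshold: float = 0.5) -> bool:
    # Toy logistic model; the coefficients are invented for illustration only.
    z = 2.5 * s.kids_content_share + 1.5 * s.school_hours_share - 0.3 * s.account_age_years
    probability = 1 / (1 + math.exp(-z))
    return probability >= threshold

print(likely_under_18(BehaviouralSignals(0.7, 0.4, 1.0)))    # True in this toy example
print(likely_under_18(BehaviouralSignals(0.05, 0.1, 12.0)))  # False
```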

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft’s Cloud and AI strategy lifts revenue beyond expectations

Microsoft has reported better-than-expected results for the fourth quarter of its 2025 fiscal year, attributing much of its success to the continued expansion of its cloud services and the integration of AI.

‘Cloud and AI are the driving force of business transformation across every industry and sector,’ said Satya Nadella, Microsoft’s chairman and chief executive, in a statement on Wednesday.

For the first time, Nadella disclosed annual revenue figures for Microsoft Azure, the company’s cloud computing platform. Azure generated more than $75 billion in the fiscal year ending 30 June, representing a 34 percent increase compared to the previous year.

Nadella noted that this growth was ‘driven by growth across all workloads’, including those powered by AI. On average, Azure contributed approximately $19 billion in revenue per quarter.

While this trails Amazon Web Services (AWS), which posted net sales of $29 billion in the first quarter alone, Azure remains a strong second in the cloud market. Google Cloud, by comparison, has an annual run rate of $50 billion, according to parent company Alphabet’s Q2 2025 earnings report.
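The comparison can be made concrete by annualising the figures quoted, with the caveat that the quarters reported by the three companies do not line up exactly:

```python
# Rough run-rate comparison from the figures quoted above (billions of USD).
azure_annual = 75              # fiscal year ending 30 June
azure_quarterly_avg = azure_annual / 4
aws_single_quarter = 29        # AWS net sales in the first quarter alone
aws_annualised = aws_single_quarter * 4
google_cloud_run_rate = 50     # annual run rate per Alphabet's Q2 2025 report

print(f"Azure: ~${azure_quarterly_avg:.2f}B per quarter")  # ~$18.75B, quoted as roughly $19B
print(f"Annualised: AWS ~${aws_annualised}B, Azure ${azure_annual}B, Google Cloud ${google_cloud_run_rate}B")
```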

‘We continue to lead the AI infrastructure wave and took share each quarter this year,’ Nadella told investors during the company’s earnings call.

However, he did not provide specific figures showing how AI factored into the results, a point of interest for financial analysts given Microsoft’s projected $80 billion in capital expenditures this fiscal year to support AI-related data centre expansion.

During the call, Bernstein Research senior analyst Mark Moerdler asked how businesses might ultimately monetise AI as a software service.

Nadella responded with a broad comparison to the cloud business, suggesting the two were now deeply connected. It was left to CFO Amy Hood to offer a more structured explanation.

‘There’s a per-user logic,’ Hood explained. ‘There are tiers of per-user. Sometimes those tiers relate to consumption. Sometimes there are pure consumption models. I think you’ll continue to see a blending of these, especially as the AI model capability grows.’

In essence, Microsoft intends to monetise AI in a manner similar to its traditional software offerings—charging either per user, by usage tier, or based on consumption.
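A minimal sketch of what such a blended model could look like in practice; every fee, allowance, and rate below is hypothetical, not a published Microsoft price list:

```python
# Hypothetical illustration of a blended per-user plus consumption pricing model.
# All parameters are invented for the example.
def monthly_bill(users: int, usage_units: float,
                 per_user_fee: float = 30.0,
                 included_units_per_user: float = 100.0,
                 overage_rate: float = 0.02) -> float:
    included = users * included_units_per_user
    overage = max(0.0, usage_units - included)   # consumption beyond the per-seat allowance
    return users * per_user_fee + overage * overage_rate

# 100 seats with 25,000 usage units: 100 * 30 + (25,000 - 10,000) * 0.02 = 3,300
print(monthly_bill(users=100, usage_units=25_000))  # 3300.0
```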

With AI now embedded across Microsoft’s portfolio of products and services, the company appears positioned to attribute an ever larger share of its revenue to AI-powered innovation.

The numbers suggest there is plenty of revenue to go around. Microsoft posted $76.4 billion in revenue for the quarter, up 18 percent compared to the same period last year.

Operating income stood at $34.3 billion (up 23 percent), with net income reaching $27.2 billion (up 24 percent). Earnings per share climbed 24 percent to $3.65.

For the full fiscal year, Microsoft reported $281.7 billion in revenue—an increase of 15 percent. Operating income rose to $128.5 billion (up 17 percent), while net income hit $101.8 billion (up 16 percent). Annual earnings per share reached $13.64, also up by 16 percent.

Azure forms part of Microsoft’s Intelligent Cloud division, which generated $29.9 billion in quarterly revenue, a 26 percent year-on-year increase.

The Productivity and Business Processes group, which includes Microsoft 365, LinkedIn, and Dynamics, earned $33.1 billion, up 16 percent. Meanwhile, the More Personal Computing segment, covering Windows, Xbox, and advertising, grew nine percent to $13.5 billion.

Despite some concerns among analysts regarding Microsoft’s significant capital spending and the ambiguous short-term returns on AI investments, investor confidence remains strong.

Microsoft’s share price jumped roughly eight percent after the earnings announcement, pushing its market capitalisation above $4 trillion in after-hours trading. It became only the second company, after Nvidia, to cross that symbolic threshold.

Market observers noted that while questions remain over the precise monetisation of AI, Microsoft’s aggressive positioning in cloud infrastructure and AI services has clearly resonated with shareholders.

With AI now woven into the company’s strategic fabric, Microsoft appears determined to maintain its lead in the next phase of enterprise computing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!