Meta boosts AGI efforts with new team

Mark Zuckerberg, Meta Platforms CEO, is reportedly building a new team dedicated to achieving artificial general intelligence (AGI), aiming for machines that can match or exceed human intellect.

The initiative is linked to an investment exceeding $10 billion in Scale AI, whose founder, Alexandr Wang, is expected to join the AGI group. Meta has not yet commented on these reports.

Zuckerberg’s personal involvement in recruiting around 50 experts, including a new head of AI research, is partly driven by dissatisfaction with Meta’s recent large language model, Llama 4. Last month, Meta even delayed the release of its flagship ‘Behemoth’ AI model due to internal concerns about its performance.

The move signals an intensifying race in the AI sector, as rivals like OpenAI are also making strategic adjustments to attract further investment in their pursuit of AGI. This highlights a clear push by major tech players towards developing more advanced and capable AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China’s AI tools disabled for gaokao exam

As millions of high school students across China began the rigorous ‘gaokao’ college entrance exam, the country’s leading tech companies took unprecedented action by disabling AI features on their popular platforms.

Apps from Tencent, ByteDance, and Moonshot AI temporarily blocked functionalities like photo recognition and real-time question answering. This move aimed to prevent students from using AI chatbots to cheat during the critical national examination, which largely dictates university admissions in China.

This year, approximately 13.4 million students are participating in the ‘gaokao,’ a multi-day test that serves as a pivotal determinant for social mobility, particularly for those from rural or lower-income backgrounds.

The immense pressure associated with the exam has historically fuelled intense test preparation. Screenshots circulating on the Chinese social media app Rednote confirmed that AI chatbots like Tencent’s YuanBao, ByteDance’s Doubao, and Moonshot AI’s Kimi displayed messages announcing the temporary suspension of exam-relevant features to ensure fairness.

China’s ‘gaokao’ exam highlights a balanced approach to AI: promoting AI education from a young age, with compulsory instruction in Beijing schools this autumn, while firmly asserting that it is for learning, not cheating. Regulators draw a clear line, reinforcing that AI should aid development but never compromise academic integrity.

This coordinated action by major tech firms reinforces the message that AI has no place in the examination hall, despite China’s broader push to cultivate an AI-literate generation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Growing push in Europe to regulate children’s social media use

Several European countries, led by Denmark, France, and Greece, are intensifying efforts to shield children from the potentially harmful effects of social media. With Denmark taking over the EU Council presidency from July, its Digital Minister, Caroline Stage Olsen, has made clear that her country will push for a ban on social media for children under 15.

Olsen criticises current platforms for failing to remove illegal content and relying on addictive features that encourage prolonged use. She also warned that platforms prioritise profit and data harvesting over the well-being of young users.

That initiative builds on growing concern across the EU about the mental and physical toll social media may take on children, including the spread of dangerous content, disinformation, cyberbullying, and unrealistic body image standards. France, for instance, has already passed legislation requiring parental consent for users under 15 and is pressing platforms to verify users’ ages more rigorously.

While the European Commission has issued draft guidelines to improve online safety for minors, such as making children’s accounts private by default, some countries are calling for tougher enforcement under the EU’s Digital Services Act. Despite these moves, there is currently no consensus across the EU for an outright ban.

Cultural differences and practical hurdles, like implementing consistent age verification, remain significant challenges. Still, proposals are underway to introduce a unified age of digital adulthood and a continent-wide age verification application, possibly even embedded into devices, to limit access by minors.

Olsen and her allies remain adamant, planning to dedicate the October summit of the EU digital ministers entirely to the issue of child online safety. They are also looking to future legislation, like the Digital Fairness Act, to enforce stricter consumer protection standards that explicitly account for minors. Meanwhile, age verification and parental controls are seen as crucial first steps toward limiting children’s exposure to addictive and damaging online environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Workers struggle as ChatGPT goes down

The temporary outage of ChatGPT this morning left thousands of users struggling with their daily tasks, highlighting a growing reliance on AI.

Social media was flooded with humorous yet telling posts from users expressing their inability to perform even basic functions without AI. This incident has reignited concerns about society’s increasing dependence on closed-source AI tools for work and everyday life.

OpenAI, the developer of ChatGPT, is currently investigating the technical issues that led to ‘elevated error rates and latency.’ The widespread disruption underscores a broader debate about AI’s impact on critical thinking and productivity.

While some research suggests AI chatbots can enhance efficiency, commentators such as Paul Armstrong argue that frequent reliance on generative tools may erode critical thinking skills and understanding.

The discussion around AI’s role in the workplace was a key theme at the recent SXSW London event. Despite concerns about job displacement, exemplified by redundancies at Canva, adoption continues to grow: the Lloyd’s Market Association reports that 40% of London market companies now use AI.

Industry leaders maintain that AI aims to rethink workflows and empower human creativity, with a ‘human layer’ remaining essential for refining and adding nuanced value.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

M&S resumes online orders after cyberattack

Marks & Spencer has resumed online clothing orders following a 46-day pause triggered by a cyberattack. The retailer restarted standard home delivery across England, Scotland and Wales, focusing initially on best-selling and new items instead of the full range.

A spokesperson stated that additional products will be added daily, enabling customers to gradually access a wider selection. Services such as click and collect, next-day delivery, and international orders are expected to be reintroduced in the coming weeks, while deliveries to Northern Ireland will resume soon.

The disruption began on 25 April when M&S halted clothing and home orders after issues with contactless payments and app services during the Easter weekend. The company revealed that the breach was caused by hackers who deceived staff at a third-party contractor, bypassing security defences.

M&S had warned that the incident could reduce its 2025/26 operating profit by around £300 million, though it aims to limit losses through insurance and internal cost measures. Shares rose 3 per cent as the online service came back online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump Executive Order revises US cyber policy and sanctions scope

US President Donald J. Trump signed a new Executive Order (EO) aimed at amending existing federal cybersecurity policies. The EO modifies selected provisions of previous executive orders signed by former Presidents Barack Obama and Joe Biden, introducing updates to sanctions policy, digital identity initiatives, and secure technology practices.

One of the main changes involves narrowing the scope of sanctions related to malicious cyber activity. The new EO limits the applicability of such sanctions to foreign individuals or entities involved in cyberattacks against US critical infrastructure. It also states that sanctions do not apply to election-related activities, though this clarification is included in a White House fact sheet rather than the EO text itself.

The order revokes provisions from the Biden-era EO that proposed expanding the use of federal digital identity documents, including mobile driver’s licenses. According to the fact sheet, this revocation is based on concerns regarding implementation and potential for misuse. Some analysts have expressed concerns about the implications of this reversal on broader digital identity strategies.

In addition to these policy revisions, the EO outlines technical measures to strengthen cybersecurity capabilities across federal agencies. These include:

  • Developing new encryption standards to prepare for advances in quantum computing, with implementation targets set for 2030.
  • Directing the National Security Agency (NSA) and Office of Management and Budget (OMB) to issue updated federal encryption requirements.
  • Refocusing artificial intelligence (AI) and cybersecurity initiatives on identifying and mitigating vulnerabilities.
  • Assigning the National Institute of Standards and Technology (NIST) responsibility for updating and guiding secure software development practices. This includes the establishment of an industry consortium and a preliminary update to its secure software development framework.

The EO also includes provisions for improving vulnerability tracking and mitigation in AI systems, with coordination required among the Department of Defense, the Department of Homeland Security, and the Office of the Director of National Intelligence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity alarm after 184 million credentials exposed

A vast unprotected database containing over 184 million credentials from major platforms and sectors has highlighted severe weaknesses in data security worldwide.

The leaked credentials, harvested by infostealer malware and stored in plain text, pose significant risks to consumers and businesses, underscoring an urgent need for stronger cybersecurity and better data governance.

Cybersecurity researcher Jeremiah Fowler discovered the 47 GB database exposing emails, passwords, and authorisation URLs from tech giants like Google, Microsoft, Apple, Facebook, and Snapchat, as well as banking, healthcare, and government accounts.

The data was left accessible without any encryption or authentication, making it vulnerable to anyone with the link.

The credentials were reportedly collected by infostealer malware such as Lumma Stealer, which silently steals sensitive information from infected devices. The stolen data fuels a thriving underground economy involving identity theft, fraud, and ransomware.

The breach’s scope extends beyond tech, affecting critical infrastructure like healthcare and government services, raising concerns over personal privacy and national security. With recurring data breaches becoming the norm, industries must urgently reinforce security measures.

Chief Data Officers and IT risk leaders face mounting pressure as regulatory scrutiny intensifies. The leak highlights the need for proactive data stewardship through encryption, access controls, and real-time threat detection.

Many organisations struggle with legacy systems, decentralised data, and cloud adoption, complicating governance efforts.

Enterprise leaders must treat data as a strategic asset and liability, embedding cybersecurity into business processes and supply chains. Beyond technology, cultivating a culture of accountability and vigilance is essential to prevent costly breaches and protect brand trust.

The massive leak signals a new era in data governance where transparency and relentless improvement are critical. The message is clear: there is no room for complacency in safeguarding the digital world’s most valuable assets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI cracks down on misuse of ChatGPT by foreign threat actors

OpenAI has shut down a network of ChatGPT accounts allegedly linked to nation-state actors from Russia, China, Iran, North Korea, and others after uncovering their use in cyber and influence operations.

The banned accounts were used to assist in developing malware, automate social media content, and conduct reconnaissance on sensitive technologies.

According to OpenAI’s latest threat report, a Russian-speaking group used the chatbot to iteratively improve malware code written in Go. Each account was used only once to refine the code before being abandoned, a tactic highlighting the group’s emphasis on operational security.

The malicious software was later disguised as a legitimate gaming tool and distributed online, infecting victims’ devices to exfiltrate sensitive data and establish long-term access.

Chinese-linked groups, including APT5 and APT15, were found using OpenAI’s models for a range of technical tasks—from researching satellite communications to developing scripts for Android app automation and penetration testing.

Other accounts were linked to influence campaigns that generated propaganda or polarising content in multiple languages, including efforts to pose as journalists and simulate public discourse around elections and geopolitical events.

The banned activities also included scams, social engineering, and politically motivated disinformation. OpenAI stressed that although some misuse was detected, none involved sophisticated or large-scale attacks enabled solely by its tools.

The company said it is continuing to improve detection and mitigation efforts to prevent abuse of its models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Milei cleared of ethics breach over LIBRA token post

Argentina’s Anti-Corruption Office has concluded that President Javier Milei did not violate ethics laws when he published a now-deleted post promoting the LIBRA memecoin. The agency stated the February post was made in a personal capacity and did not constitute an official act.

The ruling clarified that Milei’s X account, where the post appeared, is personally managed and predates his political role. It added that the account identifies him as an economist rather than a public official, meaning the post is protected as a private expression under the constitution.

The investigation had been launched after LIBRA’s price soared and then crashed following Milei’s endorsement, which linked to the token’s contract and a promotional site. Investors reportedly lost millions, and allegations of insider trading surfaced.

Although the Anti-Corruption Office cleared him, a separate federal court investigation remains ongoing, with Milei and his sister’s assets temporarily frozen.

Despite the resolution, the scandal damaged public trust. Milei has maintained he acted in good faith, claiming the aim was to raise awareness of a private initiative to support small Argentine businesses through crypto.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FBI warns BADBOX 2.0 malware is infecting millions

The FBI has issued a warning about the resurgence of BADBOX 2.0, a dangerous form of malware infecting millions of consumer electronics globally.

Often preloaded onto low-cost smart TVs, streaming boxes, and IoT devices, primarily from China, the malware grants cyber criminals backdoor access, enabling theft, surveillance, and fraud while remaining essentially undetectable.

BADBOX 2.0 forms part of a massive botnet and can also infect devices through malicious apps and drive-by downloads, especially from unofficial Android stores.

Once activated, the malware enables a range of attacks, including click fraud, fake account creation, DDoS attacks, and the theft of one-time passwords and personal data.

Removing the malware is extremely difficult, as it typically requires flashing new firmware, an option unavailable for most of the affected devices.

Users are urged to check their hardware against a published list of compromised models and to avoid sideloading apps or purchasing unverified connected tech.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!