Manus acquisition signals Meta’s continued AI expansion

Meta Platforms has acquired Manus, a Singapore-based developer of general-purpose AI agents, as part of its continued push to expand artificial intelligence capabilities. The deal underscores Meta’s strategy of acquiring specialised AI firms to accelerate product development.

Manus, founded in China before relocating to Singapore, develops AI agents capable of performing tasks such as market research, coding, and data analysis. The company said it reached more than $100 million in annualised revenue within eight months of launch and was serving millions of users worldwide.

Meta said the acquisition will help integrate advanced automation into its consumer and enterprise offerings, including the Meta AI assistant. Manus will continue operating its subscription service, and its employees will join Meta’s teams.

Financial terms were not disclosed, but media reports valued the deal at more than $2 billion. Manus had been seeking funding at a similar valuation before being approached by Meta and had recently raised capital from international investors.

The acquisition follows a series of AI-focused deals by Meta, including investments in Scale AI and AI device start-ups. Analysts say the move highlights intensifying competition among major technology firms to secure AI talent and capabilities.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK App Store antitrust case escalates as Apple appeals

Apple has filed an appeal of a major UK antitrust ruling that could result in billions of dollars in compensation for App Store users. The move would escalate the case from the Competition Appeal Tribunal to the UK Court of Appeal.

The application follows an October ruling in which the tribunal found Apple had abused its dominant market position by charging excessive App Store fees. The decision set a £1.5 billion ($1.9 billion) compensation figure, which Apple previously signalled it would challenge.

After the tribunal declined to grant permission to appeal, Apple sought to appeal to a higher court. The company has not commented publicly on the latest filing but continues to dispute the tribunal’s assessment of competition in the app economy.

Central to the case is the tribunal’s proposed developer commission rate of 15-20 per cent, lower than Apple’s longstanding 30 per cent fee. The rate was determined using what the court described as informed estimates.

If upheld, the compensation would be distributed among UK App Store users who made purchases between 2015 and 2024. The case is being closely watched as a test of antitrust enforcement against major digital platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hacker allegedly claims a major WIRED data breach affecting 2.3 million

A hacker using the name Lovely has allegedly claimed to have accessed subscriber data belonging to WIRED and to have leaked details relating to around 2.3 million users.

The same individual also states that a wider Condé Nast account system covering more than 40 million users could be exposed in future leaks instead of ending with the current dataset.

Security researchers are reported to have matched samples of the claimed leak with other compromised data sources. The information is said to include names, email addresses, user IDs and timestamps instead of passwords or payment information.

Some researchers also believe that certain home addresses could be included, which would raise privacy concerns if verified.

The dataset is reported to be listed on Have I Been Pwned. However, no official confirmation from WIRED or Condé Nast has been issued regarding the authenticity, scale or origin of the claimed breach, and the company’s internal findings remain unknown until now.

The hacker has also accused Condé Nast of failing to respond to earlier security warnings, although these claims have not been independently verified.

Users are being urged by security professionals to treat unexpected emails with caution instead of assuming every message is genuine.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

KT faces action in South Korea after a femtocell security breach impacts users

South Korea has blamed weak femtocell security at KT Corp for a major mobile payment breach that triggered thousands of unauthorised transactions.

Officials said the mobile operator used identical authentication certificates across femtocells and allowed them to stay valid for ten years, meaning any device that accessed the network once could do so repeatedly instead of being re-verified.

More than 22,000 users had identifiers exposed, and 368 people suffered unauthorised payments worth 243 million won.

Investigators also discovered that ninety-four KT servers were infected with over one hundred types of malware. Authorities concluded the company failed in its duty to deliver secure telecommunications services because its overall management of femtocell security was inadequate.

The government has now ordered KT to submit detailed prevention plans and will check compliance in June, while also urging operators to change authentication server addresses regularly and block illegal network access.

Officials said some hacking methods resembled a separate breach at SK Telecom, although there is no evidence that the same group carried out both attacks. KT said it accepts the findings and will soon set out compensation arrangements and further security upgrades instead of disputing the conclusions.

A separate case involving LG Uplus is being referred to police after investigators said affected servers were discarded, making a full technical review impossible.

The government warned that strong information security must become a survival priority as South Korea aims to position itself among the world’s leading AI nations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI strengthened ChatGPT Atlas with new protections against prompt injection attacks

Protecting AI agents from manipulation has become a top priority for OpenAI after rolling out a major security upgrade to ChatGPT Atlas.

The browser-based agent now includes stronger safeguards against prompt injection attacks, where hidden instructions inside emails, documents or webpages attempt to redirect the agent’s behaviour instead of following the user’s commands.

Prompt injection poses a unique risk because Atlas can carry out actions that a person would normally perform inside a browser. A malicious email or webpage could attempt to trigger data exposure, unauthorised transactions or file deletion.

Criminals exploit the fact that agents process large volumes of content across an almost unlimited online surface.

OpenAI has developed an automated red-team framework that uses reinforcement learning to simulate sophisticated attackers.

When fresh attack patterns are discovered, the models behind Atlas are retrained so that resistance is built into the agent rather than added afterwards. Monitoring and safety controls are also updated using real attack traces.

These new protections are already live for all Atlas users. OpenAI advises people to limit logged-in access where possible, check confirmation prompts carefully and give agents well-scoped tasks instead of broad instructions.

The company argues that proactive defence is essential as agentic AI becomes more capable and widely deployed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots struggle with dialect fairness

Researchers are warning that AI chatbots may treat dialect speakers unfairly instead of engaging with them neutrally. Studies across English and German dialects found that large language models often attach negative stereotypes or misunderstand everyday expressions, leading to discriminatory replies.

A study in Germany tested ten language models using dialects such as Bavarian and Kölsch. The systems repeatedly described dialect speakers as uneducated or angry, and the bias became stronger when the dialect was explicitly identified.

Similar findings emerged elsewhere, including UK council services and AI shopping assistants that struggled with African American English.

Experts argue that such patterns risk amplifying social inequality as governments and businesses rely more heavily on AI. One Indian job applicant even saw a chatbot change his surname to reflect a higher caste, showing how linguistic bias can intersect with social hierarchy instead of challenging it.

Developers are now exploring customised AI models trained with local language data so systems can respond accurately without reinforcing stereotypes.

Researchers say bias can be tuned out of AI if handled responsibly, which could help protect dialect speakers rather than marginalise them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Millions watch AI-generated brainrot content on YouTube

Kapwing research reveals that AI-generated ‘slop’ and brainrot videos now dominate a significant portion of YouTube feeds, accounting for 21–33% of the first 500 Shorts seen by new users.

These rapidly produced AI videos aim to grab attention but make it harder for traditional creators to gain visibility. Analysis of top trending channels shows Spain leads in AI slop subscribers with 20.22 million, while South Korea’s channels have amassed 8.45 billion views.

India’s Bandar Apna Dost is the most-viewed AI slop channel, earning an estimated $4.25 million annually and showing the profit potential of mass AI-generated content.

The prevalence of AI slop and brainrot has sparked debates over creativity, ethics, and advertiser confidence. YouTube CEO Neal Mohan calls generative AI transformative, but rising automated videos raise concerns over quality and brand safety.

Researchers warn that repeated exposure to AI-generated content can distort perception and contribute to information overload. Some AI content earns artistic respect, but much normalises low-quality videos, making it harder for users to tell meaningful content from repetitive or misleading material.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

SoftBank invests $4 billion in global AI networks

SoftBank Group has agreed to acquire DigitalBridge for $4 billion, strengthening its global digital infrastructure capabilities. The move aims to scale data centres, connectivity, and edge networks to support next-generation AI services.

The acquisition aligns with SoftBank’s mission to develop Artificial Super Intelligence (ASI), providing the compute power and connectivity needed to deploy AI at scale.

DigitalBridge’s global portfolio of data centres, cell towers, fibre networks, and edge infrastructure will enhance SoftBank’s ability to finance and operate these assets worldwide.

DigitalBridge will continue to operate independently under CEO Marc Ganzi. The transaction, valued at a 15% premium to its closing share price, is expected to close in the second half of 2026, pending regulatory approval.

SoftBank and DigitalBridge anticipate that the combined resources will accelerate investments in AI infrastructure, supporting the rapid growth of technology companies and fostering the development of advanced AI applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New SIM cards in South Korea now require real-time facial recognition

South Korea has introduced mandatory facial recognition for anyone registering a new SIM card or eSIM, whether in-store or online.

The live scan must match the photo on an official ID so that each phone number can be tied to a verified person instead of relying on paperwork alone.

Existing users are not affected, and the requirement applies only at the moment a number is issued.

The government argues that stricter checks are needed because telecom fraud has become industrialised and relies heavily on illegally registered SIM cards.

Criminal groups have used stolen identity data to obtain large volumes of numbers that can be swapped quickly to avoid detection. Regulators now see SIM issuance as the weakest link and the point where intervention is most effective.

Telecom companies must integrate biometric checks into onboarding, while authorities insist that facial data is used only for real-time verification and not stored. Privacy advocates warn that biometric verification creates new risks because faces cannot be changed if compromised.

They also question whether such a broad rule is proportionate when mobile access is essential for daily life.

The policy places South Korea in a unique position internationally, combining mandatory biometrics with defined legal limits. Its success will be judged on whether fraud meaningfully declines instead of being displaced.

A rule that has become a test case for how far governments should extend biometric identity checks into routine services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!