High-profile AI acquisition puts Manus back in focus

Manus has returned to the spotlight after agreeing to be acquired by Meta in a deal reportedly worth more than $2 billion. The transaction is one of the most high-profile acquisitions of an Asian AI startup by a US technology company and reflects Meta’s push to expand agentic AI capabilities across its platforms.

The startup drew attention in March after unveiling an autonomous AI agent designed to execute tasks such as résumé screening and stock analysis. Developed by the AI product studio Butterfly Effect, Manus was founded in China and later moved its headquarters to Singapore.

Since launch, Manus has expanded its features to include design work, slide creation, and browser-based task completion. The company reported surpassing $100 million in annual recurring revenue and raised $75 million earlier this year at a valuation of about $500 million.

Meta said the acquisition would allow it to integrate the Singapore-based company’s technology into its wider AI strategy while keeping the product running as a standalone service. Manus said subscriptions would continue uninterrupted and that operations would remain based in Singapore.

The deal has drawn political scrutiny in the US due to Manus’s origins and past links to China. Meta said the transaction would sever remaining ties to China, as debate intensifies over investment, data security, and competition in advanced AI systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Scam texts impersonating Illinois traffic authorities spread

Illinois Secretary of State Alexi Giannoulias has warned residents to stay alert for fraudulent text messages claiming unpaid traffic violations or tolls. Officials say the messages are part of a phishing campaign targeting Illinois drivers.

The scam texts typically warn recipients that their vehicle registration or driving privileges are at risk of suspension. The messages urge immediate action via links designed to steal money or personal information.

The Secretary of State’s office said it sends text messages only to remind customers about scheduled DMV appointments. It does not communicate by text about licence status, vehicle registration issues, or enforcement actions.

Officials advised residents not to click on links or provide personal details in response to such messages. The texts are intended to create fear and pressure victims into acting quickly.

Residents who receive scam messages are encouraged to report them to the Federal Trade Commission through its online fraud reporting system.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China proposes strict AI rules to protect children

China has proposed stringent new rules for AI aimed at protecting children and preventing chatbots from providing advice that could lead to self-harm, violence, or gambling.

The draft regulations, published by the Cyberspace Administration of China (CAC), require developers to include personalised settings, time limits, and parental consent for services offering emotional companionship.

High-risk chats involving self-harm or suicide must be passed to a human operator, with guardians or emergency contacts alerted. AI providers must not produce content that threatens national security, harms national honour, or undermines national unity.

The rules come as AI usage surges, with platforms such as DeepSeek, Z.ai, and Minimax attracting millions of users in China and abroad. The CAC said it supports safe AI use, including tools that promote local culture and provide companionship for the elderly.

The move reflects growing global concerns over AI’s impact on human behaviour. Notably, OpenAI has faced legal challenges over alleged chatbot-related harm, prompting the company to create roles focused on tracking AI risks to mental health and cybersecurity.

China’s draft rules signal a firm approach to regulating AI technology as its influence expands rapidly.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Lawsuit against Roskomnadzor over WhatsApp and Telegram calls dismissed

A Moscow court has dismissed a class action lawsuit filed against Russia’s state media regulator Roskomnadzor and the Ministry of Digital Development by users of WhatsApp and Telegram. The ruling was issued by a judge at the Tagansky District Court.

The court said activist Konstantin Larionov failed to demonstrate he was authorised to represent messaging app users. The lawsuit claimed call restrictions violated constitutional rights, including freedom of information and communication secrecy.

The case followed Roskomnadzor’s decision in August to block calls on WhatsApp and Telegram, a move officials described as part of anti-fraud efforts. Both companies criticised the restrictions at the time.

Larionov and several dozen co-plaintiffs said the measures were ineffective, citing central bank data showing fraud mainly occurs through traditional calls and text messages. The plaintiffs also argued the restrictions disproportionately affected ordinary users.

Larionov said the group plans to appeal the decision and continue legal action. He has described the lawsuit as an attempt to challenge what he views as politically motivated restrictions on communication services in Russia.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Coupang faces backlash over voucher compensation after data breach

South Korean e-commerce firm Coupang has apologised for a major data breach affecting more than 33 million users and announced a compensation package worth 1.69 trillion won. Founder Kim Bom acknowledged the disruption caused by the incident, following public and political backlash.

Under the plan, affected customers will receive vouchers worth 50,000 won, usable only on Coupang’s own platforms. The company said the measure was intended to compensate users, but the approach has drawn criticism from lawmakers and consumer groups.

Choi Min-hee, a lawmaker from the ruling Democratic Party, criticised the decision in a social media post, arguing that the vouchers were tied to services with limited use. She accused Coupang of attempting to turn the crisis into a business opportunity.

Consumer advocacy groups echoed these concerns, saying the compensation plan trivialised the seriousness of the breach. They argued that limiting compensation to vouchers resembled a marketing strategy rather than meaningful restitution for affected users.

The controversy comes as the National Assembly of South Korea prepares to hold hearings on Coupang. While the company has admitted negligence, it has declined to appear before lawmakers amid scrutiny of its handling of the breach.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI slop dominates YouTube recommendations for new users

More than 20 percent of videos recommended to new YouTube users are low-quality, attention-driven content commonly referred to as AI slop, according to new research. The findings raise concerns about how recommendation systems shape early user experience on the platform.

Video-editing firm Kapwing analysed 15,000 of YouTube’s top channels worldwide. Researchers identified 278 channels consisting entirely of AI-generated slop, designed primarily to maximise views rather than provide substantive content.

These channels have collectively amassed more than 63 billion views and 221 million subscribers. Kapwing estimates the network generates around $117 million in annual revenue through advertising and engagement.

To test recommendations directly, researchers created a new YouTube account and reviewed its first 500 suggested videos. Of these, 104 were classified as AI slop, with around one third falling into a category described as brainrot content.

Kapwing found that AI slop channels attract large audiences globally, including tens of millions of subscribers in countries such as Spain, Egypt, the United States, and Brazil. Researchers said the scale highlights the growing reach of low-quality AI-generated video content.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Stronger safeguards arrive with OpenAI’s GPT-5.2 release

OpenAI has launched GPT-5.2, highlighting improved safety performance in conversations involving mental health. The company said the update strengthens how its models respond to signs of suicide, self-harm, emotional distress, and reliance on the chatbot.

The release follows criticism and legal challenges accusing ChatGPT of contributing to psychosis, paranoia, and delusional thinking in some users. Several cases have highlighted the risks of prolonged emotional engagement with AI systems.

In response to a wrongful death lawsuit involving a US teenager, OpenAI denied responsibility while stating that ChatGPT encouraged the user to seek help. The company also committed to improving responses when users display warning signs of mental health crises.

OpenAI said GPT-5.2 produces fewer undesirable responses in sensitive situations than earlier versions. According to the company, the model scores higher on internal safety tests related to self-harm, emotional reliance, and mental health.

The update builds on OpenAI’s use of a training approach known as safe completion, which aims to balance helpfulness and safety. Detailed performance information has been published in the GPT-5.2 system card.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New Chinese rules target AI chatbots and emotional manipulation

China has proposed new rules to restrict AI chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The Cyberspace Administration of China released draft regulations, which are open for public comment until late January.

The measures target human-like interactive AI services, including emotionally responsive AI chatbots, that simulate personality and engage users through text, images, audio, or video. Officials say the proposals signal a shift from content safety towards emotional safety as AI companions gain popularity.

Under the draft rules, AI chatbot services would be barred from encouraging self-harm, emotional manipulation, or obscene, violent, or gambling-related content. Providers would be required to involve human moderators if users express suicidal intent.

Additional provisions would strengthen safeguards for minors, including guardian consent and usage limits for emotionally interactive systems. Platforms would also face security assessments and interaction reminders when operating services with large user bases.

Experts say the proposals could mark the world’s first attempt to regulate emotionally responsive AI systems. The move comes as China-based chatbot firms pursue public listings and as global scrutiny grows over how conversational AI affects mental health and user behaviour.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The AI terms that shaped debate and disruption in 2025

AI continued to dominate public debate in 2025, not only through new products and investment rounds, but also through a rapidly evolving vocabulary that captured both promise and unease.

From ambitious visions of superintelligence to cultural shorthand like ‘slop’, language became a lens through which society processed another turbulent year for AI.

Several terms reflected the industry’s technical ambitions. Concepts such as superintelligence, reasoning models, world models and physical intelligence pointed to efforts to push AI beyond text generation towards deeper problem-solving and real-world interaction.

Developments by companies including Meta, OpenAI, DeepSeek and Google DeepMind reinforced the sense that scale, efficiency and new training approaches are now competing pathways to progress, rather than sheer computing power alone.

Other expressions highlighted growing social and economic tensions. Words like hyperscalers, bubble and distillation entered mainstream debate as data centres expanded, valuations rose, and cheaper model-building methods disrupted established players.

At the same time, legal and ethical debates intensified around fair use, chatbot behaviour and the psychological impact of prolonged AI interaction, underscoring the gap between innovation speed and regulatory clarity.

Cultural reactions also influenced the development of the AI lexicon. Terms such as vibe coding, agentic and sycophancy revealed how generative systems are reshaping work, creativity and user trust, while ‘slop’ emerged as a blunt critique of low-quality, AI-generated content flooding online spaces.

Together, these phrases chart a year in which AI moved further into everyday life, leaving society to wrestle with what should be encouraged, controlled or questioned.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Digital rules dispute deepens as US administration avoids trade retaliation

The US administration is criticising foreign digital regulations affecting major online platforms while avoiding trade measures that could disrupt the US economy. Officials say the rules disproportionately impact American technology companies.

US officials have paused or cancelled trade discussions with the UK, the EU, and South Korea. Current negotiations are focused on rolling back digital taxes, privacy rules, and platform regulations that Washington views as unfair barriers to US firms.

US administration officials describe the moves as a negotiating tactic rather than an escalation toward tariffs. While trade investigations into digital practices have been raised as a possibility, officials have stressed that the goal remains a negotiated outcome rather than a renewed trade conflict.

Technology companies have pressed for firmer action, though some industry figures warn that aggressive retaliation could trigger a wider digital trade war. Officials acknowledge that prolonged disputes with major partners could ultimately harm both US firms and global markets.

Despite rhetorical escalation and targeted threats against European companies, the US administration has so far avoided dismantling existing trade agreements. Analysts say mounting pressure may soon force Washington to choose between compromise and more concrete enforcement measures.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!