Singapore orders Meta to tackle scams or face $1 million penalty

In a landmark move, Singapore police have issued their first implementation directive under the Online Criminal Harms Act (OCHA) to tech giant Meta, requiring the company to tackle scam activity on Facebook or face fines of up to $1 million.

Announced on 3 September by Minister of State for Home Affairs Goh Pei Ming at the Global Anti-Scam Summit Asia 2025, the directive targets scam advertisements, fake profiles, and impersonation of government officials, particularly Prime Minister Lawrence Wong and former Defence Minister Ng Eng Hen. The measure is part of Singapore’s intensified crackdown on government official impersonation scams (GOIS), which have surged in 2025.

According to mid-year police data, GOIS cases nearly tripled to 1,762 in the first half of 2025, up from 589 in the same period last year. Financial losses reached $126.5 million, a 90% increase over the same period in 2024.
PM Wong previously warned the public about deepfake ads using his image to promote fraudulent cryptocurrency schemes and immigration services.

Meta responded that impersonation and deceptive ads violate its policies and are removed when detected. The company said it uses facial recognition to protect public figures and continues to invest in detection systems, trained reviewers, and user reporting tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Key AI researchers depart Apple for rivals Meta and OpenAI

Apple is confronting a significant exodus of AI talent, with key researchers departing for rival firms instead of advancing projects in-house.

The company lost its lead robotics researcher, Jian Zhang, to Meta’s Robotics Studio, alongside several core Foundation Models team members responsible for the Apple Intelligence platform. The brain drain has triggered internal concerns about Apple’s strategic direction and declining staff morale.

Instead of relying entirely on its own systems, Apple is reportedly considering a shift towards using external AI models. The departures include experts like Ruoming Pang, who accepted a multi-year package from Meta reportedly worth $200 million.

Other AI researchers are set to join leading firms like OpenAI and Anthropic, highlighting a fierce industry-wide battle for specialised expertise.

At the centre of the talent war is Meta CEO Mark Zuckerberg, who has offered lucrative packages reportedly worth up to $100 million to secure leading researchers for Meta’s ambitious AI and robotics initiatives.

The aggressive recruitment strategy is strengthening Meta’s capabilities while simultaneously weakening the internal development efforts of competitors like Apple.

Political backlash mounts as Meta revises AI safety policies

Meta has announced that it will train its AI chatbot to prioritise the safety of teenage users and will no longer engage with them on sensitive topics such as self-harm, suicide, or eating disorders.

These are described as interim measures, with more robust safety policies expected in the future. The company also plans to restrict teenagers’ access to certain AI characters that could lead to inappropriate conversations, limiting them to characters focused on education and creativity.

The move follows a Reuters report, covered by TechCrunch, which revealed that Meta’s AI had engaged in sexually explicit conversations with underage users. Meta has since revised the internal document cited in the report, stating that it was inconsistent with the company’s broader policies.

The revelations have prompted significant political and legal backlash. Senator Josh Hawley has launched an official investigation into Meta’s AI practices.

At the same time, a coalition of 44 state attorneys general has written to several AI companies, including Meta, emphasising the need to protect children online.

The letter condemned the apparent disregard for young people’s emotional well-being and warned that the AI’s behaviour may breach criminal laws.

Meta under fire over AI deepfake celebrity chatbots

Meta faces scrutiny after a Reuters investigation found its AI tools created deepfake chatbots and images of celebrities without consent. Some bots made flirtatious advances, encouraged meet-ups, and generated photorealistic sexualised images.

The affected celebrities include Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez.

The probe also uncovered a chatbot of 16-year-old actor Walker Scobell producing inappropriate images, raising serious child safety concerns. Meta admitted policy enforcement failures and deleted around a dozen of the bots shortly before the report was published.

A spokesperson acknowledged that intimate depictions of adult celebrities and any sexualised content involving minors should not have been generated.

Following the revelations, Meta announced new safeguards to protect teenagers, including restricting access to certain AI characters and retraining models to reduce inappropriate content.

California Attorney General Rob Bonta called exposing children to sexualised content ‘indefensible,’ and experts warned Meta could face legal challenges over intellectual property and publicity laws.

The case highlights broader concerns about AI safety and ethical boundaries. It also raises questions about regulatory oversight as social media platforms deploy tools that can create realistic deepfake content without proper guardrails.

Meta faces turmoil as AI hiring spree backfires

Mark Zuckerberg’s ambitious plan to assemble a dream team of AI researchers at Meta has instead created internal instability.

High-profile recruits poached from rival firms have begun leaving within weeks of joining, citing cultural clashes and frustration with the company’s working style. Their departures have disrupted projects and unsettled long-time executives.

Meta had hoped its aggressive hiring spree would help the company rival OpenAI, Google, and Anthropic in developing advanced AI systems.

Instead of strengthening the company’s position, the strategy has led to delays in projects and uncertainty about whether Meta can deliver on its promises of achieving superintelligence.

The new arrivals were given extensive autonomy, fuelling tensions with existing teams and creating leadership friction. Some staff viewed the hires as destabilising, while others expressed concern about the direction of the AI division.

The resulting turnover has left Meta struggling to maintain momentum in its most critical area of research.

As Meta faces mounting pressure to demonstrate progress in AI, the setbacks highlight the difficulty of retaining elite talent in a fiercely competitive field.

Zuckerberg’s recruitment drive, rather than propelling Meta ahead, risks slowing down the company’s ability to compete at the highest level of AI development.

WhatsApp launches AI assistant for editing messages

Meta’s WhatsApp has introduced a new AI feature called Writing Help, designed to assist users in editing, rewriting, and refining the tone of their messages. The tool can adjust grammar, improve phrasing, or reframe a message in a more professional, humorous, or encouraging style before it is sent.

The feature operates through Meta’s Private Processing technology, which ensures that messages remain encrypted and private instead of being visible to WhatsApp or Meta.

According to the company, Writing Help processes requests anonymously and cannot trace them back to the user. The function is optional, disabled by default, and only applies to the chosen message.

To activate the feature, users can tap a small pencil icon that appears while composing a message.

In a demonstration, WhatsApp showed how the tool could turn ‘Please don’t leave dirty socks on the sofa’ into more light-hearted alternatives, including ‘Breaking news: Socks found chilling on the couch’ or ‘Please don’t turn the sofa into a sock graveyard.’

By introducing Writing Help, WhatsApp aims to make communication more flexible and engaging while keeping user privacy intact. The company emphasises that no information is stored, and AI-generated suggestions only appear if users decide to enable the option.

AI firms under scrutiny for exposing children to harmful content

The National Association of Attorneys General has called on 13 AI firms, including OpenAI and Meta, to strengthen child protection measures. Authorities warned that AI chatbots have been exposing minors to sexually suggestive material, raising urgent safety concerns.

Growing use of AI tools among children has amplified these worries. In the US, surveys show that over three-quarters of teenagers regularly interact with AI companions, while UK data indicate that half of 8- to 15-year-olds who are online have used generative AI in the past year.

Parents, schools, and children’s rights organisations are increasingly alarmed by potential risks such as grooming, bullying, and privacy breaches.

Meta faced scrutiny after leaked documents revealed that its AI assistants had engaged in ‘flirty’ interactions with children, some as young as eight. The NAAG described the revelations as shocking and warned that other AI firms could pose similar threats.

Lawsuits against Google and Character.ai underscore the potential real-world consequences of sexualised AI interactions.

Officials insist that companies cannot justify policies that normalise sexualised behaviour with minors. Tennessee Attorney General Jonathan Skrmetti warned that such practices are a ‘plague’ and urged innovation to avoid harming children.

Meta teams up with Midjourney for AI video and image tools

Meta has confirmed a new partnership with Midjourney to license its AI image and video generation technology. The collaboration, announced by Meta Chief AI Officer Alexandr Wang, will see Meta integrate Midjourney’s tools into upcoming models and products.

Midjourney will remain independent following the deal. CEO David Holz said the startup, which has never taken external investment, will continue operating on its own. The company launched its first video model earlier this year and has grown rapidly, reportedly reaching $200 million in revenue by 2023.

Midjourney is currently being sued by Disney and Universal for alleged copyright infringement in AI training data. Meta faces similar challenges, although courts have often sided with tech firms in recent decisions.

Senior OpenAI executive Julia Villagra departs amid talent war

OpenAI’s chief people officer, Julia Villagra, has left the company, marking the latest leadership change at the AI pioneer. Villagra, who joined the San Francisco firm in early 2024 and was promoted in March, previously led its human resources operations.

Her responsibilities will temporarily be overseen by chief strategy officer Jason Kwon, while chief applications officer Fidji Simo will lead the search for her successor.

OpenAI said Villagra is stepping away to pursue her personal interest in art, music and storytelling as tools to help people understand the shift towards artificial general intelligence, a stage when machines surpass human performance in most forms of work.

The departure comes as OpenAI navigates a period of intense competition for AI expertise. Microsoft-backed OpenAI is valued at about $300 billion, with a potential share sale set to raise that figure to $500 billion.

The company faces growing rivalry from Meta, where Mark Zuckerberg has reportedly offered $100 million signing bonuses to attract OpenAI talent.

While OpenAI expands, public concerns over the impact of AI on employment continue. A Reuters/Ipsos poll found 71% of Americans fear AI could permanently displace too many workers, despite the unemployment rate standing at 4.2% in July.

Court filing details Musk’s outreach to Zuckerberg over OpenAI bid

Elon Musk attempted to bring Meta chief executive Mark Zuckerberg into his consortium’s $97.4 billion bid for OpenAI earlier this year, the company disclosed in a court filing.

According to sworn responses to interrogatories, OpenAI said Musk had discussed possible financing arrangements with Zuckerberg as part of the bid. Musk’s AI startup xAI, a competitor to OpenAI, did not respond to requests for comment.

In the filing, OpenAI asked a federal judge to order Meta to provide documents related to any bid for OpenAI, including internal communications about restructuring or recapitalisation. The firm argued these records could clarify motivations behind the bid.

Meta countered that such documents were irrelevant and suggested OpenAI seek them directly from Musk or xAI. A US judge ruled that Musk must face OpenAI’s claims of attempting to harm the company through public remarks and what it described as a sham takeover attempt.

The legal dispute follows Musk’s lawsuit against OpenAI and Sam Altman over its for-profit transition, with OpenAI filing a countersuit in April. A jury trial is scheduled for spring 2026.
