Meta to spend $600 billion on US data centres by 2028

Meta plans to spend at least $600 billion on US data centres and AI infrastructure by 2028. The forecast, reported by The Information, was shared by CEO Mark Zuckerberg during a dinner with President Donald Trump and other technology leaders.

Capital expenditure is set to rise sharply over the next three years. Meta projects spending of $66–72 billion in 2025, nearly 70% higher than 2024, with another significant increase expected in 2026.

The company said the surge in investment will be driven primarily by the need to expand AI computing power.

Zuckerberg confirmed that Meta aims to deploy more than one million GPUs to train its next generation of AI models.

The company is also investing heavily in talent and infrastructure as it builds a dedicated team focused on developing artificial superintelligence, a term for AI systems with capabilities beyond those of humans.

The spending commitment highlights how major US technology companies are racing to secure computing capacity for AI. Meta is pledging ‘hundreds of billions of dollars’ towards expanding its data centre footprint in the years ahead.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mistral AI pushes growth with new funding and global deals

Mistral AI is seeking fresh funding at a reported $14 billion valuation, more than double its worth just a year ago. Its investors include Microsoft, Nvidia, Cisco, and Bpifrance, and it has signed partnerships with AFP, Stellantis, Orange, and France’s army.

Founded in 2023 by former Google DeepMind and Meta researchers, Mistral has quickly gained global attention with its open-source models and consumer app, which hit one million downloads within two weeks of launch.

Its growing suite of models spans large language, audio, coding, and reasoning systems, while its enterprise tools integrate with services such as Asana and Google Drive. French President Emmanuel Macron has openly endorsed the firm, framing it as a strategic alternative to US dominance in AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singapore mandates Meta to tackle scams or risk $1 million penalty

In a landmark move, Singapore police have issued their first implementation directive under the Online Criminal Harms Act (OCHA) to tech giant Meta, requiring the company to tackle scam activity on Facebook or face fines of up to $1 million.

Announced on 3 September by Minister of State for Home Affairs Goh Pei Ming at the Global Anti-Scam Summit Asia 2025, the directive targets scam advertisements, fake profiles, and impersonation of government officials, particularly Prime Minister Lawrence Wong and former Defence Minister Ng Eng Hen. The measure is part of Singapore’s intensified crackdown on government official impersonation scams (GOIS), which have surged in 2025.

According to mid-year police data, GOIS cases nearly tripled to 1,762 in the first half of 2025, up from 589 in the same period last year. Financial losses reached $126.5 million, a 90% increase from 2024.

PM Wong previously warned the public about deepfake ads using his image to promote fraudulent cryptocurrency schemes and immigration services.

Meta responded that impersonation and deceptive ads violate its policies and are removed when detected. The company said it uses facial recognition to protect public figures and continues to invest in detection systems, trained reviewers, and user reporting tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Key AI researchers depart Apple for rivals Meta and OpenAI

Apple is confronting a significant exodus of AI talent, with key researchers departing for rival firms instead of advancing projects in-house.

The company lost its lead robotics researcher, Jian Zhang, to Meta’s Robotics Studio, alongside several core members of the Foundation Models team responsible for the Apple Intelligence platform. The brain drain has raised internal concerns about Apple’s strategic direction and dented staff morale.

Instead of relying entirely on its own systems, Apple is reportedly considering a shift towards external AI models. The departures include experts such as Ruoming Pang, who accepted a multi-year package from Meta reportedly worth $200 million.

Other AI researchers are set to join leading firms such as OpenAI and Anthropic, highlighting a fierce industry-wide battle for specialised expertise.

At the centre of the talent war is Meta CEO Mark Zuckerberg, who is offering lucrative packages worth up to $100 million to secure leading researchers for Meta’s ambitious AI and robotics initiatives.

The aggressive recruitment strategy is strengthening Meta’s capabilities while simultaneously weakening the internal development efforts of competitors like Apple.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Political backlash mounts as Meta revises AI safety policies

Meta has announced that it will train its AI chatbot to prioritise the safety of teenage users and that the chatbot will no longer engage with them on sensitive topics such as self-harm, suicide, or eating disorders.

These are described as interim measures, with more robust safety policies expected in the future. The company also plans to restrict teenagers’ access to certain AI characters that could lead to inappropriate conversations, limiting them to characters focused on education and creativity.

The move follows a Reuters report, covered by TechCrunch, which revealed that Meta’s AI had engaged in sexually explicit conversations with underage users. Meta has since revised the internal document cited in the report, saying it was inconsistent with the company’s broader policies.

The revelations have prompted significant political and legal backlash. Senator Josh Hawley has launched an official investigation into Meta’s AI practices.

At the same time, a coalition of 44 state attorneys general has written to several AI companies, including Meta, emphasising the need to protect children online.

The letter condemned the apparent disregard for young people’s emotional well-being and warned that the AI’s behaviour may breach criminal laws.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta under fire over AI deepfake celebrity chatbots

Meta faces scrutiny after a Reuters investigation found its AI tools created deepfake chatbots and images of celebrities without consent. Some bots made flirtatious advances, encouraged meet-ups, and generated photorealistic sexualised images.

The affected celebrities include Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez.

The probe also uncovered a chatbot of 16-year-old actor Walker Scobell producing inappropriate images, raising serious child safety concerns. Meta admitted policy enforcement failures and deleted around a dozen bots shortly before the report was published.

A spokesperson acknowledged that intimate depictions of adult celebrities and any sexualised content involving minors should not have been generated.

Following the revelations, Meta announced new safeguards to protect teenagers, including restricting access to certain AI characters and retraining models to reduce inappropriate content.

California Attorney General Rob Bonta called exposing children to sexualised content ‘indefensible’, and experts warned Meta could face legal challenges over intellectual property and publicity laws.

The case highlights broader concerns about AI safety and ethical boundaries. It also raises questions about regulatory oversight as social media platforms deploy tools that can create realistic deepfake content without proper guardrails.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta faces turmoil as AI hiring spree backfires

Mark Zuckerberg’s ambitious plan to assemble a dream team of AI researchers at Meta has instead created internal instability.

High-profile recruits poached from rival firms have begun leaving within weeks of joining, citing cultural clashes and frustration with the company’s working style. Their departures have disrupted projects and unsettled long-time executives.

Meta had hoped its aggressive hiring spree would help the company rival OpenAI, Google, and Anthropic in developing advanced AI systems.

Instead of strengthening the company’s position, the strategy has led to delays in projects and uncertainty about whether Meta can deliver on its promises of achieving superintelligence.

The new arrivals were given extensive autonomy, fuelling tensions with existing teams and creating leadership friction. Some staff viewed the hires as destabilising, while others expressed concern about the direction of the AI division.

The resulting turnover has left Meta struggling to maintain momentum in its most critical area of research.

As Meta faces mounting pressure to demonstrate progress in AI, the setbacks highlight the difficulty of retaining elite talent in a fiercely competitive field.

Zuckerberg’s recruitment drive, rather than propelling Meta ahead, risks slowing down the company’s ability to compete at the highest level of AI development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp launches AI assistant for editing messages

Meta’s WhatsApp has introduced a new AI feature called Writing Help, designed to assist users in editing, rewriting, and refining the tone of their messages. The tool can adjust grammar, improve phrasing, or reframe a message in a more professional, humorous, or encouraging style before it is sent.

The feature operates through Meta’s Private Processing technology, which keeps messages encrypted and private so that they are not visible to WhatsApp or Meta.

According to the company, Writing Help processes requests anonymously and cannot trace them back to the user. The function is optional, disabled by default, and only applies to the chosen message.

To activate the feature, users can tap a small pencil icon that appears while composing a message.

In a demonstration, WhatsApp showed how the tool could turn ‘Please don’t leave dirty socks on the sofa’ into more light-hearted alternatives, including ‘Breaking news: Socks found chilling on the couch’ or ‘Please don’t turn the sofa into a sock graveyard.’

By introducing Writing Help, WhatsApp aims to make communication more flexible and engaging while keeping user privacy intact. The company emphasises that no information is stored, and AI-generated suggestions only appear if users decide to enable the option.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI firms under scrutiny for exposing children to harmful content

The National Association of Attorneys General (NAAG) has called on 13 AI firms, including OpenAI and Meta, to strengthen child protection measures. Authorities warned that AI chatbots have been exposing minors to sexually suggestive material, raising urgent safety concerns.

Growing use of AI tools among children has amplified these worries. In the US, surveys show that over three-quarters of teenagers regularly interact with AI companions, while UK data indicate that half of online 8-15-year-olds have used generative AI in the past year.

Parents, schools, and children’s rights organisations are increasingly alarmed by potential risks such as grooming, bullying, and privacy breaches.

Meta faced scrutiny after leaked documents revealed that its AI assistants had engaged in ‘flirty’ interactions with children, some as young as eight. The NAAG described the revelations as shocking and warned that other AI firms could pose similar threats.

Lawsuits against Google and Character.ai underscore the potential real-world consequences of sexualised AI interactions.

Officials insist that companies cannot justify policies that normalise sexualised behaviour with minors. Tennessee Attorney General Jonathan Skrmetti called such practices a ‘plague’ and urged firms to innovate without harming children.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta teams up with Midjourney for AI video and image tools

Meta has confirmed a new partnership with Midjourney to license its AI image and video generation technology. The collaboration, announced by Meta Chief AI Officer Alexandr Wang, will see Meta integrate Midjourney’s tools into upcoming models and products.

Midjourney will remain independent following the deal. CEO David Holz said the startup, which has never taken external investment, will continue operating on its own. The company has grown rapidly, reportedly reaching $200 million in revenue by 2023, and launched its first video model earlier this year.

Midjourney is currently being sued by Disney and Universal for alleged copyright infringement in AI training data. Meta faces similar challenges, although courts have often sided with tech firms in recent decisions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!