Meta champions open hardware to power the next generation of AI data centres

The US tech giant, Meta, believes open hardware will define the future of AI data centre infrastructure. Speaking at the Open Compute Project Global Summit, the company outlined a series of innovations designed to make large-scale AI systems more efficient, sustainable, and collaborative.

Meta, one of the OCP’s founding members, said open source hardware remains essential to scaling the physical infrastructure required for the next generation of AI.

During the summit, Meta joined industry peers in supporting OCP’s Open Data Center Initiative, which calls for shared standards in power, cooling, and mechanical design.

The company also unveiled a new generation of network fabrics for AI training clusters, integrating NVIDIA’s Spectrum Ethernet to enable greater flexibility and performance.

As part of the effort, Meta became an initiating member of Ethernet for Scale-Up Networking, aiming to strengthen connectivity across increasingly complex AI systems.

Meta further introduced the Open Rack Wide (ORW) form factor, an open source data rack standard optimised for the power and cooling demands of modern AI.

Built on ORW specifications, AMD’s new Helios rack was presented as the most advanced AI rack yet, embodying the shift toward interoperable and standardised infrastructure.

Meta also showcased new AI hardware platforms built to improve performance and serviceability for large-scale generative AI workloads.

Sustainability remains central to Meta’s strategy. The company presented ‘Design for Sustainability’, a framework to reduce hardware emissions through modularity, reuse, and extended lifecycles.

It also shared how its Llama AI models help track emissions across millions of components.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta previews parental controls over teen AI character chats

Meta has previewed upcoming parental control features for its AI experiences, particularly aimed at teens’ interactions with AI characters. The new tools are expected to roll out next year.

Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.

The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.

Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.

Additionally, Meta plans to allow time limits on AI character use by teens. The company is also detecting and discouraging attempts by users to falsify their age to bypass restrictions.

Meta to pull all political ads in EU ahead of new transparency law

Meta Platforms has said it will stop selling and showing political, electoral and social issue advertisements across its services in the European Union from early October 2025. The decision follows the EU’s Transparency and Targeting of Political Advertising (TTPA) regulation coming into full effect on 10 October.

Under TTPA, platforms will be required to clearly label political ads, disclose the sponsor, the election or social issue at hand, the amounts paid, and how the ads are targeted. These obligations also include strict conditions on targeting and require explicit consent for certain data use.

Meta said the requirements pose ‘significant operational challenges and legal uncertainties’ and labelled parts of the new rules ‘unworkable’ for advertisers and platforms. It said that personalised ads are widely used for issue-based campaigns and that limiting them could restrict how people access political and social issue-related information.

The company joins Google, which made a similar move last year citing comparable concerns about TTPA compliance.

While paid political ads will be banned, Meta says organic political content (e.g. users posting or sharing political views) will still be permitted.

Meta expands AI infrastructure with sustainable data centre in El Paso

The US tech giant, Meta, has begun construction on a new AI-optimised data centre in El Paso, Texas, designed to scale up to 1GW and power the company’s expanding AI ambitions.

The 29th in Meta’s global network, the site will support the next generation of AI models, underpinning technologies such as smart glasses, AI assistants, and real-time translation tools.

The project represents a major investment in both technology and the local community, contributing over $1.5 billion and creating about 1,800 construction jobs and 100 operational roles in its first phase.

Meta’s Community Accelerator programme will also help local businesses build digital and AI skills, while Community Action Grants are set to launch in El Paso next year.

Environmental sustainability remains central to the development. The data centre will operate on 100% renewable energy, with Meta covering the costs of new grid connections through El Paso Electric.

Using a closed-loop cooling system, the facility will consume no water for most of the year, aligning with Meta’s target to be water positive by 2030. The company plans to restore twice the amount of water used to local watersheds through partnerships with DigDeep and the Texas Water Action Collaborative.

The El Paso project, Meta’s third in Texas, underscores its long-term commitment to sustainable AI infrastructure. By combining efficiency, clean energy, and community investment, Meta aims to build the foundations for a responsible and scalable AI-driven future.

Tech giants race to remake social media with AI

Tech firms are racing to integrate AI into social media, reshaping online interaction while raising fresh concerns over privacy, misinformation, and copyright. Platforms like OpenAI’s Sora and Meta’s Vibes are at the centre of the push, blending generative AI tools with short-form video features similar to TikTok.

OpenAI’s Sora allows users to create lifelike videos from text prompts, but film studios say copyrighted material is appearing without permission. OpenAI has promised tighter controls and a revenue-sharing model for rights holders, while Meta has introduced invisible watermarks to identify AI content.

Safety concerns are mounting as well. Lawsuits allege that AI chatbots such as Character.AI have contributed to mental health issues among teenagers. OpenAI and Meta have added stronger restrictions for young users, including limits on mature content and tighter communication controls for minors.

Critics question whether users truly want AI-generated content dominating their feeds, describing the influx as overwhelming and confusing. Yet industry analysts say the shift could define the next era of social media, as companies compete to turn AI creativity into engagement and profit.

Meta may bring Reels to the big screen with Instagram TV

Instagram is reportedly exploring plans to launch a dedicated TV app aimed at expanding its video reach across larger screens.

The move was revealed by Instagram head Adam Mosseri at the Bloomberg Screentime conference in Los Angeles, where he said that as consumption behaviour shifts toward TV, Instagram must follow.

Mosseri clarified that there’s no official launch yet, but that the company is actively considering how to present Instagram content, especially Reels, on TV devices in a compelling way.

He also ruled out plans to license live sports or Hollywood content for the TV app, emphasising that Instagram would carry over its existing focus on short-form, vertical video rather than pivoting into full-length entertainment.

The proposed TV app would deepen Instagram’s stake in the video space and help it compete more directly with YouTube, TikTok and other video platforms, especially as users increasingly watch video content in living rooms.

However, translating vertical video formats like Reels to a horizontal, large-screen environment poses design, UX and monetisation challenges.

Meta unveils Candle cable to boost Asia-Pacific connectivity

Meta has announced Candle, a new submarine cable system designed to enhance digital connectivity across East and Southeast Asia. The 8,000-kilometre network will link Japan, Taiwan, the Philippines, Indonesia, Malaysia, and Singapore by 2028, offering a record 570 terabits per second (Tbps) of capacity.

Developed with regional telecommunications partners, Candle will use advanced 24 fibre-pair technology to deliver Meta’s largest bandwidth performance in the Asia-Pacific region.
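As a rough back-of-the-envelope check, assuming the reported 570 Tbps figure is aggregate design capacity split evenly across the cable’s 24 fibre pairs (an illustrative assumption, not a detail Meta has confirmed), each pair would carry roughly 23.75 Tbps:

```python
# Illustrative calculation only: assumes the 570 Tbps total is design
# capacity divided evenly across Candle's 24 fibre pairs.
TOTAL_CAPACITY_TBPS = 570
FIBRE_PAIRS = 24

per_pair_tbps = TOTAL_CAPACITY_TBPS / FIBRE_PAIRS
print(f"~{per_pair_tbps:.2f} Tbps per fibre pair")  # prints ~23.75 Tbps per fibre pair
```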

The company also confirmed progress on several other subsea infrastructure projects. The Bifrost cable now connects Singapore, Indonesia, the Philippines, and the United States, with Mexico expected to join by 2026, adding 260 Tbps of new capacity.

Meanwhile, Echo currently links Guam and California with the same bandwidth, and Apricot has gone live between Japan, Taiwan, and Guam, with future extensions planned to Southeast Asia.

Together, Candle, Bifrost, Echo, and Apricot will improve intra-Asian connectivity and strengthen digital bridges between Asia and the Americas. These projects are part of Meta’s global network investments, including Project Waterworth and 2Africa, aimed at expanding access to AI and digital infrastructure.

Meta faces fines in Netherlands over algorithm-first timelines

A Dutch court has ordered Meta to give Facebook and Instagram users in the Netherlands the right to set a chronological feed as their default.

The ruling follows a case brought by digital rights group Bits of Freedom, which argued that Meta’s design undermines user autonomy under the European Digital Services Act.

Although a chronological feed is already available, it is hidden and cannot be set as the default. The court said Meta must make the setting accessible on the homepage and in the Reels section, and ensure it stays in place when the apps are restarted.

If Meta does not comply within two weeks, it faces a fine of €100,000 per day, capped at €5 million.

Bits of Freedom argued that algorithmic feeds threaten democracy, particularly before elections. The court agreed the change must apply permanently rather than temporarily during campaigns.

The group welcomed the ruling but stressed it was only a small step in tackling the influence of tech giants on public debate.

Meta has not yet responded to the decision, which applies only in the Netherlands despite being based on EU law. Campaigners say the case highlights the need for more vigorous enforcement to ensure digital platforms respect user choice and democratic values.

Meta to use AI interactions for content and ad recommendations

Meta has announced that from 16 December 2025 it will begin personalising content and ad recommendations on Facebook, Instagram and its other apps using users’ interactions with its generative AI features.

The update means that if you chat with Meta’s AI about a topic, such as hiking, the system may infer your interests and show related content, including posts from hiking groups or ads for boots. Meta emphasises that content and ad recommendations already use signals like likes, shares and follows, but the new change adds AI interactions as another signal.

Meta will notify users starting 7 October via in-app messages and emails. Users will retain access to settings such as Ads Preferences and feed controls to adjust what they see, and Meta says it will not use sensitive AI chat content (religion, health, political beliefs, etc.) to personalise ads.

AI interactions on a particular account will be used for cross-account personalisation only if users have linked those accounts in Meta’s Accounts Centre. Likewise, unless a WhatsApp account is added to the same Accounts Centre, AI interactions on WhatsApp won’t influence experiences in Meta’s other apps.

Calls for regulation grow as OpenAI and Meta adjust chatbots for teen mental health

OpenAI and Meta are adjusting how their chatbots handle conversations with teenagers showing signs of distress or asking about suicide. OpenAI plans to launch new parental controls this fall, enabling parents to link accounts, restrict features, and receive alerts if their child appears to be in acute distress.

The company says its chatbots will also route sensitive conversations to more capable models, aiming to improve responses to vulnerable users. The announcement follows a lawsuit alleging that ChatGPT encouraged a California teenager to take his own life earlier this year.

Meta, the parent company of Instagram and Facebook, is also tightening its restrictions. Its chatbots will no longer engage teens on self-harm, suicide, eating disorders, or inappropriate topics, instead redirecting them towards expert resources. Meta already offers parental controls across teen accounts.

The moves come amid growing scrutiny of chatbot safety. A RAND Corporation study found inconsistent responses from ChatGPT, Google’s Gemini, and Anthropic’s Claude when asked about suicide, suggesting the tools require further refinement before being relied upon in high-risk situations.

Lead author Ryan McBain welcomed the updates but called them only incremental. Without safety benchmarks and enforceable standards, he argued, companies remain self-regulating in an area where risks to teenagers are uniquely high.
