Meta cuts 600 AI roles even as it expands superintelligence lab

Meta Platforms confirmed today that it will cut approximately 600 jobs from its AI division, affecting the Fundamental AI Research (FAIR) unit as well as product and infrastructure teams. The move comes even as the company continues hiring for its elite superintelligence unit, the TBD Lab, which remains unaffected by the cuts.

According to an internal memo from Chief AI Officer Alexandr Wang, the layoffs aim to make remaining teams more load-bearing and impactful. ‘By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact,’ Wang wrote.

Meta says affected employees will be encouraged to apply for other roles within the company, and many are expected to be reassigned. The company’s earlier hiring spree in AI included poaching top talent from competitors and investing heavily in infrastructure. Analysts say the current cuts reflect a strategic pivot rather than a retreat: a shift from broad AI research towards more focused, high-impact model development.

This shift comes as Meta competes with organisations like OpenAI and Google in the race to build advanced large language models and scaled AI systems. By trimming staff in legacy research and infrastructure units while bolstering resources for its superintelligence arm, Meta appears to be doubling down on frontier AI even as it seeks to streamline operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Diella 2.0 set to deliver 83 new AI assistants to aid Albania’s MPs

Albania’s AI minister Diella will ‘give birth’ to 83 virtual assistants for ruling-party MPs, Prime Minister Edi Rama said, framing a quirky rollout of parliamentary copilots that record debates and propose responses.

Diella began in January as a public-service chatbot on e-Albania, then ‘Diella 2.0’ added voice and an avatar in traditional dress. Built with Microsoft by the National Agency for Information Society, it now oversees specific state tech contracts.

The legality is murky: the constitution of Albania requires ministers to be natural persons. A presidential decree left it to Rama to establish the role, setting up likely court challenges from opposition lawmakers.

Rama says the ‘children’ will brief MPs, summarise absences, and suggest counterarguments through 2026, experimenting with automating the day-to-day legislative grind without replacing elected officials.

Reactions range from table-thumping scepticism to cautious curiosity, as other governments debate AI personhood and limits; Diella could become a template, or a cautionary tale for ‘ministerial’ bots.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI to improve forecasts and early warnings worldwide

The World Meteorological Organisation has highlighted the potential of AI to improve weather forecasts and early warning systems. The organisation urged the public, private, and academic sectors to use AI and machine learning to protect communities from extreme heat and rainfall.

The Extraordinary World Meteorological Congress approved resolutions to speed up Early Warnings for All, targeting universal coverage by 2027. AI will support, not replace, traditional forecasting, providing national meteorological services with ethical, transparent, and open-source tools.

Pilot projects, including a collaboration between Norway and Malawi, have already improved local forecasts.

Congress stressed helping low- and middle-income countries, least developed countries, and small island states access AI technology. The WMO Integrated Processing and Prediction System (WIPPS) will use AI to provide advanced forecasts for better preparation against extreme weather and environmental events.

Congress also advanced the Global Greenhouse Gas Watch, WMO’s first Youth Action Plan, and reforms to boost efficiency amid financial constraints. The WMO continues underlining its essential role in resilient development, disaster risk reduction, and global economic stability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI deepfake videos spark ethical and environmental concerns

Deepfake videos created by AI platforms like OpenAI’s Sora have gone viral, generating hyper-realistic clips of deceased celebrities and historical figures in often offensive scenarios.

Families of figures like Dr Martin Luther King Jr have publicly appealed to AI firms to stop the use of their loved ones’ likenesses, highlighting ethical concerns around the technology.

Beyond the emotional impact, Dr Kevin Grecksch of Oxford University warns that producing deepfakes carries a significant environmental footprint. Instead of occurring on phones, video generation happens in data centres that consume vast amounts of electricity and water for cooling, often at industrial scales.

The surge in deepfake content has been rapid, with Sora downloaded over a million times in five days. Dr Grecksch urges users to consider the environmental cost, suggesting more integrated thinking about where data centres are built and how they are cooled to minimise their impact.

As governments promote AI growth zones in areas such as South Oxfordshire, questions remain over sustainable infrastructure. Users are encouraged to balance technological enthusiasm with environmental mindfulness, recognising the hidden costs of creating and sharing AI-generated media.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU investigates Meta and TikTok for DSA breaches

The European Commission has accused Meta and TikTok of breaching the Digital Services Act (DSA), highlighting failures in handling illegal content and providing researchers access to public data.

Meta’s Facebook and Instagram were found to make it too difficult for users to report illegal content or receive responses to complaints, the Commission said in its preliminary findings.

Investigations began after complaints to Ireland’s content regulator, where Meta’s EU base is located. The Commission’s inquiry, which has been ongoing since last year, aims to ensure that large platforms protect users and meet EU safety obligations.

Meta and TikTok can submit counterarguments before penalties of up to six percent of global annual turnover are imposed.

Both companies face separate concerns about denying researchers adequate access to platform data and preventing oversight of systemic online risks. TikTok is under further examination over the protection of minors and advertising transparency.

The Commission has launched 14 such DSA-related proceedings, none concluded.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google AI Studio introduces new vibe coding experience

Google has unveiled a redesigned AI-powered vibe coding experience in AI Studio, aimed at helping users turn ideas into working AI apps within minutes. The update removes the need to manage API keys or connect models manually, making app creation quicker and easier.

With the new workflow, users can describe the app they want, and AI Studio, powered by Google’s Gemini models, automatically links the right APIs and tools. Developers and non-coders alike can quickly build AI apps for video, images or writing.
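
Under the hood, the vibe-coding flow handles the wiring a developer would otherwise do by hand against the Gemini API. A minimal sketch of that manual path is below; it assumes the google-genai Python SDK, a GEMINI_API_KEY environment variable and an illustrative model name, none of which come from Google’s announcement.

```python
# Minimal sketch of the manual path that AI Studio's vibe coding abstracts away.
# Assumptions (not from the announcement): the google-genai Python SDK,
# a GEMINI_API_KEY environment variable, and the model name below.
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

app_idea = "a single-page web app that turns a travel itinerary into a packing list"

response = client.models.generate_content(
    model="gemini-2.0-flash",  # illustrative model name
    contents=f"Generate starter HTML and JavaScript for {app_idea}.",
)

# The returned text is the kind of starter code AI Studio scaffolds automatically.
print(response.text)
```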

AI Studio also introduces a revamped App Gallery and Brainstorming Loading Screen to spark inspiration during app development. Users can explore project ideas, preview starter code, and remix apps, while real-time suggestions appear as their app builds.

Annotation Mode allows intuitive visual editing, letting users highlight elements and instruct Gemini to modify them.

Additional updates ensure uninterrupted development by allowing users to add their own API keys once free quotas are exhausted. These improvements empower creators and lower barriers, making it easier than ever to turn AI-driven ideas into functional applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google expands Earth AI for disaster response and environmental monitoring

US tech giant Google has expanded access to Earth AI, a platform built on decades of geospatial modelling combined with Gemini’s advanced reasoning.

Enterprises, cities, and nonprofits can now rapidly analyse environmental and disaster-related data, enabling faster, informed decisions to protect communities.

During the 2025 California wildfires, Google’s AI helped alert millions and guide them to safety, showing the potential of Earth AI in crisis response.

A key feature, Geospatial Reasoning, allows the AI to connect multiple models (such as satellite imagery, population maps, and weather forecasts) to assess which communities and infrastructure are most at risk.

Instead of manual data analysis, organisations can now identify vulnerable areas and prioritise relief efforts in minutes.
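
Google has not published Earth AI’s interfaces in this announcement, but the layering idea can be shown with a toy example: combine a hazard forecast, a population grid and an infrastructure mask into one risk map, then rank cells to decide where to act first. Everything below, including the grids, weights and scoring formula, is invented purely for illustration.

```python
# Toy illustration only, not Google's Earth AI API: layering gridded signals
# into a single risk score so the highest-risk cells can be prioritised.
# All values and weights are made up.
import numpy as np

hazard = np.array([[0.1, 0.7, 0.3],
                   [0.2, 0.9, 0.4],
                   [0.0, 0.5, 0.6]])        # e.g. forecast flood probability per cell
population = np.array([[120,  80, 300],
                       [ 40, 500,  60],
                       [ 10, 250,  90]])    # people per cell
infrastructure = np.array([[0, 1, 0],
                           [0, 1, 0],
                           [0, 0, 1]])      # 1 = hospital or power asset present

# Simple weighted score: expected exposure plus a bonus for critical assets.
risk = hazard * (population + 1000 * infrastructure)

# Rank cells from highest to lowest risk to prioritise relief planning.
flat_order = np.argsort(risk, axis=None)[::-1]
rows, cols = np.unravel_index(flat_order, risk.shape)
for row, col in zip(rows[:3], cols[:3]):
    print(f"cell ({row},{col}): risk score {risk[row, col]:.0f}")
```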

Earth AI now includes tools to detect patterns in satellite imagery, such as drying rivers, harmful algae blooms, or vegetation encroachment on infrastructure. These insights support environmental monitoring and early warnings, letting authorities respond before disasters escalate.

The models are available on Google Cloud to Trusted Testers, allowing integration with external datasets for tailored analysis.

Several organisations are already leveraging Earth AI for the public good. WHO AFRO uses it to monitor cholera risks in the Democratic Republic of the Congo, while Planet and Airbus analyse satellite imagery for deforestation and power line safety.

Bellwether uses Earth AI for hurricane prediction, enabling faster insurance claim processing and recovery. Google aims to make these tools broadly accessible to support global crisis management, public health, and environmental protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia rules out AI copyright exemption

The Albanese Government has confirmed that it will not introduce a Text and Data Mining Exception in Australia’s copyright law, reinforcing its commitment to protecting local creators.

The decision follows calls from the technology sector for an exemption allowing AI developers to use copyrighted material without permission or payment.

Attorney-General Michelle Rowland said the Government aims to support innovation and creativity but will not weaken existing copyright protections. The Government plans to explore fair licensing options to support AI innovation while ensuring creators are paid fairly.

The Copyright and AI Reference Group will focus on fair AI use, more explicit copyright rules for AI works, and simpler enforcement through a possible small claims forum.

The Government said Australia must prepare for AI-related copyright challenges while keeping strong protections for creators. Collaboration between the technology and creative sectors will be essential to ensure that AI development benefits everyone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA boosts open-source robotics with new ROS 2 and Physical AI contributions

At the ROSCon conference in Singapore, NVIDIA unveiled significant open-source contributions to accelerate the future of robotics.

The company announced updates to the ROS 2 framework, new partnerships within the Open Source Robotics Alliance, and the latest release of NVIDIA Isaac ROS 4.0, all designed to strengthen collaboration in robotics development.

NVIDIA’s involvement in the new Physical AI Special Interest Group aims to enhance real-time robot control and AI processing efficiency.

Its integration of GPU-aware abstractions into ROS 2 allows the framework to handle both CPUs and GPUs seamlessly, ensuring faster and more consistent performance for robotic systems.

Additionally, the company open-sourced Greenwave Monitor, which helps developers quickly identify and fix performance bottlenecks. NVIDIA Isaac ROS 4.0, now available on the Jetson Thor platform, provides GPU-accelerated AI models and libraries to power robot mobility and manipulation.

Global robotics leaders, including AgileX, Canonical, Intrinsic, and Robotec.ai, are already deploying NVIDIA’s open-source tools to enhance simulation, digital twins, and real-world testing.

NVIDIA’s initiatives reinforce its role as a core contributor to the open-source robotics ecosystem and the development of physical AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI market surge raises alarm over financial stability

AI has become one of the dominant forces in global markets, with AI-linked firms now making up around 44% of the S&P 500’s market capitalisation. Their soaring valuations have pushed US stock indices near levels last seen during the dot-com bubble.

While optimism remains high, the future is uncertain. AI’s infrastructure demands are immense, with estimates suggesting that trillions of dollars will be needed to build and power new data centres by 2030.

Much of this investment is expected to be financed through debt, increasing exposure to potential market shocks. Analysts warn that any slowdown in AI progress or monetisation could trigger sharp corrections in AI-related asset prices.

The Bank of England has noted that financial stability risks could rise if AI infrastructure expansion continues at its current pace. Banks and private credit funds may face growing exposure to highly leveraged sectors, while power and commodity markets could also come under strain from surging AI energy needs.

Although AI remains a powerful growth driver for the US economy, its rapid expansion is creating new systemic vulnerabilities. Policymakers and financial institutions are urged to monitor the sector closely as the next phase of AI-driven growth unfolds.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot