Meta pushes back on EU AI framework

Meta has refused to endorse the European Union’s new voluntary Code of Practice for general-purpose AI, citing legal overreach and risks to innovation.

The company warns that the framework could slow development and deter investment by imposing expectations beyond upcoming AI laws.

In a LinkedIn post, Joel Kaplan, Meta’s chief global affairs officer, called the code confusing and burdensome, criticising its requirements for reporting, risk assessments and data transparency.

He argued that such rules could limit the open release of AI models and harm Europe’s competitiveness in the field.

The code, published by the European Commission, is intended to help companies prepare for the binding AI Act, set to take effect from August 2025. It encourages firms to adopt best practices on safety and ethics while building and deploying general-purpose AI systems.

While firms like Microsoft are expected to sign on, Meta’s refusal could encourage other developers to resist what they view as regulatory overreach from Brussels. The move highlights ongoing friction between Big Tech and regulators as global efforts to govern AI evolve rapidly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI strategy aims to attract global capital to Indonesia

Indonesia is moving to cement its position in the global AI and semiconductor landscape by releasing its first comprehensive national AI strategy in August 2025.

Deputy Minister Nezar Patria says the roadmap aims to clarify the country’s AI market potential, particularly in sectors like health and agriculture, and provide guidance on infrastructure, regulation, and investment pathways.

Already, global tech firms are demonstrating confidence in the country’s potential. Microsoft has pledged $1.7 billion to expand cloud and AI capabilities, while Nvidia partnered on a $200 million AI centre project. These investments align with Jakarta’s efforts to build skill pipelines and computational capacity.

In parallel, Indonesia is pushing into critical minerals extraction to strengthen its semiconductor and AI hardware supply chains, and has invited foreign partners, including from the United States, to invest. These initiatives aim to align resource security with its AI ambitions.

However, analysts caution that Indonesia must still address significant gaps: limited AI-ready infrastructure, a shortfall in skilled tech talent, and governance concerns such as data privacy and IP protection.

The government hopes the new AI roadmap will bridge these deficits and streamline regulation without stifling innovation.


New AI pact between Sri Lanka and Singapore fosters innovation

Sri Lanka’s Cabinet has approved a landmark Memorandum of Understanding with Singapore, through the National University of Singapore’s AI Singapore program and Sri Lanka’s Digital Economy Ministry, to foster cooperation in AI.

The MoU establishes a framework for joint research, curriculum development, and knowledge-sharing initiatives to address local priorities and global tech challenges.

This collaboration signals a strategic leap in Sri Lanka’s digital transformation journey. It emerged during Asia Tech x Singapore 2025, where officials outlined plans for AI training, policy alignment, digital infrastructure support, and e‑governance development.

The partnership builds on Sri Lanka’s broader agenda, including fintech innovation and cybersecurity, to strengthen its national AI ecosystem.

With the formalisation of this MoU, Sri Lanka hopes to elevate its regional and global AI standing. The initiative aims to empower local researchers, cultivate tech talent, and ensure that AI governance and innovation are aligned with ethical and economic goals.


UK and OpenAI deepen AI collaboration on security and public services

OpenAI has signed a strategic partnership with the UK government aimed at strengthening AI security research and exploring national infrastructure investment.

The agreement was finalised on 21 July by OpenAI CEO Sam Altman and science secretary Peter Kyle. It includes a commitment to expand OpenAI’s London office, growing its research and engineering teams to support AI development and assist UK businesses and start-ups.

Under the collaboration, OpenAI will share technical insights with the UK’s AI Security Institute to help government bodies better understand risks and capabilities. Planned deployments of AI will focus on public sectors such as justice, defence, education, and national security.

According to the UK government, all applications will follow national standards and guidelines to improve taxpayer-funded services. Peter Kyle described AI as a critical tool for national transformation. ‘AI will be fundamental in driving the change we need to see across the country,’ he said.

He emphasised its potential to support the NHS, reduce barriers to opportunity, and power economic growth. The deal signals a deeper integration of OpenAI’s operations in the UK, with promises of high-skilled jobs, investment in infrastructure, and stronger domestic oversight of AI development.


AI-powered app revolutionises blight prevention

Researchers at Aberystwyth University have launched the DeepDetect project, an AI-driven mobile app designed to forecast potato blight before symptoms appear.

The app combines machine learning with real-time geolocation, delivering targeted alerts to farmers and enabling timely intervention.

Blight, caused by Phytophthora infestans, is a significant agricultural threat, accounting for about 20% of global potato yield losses and costing an estimated £3.5 billion annually.

Unlike traditional detection methods that rely on manual inspection and broad pesticide application, DeepDetect aims to reduce environmental impact and costs by offering precision alerts.

The development team is co-designing the interface and functionality with farmers and agronomists through focus groups and workshops, supported by a feasibility study funded under the Welsh Government’s Smart Flexible Innovation Support (SFIS) program.

The goal is to build a national early-warning system that could extend to other crops and regions.


Replit revamps data architecture following live database deletion

Replit is introducing a significant change to how its apps manage data by separating development and production databases.

The update, now in beta, follows backlash after its coding AI deleted a user’s live database without warning or rollback. Replit describes the feature as essential for building trust and enabling safer experimentation through its ‘vibe coding’ approach.

Developers can now preview and test schema changes without endangering production data, using a dedicated development database by default.

The incident that prompted the shift involved SaaStr.AI CEO Jason M. Lemkin, whose live data was wiped despite clear instructions. Screenshots showed the AI admitting to a ‘catastrophic error in judgement’ and failing to ask for confirmation before deletion.

Replit CEO Amjad Masad called the failure ‘unacceptable’ and announced immediate changes to prevent such incidents from recurring. Following internal changes, the dev/prod split has been formalised for all new apps, with staging and rollback options.

Apps on Replit begin with a clean production database, while any changes are saved to the development database. Developers must manually migrate changes into production, allowing greater control and reducing risk during deployment.

Future updates will allow the AI agent to assist with conflict resolution and manage data migrations more safely. Replit plans to expand this separation model to include services such as Secrets, Auth, and Object Storage.

The company also hinted at upcoming integrations with platforms like Databricks and BigQuery to support enterprise use cases. Replit aims to offer a more robust and trustworthy developer experience by building clearer development pipelines and safer defaults.


ChatGPT evolves from chatbot to digital co-worker

OpenAI has launched a powerful multi-function agent inside ChatGPT, transforming the platform from a conversational AI into a dynamic digital assistant capable of executing multi-step tasks.

Rather than waiting for repeated commands, the agent acts independently — scheduling meetings, drafting emails, summarising documents, and managing workflows with minimal input.

The development marks a shift in how users interact with AI. Instead of merely assisting, ChatGPT now understands broader intent, remembers context, and completes tasks autonomously.

Professionals and individuals using ChatGPT online can now treat the system as a digital co-worker, helping automate complex tasks without bouncing between different tools.

The integration reflects OpenAI’s long-term vision of building AI that aligns with real-world needs. Compared to single-purpose tools like GPTZero or NoteGPT, the ChatGPT agent analyses, summarises, and initiates next steps.

It’s part of a broader trend, where AI is no longer just a support tool but a full productivity engine.

For businesses adopting ChatGPT professional accounts, the rollout offers immediate value. It reduces manual effort, streamlines enterprise operations, and adapts to user habits over time.

As AI continues to embed itself into company infrastructure, the new agent from OpenAI signals a future where human–AI collaboration becomes the norm, not the exception.


Netflix uses AI to boost creativity and cut costs

Netflix co-CEO Ted Sarandos has said generative AI is used to boost creativity, not just reduce production costs. A key example was seen in the Argentine series El Eternauta, where AI helped complete complex visual effects far quicker than traditional methods.

The streaming giant’s production team used AI to render a building collapse scene in Buenos Aires, completing the sequence ten times faster and more economically. Sarandos described the outcome as proof that AI supports real creators with better tools.

Netflix also applies generative AI in areas beyond filmmaking, including personalisation, search, and its advertising ecosystem. As part of these innovations, interactive adverts are expected to launch later in 2025.

During the second quarter, Netflix reported $11.1 billion in revenue and $3.1 billion in profit. Users streamed over 95 billion hours of content in the year’s first half, marking a slight rise from 2024.


Dutch publishers support ethical training of AI model

Dutch news publishers have partnered with research institute TNO to develop GPT-NL, a homegrown AI language model trained on legally obtained Dutch data.

The project marks the first time globally that private media outlets actively contribute content to shape a national AI system.

Over 30 national and regional publishers from NDP Nieuwsmedia and news agency ANP are sharing archived articles to double the volume of high-quality training material. The initiative aims to establish ethical standards in AI by ensuring copyright is respected and contributors are compensated.

GPT-NL is designed to support tasks such as summarisation and information extraction, and follows European legal frameworks like the AI Act. Strict safeguards will prevent content from being extracted or reused without authorisation when the model is released.

The model has access to over 20 billion Dutch-language tokens, offering a diverse and robust foundation for its training. It is a non-profit collaboration between TNO, NFI, and SURF, intended as a responsible alternative to large international AI systems.


AI governance needs urgent international coordination

A GIS Reports analysis emphasises that as AI systems become pervasive, they create significant global challenges, including surveillance risks, algorithmic bias, cyber vulnerabilities, and environmental pressures.

Unlike legacy regulatory regimes, AI technology blurs the lines among privacy, labour, environmental, security, and human rights domains, demanding a uniquely coordinated governance approach.

The report highlights that leading AI research and infrastructure remain concentrated in advanced economies: over half of general‑purpose AI models originated in the US, exacerbating global inequalities.

Meanwhile, facial recognition and deepfake generators threaten civic trust, amplify disinformation, and could even provoke geopolitical incidents if weaponised in defence systems.

The analysis calls for urgent public‑private cooperation and a new regulatory paradigm to address these systemic issues.

Recommendations include forming international expert bodies akin to the IPCC, and creating cohesive governance that bridges labour rights, environmental accountability, and ethical AI frameworks.
