EU explores AI image generation safeguards

The Council of the European Union is examining a compromise proposal that could introduce restrictions on certain AI systems capable of generating sensitive synthetic images.

The discussions form part of ongoing adjustments to the EU AI Act.

A proposed measure would primarily address AI tools that generate illegal material, particularly content involving the exploitation of minors.

Policymakers are considering ways to prevent the development or deployment of systems that could produce such material while maintaining proportionate rules for legitimate AI applications.

Early indications suggest the proposal may not apply to images depicting people in standard clothing contexts, such as swimwear. The distinction reflects policymakers’ effort to define the scope of restrictions without imposing unnecessary limits on common image-generation uses.

The debate highlights broader regulatory challenges linked to generative AI technologies. European institutions are seeking to strengthen protections against harmful uses of AI while preserving space for innovation and lawful digital services.

Further negotiations among the EU institutions are expected as lawmakers continue refining how these provisions could fit within the broader European framework governing AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Moltbook founders join Meta’s AI research lab

Meta Platforms has acquired Moltbook, a social networking platform designed for AI agents. The deal brings co-founders Matt Schlicht and Ben Parr into Meta’s AI research division, the Superintelligence Labs, led by Alexandr Wang.

Financial terms of the acquisition were not disclosed, and the founders are expected to start on 16 March.

Moltbook, launched in January, allows AI-powered bots to exchange code and interact socially in a Reddit-like environment. The platform has sparked debate on AI autonomy and real-world capabilities, highlighting growing competition among tech giants for AI talent and technology.

Industry figures have offered differing views on the platform’s significance. OpenAI CEO Sam Altman called Moltbook a potential fad but acknowledged its underlying technology hints at the future of AI agents.

Meanwhile, Anthropic’s chief product officer, Mike Krieger, noted that most users are not ready to grant AI full autonomy over their systems.

The platform’s growth also highlighted security risks. Cybersecurity firm Wiz reported a vulnerability that exposed private messages, email addresses, and credentials, which was resolved after the owners were notified.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google adds option to disable AI search in Google Photos

Users of Google Photos will now have greater control over how they search their images, after Google introduced a visible toggle that returns to the traditional search experience.

The update follows complaints about the AI-powered Ask Photos feature.

Ask Photos was designed to allow users to search for images using natural language queries rather than simple keywords. The tool aimed to make photo searches more flexible, enabling complex queries such as descriptions of people, events or locations captured in images.

However, some users reported that the AI system produced slower results and occasionally failed to locate images that the classic search had previously found more reliably.

Although an option to turn off the AI feature already existed, it was hidden within settings and often overlooked.

The new update introduces a visible switch directly on the search interface. Users can now easily alternate between the AI-powered search and the traditional search system depending on their preferences.

Google said improvements have also been made to the quality of common searches following user feedback. The company emphasised that search remains one of the most frequently used functions within Google Photos and that ongoing updates will continue to refine the experience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Malicious npm package targets developers with Openclaw impersonation

Security researchers uncovered a malicious npm package impersonating an Openclaw AI installer, designed to infect developer machines with credential-stealing malware.

JFrog Security Research identified the attack in early March 2026 after the package appeared on the npm registry and was downloaded roughly 178 times.

The deceptive package mimics legitimate Openclaw tools and contains ordinary-looking JavaScript files and documentation. Hidden scripts run during installation, displaying a fake command-line interface and a fabricated system prompt that requests the user’s password.

Entering the password grants the malware elevated access and allows it to download an encrypted payload from a remote command server. Once installed, the payload deploys Ghostloader, a remote access trojan that persists on the system and communicates with attacker servers.

Researchers say the malware targets sensitive information, including saved passwords, browser cookies, SSH keys, and cryptocurrency wallet files. Developers are advised to remove the package immediately, rotate credentials, and install software only from verified sources.
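The attack relies on npm lifecycle scripts, which run automatically during installation. As a minimal illustration (not the tooling JFrog used, and the manifest below is hypothetical), a short script can inspect a package.json before installing and flag any hooks that would execute code automatically:

```python
import json

# npm lifecycle hooks that run automatically during `npm install` --
# the mechanism install-time malware abuses to execute hidden scripts.
AUTO_RUN_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def flag_install_scripts(package_json_text: str) -> list[str]:
    """Return the names of lifecycle scripts that would run on install."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return sorted(hook for hook in AUTO_RUN_HOOKS if hook in scripts)

if __name__ == "__main__":
    # Hypothetical manifest shaped like the impersonating package.
    suspicious = '{"name": "fake-installer", "scripts": {"postinstall": "node setup.js"}}'
    print(flag_install_scripts(suspicious))  # ['postinstall']
```

As a complementary precaution, npm itself supports `npm install --ignore-scripts`, which prevents lifecycle scripts from running at all.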

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dutch court increases pressure on Meta over non-profiling social media feeds

A court in the Netherlands has increased potential penalties against Meta after ruling that changes to social media timelines must be implemented urgently.

The decision raises the potential fine for non-compliance from €5 million to €10 million if required adjustments are not applied to Facebook and Instagram feeds.

Judges at the Amsterdam Court of Appeals said users must be able to select a timeline that does not rely on profiling-based recommendations.

The ruling follows a legal challenge from the digital rights organisation Bits of Freedom, which argued that users who switched away from algorithmic feeds were automatically returned to them after navigating the platform or reopening the application.

The court concluded that the automatic resetting mechanism represents a deceptive design practice known as a ‘dark pattern’.

Such practices are prohibited under the EU’s Digital Services Act, which requires large online platforms to provide greater transparency and user control over recommendation systems.

Judges acknowledged that Meta had already introduced several technical changes, although not all required measures were fully implemented. The company must ensure that the non-profiling timeline option remains active once selected, rather than reverting to algorithmic recommendations.

The dispute also highlights regulatory tensions within the European framework. Before turning to the courts, Bits of Freedom submitted a complaint to Coimisiún na Meán, the national authority responsible for overseeing Meta’s compliance with the EU rules.

According to the organisation, the lack of progress from regulators encouraged legal action in Dutch courts.

Meta indicated that the company intends to challenge the decision and pursue further legal proceedings. The case could become an important test of how the Digital Services Act is enforced against major online platforms across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital sovereignty in Asia moves beyond US versus non-US cloud debate

AI, cloud computing, and cross-border data flows have made questions about control and jurisdiction increasingly important for governments and businesses. In Asia, the debate around digital sovereignty often focuses on ‘US versus non-US cloud’ providers or data localisation.

Such simplifications miss the practical challenges organisations face when choosing hosting locations or training AI models while navigating diverse regulatory regimes.

At the same time, Asia’s digital economy is building its own regulatory foundations. In Vietnam and Indonesia, new rules such as Vietnam’s Decree 53 and Indonesia’s data protection framework show how governments are shaping data governance while still relying on global cloud and AI platforms. Most organisations across the region continue to operate using a mix of local, regional, and international providers.

Organisations must address key questions about data jurisdiction and workload mobility when risks change. They must also control who can access sensitive systems during incidents. Digital sovereignty is clearer when seen through three pillars: data sovereignty, technical sovereignty, and operational sovereignty.

Data sovereignty is about jurisdiction, not just data storage. As AI regulation expands, businesses need to know which authorities can access their data and how it may be used. Technical sovereignty is the ability to move or redesign systems as regulations or geopolitics shift. Multi-cloud and hybrid strategies help organisations remain adaptable.

Operational sovereignty focuses on governance and control. It addresses who can access systems, from where, and under what safeguards, thus linking sovereignty directly to cybersecurity and incident response.

For Asia-Pacific organisations, digital sovereignty should not be a simple procurement checklist. Instead, it should guide cloud and AI strategies from the start, ensuring legal clarity, technical flexibility, and operational trust as the digital landscape evolves.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New York moves to ban chatbots from giving legal and medical advice

New York lawmakers are considering legislation that would ban AI chatbots from providing legal or medical advice. The bill aims to stop automated systems from impersonating licensed professionals such as doctors and lawyers.

The proposal would also require chatbot operators to clearly inform users that they are interacting with an AI system. Notices must be prominent, written in the same language as the chatbot, and use a readable font.

A key feature of the bill is a private right of action. This would allow users to file civil lawsuits against chatbot owners who violate the law, recovering damages and legal fees. Experts say this enforcement tool strengthens the rules and deters abuse.

Supporters of the legislation argue it protects New Yorkers’ safety, particularly minors. Other bills in the same package would regulate online platforms like Roblox and set standards for generative AI, synthetic content, and the handling of biometric data.

The bill’s author, state Senator Kristen Gonzalez, said AI innovation should not come at the expense of public safety. She pointed to recent cases where AI chatbots were linked to harmful outcomes for minors, highlighting the need for transparency and accountability.

If passed, the law would take effect 90 days after the governor signs it. Lawmakers hope it will balance innovation with user protection, ensuring AI tools are used responsibly and safely across the state.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tycoon 2FA phishing service disrupted in global cybercrime crackdown

Authorities have disrupted the Tycoon 2FA phishing-as-a-service (PhaaS) platform, which sent millions of phishing emails to organisations worldwide.

The operation, led by Microsoft, Europol, and several industry partners, targeted the infrastructure behind Tycoon 2FA, which enabled large-scale phishing campaigns against more than 500,000 organisations each month.

By mid-2025, Tycoon 2FA accounted for 62% of the phishing attempts blocked by Microsoft, with over 30 million malicious emails blocked in a single month. Experts link the platform to around 96,000 global victims since 2023, including 55,000 Microsoft customers.

Researchers from Resecurity found cybercriminals widely used the platform to impersonate legitimate users and gain unauthorised access to accounts such as Microsoft 365, Outlook and Gmail. The service relied on techniques such as URL rotation using open redirect vulnerabilities and the misuse of Cloudflare Workers to hide malicious infrastructure.

‘The author of Tycoon 2FA is actively updating the tool with regular kit updates,’ reads the report published by Resecurity. ‘What makes Tycoon 2FA so special is that the kit effectively combines multiple methods to deliver phishing at scale—from PDF attachments to QR codes.’

Authorities say taking the infrastructure offline disrupts a key pathway for account takeover attacks and prevents additional threats, such as data theft, ransomware, business email compromise, and financial fraud.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT Edu launches at Clemson University for students and faculty

Clemson University has introduced ChatGPT Edu to its students, faculty, and staff, providing them free access to the secure, institutionally managed version of the AI platform.

The rollout is part of Clemson’s partnership with OpenAI. It forms part of the university’s broader AI Initiative, which aims to develop a human-centred approach to AI across education, research, and operations.

University officials said the ChatGPT Edu environment will expand access to generative AI tools while ensuring institutional data remains protected and is not used to train external AI systems.

Members of the Clemson community who want to use the platform must request access through a ChatGPT Edu account request form. Once approved, accounts are automatically created, and users can sign in through Clemson’s single sign-on system.

Even if students or staff members already have a ChatGPT account linked to their Clemson email, they will still need to request access to ChatGPT Edu. After approval, they can merge their current account or download their chat history before creating a new one.

The university said the launch reflects its view that access to emerging technologies should be paired with clear guidance and responsible use. Users are advised to review Clemson’s updated AI guidelines before using the system.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

CEOs track new metric in AI workforce shift

Executives across the US are increasingly using a metric known as labour cost margin to evaluate workforce needs in the AI era. Business leaders say the measure reflects how companies balance human labour with expanding technology investments.

A KPMG survey of 100 US CEOs shows strong corporate commitment to AI spending. Nearly 80 percent of executives allocate at least five percent of capital budgets to AI projects.

The workforce impact remains uncertain despite growing investment. Many executives expect AI to change job composition rather than eliminate roles.

Companies are hiring for new technology-focused roles, including AI strategists and workflow coordinators. Analysts say repetitive office tasks in the US may face the greatest risk from automation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!