A judge in Amsterdam has ordered AI chatbot Grok and platform X to stop generating and distributing explicit deepfake images. The ruling targets so-called ‘undressing’ content and illegal material involving minors.
The case was brought by Offlimits, which argued that safeguards were failing. The court found sufficient evidence that harmful images could still be created despite existing restrictions.
The court imposed a penalty of €100,000 per day for violations, with a maximum of €10 million. Access to Grok on X must also be suspended if the system does not comply with the order.
The decision highlights growing legal pressure on AI platforms to control the misuse of generative tools. Regulators and courts are increasingly demanding stronger protections against online abuse and illegal content.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The government of California is advancing a more interventionist approach to AI governance, signalling a divergence from federal deregulatory preferences.
An executive order signed by Governor Gavin Newsom mandates the development of comprehensive AI policies within four months, prioritising public safety and the protection of fundamental rights.
The proposed framework requires companies seeking state contracts to demonstrate safeguards against harmful outputs, including the prevention of child exploitation material and violent content.
It also calls for measures addressing algorithmic bias and unlawful discrimination, alongside increased transparency through mechanisms such as watermarking AI-generated media.
The evolving policy landscape reflects growing concern over the societal impact of AI systems, including risks to employment, content integrity and civil liberties.
The California initiative may therefore serve as a testing ground for future regulatory models, shaping broader debates on balancing innovation with accountability in digital governance.
A cybersecurity incident involving CareCloud has exposed vulnerabilities in the protection of sensitive medical information, following unauthorised access to patient records stored within its systems.
A breach was detected on 16 March, allowing attackers to access electronic health records for several hours, which raised concerns about potential data exposure.
The company has stated that the intrusion was contained on the same day, with systems restored and an external investigation launched.
However, uncertainty remains about whether any data were extracted and the scale of the potential impact, particularly given the company’s role in supporting tens of thousands of healthcare providers and millions of patients.
The incident reflects broader structural risks within digital healthcare infrastructures, where centralised storage of highly sensitive data increases the potential impact of cyberattacks.
Cloud environments, including services provided by Amazon Web Services, are increasingly integral to such systems, amplifying both efficiency and exposure.
The breach follows a pattern of escalating cyber threats targeting healthcare data, driven by its high value in criminal markets.
As investigations continue, the case underscores the need for stronger data protection measures, enhanced monitoring systems and more robust regulatory oversight to safeguard patient information.
Australia’s eSafety Commissioner has released an update on rules requiring platforms to prevent users under 16 from holding accounts. Early results show significant action by companies, but also ongoing challenges in fully enforcing the restrictions.
By mid-December 2025, around 4.7 million accounts were removed or restricted, with more than 300,000 additional accounts blocked by March 2026. Despite these reductions, many children continue to retain accounts, create new ones, or pass age assurance checks.
Regulators identified several compliance concerns, including platforms that allow repeated age-verification attempts and prompt some users to update their stated ages. Reporting systems for underage accounts were often difficult to access, particularly for parents.
Investigations into five major platforms are ongoing to determine whether they have taken reasonable steps to meet their legal obligations. Authorities are assessing systems and processes rather than individual accounts, with enforcement decisions expected by mid-2026.
A new legislative rule introduced in March 2026 targets platform features linked to potential harm, such as recommender systems and continuous content feeds. Regulators will continue working with industry while gathering evidence and maintaining transparency during the enforcement process.
The European Commission has committed €5 million to strengthen independent fact-checking networks, reinforcing efforts to counter disinformation across Europe. The initiative seeks to expand verification capacity in all EU languages while improving coordination among key stakeholders.
It also establishes a centralised European repository of verified information, designed to enhance transparency and improve access to reliable content across the EU.
Led by the European Fact-Checking Standards Network, the project builds on existing frameworks such as the European Digital Media Observatory. The initiative forms part of the EU’s broader strategy to strengthen information integrity and safeguard democratic processes.
By reinforcing independent verification ecosystems, the programme reflects a policy-driven effort to address disinformation threats while supporting a more resilient and trustworthy digital environment across Europe.
YouTube has expanded its conversational AI tool to smart TVs, marking a significant step in making home viewing more interactive. Viewers can now engage with content directly from their television screens using voice-enabled queries.
Access to the feature is simple. While watching a video, users can select the ‘Ask’ option and activate their remote’s microphone button to interact with the AI. Users can ask about similar content or a creator’s catalogue in real time, with prompts available to guide new users.
Initial rollout of the tool took place last year across mobile and web platforms, where it quickly became a practical companion for deeper content engagement. Viewers already use it to analyse podcasts, explore destinations, and understand content without pausing videos.
Expansion to smart TVs strengthens YouTube’s push to transform passive viewing into an interactive experience. Living room entertainment is increasingly shaped by AI-driven features, with real-time assistance now integrated directly into the home’s largest screen.
South Asia is strengthening digital platform governance through a rights-based approach shaped by regional cooperation and international guidance.
A workshop led by UNESCO brought together policymakers, civil society and academics to align platform regulation with principles of freedom of expression and access to information.
The discussions focused on addressing governance gaps linked to misinformation, platform accountability and transparency. Participants examined national experiences and identified shared regulatory challenges, emphasising the need for coordinated regional responses instead of fragmented national measures.
The initiative also validated regional toolkits designed for policymakers and civil society, translating global principles into practical guidance. These tools aim to support the implementation of governance frameworks that reflect local contexts while upholding international human rights standards.
The process builds on UNESCO’s Internet for Trust guidelines, reinforcing a human-centred model of digital governance. Continued collaboration across South Asia is expected to strengthen regulatory capacity and ensure that digital platforms operate with greater accountability and public trust.
The UK government has pledged up to £20 million to boost the creative technology sector in the Tay Cities Region. The investment aims to support innovation in areas such as video games and virtual reality while driving economic growth.
Funding will help develop local talent and accelerate projects from early research to commercial products. The initiative focuses on strengthening collaboration between businesses, researchers and public bodies to expand opportunities across the region.
Centred around Dundee and the surrounding areas, the programme will build on an established reputation in digital industries. Universities and industry partners are expected to play a key role in delivering research, training and access to investment networks.
UK officials say the move will create jobs and open new markets, while supporting emerging applications in sectors including healthcare and education. The funding forms part of a wider national strategy to strengthen innovation and regional economies.
The Italian Data Protection Authority has imposed a €31.8 million fine on Intesa Sanpaolo following serious shortcomings in its handling of personal data.
The case stems from unauthorised access by an employee to thousands of customer accounts, raising concerns about internal oversight and data protection safeguards.
Investigations revealed that monitoring systems failed to detect repeated, unjustified access to sensitive financial information over an extended period. The breach also affected high-risk individuals, exposing weaknesses where robust, targeted protection measures should have applied.
Authorities in Italy identified violations of core data protection principles, including integrity, confidentiality and accountability. Additional concerns arose from delays in notifying both regulators and affected individuals, limiting the ability to respond effectively to the incident.
The case of Intesa Sanpaolo underscores increasing regulatory scrutiny of data governance practices in the financial sector. Strengthening internal controls and ensuring timely breach reporting remain essential for maintaining trust and compliance in data-driven banking environments.
The British Embassy in Manila and the Philippines’ Department of Education have expanded cooperation to advance EdTech and digital learning, focusing on inclusive and evidence-based approaches rather than fragmented implementation.
The partnership aims to strengthen foundational learning while supporting long-term resilience in the education system.
Support is being delivered through EdTech Hub, with initiatives centred on developing a National EdTech Policy, improving responses to climate-related disruptions, and expanding the use of AI in education administration.
The programme includes pilot projects and evaluation frameworks designed to ensure technology adoption remains effective, scalable, and responsive to local needs.
A key component involves participation in global AI initiatives, including an observatory and challenge programme to build institutional capacity and encourage experimentation.
These efforts seek to enhance efficiency in education systems while supporting innovation in teaching and learning environments, particularly in areas affected by environmental and structural challenges.
The collaboration between the UK and the Philippines reflects a broader commitment to digital transformation in education across Southeast Asia, aiming to ensure equitable access to learning opportunities.
By combining research, policy development, and technological innovation, both sides seek to prepare students and institutions for evolving demands while maintaining a focus on inclusion and long-term sustainability.