UK drops AI copyright opt-out plan amid growing industry divide

The UK Government has abandoned its previous preference for an AI copyright opt-out model, signalling a shift in policy following strong opposition from creative industries.

Ministers now acknowledge that there is no clear consensus on how AI developers should access copyrighted material.

Concerns from writers, artists and rights holders focused on the use of their work in training AI systems without permission.

Liz Kendall confirmed that extensive consultation exposed significant disagreement, prompting the government to step back from its earlier position that would have allowed the use of copyrighted content unless creators opted out.

A joint report from the Department for Science, Innovation and Technology and the Department for Culture, Media and Sport states that further evidence is required before any legislative change.

Policymakers in the UK will assess how copyright frameworks influence AI development, while also examining international regulation, licensing models and ongoing legal disputes.

Government strategy now centres on balancing innovation with fair compensation.

Officials emphasise that creators must retain control over how their work is used, while AI developers require access to high-quality data to remain competitive. Potential measures include labelling AI-generated content to reduce risks linked to disinformation and deepfakes.

No timeline has been set for reform, reflecting the complexity of aligning economic growth with intellectual property protection.

The debate unfolds alongside broader ambitions outlined by Rachel Reeves, who has identified AI as a central driver of future economic expansion, with the UK aiming to lead adoption across the G7.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Parents underestimate how teenagers use AI in daily life

Parents often believe they understand how their children use AI tools in daily life, but recent studies suggest a clear and growing disconnect. Teenagers are using AI more frequently and in more complex ways than most adults realise.

Research indicates that 64% of teens use AI, while only 51% of parents think their children do. A large share of families have never discussed AI, leaving teenagers to navigate its role without guidance.

Teenagers commonly use AI for schoolwork, research and entertainment as part of their routine activities. However, a notable number also rely on it for advice, conversation and even emotional support in personal situations.

Experts warn that this awareness gap can increase risks linked to misuse and emotional dependence on AI tools. Limited parental understanding means many overlook how strongly AI is influencing behaviour and decision-making.

Despite these concerns, many teenagers feel confident using AI and see it as a helpful tool. Specialists emphasise that open conversations are essential to ensure more responsible and balanced use at home.

EU advances AI simplification effort ahead of further negotiations

A committee within the European Parliament has approved a proposal to simplify aspects of AI regulation, marking a step forward in efforts to refine the implementation of the AI Act.

The initiative seeks to adjust certain requirements to support clearer compliance, particularly for industry stakeholders.

The proposal focuses on technical and procedural elements linked to how AI rules are applied in practice.

Lawmakers aim to ensure that regulatory obligations remain proportionate while maintaining existing safeguards. Part of the discussion includes how specific categories of AI systems should be addressed within the broader framework.

Some elements of the proposal may require further discussion in upcoming negotiations with the Council of the European Union. Areas under consideration include the treatment of sensitive AI applications and the balance between regulatory clarity and enforcement effectiveness.

The development reflects ongoing efforts within the EU to refine its approach to AI governance. As discussions continue, policymakers are expected to assess how adjustments can support innovation while maintaining consistency with existing legal principles.

EU child safety rules lapse amid ongoing debate over privacy and enforcement

The European Union has been unable to reach an agreement on extending temporary rules that allow online platforms to detect child sexual abuse material, leaving the current framework set to expire in April.

Discussions between the European Parliament and the Council of the European Union concluded without reaching a consensus on how to proceed with such measures.

The existing rules permit technology companies to voluntarily scan their services for harmful content, supporting efforts to identify and remove illegal material.

The European Commission had proposed a temporary extension while negotiations continue on a permanent framework under the Child Sexual Abuse Regulation, but differing views on scope and safeguards prevented agreement.

Stakeholders across sectors have highlighted the importance of maintaining effective tools to address online harms, while also emphasising the need to respect fundamental rights.

Previous periods of legal uncertainty have shown that detection capabilities may be affected when such frameworks are absent, although assessments of effectiveness remain subject to ongoing debate.

At the same time, concerns have been raised regarding the broader implications of monitoring digital communications. Some perspectives stress that any approach should carefully consider privacy protections, particularly in relation to secure and encrypted services.

Attention now turns to ongoing negotiations on a long-term regulatory solution.

The outcome will shape how the EU approaches the challenge of addressing harmful online content while safeguarding rights and ensuring proportional and transparent enforcement.

UNESCO launches research on harmful online content governance in South Africa

A new research initiative led by UNESCO is examining the governance of harmful online content in South Africa, bringing together actors from government, academia, civil society and technology platforms to strengthen digital governance frameworks.

Conducted under the Social Media 4 Peace programme and supported by the EU, the study investigates the spread and impact of hate speech and disinformation while assessing existing regulatory approaches and platform governance systems.

Emphasis is placed on identifying structural gaps and developing practical responses suited to the country’s socio-political context.

Stakeholder engagement has shaped the research design to reflect local realities, with the aim of producing actionable and rights-based recommendations. As noted by a researcher involved in the project,

‘At Research ICT Africa, we don’t want this study to end with generic recommendations. We are aiming for grounded insights into how social media is shaping information integrity in our context, alongside practical guidance that regulators, platforms, and civil society can apply.’

– Kola Ijasan, a researcher at Research ICT Africa

Regulatory perspectives also highlight the importance of understanding emerging risks. As one regulator stated,

‘We are particularly interested in identifying regulatory gaps – areas where current laws and frameworks fall short in addressing emerging digital risks.’

– Nomzamo Zondi, a regulator in South Africa

Findings are expected to contribute to evidence-based policymaking, strengthen platform accountability and safeguard freedom of expression and access to information.

GDPR changes debated as EU seeks balance on data protection rules

Debate over potential updates to the GDPR is intensifying, as Marina Kaljurand advocates a focused ‘fitness check’ rather than sweeping legislative changes in an omnibus package.

Concerns raised in the European Parliament highlight risks associated with altering foundational elements of the regulation, particularly its definition of personal data. Preserving these core principles is seen as essential to maintaining the integrity of the EU’s data protection framework.

Ongoing discussions reflect broader policy tensions within the EU, where efforts to reduce regulatory complexity must be balanced against the need to uphold strong privacy safeguards. Proposals for simplification are therefore facing scrutiny from lawmakers prioritising stability and legal clarity.

Future developments are likely to shape how the EU adapts its data protection rules to evolving digital markets, while ensuring that existing protections remain effective in a rapidly changing technological environment.

Google launches AI skills initiative to support Europe’s workforce transition

At the Future of Work Forum, Google introduced ‘AI Works for Europe’, a programme aimed at strengthening digital skills and supporting workforce adaptation to AI across the region.

Funding of $30 million will be directed through Google.org to expand training opportunities, alongside broader access to AI certification programmes designed to help individuals and businesses adopt new technologies in practical contexts.

A central focus involves preparing workers and students for labour market changes.

Partnerships with organisations such as INCO are supporting the development of targeted training programmes, particularly in sectors where demand for AI-related skills is increasing, including finance, logistics and marketing.

New educational pathways are also being introduced, including an expanded AI Professional Certificate available in multiple European languages. These initiatives aim to improve AI literacy and provide hands-on experience aligned with employer expectations.

Collaboration with local organisations and institutions remains a key element, reflecting a broader strategy to ensure access to training across different regions and communities.

Efforts to expand AI capabilities across Europe highlight the growing importance of skills development as AI becomes more integrated into economic activity.

Google Earth AI supports disease forecasting and public health planning

Researchers are increasingly combining geospatial data with predictive modelling to anticipate health risks.

In that context, Google has introduced new capabilities within Google Earth AI designed to help public health experts forecast outbreaks and identify vulnerable communities.

The system integrates environmental information such as weather patterns, flooding and air quality with population mobility data and health records.

These insights allow researchers to analyse how environmental conditions influence the spread of diseases, including dengue fever and cholera.

Several research initiatives are already testing the models. In collaboration with the World Health Organization Regional Office for Africa, forecasting tools combining Google’s time-series models with geospatial data improved cholera prediction accuracy by more than 35 percent.

Academic researchers are also applying the technology to other diseases. Scientists at the University of Oxford have used Earth AI datasets to improve six-month dengue forecasts in Brazil, helping local authorities prepare preventative responses.

The technology is also being tested for chronic disease analysis. In Australia, partnerships with health organisations are exploring how geospatial models can identify regional health needs and support preventative care strategies.

Combining environmental intelligence with health data could enable public health systems to shift from reactive crisis management to earlier detection and prevention of disease outbreaks.

UK government and Microsoft support digital skills growth

Microsoft UK is the first industry partner in the UK government’s TechFirst programme, offering 500 work placements and 5,000 volunteering hours over four years. The collaboration aims to develop AI and technology skills nationwide.

The Department for Science, Innovation and Technology (DSIT) said the partnership will expand digital capabilities in education and the workforce. Microsoft UK CEO Darren Hardman will serve as Social Mobility Champion, linking students and early-career talent with technology-sector opportunities.

TechFirst aims to reach one million secondary students and over 4,000 graduates and researchers, providing school programmes, scholarships, doctoral support, and regional funding to connect businesses with local talent.

Microsoft’s commitment includes mentoring and placements to support students entering technology careers.

Scholarships include TechGrad for undergraduates and master’s students, and the Spärck AI Scholarship, supporting AI degrees at nine UK universities, including Cambridge, Oxford, Imperial College, and UCL.

Doctoral researchers benefit from the TechExpert initiative, while the Turing AI Fellowships attract top AI talent to UK institutions.

OpenAI plans to integrate Sora video generation into ChatGPT

According to reports, OpenAI is preparing to integrate its AI video generator Sora directly into ChatGPT, a move that could expand the platform’s capabilities beyond text and image generation.

Sora currently operates as a standalone application and web service. Integrating the tool into ChatGPT could dramatically increase its visibility and usage, particularly given the chatbot’s massive global user base.

The company released an updated version of the model in 2025 that allows users to create, remix and even appear inside AI-generated videos. Bringing those features into ChatGPT would represent a major step toward making video generation a mainstream function within conversational AI systems.

Competition in the generative video market is intensifying. Google, for instance, offers video creation on its Gemini platform powered by the Veo system, and other developers are also launching text-to-video models as the field rapidly expands.

Despite the potential growth, integrating video generation into ChatGPT may significantly increase operating costs. Running large AI systems requires vast computing resources and energy, and the chatbot already costs billions of dollars annually to operate.

Although OpenAI earns revenue from subscriptions, the majority of ChatGPT users currently use the free version. The company is therefore exploring additional monetisation strategies, including advertising and new premium services.

Integrating Sora into ChatGPT could therefore serve both strategic and financial goals, strengthening the platform’s position in the competitive generative AI market while expanding the types of content users can create.