Scotland sets up national AI agency

The Scottish government has launched a dedicated national agency to drive AI strategy and support local tech companies. Leaders say this effort could help boost the economy and establish the nation as a hub for AI development.

Scotland’s strategy highlights existing tech firms and data projects, including plans for major computing campuses and partnerships with global technology companies. Several research institutions and supercomputing initiatives are contributing to innovation.

Healthcare is a focus for AI adoption, with studies showing that AI tools could improve cancer detection, speed up diagnoses, and reduce workload. Academic projects also aim to develop tools to detect early signs of dementia.

Scottish government officials have acknowledged ethical, workforce and environmental concerns around AI deployment. They say policies will include responsible use, workforce planning and efforts to power data infrastructure with renewable energy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfakes scandal puts Elon Musk and X under scrutiny in France

French prosecutors have escalated concerns about deepfakes linked to Elon Musk’s platform X, alerting US authorities to suspicions that manipulated content may have been used to influence the company’s valuation.

According to the Paris prosecutor’s office, the controversy surrounding sexually explicit deepfakes generated by Grok, X’s AI tool, may have been deliberately amplified to artificially boost the value of X and its associated AI entity ahead of a planned stock market listing in June 2026.

Authorities in France confirmed they had contacted the US Department of Justice and legal representatives at the Securities and Exchange Commission to share findings related to the deepfakes investigation and potential financial implications.

The case builds on an ongoing French probe into X, which initially focused on alleged algorithmic interference in domestic politics. Investigations have since expanded to include the spread of Holocaust denial content and the dissemination of sexualised deepfakes through Grok.

French regulators have taken additional steps, including summoning Musk for a voluntary interview and conducting searches at X’s local offices, actions he has described as politically motivated. Parallel investigations have also been launched in the UK and across the European Union into the use of AI tools to generate harmful deepfakes involving women and minors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Social media ban in Ecuador targets youth crime recruitment

A proposal to restrict minors’ online activity is gaining momentum in Ecuador, where lawmakers are considering a social media ban for children under 15 as part of a broader response to rising organised crime.

Under discussion in the National Assembly, the initiative introduced by Assembly member Katherine Pacheco Machuca would amend the Code of Childhood and Adolescence to block access to platforms enabling public interaction, content sharing, and messaging. The proposal defines social networks broadly, covering services that allow users to create accounts, connect with others, and exchange content.

Unlike similar debates elsewhere, the justification for the social media ban is rooted less in mental health or privacy concerns and more in security. Ecuador has experienced a sharp deterioration in public safety, with rising homicide rates, expanding criminal networks, and increasing pressure on state institutions.

Recent findings from Ecuador’s Organised Crime Observatory indicate that around 27% of minors approached by criminal groups report initial contact through social media platforms. Surveys conducted by ChildFund Ecuador further suggest that vulnerable adolescents are increasingly exposed to recruitment tactics that combine economic incentives with normalised portrayals of violence.

In that context, the proposed social media ban is framed as a preventative measure against criminal recruitment rather than solely a child protection tool. The initiative forms part of a wider regulatory shift, including new cybersecurity legislation and draft laws targeting recruitment practices conducted through digital channels.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US releases national AI policy framework

The Trump Administration unveiled a national AI framework to boost competitiveness, security, and benefits for Americans. The plan seeks to ensure that AI innovation supports all citizens while maintaining public trust in the technology.

Six key objectives form the foundation of the policy. These include protecting children online, empowering parents with tools to manage digital safety, strengthening communities and small businesses, respecting intellectual property, defending free speech, and fostering innovation.

The framework also prioritises workforce development to prepare Americans for AI-driven job opportunities.

Federal uniformity is considered critical to the plan’s success. The Administration warns that a patchwork of state regulations could stifle innovation and reduce the United States’ ability to lead globally.

Congress is encouraged to collaborate closely to implement the framework nationwide.

The Administration emphasises that the United States must lead the AI race, ensuring the benefits of AI reach all Americans while addressing challenges such as privacy, security, and equitable access to opportunities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Social media linked to declining well-being among young people

The World Happiness Report 2026 has identified a growing decline in well-being among young people, with increased social media use emerging as a key contributing factor. These findings suggest that digital habits are increasingly shaping life satisfaction, particularly across Western societies.

The report notes that younger age groups now report significantly lower happiness levels than in previous decades.

In regions such as North America and Western Europe, the decline coincides with a sharp rise in time spent on social media platforms. Researchers highlight that heavy usage is associated with measurable reductions in well-being, especially among younger users.

Alongside these trends, the report continues to rank Finland as the happiest country globally, reflecting broader stability in Nordic nations. However, such stability contrasts with emerging concerns about mental health and social outcomes in more industrialised regions, where digital environments are playing an increasingly influential role.

While the report identifies risks including cyberbullying, depression and online exploitation, it does not advocate for complete restrictions. Instead, it emphasises the need for carefully designed regulatory approaches that balance protection with the potential benefits of digital connectivity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake abuse crisis escalates worldwide

AI-generated deepfake abuse is emerging as a serious global threat, with women and girls disproportionately affected by non-consensual and harmful digital content. Advances in AI make it easy to create manipulated content that can spread across platforms within minutes and reach millions.

Data highlights the scale of the issue. The vast majority of deepfake content online consists of explicit material, overwhelmingly targeting women.

Accessible and often free tools have lowered the barrier to entry, enabling widespread misuse. At the same time, the ability to endlessly replicate and share such content makes removal nearly impossible once it is published.

Legal responses remain fragmented, with many pre-existing laws leaving gaps in addressing AI-generated deepfake abuse. Enforcement issues, such as cross-border challenges and limited digital forensics capabilities, make it unlikely that perpetrators will face consequences.

Pressure is mounting on governments and technology platforms to act. Calls for reform include clearer legislation, faster obligations to remove content, improved law enforcement capabilities, and stronger support systems for victims.

Without coordinated global action, deepfake abuse is set to expand alongside the technologies enabling it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Human data demand fuels new global digital economy

A growing number of individuals worldwide are participating in a new digital economy built around supplying data for AI systems.

Through platforms such as Kled AI and Silencio, users upload videos, audio recordings and personal interactions in exchange for payment, contributing to the development of increasingly sophisticated AI models.

This trend reflects a broader shift in the AI industry, where demand for high-quality human-generated data is rising as traditional web-based sources become more limited.

Researchers suggest that human data remains essential for improving system performance and modelling behaviour beyond existing datasets. As a result, data marketplaces have emerged as an alternative supply mechanism.

Economic considerations often shape participation. In regions facing limited employment opportunities or currency instability, earning income in global currencies can provide a meaningful financial incentive.

At the same time, similar practices are expanding in higher-income countries, where individuals seek supplementary income streams amid rising living costs.

However, the model introduces complex trade-offs.

Contributors may grant extensive usage rights over their data, sometimes on a long-term or irreversible basis. Experts note that such arrangements can reduce control over how personal information is reused, including in contexts not initially anticipated.

Concerns also extend to issues such as data security, transparency and the potential for misuse in areas including synthetic media and identity replication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces pressure to strengthen digital safeguards ahead of elections

Emmanuel Macron has called for stronger enforcement of the EU digital rules, urging Ursula von der Leyen to act against risks linked to foreign interference in elections. The request comes amid growing concern over attempts to influence democratic processes across Europe.

In a letter addressed to the Commission, Macron stressed the importance of safeguarding electoral integrity in a challenging geopolitical environment.

He wrote:

‘In a geopolitical context marked by a multiplication of hostile stances against the European model and its democratic values, it is crucial that the Union… ensure the integrity of civic discourse and electoral processes’.

The proposal focuses on stricter enforcement instead of new legislation, particularly regarding the Digital Services Act. European authorities are encouraged to ensure that online platforms properly assess and mitigate systemic risks, including the spread of manipulated content and coordinated disinformation.

Attention is also directed toward algorithmic amplification, AI-generated content labelling and the removal of fake accounts.

As multiple elections approach across the EU, policymakers are considering how to apply existing regulatory tools more effectively to protect democratic systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DoorDash launches Tasks app to train AI robots with gig workers

A new wave of AI development is increasingly relying on real-world human behaviour, with DoorDash moving to tap its gig workforce to generate training data for robotics systems.

DoorDash has launched a standalone app called Tasks, allowing couriers to earn money by recording themselves performing everyday activities such as folding clothes, washing dishes or making a bed. The collected data is used to train AI and robotics models to better understand physical environments and human interactions.

The move reflects a broader shift in AI training, where companies are seeking physical, real-world data rather than relying solely on text and images. Such data is essential for building systems capable of performing tasks in dynamic environments, including humanoid robots and autonomous machines.

Other companies are pursuing similar strategies. Uber and Instawork have tested gig-based data-collection models, while robotics startups are using wearable devices, such as gloves and head-mounted cameras, to capture detailed motion data for training.

The Tasks app is currently being rolled out as a pilot, with DoorDash planning to expand the types of available assignments over time. Some tasks may also be integrated into the main Dasher app, including activities that support navigation or assist autonomous delivery systems.

As competition intensifies, access to large-scale physical data is becoming a critical advantage. DoorDash’s approach highlights how gig-economy platforms are increasingly integrated into the development of next-generation AI systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US senator proposes AI rules for children

A US senator has introduced a draft framework to establish nationwide AI rules, with a focus on child safety and copyright protection. The proposal seeks to create a unified federal approach to replace a patchwork of differing state laws.

The plan would require developers to implement safeguards for minors, including age verification, data protection and mechanisms to report harm. Companies could also face legal action over failures linked to AI system design.

Copyright measures include new standards for identifying AI-generated content and preventing tampering. Authorities would also develop cybersecurity guidelines to support the transparency and authenticity of content.

Debate continues in the US over the balance between regulation and innovation, with some stakeholders warning of legal and economic risks. Discussions between lawmakers and the administration are expected to shape a final framework.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!