The Trump Administration unveiled a national AI framework to boost competitiveness, security, and benefits for Americans. The plan seeks to ensure that AI innovation supports all citizens while maintaining public trust in the technology.
Six key objectives form the foundation of the policy. These include protecting children online, empowering parents with tools to manage digital safety, strengthening communities and small businesses, respecting intellectual property, defending free speech, and fostering innovation.
The framework also prioritises workforce development to prepare Americans for AI-driven job opportunities.
Federal uniformity is considered critical to the plan’s success. The Administration warns that a patchwork of state regulations could stifle innovation and reduce the United States’ ability to lead globally.
Congress is encouraged to collaborate closely to implement the framework nationwide.
The Administration emphasises that the United States must lead the AI race, ensuring the benefits of AI reach all Americans while addressing challenges such as privacy, security, and equitable access to opportunities.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The World Happiness Report 2026 has identified a marked decline in well-being among young people, with increased social media use emerging as a key contributing factor. These findings suggest that digital habits are increasingly shaping life satisfaction, particularly across Western societies.
The report notes that younger age groups now report significantly lower happiness levels compared to previous decades.
In regions such as North America and Western Europe, the decline coincides with a sharp rise in time spent on social media platforms. Researchers highlight that heavy usage is associated with measurable reductions in well-being, especially among younger users.
Alongside these trends, the report continues to rank Finland as the happiest country globally, reflecting broader stability in Nordic nations. Such stability contrasts, however, with emerging concerns about mental health and social outcomes in other industrialised regions, where digital environments play an increasingly influential role.
While the report identifies risks including cyberbullying, depression and online exploitation, it does not advocate for complete restrictions. Instead, it emphasises the need for carefully designed regulatory approaches that balance protection with the potential benefits of digital connectivity.
AI-generated deepfake abuse is emerging as a serious global threat, with women and girls disproportionately affected by non-consensual and harmful digital content. Advances in AI make it easy to create manipulated content that can spread across platforms within minutes and reach millions.
Data highlights the scale of the issue. The vast majority of deepfake content online consists of explicit material, overwhelmingly targeting women.
Accessible and often free tools have lowered the barrier to entry, enabling widespread misuse. At the same time, the ability to endlessly replicate and share such content makes removal nearly impossible once it is published.
Legal responses remain fragmented, with many pre-existing laws leaving gaps in addressing AI-generated deepfake abuse. Enforcement obstacles, such as cross-border jurisdiction and limited digital forensics capacity, mean perpetrators rarely face consequences.
Pressure is mounting on governments and technology platforms to act. Calls for reform include clearer legislation, faster obligations to remove content, improved law enforcement capabilities, and stronger support systems for victims.
Without coordinated global action, deepfake abuse is set to expand alongside the technologies enabling it.
Telefónica Tech has partnered with three European firms to bring AI and quantum computing closer together. The collaboration aims to improve how advanced models are developed and deployed across different environments.
The initiative brings together Qilimanjaro Quantum Tech, Multiverse Computing and Qcentroid. Their combined expertise is expected to support more efficient, compact and locally deployable AI systems.
Quantum computing is seen as a way to reduce the heavy processing demands of large AI models. Faster computation could improve the accuracy of results while cutting the time required to solve complex problems.
Each partner contributes specialised capabilities, from quantum hardware and algorithms to software platforms and orchestration tools. These technologies could support applications such as simulations, edge AI and rapid prototyping.
Telefónica Tech is also strengthening its role in integrating AI and quantum solutions for enterprise clients. The move reflects a broader push to build scalable, sovereign and next-generation digital infrastructure in Europe.
A large-scale fraud scheme using AI-generated music has exposed vulnerabilities in streaming platforms and royalty systems. Billions of fake streams were used to divert payments away from legitimate artists and rights holders.
The scheme ran from 2017 to 2024 and involved uploading hundreds of thousands of AI-generated tracks. Automated programs were then used to stream the songs at scale, inflating play counts and generating revenue.
The operation relied on thousands of bot accounts, bulk email registrations and cloud-based systems. Streaming activity was spread across many tracks to reduce detection and maintain consistent earnings over time.
Michael Smith, a 54-year-old from North Carolina, has pleaded guilty to conspiracy to commit wire fraud in federal court. Prosecutors say he obtained more than $10 million and agreed to forfeit over $8 million in proceeds.
Authorities say the case highlights how AI and automation can be used to manipulate digital platforms. The court will determine the final sentence as concerns grow over similar schemes.
A growing number of individuals worldwide are participating in a new digital economy built around supplying data for AI systems.
Through platforms such as Kled AI and Silencio, users upload videos, audio recordings and personal interactions in exchange for payment, contributing to the development of increasingly sophisticated AI models.
The trend reflects a broader shift in the AI industry, where demand for high-quality human-generated data is rising as traditional web-based sources become more limited.
Researchers suggest that human data remains essential for improving system performance and modelling behaviour beyond existing datasets. As a result, data marketplaces have emerged as an alternative supply mechanism.
Economic considerations often shape participation. In regions facing limited employment opportunities or currency instability, earning income in global currencies can provide a meaningful financial incentive.
At the same time, similar practices are expanding in higher-income countries, where individuals seek supplementary income streams amid rising living costs.
However, the model introduces complex trade-offs.
Contributors may grant extensive usage rights over their data, sometimes on a long-term or irreversible basis. Experts note that such arrangements can reduce control over how personal information is reused, including in contexts not initially anticipated.
Concerns also extend to issues such as data security, transparency and the potential for misuse in areas including synthetic media and identity replication.
Emmanuel Macron has called for stronger enforcement of EU digital rules, urging Ursula von der Leyen to act against risks linked to foreign interference in elections. The request comes amid growing concern over attempts to influence democratic processes across Europe.
In a letter addressed to the Commission, Macron stressed the importance of safeguarding electoral integrity in a challenging geopolitical environment.
He wrote:
‘In a geopolitical context marked by a multiplication of hostile stances against the European model and its democratic values, it is crucial that the Union… ensure the integrity of civic discourse and electoral processes’.
The proposal focuses on stricter enforcement instead of new legislation, particularly regarding the Digital Services Act. European authorities are encouraged to ensure that online platforms properly assess and mitigate systemic risks, including the spread of manipulated content and coordinated disinformation.
Attention is also directed toward algorithmic amplification, AI-generated content labelling and the removal of fake accounts.
As multiple elections approach across the EU, policymakers are considering how to apply existing regulatory tools more effectively to protect democratic systems.
The FBI’s New York Field Office has warned that fraudulent tokens impersonating the agency are being airdropped to Tron wallets, with recipients threatened with ‘total block’ of assets unless they submit personal information via phishing sites.
By the time the warning was issued on 19 March, at least 728 wallets had been affected, some holding over US$1 million in USDT.
The scam warns users that their wallets are ‘under investigation’ and instructs them to complete an online anti-money-laundering form. The FBI urged crypto holders to ignore these messages and avoid entering any personal data on linked websites.
Attackers exploit Tron for its fast and low-cost transactions, using bots to distribute tokens widely and generate spoofed addresses.
Impersonation scams have surged dramatically in 2025, with Chainalysis reporting a 1,400% year-over-year increase. Total crypto fraud losses are estimated at US$17 billion, with AI-assisted scams proving far more profitable than traditional schemes.
The FBI previously ran a blockchain sting using Ethereum tokens, resulting in indictments and the seizure of millions in assets.
The bureau encourages anyone who receives the fake FBI tokens to report the incident to the Internet Crime Complaint Center (IC3) to help combat ongoing crypto fraud.
Meta recently confirmed that an AI agent inadvertently exposed sensitive company and user data to some employees. The leak occurred when an engineer followed a suggestion the AI agent had posted in an internal forum, leaving the data exposed for about two hours.
Meta stated that no user data was mishandled and emphasised that similar issues could equally arise from human error.
The incident reflects broader challenges in deploying agentic AI tools within major tech companies. Amazon faced similar issues, with internal AI tools causing outages and operational errors, showing risks of quickly integrating AI into critical workflows.
Experts describe these deployments as experimental, with companies testing AI at scale without fully assessing potential risks.
Security specialists note that AI agents lack the contextual awareness that human engineers accumulate over years of experience. Lacking long-term operational knowledge, AI can make decisions that compromise security, a factor in the Meta breach.
Analysts warn that such errors are likely to recur as AI adoption accelerates.
The episode comes amid growing attention on agentic AI’s potential to disrupt workflows, affect productivity, and introduce new vulnerabilities. Industry observers caution that AI tools must be carefully monitored and accompanied by robust safeguards to prevent future incidents.
Mastercard has introduced a generative AI foundation model trained on billions of anonymised transactions. The model is designed as a backend system to power insights across payments and commerce services.
The company plans to extend AI use beyond fraud detection into cybersecurity, loyalty programmes and small-business tools. The model is being developed with support from Nvidia and Databricks technologies.
Earlier AI tools focused on fraud detection, significantly improving accuracy and reducing false positives. The new model marks a shift towards a broader infrastructure approach across multiple products.
This move aligns with Mastercard’s growing reliance on value-added services, which generated over $13 billion in revenue. These services include security, analytics and digital payment solutions beyond the core network.
Competitors such as Visa and PayPal are also expanding AI-driven commerce platforms. The race is intensifying as firms build integrated systems for payments, automation and intelligent services.