FTC cautions US tech firms over compliance with EU and UK online safety laws

The US Federal Trade Commission (FTC) has warned American technology companies that following European Union and United Kingdom rules on online content and encryption could place them in breach of US legislation.

In a letter sent to chief executives, FTC Chair Andrew Ferguson said that restricting access to content for American users to comply with foreign legal requirements might amount to a violation of Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive commercial practices.

Ferguson cited the EU’s Digital Services Act and the UK’s Online Safety Act, as well as reports of British efforts to gain access to encrypted Apple iCloud data, as examples of measures that could put companies at risk under US law.

Although Section 5 has traditionally been used in cases concerning consumer protection, Ferguson noted that the same principles could apply if companies changed their services for US users due to foreign regulation. He argued that such changes could ‘mislead’ American consumers, who would not reasonably expect their online activity to be governed by overseas restrictions.

The FTC chair invited company leaders to meet with his office to discuss how they intend to balance demands from international regulators while continuing to fulfil their legal obligations in the United States.

Earlier this week, a senior US intelligence official said the British government had withdrawn a proposed legal measure aimed at Apple’s encrypted iCloud data after discussions with US Vice President JD Vance.

The issue has arisen amid tensions over the enforcement of UK online safety rules. Several online platforms, including 4chan, Gab, and Kiwi Farms, have publicly refused to comply, and British authorities have indicated that internet service providers could ultimately be ordered to block access to such sites.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump threatens sanctions on EU over Digital Services Act

Only five days after the Joint Statement on a United States-European Union framework on an agreement on reciprocal, fair and balanced trade (‘Framework Agreement’), the Trump administration is weighing an unprecedented step against the EU over its new tech rules.

According to The Japan Times and Reuters, US officials are discussing sanctions on the EU or member state representatives responsible for implementing the Digital Services Act (DSA), a sweeping law that forces online platforms to police illegal content. Washington argues the regulation censors Americans and unfairly burdens US companies.

While governments often complain about foreign rules they deem restrictive, directly sanctioning allied officials would mark a sharp escalation. So far, discussions have centred on possible visa bans, though no decision has been made.

Last week, internal State Department meetings focused on whom such measures might target. Secretary of State Marco Rubio has ordered US diplomats in Europe to lobby against the DSA, urging allies to amend or repeal the law.

Washington insists that the EU is curbing freedom of speech under the banner of combating hate speech and misinformation, while the EU maintains that the act is designed to protect citizens from illegal material such as child exploitation and extremist propaganda.

‘Freedom of expression is a fundamental right in the EU. It lies at the heart of the DSA,’ an EU Commission spokesperson said, rejecting US accusations as ‘completely unfounded.’

Trump has framed the dispute in broader terms, threatening tariffs and export restrictions on any country that imposes digital regulations he deems discriminatory. In recent months, he has repeatedly warned that measures like the DSA, or national digital taxes, are veiled attacks on US companies and conservative voices online. At the same time, the administration has not hesitated to sanction foreign officials in other contexts, including a Brazilian judge overseeing cases against Trump ally Jair Bolsonaro.

US leaders, including Vice President JD Vance, have accused European authorities of suppressing right-wing parties and restricting debate on issues such as immigration. In contrast, European officials argue that their rules are about fairness and safety and do not silence political viewpoints. At a transatlantic conference earlier this year, Vance stunned European counterparts by charging that the EU was undermining democracy, remarks that underscored the widening gap.

The question remains whether Washington will take the extraordinary step of sanctioning officials in Brussels or the EU capitals. Such action could further destabilise an already fragile trade relationship while putting the US squarely at odds with Europe over the future of digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI’s overuse of the em dash could be your biggest giveaway

AI-generated writing may be giving itself away, and the em dash is its most flamboyant tell. Long beloved by grammar nerds for its versatility, the em dash has become AI’s go-to flourish, but not everyone is impressed.

Pacing, pauses, and a suspicious number of em dashes often signal that a machine had a hand in the prose. Even simple requests for editing can leave users with sentences reworked into what feels like an AI-powered monologue.

Though tools like ChatGPT or Gemini can be powerful assistants, using them blindly can dull the human spark. Overuse of certain AI quirks, like rhetorical questions, generic phrases or overstyled punctuation, can make even an honest email feel like corporate poetry.

Writers are being advised to take the reins back. Draft the first version by hand, let the AI refine it, then strip out anything that feels artificial, especially the dashes. Keeping your natural voice intact may be the best way to make sure your readers are connecting with you, not just the machine behind the curtain.
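The punctuation tell described above can even be checked mechanically. As a toy illustration (the function and any threshold are our own, not taken from any real detection tool), a few lines of Python can measure em-dash density in a piece of text:

```python
import re

def em_dash_density(text: str) -> float:
    """Return em dashes per 100 words (0.0 for empty text).

    A crude, hypothetical heuristic: human drafts rarely lean on the
    em dash, while AI-polished prose often scatters it everywhere.
    """
    words = re.findall(r"\w+", text)
    if not words:
        return 0.0
    return text.count("\u2014") * 100 / len(words)

human = "I drafted this quickly and sent it off without much polish."
ai_ish = "Writing\u2014real writing\u2014demands rhythm\u2014and pauses\u2014everywhere."

print(em_dash_density(human))   # 0.0
print(em_dash_density(ai_ish))  # 50.0 (4 dashes across 8 words)
```

Any cut-off you pick is arbitrary, of course; the point is simply that this particular quirk is easy to quantify, unlike subtler tells such as tone or pacing.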

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bluesky shuts down in Mississippi over new age law

Bluesky, a decentralised social media platform, has ceased operations in Mississippi due to a new state law requiring strict age verification.

The company said compliance would require tracking users, identifying children, and collecting sensitive personal information. For a small team like Bluesky’s, the burden of such infrastructure, alongside privacy concerns, made continued service unfeasible.

The law mandates age checks not just for explicit content, but for access to general social media. Bluesky highlighted that even the UK Online Safety Act does not require platforms to track which users are children.

The Mississippi law has sparked debate over whether efforts to protect minors are inadvertently undermining online privacy and free speech. Bluesky warned that such legislation may stifle innovation and entrench the dominance of larger tech firms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musicians report surge in AI fakes appearing on Spotify and iTunes

Folk singer Emily Portman has become the latest artist targeted by fraudsters releasing AI-generated music in her name. Fans alerted her to a fake album called Orca appearing on Spotify and iTunes, which she said sounded uncannily like her style but was created without her consent.

Portman has filed copyright complaints, but says the platforms were slow to act, and she has yet to regain control of her Spotify profile. Other artists, including Josh Kaufman, Jeff Tweedy, Father John Misty, Sam Beam, Teddy Thompson, and Jakob Dylan, have faced similar cases in recent weeks.

Many of the fake releases appear to originate from the same source, using similar AI artwork and citing record labels with Indonesian names. The tracks are often credited to the same songwriter, Zyan Maliq Mahardika, whose name also appears on imitations of artists in other genres.

Industry analysts say streaming platforms and distributors are struggling to keep pace with AI-driven fraud. Tatiana Cirisano of Midia Research noted that fraudsters exploit passive listeners to generate streaming revenue, while services themselves are turning to AI and machine learning to detect impostors.

Observers warn the issue is likely to worsen before it improves, drawing comparisons to the early days of online piracy. Artists and rights holders may face further challenges as law enforcement attempts to catch up with the evolving abuse of AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Seemingly conscious AI may cause psychological problems and AI psychosis

Microsoft’s AI chief and DeepMind co-founder, Mustafa Suleyman, has warned that society is unprepared for AI systems that convincingly mimic human consciousness. He cautioned that ‘seemingly conscious’ AI could lead the public to treat machines as sentient.

Suleyman highlighted potential risks including demands for AI rights, welfare, and even AI citizenship. Since the launch of ChatGPT in 2022, AI developers have increasingly designed systems to act ‘more human’.

Experts caution that such technology could intensify mental health problems and distort perceptions of reality. The phenomenon known as AI psychosis sees users forming intense emotional attachments to AI, or believing it to be conscious or divine.

Suleyman called for clear boundaries in AI development, emphasising that these systems should be tools for people rather than digital persons. He urged careful management of human-AI interaction without calling for a halt to innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global tech competition intensifies as the UK outlines a £1 trillion digital blueprint

The United Kingdom has unveiled a strategy to grow its digital economy to £1 trillion by harnessing AI, quantum computing, and cybersecurity. The plan emphasises public-private partnerships, training, and international collaboration to tackle skills shortages and infrastructure gaps.

The initiative builds on the UK tech sector’s £1.2 trillion valuation, with regional hubs in cities such as Bristol and Manchester fuelling expansion in emerging technologies. Experts, however, warn that outdated systems and talent deficits could stall progress unless workforce development accelerates.

AI is central to the plan, with applications spanning healthcare and finance. Quantum computing also features, with investments in research and cybersecurity aimed at strengthening resilience against supply disruptions and future threats.

The government highlights sustainability as a priority, promoting renewable energy and circular economies to ensure digital growth aligns with environmental goals. Regional investment in blockchain, agri-tech, and micro-factories is expected to create jobs and diversify innovation-driven growth.

By pursuing these initiatives, the UK aims to establish itself as a leading global tech player alongside the US and China. Ethical frameworks and adaptive strategies will be key to maintaining public trust and competitiveness.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia weighs cyber militia to counter rising digital threats

Cyberattacks are intensifying worldwide, with Australia now ranked fourth globally for threats against operational technology and industrial sectors. Rising AI-powered incursions have exposed serious vulnerabilities in the country’s national defence and critical infrastructure.

Australia’s 2023–2030 Cyber Security Strategy aims to strengthen resilience through six ‘cyber shields’, including legislation and intelligence sharing. But a skills shortage leaves organisations vulnerable as ransomware attacks on mining and manufacturing continue to rise.

One proposal gaining traction is the creation of a volunteer ‘cyber militia’. Inspired by Estonia’s cyber defence unit, this network would mobilise unconventional talent (retirees, hobbyist hackers, and students) to bolster monitoring, threat hunting, and incident response.

Supporters argue that such a force could fill gaps left by formal recruitment, particularly in smaller firms and rural networks. Critics, however, warn of vetting risks, insider threats, and the need for new legal frameworks to govern liability and training.

Pilot schemes in high-risk sectors, such as energy and finance, have been proposed, with public-private funding viewed as crucial. Advocates argue that a cyber militia could democratise security and foster collective responsibility, aligning with the country’s long-term cybersecurity strategy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Mount Fuji eruption simulated in an AI video for Tokyo

Residents of Tokyo have been shown a stark warning of what could happen if Mount Fuji erupts.

The metropolitan government released a three-minute AI-generated video depicting the capital buried in volcanic ash to raise awareness and urge preparation.

The simulation shows thick clouds of ash descending on Shibuya and other districts about one to two hours after an eruption, with up to 10 centimetres expected to accumulate. Unlike snow, volcanic ash does not melt away; once wet, it hardens, damaging power lines and disrupting communications.

The video also highlights major risks to transport. Ash on train tracks, runways, and roads would halt trains, ground planes, and make driving perilous.

Two-wheeled vehicles could become unusable under even modest ashfall. Power outages and shortages of food and supplies are expected as shops run empty, echoing the disruption seen after the 2011 earthquake.

Officials advise people to prepare masks, goggles, and at least three days of emergency food. The narrator warns that because no one knows when Mount Fuji might erupt, daily preparedness in Japan is vital to protect health, infrastructure, and communities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How to spot AI-generated videos with simple visual checks

Mashable offers a hands-on guide to help users detect AI-generated videos by observing subtle technical cues. Key warning signs include mismatched lip movements and speech, where voices are dubbed over real footage and audio isn’t perfectly aligned with mouth motions.

Users are also advised to look for visual anomalies such as unnatural blurs, distorted shadows or odd lighting effects that seem inconsistent with natural environments. Deepfake videos can show slight flickers around faces or uneven reflections that betray their artificial origin.

Blinking, or the lack thereof, can also be revealing. AI faces often fail to replicate natural blinking patterns, and may display either no blinking or irregular frequency.

Viewers should also note unnatural head or body movements that do not align with speech or emotional expression, such as stiff postures or awkward gestures.

Experts stress that generators are increasingly engineering away these cues, making deepfakes harder to detect visually. They recommend combining observation with source verification, such as tracing the video back to reputable outlets or conducting reverse image searches, for robust protection.

Ultimately, better detection tools and digital media literacy are essential to maintaining trust in online content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!