UK actors’ union demands rights as AI uses performers’ likenesses without consent

The British performers’ union Equity has warned of coordinated mass action against technology companies and entertainment producers that use its members’ images, voices or likenesses in artificial-intelligence-generated content without proper consent.

Equity’s general secretary, Paul W Fleming, announced plans to mobilise tens of thousands of actors through subject access requests under data-protection law, compelling companies to disclose whether they have used performers’ data in AI content.

The move follows a growing number of complaints from actors about the alleged misuse of their likenesses or voices in AI material. One prominent case involves the Scottish actor Briony Monroe, who claims her facial features and mannerisms were used to create the synthetic performer ‘Tilly Norwood’. The AI studio behind the character denies the allegations.

Equity says the strategy is intended to make it prohibitively difficult for tech companies and producers not to enter into collective rights deals. It argues that existing legislation is being circumvented as foundation AI models are trained on actors’ data with little transparency or compensation.

The trade body Pact, which represents studios and producers, acknowledges the importance of AI and argues that firms risk falling behind commercially if they cannot access new tools. It also complains about the lack of transparency from technology companies over what data is used to train AI systems.

In essence, the standoff reflects deeper tensions in the creative industries: how to balance innovation, performer rights and transparency in an era when digital likenesses and synthetic ‘actors’ are emerging rapidly.

Civil groups question independence of Irish privacy watchdog

More than 40 civil society organisations have asked the European Commission to investigate Ireland’s privacy regulator. Their letter questions whether the Irish Data Protection Commission (DPC) remains independent following the appointment of a former Meta lobbyist as Commissioner.

Niamh Sweeney, previously Facebook’s head of public policy for Ireland, became the DPC’s third commissioner in September. Her appointment has raised concerns among digital rights groups, given the regulator’s role in overseeing compliance with the EU’s General Data Protection Regulation (GDPR).

The letter calls for a formal work programme to ensure that data protection rules are enforced consistently and free from political or corporate influence. Civil society groups argue that effective oversight is essential to preserve citizens’ trust and uphold the GDPR’s credibility.

The DPC, headquartered in Dublin, supervises major tech firms such as Meta, Apple, and Google under the EU’s privacy regime. Critics have long accused it of being too lenient toward large companies operating in Ireland’s digital sector.

AI transforms Japanese education while raising ethical questions

AI is reshaping Japanese education, from predicting truancy risks to teaching English and preserving survivor memories. Schools and universities nationwide are experimenting with systems designed to support teachers and engage students more effectively.

In Saitama’s Toda City, AI analysed attendance, health records, and bullying data to identify pupils at risk of skipping school. During a 2023 pilot, it flagged more than a thousand students and helped teachers prioritise support for those most vulnerable.
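For illustration, the minimal sketch below shows the kind of risk scoring such a system might perform; the features, weights and data are invented for demonstration and do not reflect the Toda City system’s actual model.

```python
# Illustrative absence-risk scoring; feature names and data are invented
# for demonstration and are not the Toda City system's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: absence rate last term, nurse visits, reported bullying incidents
X_train = np.array([
    [0.02, 0, 0],
    [0.05, 1, 0],
    [0.20, 3, 1],
    [0.35, 2, 2],
    [0.01, 0, 0],
    [0.25, 4, 1],
])
y_train = np.array([0, 0, 1, 1, 0, 1])  # 1 = later became a long-term absentee

model = LogisticRegression().fit(X_train, y_train)

# Score current pupils and surface the highest risks for human follow-up.
current = np.array([[0.18, 2, 1], [0.03, 0, 0]])
for features, risk in zip(current, model.predict_proba(current)[:, 1]):
    print(features, f"risk={risk:.2f}")
```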

Experts praised the system’s potential but warned against excessive dependence on algorithms. Keio University’s Professor Makiko Nakamuro said educators must balance data-driven insights with privacy safeguards and human judgment. Toda City has already banned discriminatory use of AI results.

AI’s role is also expanding in language learning. Universities such as Waseda and Kyushu now use a Tokyo-developed conversation AI that assesses grammar, pronunciation, and confidence. Students say they feel more comfortable practising with a machine than in front of classmates.

EU expands AI reach through new antenna network

The European Commission has launched new ‘AI Antennas’ across 13 European countries to strengthen AI infrastructure. Seven EU states, including Belgium, Ireland, and Malta, will gain access to high-performance computing through the EuroHPC network.

Six non-EU partners, such as the UK and Switzerland, have also joined the initiative. Their inclusion reflects the EU’s growing cooperation on digital innovation with neighbouring countries despite Brexit and other trade tensions.

Each AI Antenna will serve as a local gateway to the bloc’s supercomputing hubs, providing technical support, training, and algorithmic resources. Countries without an AI Factory of their own can now connect remotely to major systems like Jupiter.

The Commission says the network aims to spread AI skills and research capabilities across Europe, narrowing regional gaps in digital development. However, smaller nations hosting only antennas are unlikely to house the bloc’s future ‘AI Gigafactories’, which will be up to four times more powerful.

Swiss scientists grow mini-brains to power future computers

In a Swiss laboratory, researchers are using clusters of human brain cells to power experimental computers. The start-up FinalSpark is leading this emerging field of biocomputing, also known as wetware, which uses living neurons instead of silicon chips.

Co-founder Fred Jordan said biological neurons are vastly more energy-efficient than artificial ones and could one day replace traditional processors. He believes brain-based computing may eventually help reduce the massive power demands created by AI systems.

Each ‘bioprocessor’ is made from human skin cells reprogrammed into neurons and grouped into small organoids. Electrodes connect to these clumps, allowing the Swiss scientists to send signals and measure their responses in a digital form similar to binary code.

Scientists emphasise that the technology is still in its infancy and not capable of consciousness. Each organoid contains about ten thousand neurons, compared to a human brain’s hundred billion. FinalSpark collaborates with ethicists to ensure the research remains responsible and transparent.

Startup raises $9m to orchestrate Gulf digital infrastructure

Bilal Abu-Ghazaleh has launched 1001 AI, a London–Dubai startup building an AI-native operating system for critical MENA industries. The two-month-old firm raised a $9m seed round from CIV, General Catalyst and Lux Capital, with angel investors including Chris Ré, Amjad Masad and Amira Sajwani.

Target sectors include airports, ports, construction, and oil and gas, where 1001 AI sees billions in avoidable inefficiencies. Its engine ingests live operational data, models workflows and issues real-time directives, rerouting vehicles, reassigning crews and adjusting plans autonomously.

Abu-Ghazaleh brings scale-up experience from Hive AI and Scale AI, where he led GenAI operations and contributor networks. 1001 borrows a consulting-style rollout: embed with clients, co-develop the model, then standardise reusable patterns across similar operational flows.

Investors argue the Gulf is an ideal test bed given sovereign-backed AI ambitions and under-digitised, mission-critical infrastructure. Deena Shakir of Lux says the region is ripe for AI that optimises physical operations at scale, from flight turnarounds to cargo moves.

First deployments are slated for construction by year-end, with aviation and logistics to follow. The funding supports early pilots and hiring across engineering, operations and go-to-market, as 1001 aims to become the Gulf’s orchestration layer before expanding globally.

AWS outage shows the cost of cloud concentration

A single fault can bring down the modern web. During the outage on Monday, 20 October 2025, millions woke to broken apps, games, banking, and tools after database errors at Amazon Web Services rippled outward. When a shared backbone stumbles, the blast radius engulfs everything from chat to commerce.

The outage underscored cloud concentration risk. Roblox, Fortnite, Pokémon Go, Snapchat, and workplace staples like Slack and Monday.com stumbled together because many depend on the same region and data layer. Failover, throttling, and retries help, but simultaneous strain can swamp safeguards.

On Friday, 19 July 2024, a faulty CrowdStrike update crashed Windows machines worldwide, triggering blue screens that grounded flights, delayed surgeries, and froze point-of-sale systems. The fix was simple; recovery wasn’t. Friday patches gained a new cautionary tale.

Earlier shocks foreshadowed today’s scale. In 1997, a Network Solutions glitch briefly hobbled .com and .net. In 2018, malware in Alaska’s Matanuska-Susitna knocked services offline, sending a community of 100,000 back to paper. Each incident showed how mundane errors cascade into civic life.

Resilience now means multi-region designs, cross-cloud failovers, tested runbooks, rate-limit backstops, and graceful read-only modes. Add regulatory stress tests, clear incident comms, and sector drills with hospitals, airlines, and banks. The internet will keep breaking; our job is to make it bend.
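As a small illustration of two of those patterns, the sketch below shows bounded retries with exponential backoff followed by a graceful fall-back to a read-only replica; the fetch functions are hypothetical placeholders, not calls to any real cloud SDK.

```python
# Sketch of two resilience patterns: bounded retries with exponential backoff,
# then graceful degradation to a read-only replica. The fetch_* functions are
# hypothetical placeholders, not part of any real cloud SDK.
import random
import time

def fetch_primary() -> str:
    raise TimeoutError("primary region unavailable")  # simulate the outage

def fetch_replica() -> str:
    return "cached, read-only view"

def resilient_read(max_attempts: int = 4, base_delay: float = 0.2) -> str:
    for attempt in range(max_attempts):
        try:
            return fetch_primary()
        except TimeoutError:
            # Back off exponentially, with jitter, so retries don't swamp safeguards.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
    return fetch_replica()  # degrade gracefully instead of failing outright

print(resilient_read())
```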

SMEs underinsured as Canada’s cyber landscape shifts

Canada’s cyber insurance market is stabilising, with stronger underwriting, steadier loss trends, and more product choice, the Insurance Bureau of Canada says. But the threat landscape is accelerating as attackers weaponise AI, leaving many small and medium-sized enterprises exposed and underinsured.

Rapid market growth brought painful losses during the ransomware surge: from 2019 to 2023, combined loss ratios averaged about 155% (roughly $1.55 paid out in claims and expenses for every dollar of premium), forcing tighter pricing and coverage. Insurers have recalibrated, yet rising AI-enabled phishing and deepfake impersonations are lifting complexity and potential severity.

Policy is catching up unevenly. In Canada, Bill C-8 would revive plans for critical-infrastructure cybersecurity standards, stronger oversight, and baseline rules for risk management and incident reporting. Public–private programmes signal progress but need sustained execution.

SMEs remain the pressure point. Low uptake means minor breaches can cost tens or hundreds of thousands, while severe incidents can be fatal. Underinsurance shifts shock to the wider economy, challenging insurers to balance affordability with long-term viability.

The Bureau urges practical resilience: clearer governance, employee training, incident playbooks, and fit-for-purpose cover. Education campaigns and free guidance aim to demystify coverage, boost readiness, and help SMEs recover faster when attacks hit, supporting a more durable digital economy.

Bitcoin wallet vulnerability exposes thousands of private keys

A flaw in the widely used Libbitcoin Explorer (bx) 3.x series has exposed over 120,000 Bitcoin private keys, according to crypto wallet provider OneKey. The flaw arose from a weak pseudo-random number generator seeded with the system time, making wallet keys predictable.

Attackers aware of wallet creation times could reconstruct private keys and access funds.

Several wallets were affected, including versions of Trust Wallet Extension and Trust Wallet Core prior to their patched releases. Researchers said the 32-bit Mersenne Twister’s limited seed space let attackers automate key searches and recreate private keys, which may explain past fund losses such as the ‘Milk Sad’ incidents.
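The sketch below is a toy illustration of why a clock-seeded generator is dangerous: it is not Libbitcoin’s actual key-derivation path, but it shows how a small, time-bounded seed space can be searched exhaustively.

```python
# Toy illustration of why time-seeded PRNG output is recoverable. This is NOT
# Libbitcoin's actual derivation path; it only shows that a clock-based seed
# space is small enough to search exhaustively.
import hashlib
import random

def toy_key_from_seed(seed: int) -> bytes:
    """Derive 32 pseudo-random bytes from a timestamp seed (illustrative only)."""
    rng = random.Random(seed)  # Python's Random is a Mersenne Twister
    return bytes(rng.getrandbits(8) for _ in range(32))

# A wallet "created" at some unknown second within a roughly known window.
creation_time = 1_690_000_123
victim_fingerprint = hashlib.sha256(toy_key_from_seed(creation_time)).hexdigest()

# An attacker who can bound the creation time simply replays every candidate seed.
window_start = 1_690_000_000
for candidate in range(window_start, window_start + 3_600):  # one-hour window
    if hashlib.sha256(toy_key_from_seed(candidate)).hexdigest() == victim_fingerprint:
        print(f"Recovered seed {candidate}: the key space was trivially searchable.")
        break
```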

OneKey confirmed its own wallets remain secure, using cryptographically strong random number generation and hardware Secure Elements certified to global security standards.

OneKey also examined its software wallets, ensuring that desktop, browser, Android, and iOS versions rely on secure system-level entropy sources. The firm urged long-term crypto holders to use hardware wallets and avoid importing software-generated mnemonics to reduce risk.

The company emphasised that wallet security depends on the integrity of the device and operating environment.

Tailored pricing is here and personal data is the price signal

AI is quietly changing how prices are set online. Beyond demand-based shifts, companies increasingly tailor offers to individuals, using browsing history, purchase habits, device, and location to predict willingness to pay. Two shoppers may see different prices for the same product at the same moment.

Dynamic pricing raises or lowers prices for everyone as conditions change, such as school-holiday airfares or hotel rates during major events. Personalised pricing goes further by shaping offers for specific users, rewarding cart-abandoners with discounts while charging less frequent shoppers a premium.

Platforms mine clicks, time on page, past purchases, and abandoned baskets to build profiles. Experiments show targeted discounts can lift sales while capping promotional spend, evidence that engineered prices work at scale. The result: you may not see a ‘standard’ price, but one designed for you.
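For illustration only, the deliberately simplified sketch below contrasts a per-shopper adjustment with a flat base price; real systems estimate willingness to pay with machine-learned models over far richer profiles.

```python
# Deliberately simplified sketch of personalised pricing (illustrative only);
# real systems estimate willingness to pay with ML models over richer profiles.
from dataclasses import dataclass

@dataclass
class ShopperProfile:
    abandoned_cart_recently: bool  # signal of price sensitivity
    frequent_buyer: bool           # signal of loyalty
    premium_device: bool           # crude (and ethically fraught) income proxy

def personalised_price(base_price: float, profile: ShopperProfile) -> float:
    """Adjust a base price per shopper; dynamic pricing would shift base_price for everyone."""
    price = base_price
    if profile.abandoned_cart_recently:
        price *= 0.90  # win back a hesitant shopper with a discount
    if profile.premium_device and not profile.frequent_buyer:
        price *= 1.08  # charge a premium where price sensitivity looks low
    return round(price, 2)

print(personalised_price(100.0, ShopperProfile(True, False, False)))   # 90.0
print(personalised_price(100.0, ShopperProfile(False, False, True)))   # 108.0
```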

The risks are mounting. Income proxies such as postcode or device can entrench inequality, while hidden algorithms erode trust when buyers later find cheaper prices. Accountability is murky if tailored prices mislead, discriminate, or breach consumer protections without clear disclosure.

Regulators are moving. A competition watchdog in Australia has flagged transparency gaps, unfair trading risks, and the need for algorithmic disclosure. Businesses now face a twin test: deploy AI pricing with consent, explainability, and opt-outs, and prove it delivers value without crossing ethical lines.
