Civil groups question independence of Irish privacy watchdog

More than 40 civil society organisations have asked the European Commission to investigate Ireland’s privacy regulator. Their letter questions whether the Irish Data Protection Commission (DPC) remains independent following the appointment of a former Meta lobbyist as Commissioner.

Niamh Sweeney, previously Facebook’s head of public policy for Ireland, became the DPC’s third commissioner in September. Her appointment has raised concerns among digital rights groups about the independence of the authority, which oversees compliance with the EU’s General Data Protection Regulation (GDPR).

The letter calls for a formal work programme to ensure that data protection rules are enforced consistently, free from political or corporate influence. Civil society groups argue that effective oversight is essential to preserve citizens’ trust and uphold the GDPR’s credibility.

The DPC, headquartered in Dublin, supervises major tech firms such as Meta, Apple, and Google under the EU’s privacy regime. Critics have long accused it of being too lenient toward large companies operating in Ireland’s digital sector.

AI transforms Japanese education while raising ethical questions

AI is reshaping Japanese education, from predicting truancy risks to teaching English and preserving survivor memories. Schools and universities nationwide are experimenting with systems designed to support teachers and engage students more effectively.

In Toda City, Saitama Prefecture, AI analysed attendance, health records, and bullying data to identify pupils at risk of skipping school. During a 2023 pilot, it flagged more than a thousand students and helped teachers prioritise support for those most vulnerable.

Experts praised the system’s potential but warned against excessive dependence on algorithms. Keio University’s Professor Makiko Nakamuro said educators must balance data-driven insights with privacy safeguards and human judgment. Toda City has already banned discriminatory use of AI results.

AI’s role is also expanding in language learning. Universities such as Waseda and Kyushu now use a Tokyo-developed conversational AI that assesses grammar, pronunciation, and confidence. Students say they feel more comfortable practising with a machine than in front of classmates.

Startup raises $9m to orchestrate Gulf digital infrastructure

Bilal Abu-Ghazaleh has launched 1001 AI, a London–Dubai startup building an AI-native operating system for critical MENA industries. The two-month-old firm has raised a $9m seed round from CIV, General Catalyst, and Lux Capital, with angel investors including Chris Ré, Amjad Masad, and Amira Sajwani.

Target sectors include airports, ports, construction, and oil and gas, where 1001 AI sees billions in avoidable inefficiencies. Its engine ingests live operational data, models workflows and issues real-time directives, rerouting vehicles, reassigning crews and adjusting plans autonomously.
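1001 AI has not published its technology, so the following is only a toy sketch of what an ingest-model-direct loop of this kind can look like; every name and threshold in it is invented for illustration.

```python
# Purely illustrative: 1001 AI has not published code, so every name here
# is invented. This only sketches the ingest -> model -> direct loop the
# description implies, applied to a toy vehicle-rerouting rule.
def orchestrate(events, eta_model, delay_threshold=15):
    """Consume live operational events and yield real-time directives."""
    for event in events:                        # ingest live operational data
        predicted_delay = eta_model(event)      # model the workflow state
        if predicted_delay > delay_threshold:   # decide and direct autonomously
            yield {"action": "reroute", "vehicle": event["vehicle_id"]}
        else:
            yield {"action": "proceed", "vehicle": event["vehicle_id"]}

# Example: one truck in a long queue, with a stand-in delay predictor.
directives = orchestrate(
    [{"vehicle_id": "TRK-7", "queue_len": 12}],
    eta_model=lambda e: 2 * e["queue_len"],
)
print(list(directives))  # [{'action': 'reroute', 'vehicle': 'TRK-7'}]
```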

Abu-Ghazaleh brings scale-up experience from Hive AI and Scale AI, where he led GenAI operations and contributor networks. 1001 borrows a consulting-style rollout: embed with clients, co-develop the model, then standardise reusable patterns across similar operational flows.

Investors argue the Gulf is an ideal test bed given sovereign-backed AI ambitions and under-digitised, mission-critical infrastructure. Deena Shakir of Lux says the region is ripe for AI that optimises physical operations at scale, from flight turnarounds to cargo moves.

First deployments are slated for construction by year-end, with aviation and logistics to follow. The funding supports early pilots and hiring across engineering, operations and go-to-market, as 1001 aims to become the Gulf’s orchestration layer before expanding globally.

AWS outage shows the cost of cloud concentration

A single fault can bring down the modern web. During the outage on Monday, 20 October 2025, millions woke to broken apps, games, banking, and tools after database errors at Amazon Web Services rippled outward. When a shared backbone stumbles, the blast radius engulfs everything from chat to commerce.

The outage underscored cloud concentration risk. Roblox, Fortnite, Pokémon Go, Snapchat, and workplace staples like Slack and Monday.com stumbled together because many depend on the same region and data layer. Failover, throttling, and retries help, but simultaneous strain can swamp safeguards.
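One such safeguard, retrying with capped exponential backoff and jitter, spreads clients’ retry attempts out in time so a recovering service is not hit by synchronised waves of traffic. A minimal sketch, where `call_service` is a hypothetical stand-in for any flaky dependency, not a real AWS API:

```python
# Retries with capped exponential backoff and full jitter: each client
# waits a random slice of a growing window, which prevents synchronised
# retry storms from swamping a service that is trying to recover.
import random
import time

def call_with_backoff(call_service, max_attempts=5, base=0.5, cap=30.0):
    for attempt in range(max_attempts):
        try:
            return call_service()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise                     # give up after the final attempt
            # Sleep a random fraction of the capped exponential window.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```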

On Friday, 19 July 2024, a faulty CrowdStrike update crashed Windows machines worldwide, triggering blue screens that grounded flights, delayed surgeries, and froze point-of-sale systems. The fix was simple; recovery was not, and Friday deployments gained a new cautionary tale.

Earlier shocks foreshadowed today’s scale. In 1997, a Network Solutions glitch briefly hobbled the .com and .net domains. In 2018, malware in Alaska’s Matanuska-Susitna Borough knocked services offline, sending a community of 100,000 back to paper. Each incident showed how mundane errors cascade into civic life.

Resilience now means multi-region designs, cross-cloud failovers, tested runbooks, rate-limit backstops, and graceful read-only modes. Add regulatory stress tests, clear incident comms, and sector drills with hospitals, airlines, and banks. The internet will keep breaking; our job is to make it bend.
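Of the patterns listed, a graceful read-only mode is perhaps the least familiar: keep serving stale reads and refuse writes cleanly when the data layer is unhealthy, rather than failing outright. A minimal sketch, assuming a hypothetical `db` client with get/set methods and a plain dict as `cache`:

```python
# Hedged sketch of a graceful read-only mode: serve cached reads and refuse
# writes politely while the primary data layer is unhealthy. `db` is a
# hypothetical client with get/set methods; `cache` is a plain dict.
class ReadOnlyGuard:
    def __init__(self, db, cache):
        self.db, self.cache, self.read_only = db, cache, False

    def read(self, key):
        try:
            value = self.db.get(key)
            self.cache[key] = value      # refresh the cache on healthy reads
            self.read_only = False
            return value
        except ConnectionError:
            self.read_only = True        # degrade instead of failing outright
            return self.cache.get(key)   # last known value, possibly stale

    def write(self, key, value):
        if self.read_only:
            raise RuntimeError("service is read-only; try again later")
        self.db.set(key, value)
```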

SMEs underinsured as Canada’s cyber landscape shifts

Canada’s cyber insurance market is stabilising, with stronger underwriting, steadier loss trends, and more product choice, the Insurance Bureau of Canada says. But the threat landscape is accelerating as attackers weaponise AI, leaving many small and medium-sized enterprises exposed and underinsured.

Rapid market growth brought painful losses during the ransomware surge: from 2019 to 2023, combined loss ratios averaged about 155%, meaning insurers paid out roughly $1.55 in claims and expenses for every dollar of premium earned, forcing tighter pricing and coverage. Insurers have recalibrated, yet rising AI-enabled phishing and deepfake impersonations are lifting complexity and potential severity.

Policy is catching up unevenly. Canada’s Bill C-8 would revive critical-infrastructure cybersecurity standards, introducing stronger oversight and baseline rules for risk management and incident reporting. Public–private programmes signal progress but need sustained execution.

SMEs remain the pressure point. Low uptake means even minor breaches can cost tens or hundreds of thousands of dollars, while severe incidents can be fatal to a business. Underinsurance shifts the shock to the wider economy, challenging insurers to balance affordability with long-term viability.

The Bureau urges practical resilience: clearer governance, employee training, incident playbooks, and fit-for-purpose cover. Education campaigns and free guidance aim to demystify coverage, boost readiness, and help SMEs recover faster when attacks hit, supporting a more durable digital economy.

AI-generated images used in jewellery scam

A jeweller in Hove is fielding daily complaints from customers of a similarly named fraudulent business. Stevie Holmes runs Scarlett Jewellery but keeps hearing from buyers who have confused it with the AI-driven Scarlett Jewels website.

Many reported receiving poor-quality goods or nothing at all.

Holmes said the mix-ups have kept her occupied for at least an hour a day since July. Without clarification, people could post negative comments about her genuine business on social media, potentially damaging its reputation.

Scarlett Jewels is run by Denimtex Limited with an address in Hong Kong, though its website claims a personal story of a retiring designer.

Experts say such scams are increasingly common because AI images are now cheap and easy to create. Professor Ana Canhoto from the University of Sussex noted that AI-generated product photos often look either unnaturally perfect or subtly flawed, while fake reviews and claims of scarcity are typical tactics used to mislead buyers.

Trustpilot ratings for Scarlett Jewels are mostly one star, with customers describing items as ‘tat’ or ‘poor quality’.

Authorities are taking action, with the Advertising Standards Authority banning similar ads and Facebook restricting Scarlett Jewels from creating new adverts. Buyers are advised to watch for tell-tale AI-generated images and implausibly large discounts, and to check for genuine reviews, to avoid falling for scams.

Bitcoin wallet vulnerability exposes thousands of private keys

A flaw in the widely used Libbitcoin Explorer (bx) 3.x series has exposed over 120,000 Bitcoin private keys, according to crypto wallet provider OneKey. The flaw arose from a weak pseudorandom number generator seeded with system time, which made wallet keys predictable.

Attackers aware of wallet creation times could reconstruct private keys and access funds.

Several wallets were affected, including versions of Trust Wallet Extension and Trust Wallet Core prior to the patched releases. Researchers said the Mersenne Twister’s 32-bit seed space let attackers enumerate every possible seed and automate key recovery, which may explain past fund losses such as the ‘Milk Sad’ incidents.
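To see why a time-based seed is fatal, consider this illustrative Python sketch (not the actual bx code): a Mersenne Twister seeded with a timestamp produces a ‘key’ that an attacker can recover simply by iterating over plausible creation times.

```python
# Illustrative only: mimics the class of flaw reported in bx 3.x, where
# keys came from a Mersenne Twister seeded with system time. This is not
# the Libbitcoin code itself.
import random
import time

def weak_key_from_seed(seed: int) -> bytes:
    """Derive 32 'key' bytes from a PRNG fully determined by its seed."""
    rng = random.Random(seed)   # MT19937: same seed, same output, always
    return bytes(rng.getrandbits(8) for _ in range(32))

# A wallet generated at some second in the past hour, unknown to the attacker:
victim_key = weak_key_from_seed(int(time.time()) - 1234)

# The attacker just tries every plausible creation second:
now = int(time.time())
for guess in range(now - 3600, now + 1):
    if weak_key_from_seed(guess) == victim_key:
        print(f"key recovered: seed was {guess}")
        break
```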

OneKey confirmed its own wallets remain secure, using cryptographically strong random number generation and hardware Secure Elements certified to global security standards.

OneKey also examined its software wallets, ensuring that desktop, browser, Android, and iOS versions rely on secure system-level entropy sources. The firm urged long-term crypto holders to use hardware wallets and avoid importing software-generated mnemonics to reduce risk.
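In practice, relying on system-level entropy means asking the operating system’s cryptographically secure generator for key material rather than seeding a general-purpose PRNG. In Python, for instance, the standard `secrets` module does exactly this; the snippet below illustrates the principle, not OneKey’s actual stack.

```python
# Key material should come from the OS's cryptographically secure
# generator, never from a seeded general-purpose PRNG.
import secrets

key = secrets.token_bytes(32)  # 256 bits drawn from system entropy
print(key.hex())
```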

The company emphasised that wallet security depends on the integrity of the device and operating environment.

Tailored pricing is here and personal data is the price signal

AI is quietly changing how prices are set online. Beyond demand-based shifts, companies increasingly tailor offers to individuals, using browsing history, purchase habits, device, and location to predict willingness to pay. Two shoppers may see different prices for the same product at the same moment.

Dynamic pricing raises or lowers prices for everyone as conditions change, such as school-holiday airfares or hotel rates during major events. Personalised pricing goes further by shaping offers for specific users, rewarding cart-abandoners with discounts while charging less frequent shoppers a premium.

Platforms mine clicks, time on page, past purchases, and abandoned baskets to build profiles. Experiments show targeted discounts can lift sales while capping promotional spend, evidence that engineered prices work at scale. The result: you may not see a ‘standard’ price, but one designed for you.
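Mechanically, the simplest versions are just rules applied to a behavioural profile. A toy sketch, with all signals and weights invented purely for illustration:

```python
# Toy illustration only: a hypothetical rule-based personalised price.
# Real platforms use far richer models; the mechanics are similar in kind.
BASE_PRICE = 100.0

def personalised_price(profile: dict) -> float:
    """Nudge the list price using behavioural signals from a user profile."""
    price = BASE_PRICE
    if profile.get("abandoned_cart"):
        price *= 0.90            # win-back discount for a hesitant buyer
    if profile.get("visits_last_30d", 0) <= 1:
        price *= 1.05            # premium for shoppers with no price anchor
    if profile.get("device") == "high_end":
        price *= 1.03            # device as a crude income proxy
    return round(price, 2)

print(personalised_price({"abandoned_cart": True, "visits_last_30d": 5}))  # 90.0
print(personalised_price({"visits_last_30d": 1, "device": "high_end"}))    # 108.15
```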

The risks are mounting. Income proxies such as postcode or device can entrench inequality, while hidden algorithms erode trust when buyers later find cheaper prices. Accountability is murky if tailored prices mislead, discriminate, or breach consumer protections without clear disclosure.

Regulators are moving. Australia’s competition watchdog, the ACCC, has flagged transparency gaps, unfair trading risks, and the need for algorithmic disclosure. Businesses now face a twin test: deploy AI pricing with consent, explainability, and opt-outs, and prove it delivers value without crossing ethical lines.

UK government urges awareness as £106m lost to romance fraud in one year

Romance fraud has surged across the United Kingdom, with new figures showing that victims lost a combined £106 million in the past financial year. Action Fraud, the UK’s national reporting centre for cybercrime, described the crime as one that causes severe financial, emotional, and social damage.

Among the victims is London banker Varun Yadav, who lost £40,000 to a scammer posing as a romantic partner on a dating app. After months of chatting online, the fraudster persuaded him to invest in a cryptocurrency platform.

When his funds became inaccessible, Yadav realised he had been deceived. ‘You see all the signs, but you are so emotionally attached,’ he said. ‘You are willing to lose the money, but not the connection.’

The Financial Conduct Authority (FCA) said banks should play a stronger role in disrupting romance scams, calling for improved detection systems and better staff training to identify vulnerable customers. It urged firms to adopt what it called ‘compassionate aftercare’ for those affected.

Romance fraud typically involves criminals creating fake online profiles to build emotional connections before manipulating victims into transferring money.

The National Cyber Security Centre (NCSC) and UK police recommend maintaining privacy on social media, avoiding financial transfers to online contacts, and speaking openly with friends or family before sending money.

The Metropolitan Police recently launched an awareness campaign featuring victim testimonies and guidance on spotting red flags. The initiative also promotes collaboration with dating apps, banks, and social platforms to identify fraud networks.

Detective Superintendent Kerry Wood, head of economic crime for the Met Police, said that romance scams remain ‘one of the most devastating’ forms of fraud. ‘It’s an abuse of trust which undermines people’s confidence and sense of self-worth. Awareness is the most powerful defence against fraud,’ she said.

Although Yadav never recovered his savings, he said sharing his story helped him rebuild his life. He urged others facing similar scams to speak up: ‘Do not isolate yourself. There is hope.’

Meta to pull all political ads in EU ahead of new transparency law

Meta Platforms has said it will stop selling and showing political, electoral and social issue advertisements across its services in the European Union from early October 2025. The decision comes as the EU’s Transparency and Targeting of Political Advertising (TTPA) regulation takes full effect on 10 October.

Under TTPA, platforms will be required to clearly label political ads, disclose the sponsor, the election or social issue at hand, the amounts paid, and how the ads are targeted. These obligations also include strict conditions on targeting and require explicit consent for certain data use.
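The labelling duties imply a disclosure record attached to every political ad. The sketch below is a hypothetical illustration of what such a record might contain; the field names are invented, as the regulation specifies the required information, not a schema.

```python
# Hypothetical sketch of a TTPA-style disclosure record. Field names are
# invented for illustration; the TTPA defines what must be disclosed,
# not how platforms structure it internally.
from dataclasses import dataclass

@dataclass
class PoliticalAdDisclosure:
    sponsor: str                   # who paid for the ad
    linked_issue: str              # election, referendum, or social issue
    amount_paid_eur: float         # spend behind the placement
    targeting_criteria: list[str]  # how the audience was selected
    consent_obtained: bool         # explicit consent for data used in targeting

ad = PoliticalAdDisclosure(
    sponsor="Example Campaign Group",
    linked_issue="2029 European Parliament election",
    amount_paid_eur=12_500.0,
    targeting_criteria=["country: DE", "age: 18+"],
    consent_obtained=True,
)
print(ad)
```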

Meta said the requirements create ‘significant operational challenges and legal uncertainties’ and labelled parts of the new rules ‘unworkable’ for advertisers and platforms. It said that personalised ads are widely used for issue-based campaigns and that limiting them might restrict how people access information on political and social issues.

The company joins Google, which made a similar move last year citing comparable concerns about TTPA compliance.

While paid political ads will no longer be available, Meta says organic political content (e.g. users posting or sharing political views) will still be permitted.
