French regulator fines Free and Free Mobile €42 million

France’s data protection regulator CNIL has fined telecom operators Free Mobile and Free a combined €42 million over a major customer data breach. The sanctions follow an October 2024 cyberattack that exposed personal data linked to 24 million subscriber contracts.

Investigators found security safeguards were inadequate, allowing attackers to access sensitive personal data, including bank account details. Weak VPN authentication and poor detection of abnormal system activity were highlighted as key failures under the GDPR.

The French regulator also ruled that affected customers were not adequately informed about the risks they faced. Notification emails lacked sufficient detail to explain potential consequences or protective steps, thereby breaching obligations to clearly communicate data breach impacts.

Free Mobile faced an additional penalty for retaining former customer data longer than permitted. Authorities ordered both companies to complete security upgrades and data clean-up measures within strict deadlines.


WordPress AI team outlines SEO shifts

Industry expectations around SEO are shifting as AI agents increasingly rely on existing search infrastructure, according to James LePage, co-lead of the WordPress AI team at Automattic.

Search discovery for AI systems continues to depend on classic signals such as links, authority and indexed content, suggesting no structural break from traditional search engines.

Publishers are therefore being encouraged to focus on semantic markup, schema and internal linking, with AI optimisation closely aligned to established long-tail search strategies.
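
LePage’s advice stops at the level of principles, but a minimal sketch can make ‘semantic markup and schema’ concrete. The snippet below builds a schema.org Article object and wraps it in the JSON-LD script tag that search and AI crawlers already parse; every field value is a hypothetical placeholder, not taken from the article.

```python
import json

# Minimal schema.org Article markup of the kind the advice points to.
# All values are placeholders; a real site would fill them from CMS data.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How AI agents discover content",
    "description": "A one-paragraph summary an AI agent can lift verbatim.",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2025-11-30",
    "mainEntityOfPage": "https://example.com/ai-discovery",
}

# JSON-LD is embedded in a <script> tag in the page's <head>.
json_ld_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(article_schema, indent=2)
    + "\n</script>"
)
print(json_ld_tag)
```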

Future-facing content strategies prioritise clear summaries, ranked information and progressive detail, enabling AI agents to reuse and interpret material independently of traditional websites.


Questions mount over AI-generated artist

An artist called Sienna Rose has drawn millions of streams on Spotify, despite strong evidence suggesting she is AI-generated. Several of her jazz-influenced soul tracks have gone viral, with one surpassing five million plays.

Streaming platform Deezer says many of her songs have been flagged as AI-made using detection tools that identify technical artefacts in the audio. Signs include an unusually high volume of releases, generic sound patterns and a complete absence of live performances or online presence.
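
Deezer has not published how its detector works, so the sketch below is only a loose illustration of how the signals mentioned above (release volume, audio-level artefact flags, and a missing live or online footprint) could be combined into a rough suspicion score. Every threshold and weight here is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ArtistProfile:
    releases_per_month: float           # unusually high output is one flag
    tracks_flagged_by_audio_model: int  # artefact detections in the audio
    total_tracks: int
    has_live_performances: bool
    has_online_presence: bool

def suspicion_score(p: ArtistProfile) -> float:
    """Return a 0..1 score; higher means more likely AI-generated."""
    score = 0.0
    if p.releases_per_month > 4:  # far above a typical human cadence
        score += 0.3
    if p.total_tracks and p.tracks_flagged_by_audio_model / p.total_tracks > 0.5:
        score += 0.4              # most tracks carry audio artefacts
    if not p.has_live_performances:
        score += 0.15
    if not p.has_online_presence:
        score += 0.15
    return min(score, 1.0)
```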

The mystery intensified after pop star Selena Gomez briefly shared one of Rose’s tracks on social media, only for it to be removed amid growing scrutiny. Record labels linked to Rose have declined to clarify whether a human performer exists.

The case highlights mounting concern across the industry as AI music floods streaming services. Artists including Raye and Paul McCartney have spoken out against the trend, insisting that emotional authenticity still matters more than algorithmic output.


Irish government eyes leadership role in AI innovation after US visit

Irish Tánaiste Simon Harris said that AI is no longer a distant concept but is already integrated into everyday life and economic systems, following a visit to California where he discussed technology and innovation with business and political leaders.

He described the current period as an ‘AI moment’ and stressed that Ireland has an opportunity to lead in the next wave of technological development.

Harris announced that Ireland will host a dedicated AI summit to explore how the opportunities presented by AI can benefit all sections of society, highlighting the need for trust, responsibility and confidence in how the technology is adopted.

He cautioned that harms can arise without proper governance, pointing to recent controversies over deepfakes and the misuse of AI tools as examples of risks policymakers must address.

His comments come amid broader efforts to strengthen Ireland’s economic and innovation ties with the United States, including meetings with California officials and global tech companies during his official visit.


xAI faces stricter pollution rules for Memphis data centre

US regulators have closed a loophole that allowed Elon Musk’s AI company, xAI, to operate gas-burning turbines at its Memphis data centre without full air pollution permits. The move follows concerns over emissions and local health impacts.

The US Environmental Protection Agency clarified that mobile gas turbines cannot be classified as ‘non-road engines’ to avoid Clean Air Act requirements. Companies must now obtain permits if their combined emissions exceed regulatory thresholds.

Local authorities had previously allowed the turbines to operate without public consultation or environmental review. The updated federal rule may slow xAI’s expansion plans in the Memphis area.

The Colossus data centre, opened in 2024, supports training and inference for Grok AI models and other services linked to Musk’s X platform. NVIDIA hardware is used extensively at the site.

Residents and environmental groups have raised concerns about air quality, particularly in nearby communities. Legal advocates say xAI’s future operations will be closely monitored for regulatory compliance.


EU revises Cybersecurity Act to streamline certification

The European Commission plans to revise the Cybersecurity Act to expand certification schemes beyond ICT products and services. Future assessments would also cover companies’ overall risk-management posture, including governance and supply-chain practices.

Only one EU-wide scheme, the Common Criteria framework, has been formally adopted since 2019. Cloud, 5G, and digital identity certifications remain stalled due to procedural complexity and limited transparency under the current Cybersecurity Act framework.

The reforms aim to introduce clearer rules and a rolling work programme to support long-term planning. Managed security services, including incident response and penetration testing, would become eligible for EU certification.

ENISA would take on a stronger role as the central technical coordinator across member states. Additional funding and staff would be required to support its expanded mandate under the EU’s newer cybersecurity laws.

Stakeholders broadly support harmonisation to reduce administrative burden and regulatory fragmentation. The European Commission says organisational certification would assess cybersecurity maturity alongside technical product compliance.


CIRO discloses scale of August 2025 cyber incident

Canada’s investment regulator has confirmed a major data breach affecting around 750,000 people after a phishing attack in August 2025.

The Canadian Investment Regulatory Organization (CIRO) said threat actors accessed and copied a limited set of investigative, compliance, and market surveillance data. Some internal systems were taken offline as a precaution, but core regulatory operations continued across the country.

CIRO reported that personal and financial information was exposed, including income details, identification records, contact information, account numbers, and financial statements collected during regulatory activities in Canada.

No passwords or PINs were compromised, and the organisation said there is no evidence that the stolen data has been misused or shared on the dark web.

Affected individuals are being offered two years of free credit monitoring and identity theft protection as CIRO continues to monitor for further malicious activity nationwide.


What happens to software careers in the AI era

AI is rapidly reshaping what it means to work as a software developer, and the shift is already visible inside organisations that build and run digital products every day. In the blog ‘Why the software developer career may (not) survive: Diplo’s experience’, Jovan Kurbalija argues that while AI is making large parts of traditional coding less valuable, it is also opening a new professional lane for people who can embed, configure, and improve AI systems in real-world settings.

Kurbalija begins with a personal anecdote, a Sunday brunch conversation with a young CERN programmer who believes AI has already made human coding obsolete. Yet the discussion turns toward a more hopeful conclusion.

The core of software work, in this view, is not disappearing so much as moving away from typing syntax and toward directing AI tools, shaping outcomes, and ensuring what is produced actually fits human needs.

One sign of the transition is the rise of describing apps in everyday language and receiving working code in seconds, often referred to as ‘vibe coding’. As AI tools take over boilerplate code, basic debugging, and routine code review, the ‘bad news’ is clear: many tasks developers were trained for are fading.
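
To make ‘vibe coding’ concrete, here is a hypothetical exchange of the kind described: the everyday-language request appears as a comment, followed by the routine boilerplate an assistant now returns in seconds. Neither the prompt nor the function comes from Kurbalija’s post.

```python
# Prompt to the assistant, in everyday language:
#   "Write a function that reads a CSV file and returns the average
#    of a named numeric column, skipping blank cells."
#
# Typical assistant output: the kind of code that used to be
# hand-written and is now generated on request.
import csv

def column_average(path: str, column: str) -> float:
    """Return the mean of a numeric CSV column, ignoring blank cells."""
    values = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            cell = (row.get(column) or "").strip()
            if cell:
                values.append(float(cell))
    if not values:
        raise ValueError(f"No numeric values found in column {column!r}")
    return sum(values) / len(values)
```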

The ‘good news,’ Kurbalija writes, is that teams can spend less time on repetitive work and more time on higher-value decisions that determine whether technology is useful, safe, and trusted. A central theme is that developers may increasingly be judged by their ability to bridge the gap between neat code and messy reality.

That means listening closely, asking better questions, navigating organisational politics, and understanding what users mean rather than only what they say. Kurbalija suggests hiring signals could shift accordingly, with employers valuing empathy and imagination, sometimes even seeing artistic or humanistic interests as evidence of stronger judgment in complex human environments.

Another pressure point is what he calls AI’s ‘paradox of plenty’. If AI makes building easier, the harder question becomes what to build, what to prioritise, and what not to automate.

In that landscape, the scarce skill is not writing code quickly but framing the right problem, defining success, balancing trade-offs, and spotting where technology introduces new risks, especially in large organisations where ‘requirements’ can hide unresolved conflicts.

Kurbalija also argues that AI-era systems will be more interconnected and fragile, turning developers into orchestrators of complexity across services, APIs, agents, and vendors. When failures cascade or accountability becomes blurred, teams still need people who can design for resilience, privacy, and observability and who can keep systems understandable as tools and models change.

Some tasks, like debugging and security audits, may remain more human-led in the near term, even if that window narrows as AI improves.

Diplo’s own transformation is presented as a practical case study of the broader shift. Kurbalija describes a move from a technology-led phase toward a more content- and human-led approach, where the decisive factor is not which model is used but how well knowledge is prepared, labelled, evaluated, and embedded into workflows, and how effectively people adapt to constant change.

His bottom line is stark: many developers will struggle, but those who build strong non-coding skills (communication, systems thinking, product judgment, and comfort with uncertainty) may do exceptionally well in the new era.


OpenAI outlines advertising plans for ChatGPT access

US AI firm OpenAI has announced plans to test advertising within ChatGPT as part of a broader effort to widen access to advanced AI tools.

The initiative focuses on supporting the free version and the low-cost ChatGPT Go subscription, while paid tiers such as Plus, Pro, Business, and Enterprise will continue without advertisements.

According to the company, advertisements will remain clearly separated from ChatGPT responses and will never influence the answers users receive.

Responses will continue to be optimised for usefulness instead of commercial outcomes, with OpenAI emphasising that trust and perceived neutrality remain central to the product’s value.

User privacy forms a core pillar of the approach. Conversations will stay private, data will not be sold to advertisers, and users will retain the ability to disable ad personalisation or remove advertising-related data at any time.

During early trials, ads will not appear for accounts linked to users under 18, nor within sensitive or regulated areas such as health, mental wellbeing, or politics.

OpenAI describes advertising as a complementary revenue stream rather than a replacement for subscriptions.

The company argues that a diversified model can help keep advanced intelligence accessible to a wider population, while maintaining long-term incentives aligned with user trust and product quality.


New Steam rules redefine when AI use must be disclosed

Steam has clarified its position on AI in video games by updating the disclosure rules developers must follow when publishing titles on the platform.

The revision arrives after months of industry debate over whether generative AI usage should be publicly declared, particularly as storefronts face growing pressure to balance transparency with practical development realities.

Under the updated policy, disclosure requirements apply exclusively to AI-generated material consumed by players.

Artwork, audio, localisation, narrative elements, marketing assets and content visible on a game’s Steam page fall within scope, while AI tools used purely during development are exempt.

Developers using code assistants, concept ideation tools or AI-enabled software features without integrating outputs into the final player experience no longer need to declare such usage.

Valve’s clarification signals a more nuanced stance than earlier guidance introduced in 2024, which drew criticism for failing to reflect how AI tools are used in modern workflows.

By formally separating player-facing content from internal efficiency tools, Steam acknowledges common industry practices without expanding disclosure obligations unnecessarily.

The update offers reassurance to developers concerned about stigma surrounding AI labels while preserving transparency for consumers.

Although enforcement may remain largely procedural, the written clarification establishes clearer expectations and reduces uncertainty as generative technologies continue to shape game production.
