New IBM offering blends expert teams and AI digital workers for enterprise scale

IBM has unveiled a new consulting service designed to help organisations deploy and scale enterprise AI by pairing human experts with digital workers powered by AI.

The approach aims to address common challenges in AI adoption, such as skills gaps, governance, and integration with legacy systems, by combining domain expertise with automated AI capabilities that can execute repetitive and data-intensive tasks.

The service positions digital workers as extensions of human teams, enabling enterprises to accelerate workflows across areas such as finance, supply chain, customer service and IT operations. IBM emphasises that human specialists remain central to strategy, oversight and ethical use of AI, while digital workers support execution and scalability.

The offering includes guidance on governance frameworks, model choice, data architecture and change management to ensure responsible, secure and efficient deployment of AI technologies at scale.

IBM’s hybrid model reflects a broader industry trend toward human-AI collaboration, where AI amplifies professional capabilities while preserving human decision-making and oversight.

The company believes this will help organisations achieve measurable business outcomes faster than traditional AI implementations that rely solely on technology teams.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Finnish data breach exposed thousands of patients’ records

A major data breach at Finnish psychotherapy provider Vastaamo exposed the private therapy records of around 33,000 patients in 2020. Hackers demanded bitcoin payments and threatened to publish deeply personal notes if victims refused to pay.

Among those affected was Meri-Tuuli Auer, who described intense fear after learning her confidential therapy details could be accessed online. Stolen records included discussions of mental health, abuse, and suicidal thoughts, causing nationwide shock.

The breach triggered the largest criminal investigation in Finland’s history, prompting emergency government talks led by then prime minister Sanna Marin. Despite efforts to stop the leak, the full database had already circulated on the dark web.

Finnish courts later convicted cybercriminal Julius Kivimäki, sentencing him to more than six years in prison. Many victims say the damage remains permanent, with trust in therapy and digital health systems severely weakened.


French regulator fines Free and Free Mobile €42 million

France’s data protection regulator CNIL has fined telecom operators Free Mobile and Free a combined €42 million over a major customer data breach. The sanctions follow an October 2024 cyberattack that exposed personal data linked to 24 million subscriber contracts.

Investigators found security safeguards were inadequate, allowing attackers to access sensitive personal data, including bank account details. Weak VPN authentication and poor detection of abnormal system activity were highlighted as key failures under the GDPR.

The French regulator also ruled that affected customers were not adequately informed about the risks they faced. Notification emails lacked sufficient detail to explain potential consequences or protective steps, thereby breaching obligations to clearly communicate data breach impacts.

Free Mobile faced an additional penalty for retaining former customer data longer than permitted. Authorities ordered both companies to complete security upgrades and data clean-up measures within strict deadlines.


AI guidance released for UK tax professionals by leading bodies

Several UK professional organisations for tax practitioners, including the Chartered Institute of Taxation (CIOT) and the Society of Trust and Estate Practitioners (STEP), have published new AI guidance for members.

The documents aim to help tax professionals understand how to adopt AI tools securely and responsibly while maintaining professional standards and compliance with legal and regulatory frameworks.

The guidance stresses that members should be aware of risks associated with AI, including data quality, bias, model limitations and the need for human oversight. It encourages firms to implement robust governance, clear policies on use, appropriate training and verification processes where outputs affect client advice or statutory obligations.

By highlighting best practices, the professional bodies seek to balance the benefits of generative AI, such as improved efficiency and research assistance, with ethical considerations and core professional responsibilities.

The guidance also points to data-protection obligations under UK law and the importance of maintaining client confidentiality when using third-party AI systems.


Why AI adoption trails in South Africa

South Africa’s rate of AI implementation is roughly half that of the US, according to insights from Specno. Analysts attribute the gap to shortages in skills, weak data infrastructure and limited alignment between AI projects and core business strategy.

Despite moderate AI readiness levels, execution remains a major challenge across South African organisations. Skills shortages, insufficient workforce training and weak organisational readiness continue to prevent AI systems from moving beyond pilot stages.

Industry experts say many executives recognise the value of AI but struggle to adopt it in practice. Constraints include low IT maturity, risk aversion and organisational cultures that resist large-scale transformation.

By contrast, companies in the US are embedding AI into operations, talent development and decision-making. Analysts say South Africa must rapidly improve executive literacy, data ecosystems and practical skills to close the gap.


Questions mount over AI-generated artist

An artist called Sienna Rose has drawn millions of streams on Spotify, despite strong evidence suggesting she is AI-generated. Several of her jazz-influenced soul tracks have gone viral, with one surpassing five million plays.

Streaming platform Deezer says many of her songs have been flagged as AI-made by detection tools that identify technical artefacts in the audio. Signs include an unusually high volume of releases, generic sound patterns and a complete absence of live performances or online presence.

The mystery intensified after pop star Selena Gomez briefly shared one of Rose’s tracks on social media, only for it to be removed amid growing scrutiny. Record labels linked to Rose have declined to clarify whether a human performer exists.

The case highlights mounting concern across the industry as AI music floods streaming services. Artists including Raye and Paul McCartney have warned that audiences still value emotional authenticity over algorithmic output.


What happens to software careers in the AI era

AI is rapidly reshaping what it means to work as a software developer, and the shift is already visible inside organisations that build and run digital products every day. In the blog post ‘Why the software developer career may (not) survive: Diplo’s experience’, Jovan Kurbalija argues that while AI is making large parts of traditional coding less valuable, it is also opening a new professional lane for people who can embed, configure, and improve AI systems in real-world settings.

Kurbalija begins with a personal anecdote, a Sunday brunch conversation with a young CERN programmer who believes AI has already made human coding obsolete. Yet the discussion turns toward a more hopeful conclusion.

The core of software work, in this view, is not disappearing so much as moving away from typing syntax and toward directing AI tools, shaping outcomes, and ensuring what is produced actually fits human needs.

One sign of the transition is the rise of describing apps in everyday language and receiving working code in seconds, often referred to as ‘vibe coding.’ As AI tools take over boilerplate code, basic debugging, and routine code review, the ‘bad news’ is clear: many tasks developers were trained for are fading.

The ‘good news,’ Kurbalija writes, is that teams can spend less time on repetitive work and more time on higher-value decisions that determine whether technology is useful, safe, and trusted. A central theme is that developers may increasingly be judged by their ability to bridge the gap between neat code and messy reality.

That means listening closely, asking better questions, navigating organisational politics, and understanding what users mean rather than only what they say. Kurbalija suggests hiring signals could shift accordingly, with employers valuing empathy and imagination, sometimes even seeing artistic or humanistic interests as evidence of stronger judgment in complex human environments.

Another pressure point is what he calls AI’s ‘paradox of plenty.’ If AI makes building easier, the harder question becomes what to build, what to prioritise, and what not to automate.

In that landscape, the scarce skill is not writing code quickly but framing the right problem, defining success, balancing trade-offs, and spotting where technology introduces new risks, especially in large organisations where ‘requirements’ can hide unresolved conflicts.

Kurbalija also argues that AI-era systems will be more interconnected and fragile, turning developers into orchestrators of complexity across services, APIs, agents, and vendors. When failures cascade or accountability becomes blurred, teams still need people who can design for resilience, privacy, and observability and who can keep systems understandable as tools and models change.

Some tasks, like debugging and security audits, may remain more human-led in the near term, even if that window narrows as AI improves.

Diplo’s own transformation is presented as a practical case study of the broader shift. Kurbalija describes a move from a technology-led phase toward a more content- and human-led approach, where the decisive factor is not which model is used but how well knowledge is prepared, labelled, evaluated, and embedded into workflows, and how effectively people adapt to constant change.

His bottom line is stark: many developers will struggle, but those who build strong non-coding skills, such as communication, systems thinking, product judgment, and comfort with uncertainty, may do exceptionally well in the new era.


MIT advances cooling for scalable quantum chips

MIT researchers have demonstrated a faster, more energy-efficient cooling technique for scalable trapped-ion quantum chips. The solution addresses a long-standing challenge in reducing vibration-related errors that limit the performance of quantum systems.

The method uses integrated photonic chips with nanoscale antennas that emit tightly controlled light beams. Using polarisation-gradient cooling, the system cools ions to temperatures nearly ten times lower than standard laser-cooling limits, and does so much faster.

Unlike conventional trapped-ion systems that depend on bulky external optics, the chip-based design generates stable light patterns directly on the device. This stability improves accuracy and supports scaling to thousands of ions on a single chip.

Researchers say the breakthrough lays the groundwork for more reliable quantum operations and opens new possibilities for advanced ion control, bringing practical, large-scale quantum computing closer to reality.


OpenAI outlines advertising plans for ChatGPT access

US AI firm OpenAI has announced plans to test advertising within ChatGPT as part of a broader effort to widen access to advanced AI tools.

The initiative focuses on supporting the free version and the low-cost ChatGPT Go subscription, while paid tiers such as Plus, Pro, Business, and Enterprise will continue without advertisements.

According to the company, advertisements will remain clearly separated from ChatGPT responses and will never influence the answers users receive.

Responses will continue to be optimised for usefulness instead of commercial outcomes, with OpenAI emphasising that trust and perceived neutrality remain central to the product’s value.

User privacy forms a core pillar of the approach. Conversations will stay private, data will not be sold to advertisers, and users will retain the ability to disable ad personalisation or remove advertising-related data at any time.

During early trials, ads will not appear for accounts linked to users under 18, nor within sensitive or regulated areas such as health, mental wellbeing, or politics.

OpenAI describes advertising as a complementary revenue stream rather than a replacement for subscriptions.

The company argues that a diversified model can help keep advanced intelligence accessible to a wider population, while maintaining long-term incentives aligned with user trust and product quality.


New Steam rules redefine when AI use must be disclosed

Steam has clarified its position on AI in video games by updating the disclosure rules developers must follow when publishing titles on the platform.

The revision arrives after months of industry debate over whether generative AI usage should be publicly declared, particularly as storefronts face growing pressure to balance transparency with practical development realities.

Under the updated policy, disclosure requirements apply exclusively to AI-generated material consumed by players.

Artwork, audio, localisation, narrative elements, marketing assets and content visible on a game’s Steam page fall within scope, while AI tools used purely during development fall outside Valve’s disclosure requirements.

Developers using code assistants, concept ideation tools or AI-enabled software features without integrating outputs into the final player experience no longer need to declare such usage.

Valve’s clarification signals a more nuanced stance than earlier guidance introduced in 2024, which drew criticism for failing to reflect how AI tools are used in modern workflows.

By formally separating player-facing content from internal efficiency tools, Steam acknowledges common industry practices without expanding disclosure obligations unnecessarily.

The update offers reassurance to developers concerned about stigma surrounding AI labels while preserving transparency for consumers.

Although enforcement may remain largely procedural, the written clarification establishes clearer expectations and reduces uncertainty as generative technologies continue to shape game production.
