What happens to software careers in the AI era

AI is rapidly reshaping what it means to work as a software developer, and the shift is already visible inside organisations that build and run digital products every day. In the blog ‘Why the software developer career may (not) survive: Diplo’s experience’, Jovan Kurbalija argues that while AI is making large parts of traditional coding less valuable, it is also opening a new professional lane for people who can embed, configure, and improve AI systems in real-world settings.

Kurbalija begins with a personal anecdote, a Sunday brunch conversation with a young CERN programmer who believes AI has already made human coding obsolete. Yet the discussion turns toward a more hopeful conclusion.

The core of software work, in this view, is not disappearing so much as moving away from typing syntax and toward directing AI tools, shaping outcomes, and ensuring what is produced actually fits human needs.

One sign of the transition is ‘vibe coding’: describing apps in everyday language and receiving working code in seconds. As AI tools take over boilerplate code, basic debugging, and routine code review, the ‘bad news’ is clear: many tasks developers were trained for are fading.

The ‘good news,’ Kurbalija writes, is that teams can spend less time on repetitive work and more time on higher-value decisions that determine whether technology is useful, safe, and trusted. A central theme is that developers may increasingly be judged by their ability to bridge the gap between neat code and messy reality.

That means listening closely, asking better questions, navigating organisational politics, and understanding what users mean rather than only what they say. Kurbalija suggests hiring signals could shift accordingly, with employers valuing empathy and imagination, sometimes even seeing artistic or humanistic interests as evidence of stronger judgment in complex human environments.

Another pressure point is what he calls AI’s ‘paradox of plenty.’ If AI makes building easier, the harder question becomes what to build, what to prioritise, and what not to automate.

In that landscape, the scarce skill is not writing code quickly but framing the right problem, defining success, balancing trade-offs, and spotting where technology introduces new risks, especially in large organisations where ‘requirements’ can hide unresolved conflicts.

Kurbalija also argues that AI-era systems will be more interconnected and fragile, turning developers into orchestrators of complexity across services, APIs, agents, and vendors. When failures cascade or accountability becomes blurred, teams still need people who can design for resilience, privacy, and observability and who can keep systems understandable as tools and models change.

Some tasks, like debugging and security audits, may remain more human-led in the near term, even if that window narrows as AI improves.

Diplo’s own transformation is presented as a practical case study of the broader shift. Kurbalija describes a move from a technology-led phase toward a more content- and human-led approach, where the decisive factor is not which model is used but how well knowledge is prepared, labelled, evaluated, and embedded into workflows, and how effectively people adapt to constant change.

His bottom line is stark. Many developers will struggle, but those who build strong non-coding skills (communication, systems thinking, product judgment, and comfort with uncertainty) may do exceptionally well in the new era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI outlines advertising plans for ChatGPT access

US AI firm OpenAI has announced plans to test advertising within ChatGPT as part of a broader effort to widen access to advanced AI tools.

The initiative focuses on supporting the free version and the low-cost ChatGPT Go subscription, while paid tiers such as Plus, Pro, Business, and Enterprise will continue without advertisements.

According to the company, advertisements will remain clearly separated from ChatGPT responses and will never influence the answers users receive.

Responses will continue to be optimised for usefulness instead of commercial outcomes, with OpenAI emphasising that trust and perceived neutrality remain central to the product’s value.

User privacy forms a core pillar of the approach. Conversations will stay private, data will not be sold to advertisers, and users will retain the ability to disable ad personalisation or remove advertising-related data at any time.

During early trials, ads will not appear for accounts linked to users under 18, nor within sensitive or regulated areas such as health, mental wellbeing, or politics.

OpenAI describes advertising as a complementary revenue stream rather than a replacement for subscriptions.

The company argues that a diversified model can help keep advanced intelligence accessible to a wider population, while maintaining long-term incentives aligned with user trust and product quality.

New Steam rules redefine when AI use must be disclosed

Steam has clarified its position on AI in video games by updating the disclosure rules developers must follow when publishing titles on the platform.

The revision arrives after months of industry debate over whether generative AI usage should be publicly declared, particularly as storefronts face growing pressure to balance transparency with practical development realities.

Under the updated policy, disclosure requirements apply exclusively to AI-generated material consumed by players.

Artwork, audio, localisation, narrative elements, marketing assets and content visible on a game’s Steam page fall within scope, while AI tools used purely during development fall outside the disclosure requirement.

Developers using code assistants, concept ideation tools or AI-enabled software features without integrating outputs into the final player experience no longer need to declare such usage.

Valve’s clarification signals a more nuanced stance than earlier guidance introduced in 2024, which drew criticism for failing to reflect how AI tools are used in modern workflows.

By formally separating player-facing content from internal efficiency tools, Steam acknowledges common industry practices without expanding disclosure obligations unnecessarily.

The update offers reassurance to developers concerned about stigma surrounding AI labels while preserving transparency for consumers.

Although enforcement may remain largely procedural, the written clarification establishes clearer expectations and reduces uncertainty as generative technologies continue to shape game production.

New ETSI standard defines cybersecurity rules for AI systems

ETSI has released ETSI EN 304 223, a new European Standard establishing baseline cybersecurity requirements for AI systems.

Approved by national standards bodies, the framework becomes the first globally applicable European Standard (EN) focused specifically on securing AI, extending its relevance beyond European markets.

The standard recognises that AI introduces security risks not found in traditional software. Threats such as data poisoning, indirect prompt injection and vulnerabilities linked to complex data management demand tailored defences instead of conventional approaches alone.

ETSI EN 304 223 combines established cybersecurity practices with targeted measures designed for the distinctive characteristics of AI models and systems.

Adopting a full lifecycle perspective, the ETSI framework defines thirteen principles across secure design, development, deployment, maintenance and end of life.

Alignment with internationally recognised AI lifecycle models supports interoperability and consistent implementation across existing regulatory and technical ecosystems.

ETSI EN 304 223 is intended for organisations across the AI supply chain, including vendors, integrators and operators, and covers systems based on deep neural networks, including generative AI.

Further guidance is expected through ETSI TR 104 159, which will focus on generative AI risks such as deepfakes, misinformation, confidentiality concerns and intellectual property protection.

AI-generated song removed from Swedish rankings

Sweden has removed a chart-topping song from its official rankings after ruling it was mainly created using AI. The track had attracted millions of streams on Spotify within weeks.

Industry investigators found no public profile for the artist, later linking the song to executives at a music firm using AI tools. Producers insisted that technology merely assisted a human-led creative process.

Music organisations say AI-generated tracks threaten existing industry rules and creator revenues. The decision intensifies debate over how to regulate AI in cultural markets.

TikTok faces perilous legal challenge over child safety concerns

British parents suing TikTok over the deaths of their children have called for greater accountability from the platform, as the case begins hearings in the United States. One of the claimants said social media companies must be held accountable for the content shown to young users.

Ellen Roome, whose son died in 2022, said the lawsuit is about understanding what children were exposed to online.

The legal filing claims the deaths were a foreseeable result of TikTok’s design choices, which allegedly prioritised engagement over safety. TikTok has said it prohibits content that encourages dangerous behaviour.

Roome is also campaigning for proposed legislation that would allow parents to access their children’s social media accounts after a death. She said the aim is to gain clarity and prevent similar tragedies.

TikTok said it removes most harmful content before it is reported and expressed sympathy for the families. The company is seeking to dismiss the case, arguing that the US court lacks jurisdiction.

Samsara turns operational data into real-world impact

Samsara has built a platform that helps companies with physical operations run more safely and efficiently. Founded in 2015 by MIT alumni John Bicket and Sanjit Biswas, the company connects workers, vehicles, and equipment through cloud-based analytics.

The platform combines sensors, AI cameras, GPS tracking, and real-time alerts to cut accidents, fuel use, and maintenance costs. Large companies across logistics, construction, manufacturing, and energy report cost savings and improved safety after adopting the system.

Samsara turns large volumes of operational data into actionable insights for frontline workers and managers. Tools like driver coaching, predictive maintenance, and route optimisation reduce risk at scale while recognising high-performing field workers.

The company is expanding its use of AI to manage weather risk, support sustainability, and enable the adoption of electric fleets, positioning data-driven decision-making as central to modernising critical infrastructure worldwide.

Matthew McConaughey moves decisively to protect AI likeness rights

Oscar-winning actor Matthew McConaughey has trademarked his image and voice to protect them from unauthorised use by AI platforms. His lawyers say the move is intended to safeguard consent and attribution in an evolving digital environment.

Several clips, including his well-known catchphrase from Dazed and Confused, have been registered with the United States Patent and Trademark Office. Legal experts say it is the first time an actor has used trademark law to address potential AI misuse of their likeness.

McConaughey’s legal team said there is no evidence of his image being manipulated by AI so far. The trademarks are intended to act as a preventative measure against unauthorised copying or commercial use.

The actor said he wants to ensure any future use of his voice or appearance is approved. Lawyers also said the approach could help capture value created through licensed AI applications.

Concerns over deepfakes and synthetic media are growing across the entertainment industry. Other celebrities have faced unauthorised AI-generated content, prompting calls for stronger legal protections.

AI hoax targets Kate Garraway and family

Presenter Kate Garraway has condemned a cruel AI-generated hoax that falsely showed her with a new boyfriend. The images appeared online shortly after the death of her husband, Derek Draper.

Fake images circulated mainly on Facebook through impersonation accounts using her name and likeness. Members of the public and even friends mistakenly believed the relationship was real.

The situation escalated when fabricated news sites began publishing false stories involving her teenage son Billy. Garraway described the experience as deeply hurtful during an already raw period.

Her comments followed renewed scrutiny of AI image tools and platform responsibility. Recent restrictions aim to limit harmful and misleading content generated using artificial intelligence.

Cloudflare acquires Human Native to build a fair AI content licensing model

San Francisco-based Cloudflare has acquired Human Native, an AI data marketplace designed to connect content creators with AI developers seeking high-quality training and inference material.

The move reflects growing pressure to establish clearer economic rules for how online content is used by AI systems.

The acquisition is intended to help creators and publishers decide whether to block AI access entirely, optimise material for machine use, or license content for payment instead of allowing uncontrolled scraping.

Cloudflare says the tools developed through Human Native will support transparent pricing and fair compensation across the AI supply chain.

Human Native, founded in 2024 and backed by UK-based investors, focuses on structuring original content so it can be discovered, accessed and purchased by AI developers through standardised channels.

The team includes researchers and engineers with experience across AI research, design platforms and financial media.

Cloudflare argues that access to reliable and ethically sourced data will shape long-term competition in AI. By integrating Human Native into its wider platform, the company aims to support a more sustainable internet economy that balances innovation with creator rights.
