Irish government eyes leadership role in AI innovation after US visit

Irish Tánaiste Simon Harris said that AI is no longer a distant concept but is already integrated into everyday life and economic systems, following a visit to California where he discussed technology and innovation with business and political leaders.

He described the current period as an ‘AI moment’ and stressed that Ireland has an opportunity to lead in the next wave of technological development.

Harris announced that Ireland will host a dedicated AI summit to explore how the opportunities presented by AI can benefit all sections of society, highlighting the need for trust, responsibility and confidence in how the technology is adopted.

He cautioned that harms can arise without proper governance, pointing to recent controversies over deepfakes and the misuse of AI tools as examples of risks policymakers must address.

His comments come amid broader efforts to strengthen Ireland’s economic and innovation ties with the United States, including meetings with California officials and global tech companies during his official visit.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

What happens to software careers in the AI era

AI is rapidly reshaping what it means to work as a software developer, and the shift is already visible inside organisations that build and run digital products every day. In the blog ‘Why the software developer career may (not) survive: Diplo’s experience‘, Jovan Kurbalija argues that while AI is making large parts of traditional coding less valuable, it is also opening a new professional lane for people who can embed, configure, and improve AI systems in real-world settings.

Kurbalija begins with a personal anecdote, a Sunday brunch conversation with a young CERN programmer who believes AI has already made human coding obsolete. Yet the discussion turns toward a more hopeful conclusion.

The core of software work, in this view, is not disappearing so much as moving away from typing syntax and toward directing AI tools, shaping outcomes, and ensuring what is produced actually fits human needs.

One sign of the transition is the rise of describing apps in everyday language and receiving working code in seconds, often referred to as ‘vibe coding.’ As AI tools take over boilerplate code, basic debugging, and routine code review, the ‘bad news’ is clear: many tasks developers were trained for are fading.

The ‘good news,’ Kurbalija writes, is that teams can spend less time on repetitive work and more time on higher-value decisions that determine whether technology is useful, safe, and trusted. A central theme is that developers may increasingly be judged by their ability to bridge the gap between neat code and messy reality.

That means listening closely, asking better questions, navigating organisational politics, and understanding what users mean rather than only what they say. Kurbalija suggests hiring signals could shift accordingly, with employers valuing empathy and imagination, sometimes even seeing artistic or humanistic interests as evidence of stronger judgment in complex human environments.

Another pressure point is what he calls AI’s ‘paradox of plenty.’ If AI makes building easier, the harder question becomes what to build, what to prioritise, and what not to automate.

In that landscape, the scarce skill is not writing code quickly but framing the right problem, defining success, balancing trade-offs, and spotting where technology introduces new risks, especially in large organisations where ‘requirements’ can hide unresolved conflicts.

Kurbalija also argues that AI-era systems will be more interconnected and fragile, turning developers into orchestrators of complexity across services, APIs, agents, and vendors. When failures cascade or accountability becomes blurred, teams still need people who can design for resilience, privacy, and observability and who can keep systems understandable as tools and models change.

Some tasks, like debugging and security audits, may remain more human-led in the near term, even if that window narrows as AI improves.

Transformation of Diplo is presented as a practical case study of the broader shift. Kurbalija describes a move from a technology-led phase toward a more content and human-led approach, where the decisive factor is not which model is used but how well knowledge is prepared, labelled, evaluated, and embedded into workflows, and how effectively people adapt to constant change.

His bottom line is stark. Many developers will struggle, but those who build strong non-coding skills, communication, systems thinking, product judgment, and comfort with uncertainty may do exceptionally well in the new era.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

New Steam rules redefine when AI use must be disclosed

Steam has clarified its position on AI in video games by updating the disclosure rules developers must follow when publishing titles on the platform.

The revision arrives after months of industry debate over whether generative AI usage should be publicly declared, particularly as storefronts face growing pressure to balance transparency with practical development realities.

Under the updated policy, disclosure requirements apply exclusively to AI-generated material consumed by players.

Artwork, audio, localisation, narrative elements, marketing assets and content visible on a game’s Steam page fall within scope, while AI tools used purely during development remain outside Valve’s interest.

Developers using code assistants, concept ideation tools or AI-enabled software features without integrating outputs into the final player experience no longer need to declare such usage.

Valve’s clarification signals a more nuanced stance than earlier guidance introduced in 2024, which drew criticism for failing to reflect how AI tools are used in modern workflows.

By formally separating player-facing content from internal efficiency tools, Steam acknowledges common industry practices without expanding disclosure obligations unnecessarily.

The update offers reassurance to developers concerned about stigma surrounding AI labels while preserving transparency for consumers.

Although enforcement may remain largely procedural, the written clarification establishes clearer expectations and reduces uncertainty as generative technologies continue to shape game production.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New ETSI standard defines cybersecurity rules for AI systems

ETSI has released ETSI EN 304 223, a new European Standard establishing baseline cybersecurity requirements for AI systems.

Approved by national standards bodies, the framework becomes the first globally applicable EN focused specifically on securing AI, extending its relevance beyond European markets.

The standard recognises that AI introduces security risks not found in traditional software. Threats such as data poisoning, indirect prompt injection and vulnerabilities linked to complex data management demand tailored defences instead of conventional approaches alone.

ETSI EN 304 223 combines established cybersecurity practices with targeted measures designed for the distinctive characteristics of AI models and systems.

Adopting a full lifecycle perspective, the ETSI framework defines thirteen principles across secure design, development, deployment, maintenance and end of life.

Alignment with internationally recognised AI lifecycle models supports interoperability and consistent implementation across existing regulatory and technical ecosystems.

ETSI EN 304 223 is intended for organisations across the AI supply chain, including vendors, integrators and operators, and covers systems based on deep neural networks, including generative AI.

Further guidance is expected through ETSI TR 104 159, which will focus on generative AI risks such as deepfakes, misinformation, confidentiality concerns and intellectual property protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

RCB to use AI cameras at Chinnaswamy Stadium for crowd management

The Royal Challengers Bengaluru (RCB) franchise has announced plans to install AI-enabled camera systems at M. Chinnaswamy Stadium in Bengaluru ahead of the upcoming Indian Premier League (IPL) season.

The AI cameras are intended to support stadium security teams by providing real-time crowd management, identifying high-density areas and aiding safer entry and exit flows.

The system will use computer vision and analytics to monitor spectators and alert authorities to potential bottlenecks or risks, helping security personnel intervene proactively. RCB officials say the technology is part of broader efforts to improve spectator experience and safety, particularly in large-crowd environments.

The move reflects the broader adoption of AI and video analytics tools in sports venues to enhance operational efficiency and public safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated song removed from Swedish rankings

Sweden has removed a chart-topping song from its official rankings after ruling it was mainly created using AI. The track had attracted millions of streams on Spotify within weeks.

Industry investigators found no public profile for the artist, later linking the song to executives at a music firm using AI tools. Producers insisted that technology merely assisted a human-led creative process.

Music organisations say AI-generated tracks threaten existing industry rules and creator revenues. The decision intensifies debate over how to regulate AI in cultural markets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

How autonomous vehicles shape physical AI trust

Physical AI is increasingly embedded in public and domestic environments, from self-driving vehicles to delivery robots and household automation. As intelligent machines begin to operate alongside people in shared spaces, trust emerges as a central condition for adoption instead of technological novelty alone.

Autonomous vehicles provide the clearest illustration of how trust must be earned through openness, accountability, and continuous engagement.

Self-driving systems address long-standing challenges such as road safety, congestion, and unequal access to mobility by relying on constant perception, rule-based behaviour, and fatigue-free operation.

Trials and early deployments suggest meaningful improvements in safety and efficiency, yet public confidence remains uneven. Social acceptance depends not only on performance outcomes but also on whether communities understand how systems behave and why specific decisions occur.

Dialogue plays a critical role at two levels. Ongoing communication among policymakers, developers, emergency services, and civil society helps align technical deployment with social priorities such as safety, accessibility, and environmental impact.

At the same time, advances in explainable AI allow machines to communicate intent and reasoning directly to users, replacing opacity with interpretability and predictability.

The experience of autonomous vehicles suggests a broader framework for physical AI governance centred on demonstrable public value, transparent performance data, and systems capable of explaining behaviour in human terms.

As physical AI expands into infrastructure, healthcare, and domestic care, trust will depend on sustained dialogue and responsible design rather than the speed of deployment alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Law schools urged to embed practical AI training in legal education

With AI tools now widely available to legal professionals, educators and practitioners argue that law schools should integrate practical AI instruction into curricula rather than leave students to learn informally.

The article describes a semester-long experiment in an Entrepreneurship Clinic where students were trained on legal AI tools from platforms such as Bloomberg Law, Lexis and Westlaw, with exercises designed to show both advantages and limitations of these systems.

In structured exercises, students used different AI products to carry out tasks like drafting, research and client communication, revealing that tools vary widely in capabilities and reinforcing the importance of independent legal judgement.

Educators emphasise that AI should be taught as a complement to legal reasoning, not a substitute, and that understanding how and when to verify AI outputs is essential for responsible practice.

The article concludes that clarifying the distinction between AI as a tool and as a crutch will help prepare future lawyers to use technology ethically and competently in both transactional work and litigation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft urges systems approach to AI skills in Europe

AI is increasingly reshaping European workplaces, though large-scale job losses have not yet materialised. Studies by labour bodies show that tasks change faster than roles disappear.

Policymakers and employers face pressure to expand AI skills while addressing unequal access to them. Researchers warn that the benefits and risks concentrate among already skilled workers and larger organisations.

Education systems across Europe are beginning to integrate AI literacy, including teacher training and classroom tools. Progress remains uneven between countries and regions.

Microsoft experts say workforce readiness will depend on evidence-based policy and sustained funding. Skills programmes alone may not offset broader economic and social disruption from AI adoption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Indian companies remain committed to AI spending

Almost all Indian companies plan to sustain AI spending even without near-term financial returns. A BCG survey shows 97 percent will keep investing, higher than the 94 percent global rate.

Corporate AI budgets in India are expected to rise to about 1.7 percent of revenue in 2026. Leaders see AI as a long-term strategic priority rather than a short-term cost.

Around 88 percent of Indian executives express confidence in AI generating positive business outcomes. That is above the global average of 82 percent, reflecting strong optimism among local decision-makers.

Despite enthusiasm, fewer Indian CEOs personally lead AI strategy than their global peers, and workforce AI skills lag international benchmarks. Analysts say talent and leadership alignment remain key as spending grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot