Agentic AI gains ground as GenAI maturity grows in public sector

Public sector organisations around the world are rapidly moving beyond experimentation with generative AI (GenAI), with up to 90% now planning to explore, pilot, or implement agentic AI systems within the next two years.

Capgemini’s latest global survey of 350 public sector agencies found that most already use or trial GenAI, while agentic AI is being recognised as the next step — enabling autonomous, goal-driven decision-making with minimal human input.

Unlike GenAI, which generates content subject to human oversight, agentic AI can act independently, creating new possibilities for automation and public service delivery.

Dr Kirti Jain of Capgemini explained that GenAI depends on human-in-the-loop (HITL) processes, where users review outputs before acting. By contrast, agentic AI completes the final step itself, representing a future phase of automation. However, data governance remains a key barrier to adoption.
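
To make the distinction concrete, here is a minimal, self-contained sketch of the two patterns Dr Jain describes: a GenAI flow with a human-in-the-loop review step, and an agentic loop that completes the final step itself. The helper functions are illustrative stubs, not any real system's API.

```python
# Minimal sketch contrasting human-in-the-loop (HITL) GenAI with an agentic loop.
# All helper functions are illustrative stubs, not any real product's API.

def draft_response(request: str) -> str:
    return f"Draft reply to: {request}"            # stand-in for a model call

def human_review(draft: str) -> str:
    print(f"Awaiting human sign-off: {draft}")     # a person checks the output before it is used
    return draft

def genai_with_hitl(request: str) -> str:
    """GenAI pattern: the model drafts, a human approves before anything is acted on."""
    return human_review(draft_response(request))

def agentic_workflow(goal: str) -> list[str]:
    """Agentic pattern: the system plans and executes steps without per-step sign-off."""
    steps = [f"{goal}: step {i}" for i in range(1, 4)]    # stand-in planner
    return [f"done -> {step}" for step in steps]          # the agent acts on its own plan

if __name__ == "__main__":
    print(genai_with_hitl("summarise the policy paper"))
    print(agentic_workflow("process a permit application"))
```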

Data sovereignty emerged as a leading concern for 64% of surveyed public sector leaders. Fewer than one in four said they had sufficient data to train reliable AI systems. Dr Jain emphasised that governance must be embedded from the outset — not added as an afterthought — to ensure data quality, accountability, and consistency in decision-making.

A proactive approach to governance offers the only stable foundation for scaling AI responsibly. Managing the full data lifecycle — from acquisition and storage to access and application — requires strict privacy and quality controls.

Significant risks arise when flawed AI-generated insights influence decisions affecting entire populations. Capgemini’s support for government agencies focuses on three areas: secure infrastructure, privacy-led data usability, and smarter, citizen-centric services.

EPA Victoria CTO Abhijit Gupta underscored the need for timely, secure, and accessible data as a prerequisite for AI in the public sector. Accuracy and consistency, Dr Jain noted, are essential whether outcomes are delivered by humans or machines. Governance, he added, should remain technology-agnostic yet agile.

With strong data foundations in place, only minor adjustments are needed to scale agentic AI capable of managing full decision-making cycles. Capgemini’s model of ‘active data governance’ aims to enable public sector AI to scale safely and sustainably.

Singapore was highlighted as a leading example of responsible innovation, driven by rapid experimentation and collaborative development. The AI Trailblazers programme, co-run with the private sector, is tackling over 100 real-world GenAI challenges through a test-and-iterate model.

Minister for Digital Josephine Teo recently reaffirmed Singapore’s commitment to sharing lessons and best practices in sustainable AI development. According to Dr Jain, the country’s success lies not only in rapid adoption, but in how AI is applied to improve services for citizens and society.

Human rights must anchor crypto design

Crypto builders face growing pressure to design systems that protect fundamental human rights from the outset. As concerns mount over surveillance, state-backed ID systems, and AI impersonation, experts warn that digital infrastructure must not compromise individual freedom.

Privacy-by-default, censorship resistance, and decentralised self-custody are no longer idealistic features — they are essential for any credible Web3 system. Critics argue that many current tools merely replicate traditional power structures, offering centralisation disguised as innovation.

The collapse of platforms like FTX has only strengthened calls for human-centric solutions.

New approaches are needed to ensure people can prove their personhood online without relying on governments or corporations. Digital inclusion depends on verification systems that are censorship-resistant, privacy-preserving and accessible.

Likewise, self-custody must evolve beyond fragile key backups and complex interfaces to empower everyday users.

While embedding values in code brings ethical and political risks, avoiding the issue could lead to greater harm. For the promise of Web3 to be realised, rights must be a design priority — not an afterthought.

Perplexity CEO predicts that AI browser could soon replace recruiters and assistants

Perplexity AI CEO Aravind Srinivas believes that the company’s new AI-powered browser, Comet, could soon replace two key white-collar roles in most offices: recruiters and executive assistants.

Speaking on The Verge podcast, Srinivas explained that with the integration of more advanced reasoning models like GPT-5 or Claude 4.5, Comet will be able to handle tasks traditionally assigned to these positions.

He also described how a recruiter’s week-long workload could be reduced to a single AI prompt.

From sourcing candidates to scheduling interviews, tracking responses in Google Sheets, syncing calendars, and even briefing users ahead of meetings, Comet is built to manage the entire process—often without any follow-up input.
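
As a purely illustrative sketch of how one prompt might fan out into that kind of task list, the snippet below models the workflow with hypothetical names; it is not based on Comet's actual design or API.

```python
# Illustrative sketch of a single prompt fanning out into recruiting tasks.
# Every class, field, and task string here is hypothetical.

from dataclasses import dataclass, field

@dataclass
class RecruitingRun:
    prompt: str
    tasks: list[str] = field(default_factory=list)
    log: list[str] = field(default_factory=list)

    def plan(self) -> None:
        # A real agent would derive these steps from the prompt with a language model.
        self.tasks = [
            "source candidates matching the role",
            "schedule interviews and sync calendars",
            "track responses in a spreadsheet",
            "brief the hiring manager before each meeting",
        ]

    def run(self) -> None:
        for task in self.tasks:
            self.log.append(f"completed: {task}")   # executed without follow-up input

run = RecruitingRun(prompt="Hire a data analyst by the end of the month")
run.plan()
run.run()
print("\n".join(run.log))
```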

The tool remains in an invite-only phase and is currently available to premium users.

Srinivas also framed Comet as the early foundation of a broader AI operating system for knowledge workers, enabling users to issue natural language commands for complex tasks.

He emphasised the importance of adopting AI early, warning that those who fail to keep pace with the technology’s rapid growth—where breakthroughs arrive every few months—risk being left behind in the job market.

In a separate discussion, he urged younger generations to reduce time spent scrolling on Instagram and instead focus on mastering AI tools. According to him, the shift is inevitable, and those who embrace it now will hold a long-term professional advantage.

Stay True To The Act campaign defends music rights

More than 30 European musicians have launched a united video campaign urging the European Commission to preserve the integrity of the EU AI Act.

The Stay True To The Act campaign calls on policymakers to enforce transparency and uphold copyright protections.

Artists, including Spanish singer-songwriter Álex Ubago and Poland’s Eurovision 2025 entrant Justyna Steczkowska, have voiced concern over the unauthorised use of their work to train AI models. They demand the right to be informed and the power to refuse such usage.

The EU AI Act, passed in 2024, includes provisions requiring developers to disclose the content used in AI training. However, as implementation plans develop, artists fear the law may be diluted, weakening protections for creators.

The campaign appeals for vigorous enforcement of the Act’s original principles: transparency, copyright control and fair innovation. Artists say AI and music can coexist in Europe only if ethical boundaries are upheld.

Generative AI now powers 20% of new Steam games

Nearly 20 percent of video games released on Steam in 2025 include generative AI, according to a new report by Totally Human Media.

The report, based on data gathered from Steam, states that some 7,818 games currently disclose using generative AI. The figure represents roughly 7 percent of the platform’s entire catalogue. Of games launched in 2025, nearly one in five openly discloses using AI tools.

Compared to 2024, this marks a nearly 700 percent increase in generative AI adoption, reflecting a broader industry trend towards automation and machine-generated content.

Among the most prominent titles is My Summer Car, a vehicle simulation game with over 2.5 million copies sold. The developers disclosed that the game includes ‘some AI generated paintings found inside the main house’.

Valve, the company behind Steam, began requiring game developers to disclose AI use in January 2024. While the company did not comment on the findings, the policy has enabled public tracking of AI adoption across the platform.

Community reaction to the trend has been mixed. On Reddit, many users said they would automatically add AI-driven games to their ignore lists. One commenter wrote, ‘We need to tag them so they can be an ignored category.’ Others expressed disappointment in indie developers turning to generative tools over human artists.

Some users acknowledged the complexity of the issue. A typical comment noted that while AI usage in minor elements like UI assets might be acceptable, reliance on AI for core content raises questions about value and originality. One post read, ‘What am I paying for if it’s all AI? I agree with that sentiment.’

Steam’s Next Fest, which showcases upcoming releases, drew criticism from some players who said they lost interest in promising titles upon discovering their use of generative AI.

Despite user backlash, industry momentum continues to build. Many developers see AI as a means to streamline asset creation and reduce production costs, though concerns about quality, ethics, and employment remain central to the debate.

Experts link Qantas data breach to AI voice impersonation

Cybersecurity experts believe criminals may have used AI-generated voice deepfakes to breach Qantas systems, potentially deceiving contact centre staff in Manila. The breach, which has been linked to a group known as Scattered Spider, affected nearly six million customers.

Qantas confirmed the breach after detecting suspicious activity on a third-party platform. Stolen data included names, phone numbers, and addresses—but no financial details. The airline has not confirmed whether voice impersonation was involved.

Experts point to Scattered Spider’s history of using synthetic voices to trick help desk staff into handing over credentials. Former FBI agent Adam Marré said the technique, known as vishing, matches the group’s typical methods and links them to The Com, a cybercrime collective.

Other members of The Com have targeted companies like Salesforce through similar tactics. Qantas reportedly warned contact centre staff shortly before the breach, citing a threat advisory connected to Scattered Spider.

Google and CrowdStrike reported that the group frequently impersonates employees over the phone to bypass multi-factor authentication and reset passwords. The FBI has warned that Scattered Spider is now targeting airlines.

Qantas says its core systems remain secure and has not confirmed receiving a ransom demand. The airline is cooperating with authorities and urging affected customers to watch for scams using their leaked information.

Cybersecurity firm Trend Micro notes that voice deepfakes are now easy to produce, with convincing audio clips available for as little as $5. The deepfakes can mimic language, tone, and emotion, making them powerful tools for deception.

Experts recommend biometric verification, synthetic signal detection, and real-time security challenges to counter deepfakes. Employee training and multi-factor authentication remain essential defences.
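
One of those defences, a real-time challenge delivered out of band before any credential reset, can be sketched in a few lines. The flow below is illustrative only, with hypothetical names, and is not tied to any vendor's product.

```python
# Illustrative sketch of an out-of-band, real-time challenge for help-desk
# password resets. The caller's voice is never trusted on its own: the reset
# proceeds only if the caller reads back a one-time code sent to a channel
# already registered to the employee. All names here are hypothetical.

import secrets

def issue_challenge() -> str:
    """Generate a six-digit one-time code to send via a registered channel."""
    return f"{secrets.randbelow(10**6):06d}"

def approve_reset(claimed_identity: str, code_entered: str, code_sent: str) -> bool:
    """Approve the reset only when the out-of-band challenge is answered correctly."""
    if secrets.compare_digest(code_entered, code_sent):
        print(f"Reset approved for {claimed_identity}")
        return True
    print("Reset denied: challenge failed")
    return False

code = issue_challenge()                     # delivered to the employee's registered device
approve_reset("j.doe", code_entered=code, code_sent=code)
```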

Recent global cases illustrate the risk. In one instance, a deepfake mimicking US Senator Marco Rubio attempted to access sensitive systems. Other attacks involved cloned voices of US political figures Joe Biden and Susie Wiles.

As voice content becomes more publicly available, experts warn that anyone sharing audio online could become a target for AI-driven impersonation.

South Korea’s new Science Minister pledges AI-led national transformation

South Korea’s new Science and ICT Minister, Bae Kyung-hoon, has pledged to turn the nation into one of the world’s top three AI powerhouses.

Instead of following outdated methods, Bae outlined a bold national strategy centred on AI, science and technology, aiming to raise Korea’s potential growth rate to 3 per cent and secure a global economic leadership position.

Bae, a leading AI expert and former president of LG AI Research, officially assumed office on Thursday.

Drawing from experience developing hyperscale AI models like LG’s Exaone, he emphasised the need to build a unique competitive advantage rooted in AI transformation, talent development and technological innovation.

Rather than focusing only on industrial growth, Bae’s policy agenda targets a broad AI ecosystem, revitalised research and development, world-class talent nurturing, and addressing issues affecting daily life.

His plans include establishing AI-centred universities, enhancing digital infrastructure, promoting AI semiconductors, restoring grassroots research funding, and expanding consumer rights in telecommunications.

With these strategies, Bae aims to make AI accessible to all citizens instead of limiting it to large corporations or research institutes. His vision is for South Korea to lead in AI development while supporting social equity, cybersecurity, and nationwide innovation.

Meta faces fresh EU backlash over Digital Markets Act non-compliance

Meta is again under EU scrutiny after failing to fully comply with the bloc’s Digital Markets Act (DMA), despite a €200 million fine earlier this year.

The European Commission says Meta’s current ‘pay or consent’ model still falls short and could trigger further penalties. A formal warning is expected, with recurring fines likely if the company does not adjust its approach.

The DMA imposes strict rules on major tech platforms to reduce market dominance and protect digital fairness. While Meta claims its model meets legal standards, the Commission says progress has been minimal.

Over the past year, Meta has faced nearly €1 billion in EU fines, including €798 million for linking Facebook Marketplace to its central platform. The new case adds to years of tension over data practices and user consent.

The ‘pay or consent’ model offers users a choice between paying for privacy or accepting targeted ads. Regulators argue this does not meet the threshold for genuine consent and mirrors Meta’s past GDPR tactics.

Privacy advocates have long criticised Meta’s approach, saying users are left with no meaningful alternatives. Internal documents show Meta lobbied against privacy reforms and warned governments about reduced investment.

The Commission now holds greater power under the DMA than it did with GDPR, allowing for faster, centralised enforcement and fines of up to 10% of global turnover.

Apple has already been fined €500 million, and Google is also under investigation. The EU’s rapid action signals a stricter stance on platform accountability. The message for Meta and other tech giants is clear: partial compliance is no longer enough to avoid serious regulatory consequences.

Pennsylvania criminalises malicious deepfakes under new digital forgery law

Governor Shapiro has enacted a new statute enhancing Pennsylvania’s legal stance on AI-generated content by defining deceptive deepfakes as digital forgery.

The law criminalises the creation and distribution of such content, particularly when it is used to deceive, signalling a proactive response to growing online threats.

The legislation differentiates between uses of deepfakes: non-consensual impersonation will result in misdemeanour charges, while cases involving fraudulent intent, such as financial scams or political manipulation, are now classified as third-degree felonies.

Support for the bill was bipartisan and overwhelming in the state legislature. Its sponsors emphasised that while it deters harmful digital impersonation, it also carefully safeguards legitimate speech, including parody, satire, and artistic expression.

With Pennsylvania now among the growing number of states implementing deepfake regulations, this development aligns with a national trend to regulate AI-generated digital forgeries. It complements earlier state-level laws and federal initiatives to curb AI’s misuse without stifling innovation.

Women see AI as more harmful across life settings

Women are showing more scepticism than men when it comes to AI, particularly regarding its ethics, fairness and transparency.

A national study from Georgetown University, Boston University and the University of Vermont found that women were more concerned about AI’s risks in decision-making. Concerns were especially prominent around AI tools used in the workplace, such as hiring platforms and performance review systems.

Bias may be introduced when such tools rely on historical data, which often underrepresents women and other marginalised groups. The study also found that gender influenced compliance with workplace rules surrounding AI use, especially in restrictive environments.

When AI use was banned, women were more likely than men to follow the rules. Usage jumped when tools were explicitly permitted: in those cases, over 80% of both women and men reported using the tools.

Women were generally more wary of AI’s impact across all areas of life — not just in the professional sphere. From personal settings to public life, survey respondents who identified as women consistently viewed AI as more harmful than beneficial.

The study, conducted via Qualtrics in August 2023, surveyed a representative US sample. Participants were 45 years old on average, and over half identified as women, spanning a range of educational and professional backgrounds.

The research comes amid wider concerns in the AI field about ethics and accountability, often led by women researchers. High-profile cases include Google’s dismissal of Timnit Gebru and later Margaret Mitchell, both of whom raised ethical concerns about large language models.

The study’s authors concluded that building public trust in AI may require clearer policies and greater transparency in how systems are designed. They also highlighted the importance of increasing diversity among those developing AI tools to ensure more inclusive outcomes.
