OpenAI leadership battles talent exodus

OpenAI is scrambling to retain its top researchers after Meta launched a bold recruitment drive. Chief Research Officer Mark Chen likened the situation to a break-in at home and reassured staff that leadership is actively addressing the issue.

Meta has reportedly offered signing bonuses of up to $100 million to entice senior OpenAI staff. Chen and CEO Sam Altman have responded by reviewing compensation packages and exploring creative retention incentives, while assuring staff that the process will be fair.

The recruitment push comes as Meta intensifies efforts in AI, investing heavily in its superintelligence lab and targeting experts from OpenAI, Google DeepMind, and Scale AI.

OpenAI has encouraged staff to resist pressure to make quick decisions, especially during its scheduled recharge week, emphasising the importance of the broader mission over short-term gains.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Taiwan leads in AI defence of democracy

Taiwan has emerged as a global model for using AI to defend democracy, earning recognition for its success in combating digital disinformation.

The island joined a new international coalition led by the International Foundation for Electoral Systems to strengthen election integrity through AI collaboration.

Constantly targeted by foreign actors, Taiwan has developed proactive digital defence systems that serve as blueprints for other democracies.

Its rapid response strategies and tech-forward approach have made it a leader in countering AI-powered propaganda.

While many nations are only beginning to grasp the risks posed by AI to democratic systems, Taiwan has already faced these threats and adapted.

Its approach now shapes global policy discussions around safeguarding elections in the digital era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI governance through the lens of magical realism

AI today straddles the line between the extraordinary and the mundane, a duality that evokes the spirit of magical realism—a literary genre where the impossible blends seamlessly with the real. Speaking at the 20th Internet Governance Forum (IGF) in Lillestrøm, Norway, Jovan Kurbalija proposed that we might better understand the complexities of AI governance by viewing it through this narrative lens.

Like Gabriel García Márquez’s floating characters or Salman Rushdie’s prophetic protagonists, AI’s remarkable feats—writing novels, generating art, mimicking human conversation—are increasingly accepted without question, despite their inherent strangeness.

Kurbalija argues that AI, much like the supernatural in literature, doesn’t merely entertain; it reveals and shapes profound societal realities. Algorithms quietly influence politics, reshape economies, and even redefine relationships.

Just as magical realism uses the extraordinary to comment on power, identity, and truth, AI forces us to confront new ethical dilemmas: Who owns AI-created content? Can consent be meaningfully given to machines? And does predictive technology amplify societal biases?

The risks of AI—job displacement, misinformation, surveillance—are akin to the symbolic storms of magical realism: always present, always shaping the backdrop. Governance, then, must walk a fine line between stifling innovation and allowing unchecked technological enchantment.

Kurbalija warns against ‘black magic’ policy manipulation cloaked in humanitarian language and urges regulators to focus on real-world impacts while resisting the temptation of speculative fears. Ultimately, AI isn’t science fiction—it’s magical realism in motion.

As we build policies and frameworks to govern it, we must ensure this magic serves humanity rather than distorting our sense of what is real, ethical, and just. In this unfolding story, the challenge is not only technological, but deeply human.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gartner warns that more than 40 percent of agentic AI projects could be cancelled by 2027

More than 40% of agentic AI projects will likely be cancelled by the end of 2027 due to rising costs, limited business value, and poor risk control, according to research firm Gartner.

These cancellations are expected as many early-stage initiatives remain trapped in hype, often misapplied and far from ready for real-world deployment.

Gartner analyst Anushree Verma warned that most agentic AI efforts are still at the proof-of-concept stage. Instead of focusing on scalable production, many companies have been distracted by experimental use cases, underestimating the cost and complexity of full-scale implementation.

A recent poll by Gartner found that only 19% of organisations had made significant investments in agentic AI, while 31% were undecided or waiting.

Much of the current hype is fuelled by vendors engaging in ‘agent washing’ — marketing existing tools like chatbots or RPA under a new agentic label without offering true agentic capabilities.

Out of thousands of vendors, Gartner believes only around 130 offer legitimate agentic solutions. Verma noted that most agentic models today lack the intelligence to deliver strong returns or follow complex instructions independently.

Still, agentic AI holds long-term promise. Gartner expects 15% of daily workplace decisions to be handled autonomously by 2028, up from zero in 2024. Moreover, one-third of enterprise applications will include agentic capabilities by then.

However, to succeed, organisations must reimagine workflows from the ground up, focusing on enterprise-wide productivity instead of isolated task automation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta hires top OpenAI researcher for AI superintelligence push

Meta has reportedly hired AI researcher Trapit Bansal, who previously worked closely with OpenAI co-founder Ilya Sutskever on reinforcement learning and co-created the o1 reasoning model.

Bansal joins Meta’s ambitious superintelligence team, which is focused on pushing AI reasoning capabilities further.

Former Scale AI CEO Alexandr Wang leads the new team, brought in after Meta invested $14.3 billion in the AI data labelling company.

Alongside Bansal, several other notable figures have recently joined, including three OpenAI researchers from Zurich, former Google DeepMind expert Jack Rae, and a senior machine learning lead from Sesame AI.

Meta CEO Mark Zuckerberg is accelerating AI recruitment by negotiating with prominent names like former GitHub CEO Nat Friedman and Safe Superintelligence co-founder Daniel Gross.

Despite these aggressive efforts, OpenAI CEO Sam Altman revealed that even $100 million joining bonuses have failed to lure key staff away from his firm.

Zuckerberg has also explored acquiring startups such as Sutskever’s Safe Superintelligence and Perplexity AI, further highlighting Meta’s urgency in catching up in the generative AI race.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

BT report shows rise in cyber attacks on UK small firms

A BT report has found that 42% of small businesses in the UK suffered a cyberattack in the past year. The study also revealed that 67% of medium-sized firms were targeted, while many lacked basic security measures or staff training.

Phishing was named the most common threat, hitting 85% of businesses in the UK, and ransomware incidents have more than doubled. BT’s new training programme aims to help SMEs take practical steps to reduce risks, covering topics like AI threats, account takeovers and QR code scams.

Tris Morgan from BT highlighted that SMEs face serious risks from cyber attacks, which could threaten their survival. He stressed that security is a necessary foundation and can be achieved without vast resources.

The report follows wider warnings on AI-enabled cyber threats, with other studies showing that few firms feel prepared for these risks. BT’s training is part of its mission to help businesses grow confidently despite digital dangers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IGF 2025: Africa charts a sovereign path for AI governance

African leaders at the Internet Governance Forum (IGF) 2025 in Oslo called for urgent action to build sovereign and ethical AI systems tailored to local needs. Hosted by the German Federal Ministry for Economic Cooperation and Development (BMZ), the session brought together voices from government, civil society, and private enterprises.

Moderated by Ashana Kalemera, Programmes Manager at CIPESA, the discussion focused on ensuring AI supports democratic governance in Africa. ‘We must ensure AI reflects our realities,’ Kalemera said, emphasising fairness, transparency, and inclusion as guiding principles.

Pollicy Executive Director Neema Iyer warned that AI can harm governance through surveillance, disinformation, and political manipulation. ‘Civil society must act as watchdogs and storytellers,’ she said, urging public interest impact assessments and grassroots education.

Representing South Africa, Mlindi Mashologu stressed the need for transparent governance frameworks rooted in constitutional values. ‘Policies must be inclusive,’ he said, highlighting explainability, data bias removal, and citizen oversight as essential components of trustworthy AI.

Lacina Koné, CEO of Smart Africa, called for urgent action to avoid digital dependency. ‘We cannot be passively optimistic. Africa must be intentional,’ he stated. Over 1,000 African startups rely on foreign AI models, creating sovereignty risks.

Koné emphasised that Africa should focus on beneficial AI, not the most powerful. He highlighted agriculture, healthcare, and education as sectors where local AI could be transformative. ‘It’s about opportunity for the many, not just the few,’ he said.

From Mauritania, Matchiane Soueid Ahmed shared her country’s experience developing a national AI strategy. Challenges include poor rural infrastructure, technical capacity gaps, and lack of institutional coordination. ‘Sovereignty is not just territorial—it’s digital too,’ she noted.

Shikoh Gitau, CEO of KALA in Kenya, brought a private sector perspective. ‘We must move from paper to pavement,’ she said. Her team runs an AI literacy campaign across six countries, training teachers directly through their communities.

Gitau stressed the importance of enabling environments and blended financing. ‘Governments should provide space, and private firms must raise awareness,’ she said. She also questioned imported frameworks: ‘What definition of democracy are we applying?’

Audience members from Gambia, Ghana, and Liberia raised questions about harmonisation, youth fears over job losses, and AI readiness. Koné responded that Smart Africa is benchmarking national strategies and promoting convergence without erasing national sovereignty.

Though 19 African countries have published AI strategies, speakers noted that implementation remains slow. Practical action—such as infrastructure upgrades, talent development, and public-private collaboration—is vital to bring these frameworks to life.

The panel underscored the need to build AI systems prioritising inclusion, utility, and human rights. Investments in digital literacy, ethics boards, and regulatory sandboxes were cited as key tools for democratic AI governance.

Kalemera concluded, ‘It’s not yet Uhuru for AI in Africa—but with the right investments and partnerships, the future is promising.’ The session reflected cautious optimism and a strong desire for Africa to shape its AI destiny.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

EU urged to pause AI Act rollout

The digital sector is urging EU leaders to delay the AI Act, citing missing guidance and legal uncertainty. Industry group CCIA Europe warns that pressing ahead could damage AI innovation and stall the bloc’s economic ambitions.

The AI Act’s rules for general-purpose AI models are set to apply in August, but key frameworks are incomplete. Concerns have grown as the European Commission risks missing deadlines while the region seeks a €3.4 trillion AI-driven economic boost by 2030.

CCIA Europe is calling on EU heads of state to order a pause in implementation so that companies have time to comply. Such a delay would allow final standards to be established, offering developers clarity and supporting AI competitiveness.

Failure to adjust the timeline could leave Europe struggling to lead in AI, according to CCIA Europe’s leadership. A rushed approach, they argue, risks harming the very innovation the AI Act aims to promote.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Infosys chairman warns of global risks from tariffs and AI

Infosys chairman Nandan Nilekani has warned of mounting global uncertainty driven by tariff wars, AI and the ongoing energy transition.

At the company’s 44th annual general meeting, he urged businesses to de-risk sourcing and diversify supply chains as geopolitical trade tensions reshape global commerce.

He described a ‘perfect storm’ of converging challenges pushing the world away from a single global market and towards fragmented trade blocs. As firms navigate the shift, they must choose between regions and adopt more strategic, resilient supply networks.

Addressing AI, Nilekani acknowledged the disruption it may bring to the workforce but framed it as an opportunity for digital transformation. He said Infosys is investing in both ‘AI foundries’ for innovation and ‘AI factories’ for scale, with over 275,000 employees already trained in AI technologies.

Energy transition was also flagged as a significant uncertainty, as the future depends on breakthroughs in renewable sources like solar, wind and hydrogen. Nilekani stressed that every business must now navigate rapid technological and operational change before it can move confidently into an unpredictable future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google releases free Gemini CLI tool for developers

Google has introduced Gemini CLI, a free, open-source AI tool that connects developers directly to its Gemini AI models. The new agentic utility allows developers to request debugging, generate code, and run commands using natural language within their terminal environment.

Built as a lightweight interface, Gemini CLI provides a streamlined way to interact with Gemini. While its coding features stand out, Google says the tool handles content creation, deep research, and complex task management across various workflows.

Gemini CLI uses Gemini 2.5 Pro for coding and reasoning tasks by default, but it can also connect to other AI models, such as Imagen and Veo, for image and video generation. It supports the Model Context Protocol (MCP) and integrates with Gemini Code Assist.

Moreover, the tool is available on Windows, macOS, and Linux, offering developers a free usage tier. Access through Vertex AI or AI Studio is available on a pay-as-you-go basis for advanced setups involving multiple agents or custom models.
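As a rough sketch of how a developer might try the tool: the npm package name and the non-interactive flag below are based on Google’s open-source release and may change, so the official repository should be treated as the source of truth.

```shell
# Install the CLI globally (requires a recent Node.js runtime)
npm install -g @google/gemini-cli

# Launch the interactive agent from inside a project directory;
# the first run prompts for a Google account login for the free tier
cd my-project
gemini

# Non-interactive use: pass a one-off prompt from the terminal
gemini -p "Summarise what this repository does"
```

Because the tool is agentic, it can propose and run commands in the working directory, so it is worth starting it in a scratch project before pointing it at production code.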

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!