IGF 2025: Africa charts a sovereign path for AI governance

African leaders at the Internet Governance Forum (IGF) 2025 in Oslo called for urgent action to build sovereign and ethical AI systems tailored to local needs. Hosted by the German Federal Ministry for Economic Cooperation and Development (BMZ), the session brought together voices from government, civil society, and private enterprises.

Moderated by Ashana Kalemera, Programmes Manager at CIPESA, the discussion focused on ensuring AI supports democratic governance in Africa. ‘We must ensure AI reflects our realities,’ Kalemera said, emphasising fairness, transparency, and inclusion as guiding principles.

Neema Iyer, Executive Director of the civic tech organisation Pollicy, warned that AI can harm governance through surveillance, disinformation, and political manipulation. ‘Civil society must act as watchdogs and storytellers,’ she said, urging public interest impact assessments and grassroots education.

Representing South Africa, Mlindi Mashologu stressed the need for transparent governance frameworks rooted in constitutional values. ‘Policies must be inclusive,’ he said, highlighting explainability, data bias removal, and citizen oversight as essential components of trustworthy AI.

Lacina Koné, CEO of Smart Africa, called for urgent action to avoid digital dependency. ‘We cannot be passively optimistic. Africa must be intentional,’ he stated. He noted that over 1,000 African startups rely on foreign AI models, creating sovereignty risks.

Koné emphasised that Africa should focus on beneficial AI rather than the most powerful models, highlighting agriculture, healthcare, and education as sectors where locally developed AI could be transformative. ‘It’s about opportunity for the many, not just the few,’ he said.

From Mauritania, Matchiane Soueid Ahmed shared her country’s experience developing a national AI strategy. Challenges include poor rural infrastructure, technical capacity gaps, and lack of institutional coordination. ‘Sovereignty is not just territorial—it’s digital too,’ she noted.

Shikoh Gitau, CEO of Qhala in Kenya, brought a private sector perspective. ‘We must move from paper to pavement,’ she said. Her team runs an AI literacy campaign across six countries, training teachers directly through their communities.

Gitau stressed the importance of enabling environments and blended financing. ‘Governments should provide space, and private firms must raise awareness,’ she said. She also questioned imported frameworks: ‘What definition of democracy are we applying?’

Audience members from Gambia, Ghana, and Liberia raised questions about regulatory harmonisation, young people’s fears of job losses, and AI readiness. Koné responded that Smart Africa is benchmarking national strategies and promoting convergence without erasing national sovereignty.

Though 19 African countries have published AI strategies, speakers noted that implementation remains slow. Practical action—such as infrastructure upgrades, talent development, and public-private collaboration—is vital to bring these frameworks to life.

The panel underscored the need to build AI systems prioritising inclusion, utility, and human rights. Investments in digital literacy, ethics boards, and regulatory sandboxes were cited as key tools for democratic AI governance.

Kalemera concluded, ‘It’s not yet Uhuru for AI in Africa—but with the right investments and partnerships, the future is promising.’ The session reflected cautious optimism and a strong desire for Africa to shape its AI destiny.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

EU urged to pause AI Act rollout

The digital sector is urging EU leaders to delay the AI Act, citing missing guidance and legal uncertainty. Industry group CCIA Europe warns that pressing ahead could damage AI innovation and stall the bloc’s economic ambitions.

The AI Act’s rules for general-purpose AI models are set to apply in August, but key frameworks are incomplete. Concerns have grown as the European Commission risks missing deadlines while the region seeks a €3.4 trillion AI-driven economic boost by 2030.

CCIA Europe is calling on EU heads of state to order a pause in implementation so that companies have time to comply. Such a delay would allow final standards to be completed, offering developers clarity and supporting AI competitiveness.

Failure to adjust the timeline could leave Europe struggling to lead in AI, according to CCIA Europe’s leadership. A rushed approach, they argue, risks harming the very innovation the AI Act aims to promote.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Infosys chairman warns of global risks from tariffs and AI

Infosys chairman Nandan Nilekani has warned of mounting global uncertainty driven by tariff wars, AI and the ongoing energy transition.

At the company’s 44th annual general meeting, he urged businesses to de-risk sourcing and diversify supply chains as geopolitical trade tensions reshape global commerce.

He described a ‘perfect storm’ of converging challenges pushing the world away from a single global market and towards fragmented trade blocs. As firms navigate the shift, they must choose between regions and adopt more strategic, resilient supply networks.

Addressing AI, Nilekani acknowledged the disruption it may bring to the workforce but framed it as an opportunity for digital transformation. He said Infosys is investing in both ‘AI foundries’ for innovation and ‘AI factories’ for scale, with over 275,000 employees already trained in AI technologies.

The energy transition was also flagged as a significant uncertainty, with the future hinging on breakthroughs in renewable sources such as solar, wind and hydrogen. Nilekani stressed that all businesses must navigate rapid technological and operational change before they can move confidently into an unpredictable future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google releases free Gemini CLI tool for developers

Google has introduced Gemini CLI, a free, open-source AI tool that connects developers directly to its Gemini AI models. The new agentic utility allows developers to request debugging, generate code, and run commands using natural language within their terminal environment.

Built as a lightweight interface, Gemini CLI provides a streamlined way to interact with Gemini. While its coding features stand out, Google says the tool handles content creation, deep research, and complex task management across various workflows.

By default, Gemini CLI uses Gemini 2.5 Pro for coding and reasoning tasks, but it can also connect to other models, such as Imagen and Veo, for image and video generation. It supports the Model Context Protocol (MCP) and integrates with Gemini Code Assist.

The tool is available on Windows, macOS, and Linux, offering developers a free usage tier. Access through Vertex AI or AI Studio is available on a pay-as-you-go basis for advanced setups involving multiple agents or custom models.
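For developers who want to script the tool rather than use it interactively, a minimal sketch along the following lines is possible. It assumes the CLI is already installed (for example via npm) and that it accepts a -p/--prompt flag for one-shot prompts; check gemini --help for the exact options in your installed version.

```python
# Minimal sketch: driving the Gemini CLI from a script instead of typing
# into the terminal. Assumes the CLI is installed and that a -p/--prompt
# flag triggers a single non-interactive run; verify with `gemini --help`.
import subprocess

def ask_gemini(prompt: str) -> str:
    """Send one natural-language request to the Gemini CLI and return its reply."""
    result = subprocess.run(
        ["gemini", "-p", prompt],  # assumed one-shot flag
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(ask_gemini("Summarise what the main module in this project does."))
```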

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI drives fall in graduate jobs

According to new figures from Indeed, AI adoption across industries has contributed to a steep drop in graduate job listings. The jobs platform reported a one-third fall in advertised roles for recent graduates, the lowest level seen in almost a decade.

Major professional services firms have significantly scaled back their graduate intakes in response to shifting labour demands. KPMG, Deloitte, EY and PwC all reported reductions, with KPMG cutting its graduate cohort by a third.

The UK government has pledged to improve the nation’s AI skills through partnerships to upskill 7.5 million workers. Prime Minister Keir Starmer announced the plan during London Tech Week as part of efforts to prepare for an AI-driven economy.

Concerns over AI replacing human roles were highlighted in a controversial ad campaign by Californian firm Artisan, which sparked complaints to the UK’s Advertising Standards Authority. The campaign’s slogan urged companies to stop hiring humans.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bosch calls for balanced AI rules in Europe

Bosch CEO Stefan Hartung has cautioned that Europe could slow its progress in AI by imposing too many regulations. Speaking at a tech conference in Stuttgart, he argued that strict and unclear rules make the region less attractive for innovation.

Bosch, which holds the largest number of AI patents in Europe, plans to invest 2.5 billion euros in AI development by the end of 2027. The company is focusing on AI solutions for autonomous vehicles and industrial efficiency.

Hartung urged lawmakers to focus on essential regulations rather than attempting to control every aspect of technological progress. He warned that over-regulation could hinder Europe’s global competitiveness, particularly as the US and the EU ramp up AI investments.

The warning follows significant funding announcements, with the US committing up to 500 billion dollars and the EU planning to mobilise 200 billion euros for AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Top 7 AI agents transforming business in 2025

AI agents are no longer a futuristic concept — they’re now embedded in the everyday operations of major companies across sectors.

From customer service to data analysis, AI-powered agents transform workflows by handling tasks like scheduling, reporting, and decision-making with minimal human input.

Unlike simple chatbots, today’s AI agents understand context, follow multi-step instructions, and integrate seamlessly with business tools. Google’s Gemini Agents, IBM’s Watsonx Orchestrate, Microsoft Copilot, and OpenAI’s Operator are among the tools reshaping how businesses function.

These systems interpret goals and act on behalf of employees, boosting productivity without needing constant prompts.
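As a purely illustrative sketch (not drawn from any of the platforms above), the snippet below shows the basic loop that separates an agent from a one-shot chatbot: given a goal, it repeatedly picks a tool, runs it, and stops when the plan is complete. The tool names and planning step are hypothetical stand-ins for what a production system would delegate to an LLM.

```python
# Illustrative only: a toy agent loop with hypothetical business "tools".
# A real agent would call an LLM to plan the next step and choose tools.
from typing import Callable, Optional

TOOLS: dict[str, Callable[[str], str]] = {
    "schedule_meeting": lambda arg: f"Meeting scheduled: {arg}",
    "draft_report": lambda arg: f"Report drafted on: {arg}",
}

def plan_next_step(goal: str, done: list[str]) -> Optional[tuple[str, str]]:
    """Stand-in planner: a real agent would ask a model what to do next."""
    steps = [("schedule_meeting", "quarterly review"), ("draft_report", goal)]
    return steps[len(done)] if len(done) < len(steps) else None

def run_agent(goal: str) -> list[str]:
    """Keep executing tools until the planner reports the goal is met."""
    done: list[str] = []
    while (step := plan_next_step(goal, done)) is not None:
        tool, arg = step
        done.append(TOOLS[tool](arg))  # execute the chosen tool and record the result
    return done

print(run_agent("Q3 sales performance"))
```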

Other leading platforms include Amelia, known for its enterprise-grade capabilities in finance and telecom; Claude by Anthropic, focused on safe and transparent reasoning; and North by Cohere, which delivers sector-specific AI for clients like Oracle and SAP.

Many of these tools offer no-code or low-code setups, enabling faster adoption across HR, finance, customer support, and more.

While most agents aren’t entirely autonomous, they’re designed to perform meaningful work and evolve with feedback.

The rise of agentic AI marks a significant shift in workplace automation as businesses move beyond experimentation toward real-world implementation, one workflow at a time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and the future of work: Global forum highlights risks, promise, and urgent choices

At the 20th Internet Governance Forum held in Lillestrøm, Norway, global leaders, industry experts, and creatives gathered for a high-level session exploring how AI is transforming the world of work. While the tone was broadly optimistic, participants wrestled with difficult questions about equity, regulation, and the ethics of data use.

AI’s capacity to enhance productivity, reshape industries, and bring solutions to health, education, and agriculture was celebrated, but sharp divides emerged over how to govern and share its benefits. Concrete examples showcased AI’s positive impact. Norway’s government highlighted AI’s role in green energy and public sector efficiency, while Lesotho’s minister shared how AI helps detect tuberculosis and support smallholder farmers through localised apps.

Speakers described how AI can address systemic shortfalls in healthcare by reducing documentation burdens and enabling earlier diagnosis. Corporate representatives from Meta and OpenAI showcased tools that personalise education, assist the visually impaired, and democratise advanced technology through open-source platforms.

Joseph Gordon-Levitt at IGF 2025

Yet, concerns about fairness and data rights loomed large. Actor and entrepreneur Joseph Gordon-Levitt delivered a pointed critique of tech companies using creative work to train AI without consent or compensation.

He called for economic systems that reward human contributions, warning that failing to do so risks eroding creative and financial incentives. This argument underscored broader concerns about job displacement, automation, and the growing digital divide, especially among women and marginalised communities.

Debates also exposed philosophical rifts between regulatory approaches. While the US emphasised minimal interference to spur innovation, the European Commission and Norway called for risk-based regulation and international cooperation to ensure trust and equity. Speakers agreed on the need for inclusive governance frameworks and education systems that foster critical thinking, resist de-skilling, and prepare workers for an AI-augmented economy.

The session made clear that the future of work in the AI era depends on choices made today, and that those choices must centre people, fairness, and global solidarity.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

HPE unveils private cloud AI platform featuring Nvidia Blackwell chips

Hewlett Packard Enterprise (HPE) and Nvidia have unveiled new AI factory solutions to accelerate AI adoption across multiple sectors.

Announced at HPE Discover in Las Vegas, the new offerings include modular AI factory infrastructure, AI-ready RTX PRO servers (HPE ProLiant Compute DL380a Gen12), and the next iteration of HPE’s turnkey platform, HPE Private Cloud AI.

The portfolio combines Nvidia’s Blackwell accelerated computing, Spectrum-X Ethernet, and BlueField-3 networking with Nvidia AI Enterprise software and HPE’s hardware, software, and services. The result is a modular, pre-integrated infrastructure stack intended to simplify AI deployment at scale.

HPE’s OpsRamp Software, a validated observability solution for Nvidia’s Enterprise AI Factory, and HPE Morpheus Enterprise Software for orchestration are also part of the integrated platform.

A key component is the next-generation HPE Private Cloud AI, jointly developed by HPE and Nvidia. It includes ProLiant DL380a Gen12 servers featuring Nvidia RTX PRO 6000 Blackwell Server Edition GPUs, supporting various enterprise and industrial AI applications. These systems are now available for order.

The platform also supports Nvidia AI Blueprints, such as the AI-Q Blueprint, for AI agent creation and workflow management.

HPE additionally announced the Compute XD690, a new Nvidia HGX B300 system powered by Nvidia Blackwell Ultra GPUs, expected to ship in October 2025.

International collaborations are part of the strategy. HPE is partnering with Japanese telecom provider KDDI to build AI infrastructure at the KDDI Osaka Sakai Data Centre using Nvidia’s GB200 NVL72 platform, based on the Grace Blackwell architecture.

In financial services, HPE is working with Accenture to test agentic AI workflows via Accenture’s AI Refinery, leveraging HPE Private Cloud AI for procurement, sourcing, and risk analysis.

Security and governance features have also been emphasised, including air-gapped management, multi-tenancy support, and post-quantum cryptography.

As part of its broader ecosystem expansion, HPE has added 26 partners to its ‘Unleash AI’ initiative, offering more than 70 packaged AI workloads covering video analytics, fraud detection, cybersecurity, and sovereign AI.

To support enterprise adoption, HPE and Nvidia have launched AI Acceleration Workshops aimed at helping organisations scale AI implementations.

Separately, Nvidia recently collaborated with Deutsche Telekom to launch Europe’s first industrial AI cloud in Germany, designed to support the manufacturing sector with applications in engineering, simulation, digital twins, and robotics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tools at work pose hidden dangers

AI tools are increasingly used in workplaces to enhance productivity but come with significant security risks. Workers may unknowingly breach privacy laws like GDPR or HIPAA by sharing sensitive data with AI platforms, risking legal penalties and job loss.

Experts warn of AI hallucinations, where chatbots generate false information, highlighting the need for thorough human review. Bias in AI outputs, stemming from flawed training data or system prompts, can lead to discriminatory decisions and potential lawsuits.

Cyber threats like prompt injection and data poisoning can manipulate AI behaviour, while user error and IP infringement pose further challenges. As AI technology evolves, unknown risks remain a concern, making caution essential when integrating AI into business processes.
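To make the data-exposure risk concrete, here is a hypothetical sketch (not from the article) of the kind of pre-submission check an organisation might run so that obvious personal identifiers never reach an external AI service. Regex scrubbing alone is nowhere near full GDPR or HIPAA compliance; it only illustrates the idea.

```python
# Hypothetical illustration: strip obvious personal identifiers from text
# before it is sent to an external AI platform. Not sufficient for GDPR or
# HIPAA compliance on its own; it only demonstrates the principle.
import re

# Order matters: the more specific SSN pattern runs before the generic phone pattern.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace likely personal identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Email jane.doe@example.com, phone +1 555 123 4567, SSN 123-45-6789."))
```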

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!