Cybercrime in Africa: Turning research into justice and action

At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts and policymakers gathered to confront the escalating issue of cybercrime across Africa at a session co-organised by UNICRI and ALT Advisory, marked by the launch of the research report ‘Access to Justice in the Digital Age: Empowering Victims of Cybercrime in Africa’.

Based on experiences in South Africa, Namibia, Sierra Leone, and Uganda, the study highlights a troubling rise in cybercrime, much of which remains invisible due to widespread underreporting, institutional weaknesses, and outdated or absent legal frameworks. The report’s author, Tina Power, underscored the need to recognise cybercrime not merely as a technical challenge, but as a profound justice issue.

One of the central concerns raised was the gendered nature of many cybercrimes. Victims—especially women and LGBTQI+ individuals—face severe societal stigma and are often met with disbelief or indifference when reporting crimes such as revenge porn, cyberstalking, or online harassment.

Sandra Aceng from the Women of Uganda Network detailed how cultural taboos, digital illiteracy, and unsympathetic police responses prevent victims from seeking justice. Without adequate legal tools or trained officers, victims are left exposed, compounding trauma and enabling perpetrators.

Law enforcement officials, such as Zambia’s Michael Ilishebo, described various operational challenges, including limited forensic capabilities, the complexity of crimes facilitated by AI and encryption, and the lack of cross-border legal cooperation. Only a few African nations are party to key international instruments like the Budapest Convention, complicating efforts to address cybercrime that often spans multiple jurisdictions.

Ilishebo also highlighted how social media platforms frequently ignore law enforcement requests, citing global guidelines that don’t reflect African legal realities. To counter these systemic challenges, speakers advocated for a robust, victim-centred response built on strong laws, sustained training for justice-sector actors, and improved collaboration between governments, civil society, and tech companies.

Nigerian Senator Shuaib Afolabi Salisu called for a unified African stance to pressure big tech into respecting the continent’s legal systems. The session ended with a consensus: the road to justice in Africa’s digital age must be paved with coordinated action, inclusive legislation, and empowered victims.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Taiwan leads in AI election defence efforts

Taiwan has been chosen to lead a new coalition formed by the International Foundation for Electoral Systems to strengthen democratic resilience against AI-driven disinformation. The AI Advisory Group on Elections will unite policymakers and experts to address AI’s role in protecting fair elections.

The island’s experience has made it a key voice in global AI governance as it counters sophisticated disinformation campaigns linked to authoritarian regimes. Taiwan’s Cyber Ambassador, Audrey Tang, stressed that AI must serve the greater good and help build accountable digital societies.

Taiwan has developed rapid-response and civic fact-checking tools that many democracies now look to adopt. These measures helped ensure the integrity of its recent elections despite unprecedented levels of AI-generated disinformation and cyberattacks.

Global democracies are urged to learn from Taiwan’s playbook as threats evolve, and the influence of AI on elections grows. Taiwan’s success shows that resilience can be achieved without sacrificing civil liberties.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Top 7 AI agents transforming business in 2025

AI agents are no longer a futuristic concept — they’re now embedded in the everyday operations of major companies across sectors.

From customer service to data analysis, AI-powered agents are transforming workflows by handling tasks like scheduling, reporting, and decision-making with minimal human input.

Unlike simple chatbots, today’s AI agents understand context, follow multi-step instructions, and integrate seamlessly with business tools. Google’s Gemini Agents, IBM’s Watsonx Orchestrate, Microsoft Copilot, and OpenAI’s Operator are among the tools reshaping how businesses function.

These systems interpret goals and act on behalf of employees, boosting productivity without needing constant prompts.

Other leading platforms include Amelia, known for its enterprise-grade capabilities in finance and telecom; Claude by Anthropic, focused on safe and transparent reasoning; and North by Cohere, which delivers sector-specific AI for clients like Oracle and SAP.

Many of these tools offer no-code or low-code setups, enabling faster adoption across HR, finance, customer support, and more.

While most agents aren’t entirely autonomous, they’re designed to perform meaningful work and evolve with feedback.

The rise of agentic AI marks a significant shift in workplace automation as businesses move beyond experimentation toward real-world implementation, one workflow at a time.

AGI moves closer to reshaping society

There was a time when machines that think like humans existed only in science fiction. But artificial general intelligence (AGI) now stands on the edge of becoming a reality — and it could reshape our world as profoundly as electricity or the internet once did.

Unlike today’s narrow AI systems, AGI would learn, reason and adapt across domains, handling everything from creative writing to scientific research without being limited to a single task.

Recent breakthroughs in neural architecture, multimodal models, and self-improving algorithms bring AGI closer—systems like GPT-4o and DeepMind’s Gemini now process language, images, audio and video together.

Open-source tools such as AutoGPT show early signs of autonomous reasoning. Memory-enabled AIs and brain-computer interfaces are blurring the line between human and machine thought while companies race to develop systems that can not only learn but learn how to learn.

Though true AGI hasn’t yet arrived, early applications show its potential. AI already assists in generating code, designing products, supporting mental health, and uncovering scientific insights.

AGI could transform industries such as healthcare, finance, education, and defence as development accelerates — not just by automating tasks but also by amplifying human capabilities.

Still, the rise of AGI raises difficult questions.

How can societies ensure safety, fairness, and control over systems that are more intelligent than their creators? Issues like bias, job disruption and data privacy demand urgent attention.

Most importantly, global cooperation and ethical design are essential to ensure AGI benefits humanity rather than becoming a threat.

The challenge is no longer whether AGI is coming but whether we are ready to shape it wisely.

New ranking shows which AI respects your data

A new report comparing leading AI chatbots on privacy grounds has named Le Chat by Mistral AI as the most respectful of user data.

The study, conducted by data removal service Incogni, assessed nine generative AI services using eleven criteria related to data usage, transparency and user control.

Le Chat emerged as the top performer thanks to limited data collection and clear privacy practices, though it still lost some points on transparency.

ChatGPT followed in second place, earning praise for providing clear privacy policies and offering users tools to limit data use, despite concerns about how training data is handled. Grok, xAI’s chatbot, took third place, though its privacy policy was harder to read.

At the other end of the spectrum, Meta AI ranked lowest. Its data collection and sharing practices were flagged as the most invasive, with prompts reportedly shared within its corporate group and with research collaborators.

Microsoft’s Copilot and Google’s Gemini also performed poorly in terms of user control and data transparency.

Incogni’s report found that some services, such as ChatGPT, Grok and Le Chat, allow users to prevent their input from being used to train models. In contrast, others, including Gemini, Pi AI, DeepSeek and Meta AI, offered no clear way to opt out.

The report emphasised that simple, well-maintained privacy support pages can significantly improve user trust and understanding.

AI and the future of work: Global forum highlights risks, promise, and urgent choices

At the 20th Internet Governance Forum held in Lillestrøm, Norway, global leaders, industry experts, and creatives gathered for a high-level session exploring how AI is transforming the world of work. While the tone was broadly optimistic, participants wrestled with difficult questions about equity, regulation, and the ethics of data use.

AI’s capacity to enhance productivity, reshape industries, and bring solutions to health, education, and agriculture was celebrated, but sharp divides emerged over how to govern and share its benefits. Concrete examples showcased AI’s positive impact. Norway’s government highlighted AI’s role in green energy and public sector efficiency, while Lesotho’s minister shared how AI helps detect tuberculosis and support smallholder farmers through localised apps.

Speakers noted that AI can address systemic shortfalls in healthcare by reducing documentation burdens and enabling earlier diagnosis. Corporate representatives from Meta and OpenAI showcased tools that personalise education, assist the visually impaired, and democratise advanced technology through open-source platforms.

Joseph Gordon-Levitt at IGF 2025

Yet, concerns about fairness and data rights loomed large. Actor and entrepreneur Joseph Gordon-Levitt delivered a pointed critique of tech companies using creative work to train AI without consent or compensation.

He called for economic systems that reward human contributions, warning that failing to do so risks eroding creative and financial incentives. This argument underscored broader concerns about job displacement, automation, and the growing digital divide, especially among women and marginalised communities.

Debates also exposed philosophical rifts between regulatory approaches. While the US emphasised minimal interference to spur innovation, the European Commission and Norway called for risk-based regulation and international cooperation to ensure trust and equity. Speakers agreed on the need for inclusive governance frameworks and education systems that foster critical thinking, resist de-skilling, and prepare workers for an AI-augmented economy.

The session made clear that the future of work in the AI era depends on today’s collective choices that must centre people, fairness, and global solidarity.

Advancing digital identity in Africa while safeguarding sovereignty

A pivotal discussion on digital identity and sovereignty in developing countries unfolded at the Internet Governance Forum 2025 in Norway.

The session, co-hosted by CityHub and AFICTA (Africa ICT Alliance), brought together experts from Africa, Asia, and Europe to explore how digital identity systems can foster inclusion, support cross-border services, and remain anchored in national sovereignty.

Speakers emphasised that digital identity is foundational for bridging the digital divide and fostering economic development. Dr Jimson Olufuye, Chair of AFICTA, stressed the existential nature of identity in the digital age, noting, ‘If you cannot identify anybody, it means the person does not exist.’ He linked identity inclusion directly to the World Summit on the Information Society (WSIS) action lines and the Global Digital Compact goals.

Several national examples were presented. From Nigeria, Abisoye Coker-Adusote, Director General of the National Identity Management Commission (NIMC), shared how the country’s National Identification Number (NIN) has been integrated into banking, education, telecoms, and census services. ‘We’ve linked NINs from birth to ensure lifelong digital access,’ she noted, adding that biometric verification now underpins school enrolments, student loans, and credit programmes.

Representing Benin, Dr Kossi Amessinou highlighted the country’s ‘It’s Me’ card, a digital ID facilitating visa-free travel within ECOWAS. He underscored the importance of data localisation, asserting, ‘Data centres should be located within Africa to maintain sovereignty.’

Technical insights came from Debora Comparin, co-founder of CityHub, and Naohiro Fujie, Chair of the OpenID Foundation Japan. Comparin called for preserving the privacy characteristics of physical documents in digital form and stressed the need for legal harmonisation to build trust across borders.

‘No digital identity system can work without mutual trust and clarity on issuance procedures,’ she said. Fujie shared Japan’s experience transitioning to digital credentials, including the country’s recent rollout of national ID cards via Apple Wallet, noting that domestic standards should evolve with global interoperability in mind.

Tor Alvik, from Norway’s Digitalisation Agency, explained how cross-border digital identity remains a challenge even among closely aligned Nordic countries. ‘The linkage of a person’s identity between two systems is one of the hardest problems,’ he admitted, describing Norway’s regional interoperability efforts through the EU’s eIDAS framework.

Panelists agreed on key themes: digital identities must be secure, inclusive, and flexible to accommodate countries at varying digital readiness levels. They also advocated for federated data systems that protect sovereignty while enabling cooperation. Dr Olufuye proposed forming regional working groups to assess interoperability frameworks and track progress between IGF sessions.

As a forward step, several pilot programmes were proposed—pairing countries like Nigeria with neighbours Cameroon or Niger—to test cross-border digital ID systems. These initiatives, supported by tools and frameworks from CityHub, aim to lay the groundwork for a truly interoperable digital identity landscape across Africa and beyond.

Gemini Robotics On-Device: Google’s AI model for offline robotic tasks

On Tuesday, 24 June, Google’s DeepMind division announced the release of a new AI model named Gemini Robotics On-Device, designed to operate locally on robotic systems.

In a blog post, the company stated that the AI model has been optimised to function efficiently on-device and demonstrates strong general-purpose dexterity and task generalisation capabilities.

The offline model is an advancement of the earlier Gemini Robotics system introduced in March this year. Unlike cloud-based models, this version can operate offline, making it suitable for environments with limited connectivity or strict latency requirements.

Engineered for robots with dual arms, Gemini Robotics On-Device is designed to require minimal computational resources.

It can execute fine motor tasks such as folding garments and unzipping bags. According to Google, the model responds to natural language prompts, enabling more intuitive human-robot interaction.

The company claims the model outperforms comparable on-device alternatives, especially when completing complex, multi-step instructions or handling unfamiliar tasks. Benchmark results indicate that its performance closely approaches that of Google’s cloud-based AI solutions.

Initially developed for ALOHA robots, the on-device model has since been adapted for other systems, including the bi-arm Franka FR3 robot and the Apollo humanoid.

On the Franka FR3, the model followed diverse instructions and managed unfamiliar objects and environments, including industrial tasks like belt assembly. On the Apollo humanoid, the system demonstrated general object manipulation in previously unseen contexts.

Developers interested in trialling Gemini Robotics On-Device can access it via the provided software development kit (SDK).

Google joins other major players exploring AI for robotics. At GTC 2025, NVIDIA introduced GR00T N1, an AI system for humanoid robots, while Hugging Face is currently developing its own open-source, AI-powered robotic platform.

SpaceX rocket carries first quantum satellite into space

A groundbreaking quantum leap has taken place in space exploration. The world’s first photonic quantum computer has successfully entered orbit aboard SpaceX’s Transporter 14 mission.

Launched from Vandenberg Space Force Base in California on 23 June, the quantum device was developed by an international research team led by physicist Philip Walther of the University of Vienna.

The miniature quantum computer, designed to withstand harsh space conditions, is now orbiting 550 kilometres above Earth. It was one of 70 payloads on the mission, which also included microsatellites and re-entry capsules.

Uniquely, the system performs ‘edge computing’, processing data, such as wildfire detection imagery, directly on board rather than transmitting raw information to Earth. The innovation drastically reduces energy use and improves response times.

Assembled in just 11 working days by a 12-person team at the German Aerospace Center in Trauen, the quantum processor is expected to transmit its first results within a week of reaching orbit.

The project’s success marks a significant milestone in quantum space technology, opening the door to further experiments in fundamental physics and applied sciences.

The Transporter 14 mission also deployed satellites from Capella Space, Starfish Space, and Varda Space, among others. The Falcon 9 booster, completing its 26th successful flight, landed safely on a Pacific Ocean platform, while the satellite deployment sequence stretched over nearly two hours.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

North Korea-linked hackers deploy fake Zoom malware to steal crypto

North Korean hackers have reportedly used deepfake technology to impersonate executives during a fake Zoom call in an attempt to install malware and steal cryptocurrency from a targeted employee.

Cybersecurity firm Huntress identified the scheme, which involved a convincingly staged meeting and a custom-built AppleScript targeting macOS systems—an unusual move that signals the rising sophistication of state-sponsored cyberattacks.

The incident began with a fraudulent Calendly invitation, which redirected the employee to a fake Zoom link controlled by the attackers. Weeks later, the employee joined what appeared to be a routine video call with company leadership. In reality, the participants were AI-generated deepfakes.

When audio issues arose, the hackers convinced the user to install what was supposedly a Zoom extension but was, in fact, malware designed to hijack cryptocurrency wallets and steal clipboard data.

Huntress traced the attack to TA444, a North Korean group also known by names like BlueNoroff and STARDUST CHOLLIMA. Their malware was built to extract sensitive financial data while disguising its presence and erasing traces once the job was done.

Security experts warn that remote workers and companies must be especially cautious. Unfamiliar calendar links, sudden platform changes, or requests to install new software should be treated as warning signs.

Verifying suspicious meeting invites through an alternative contact method, such as a direct phone call, is a simple but vital way to prevent damage.
