5G Advanced lays the groundwork for 6G, says 5G Americas

5G Americas has released a new white paper outlining how 5G Advanced features in 3GPP Releases 18 to 20 are shaping the path to 6G.

The report highlights how 5G Advanced is evolving mobile networks through embedded AI, scaled IoT, improved energy efficiency, and broader service capabilities. Viet Nguyen, President of 5G Americas, called it a turning point for wireless systems, offering more intelligent, resilient, and sustainable connectivity.

AI-native networking is a key innovation, bringing machine learning into the radio and core network. It enables zero-touch automation, predictive maintenance, and self-organising systems, cutting fault detection times by 90% and reducing false alarms by 70%.

Energy efficiency is another core benefit. Features like cell sleep modes and antenna switching can reduce energy use by up to 56%. Ambient IoT also advances, enabling battery-less devices for industrial and consumer use in energy-constrained environments.

Latency improvements like L4S and enhanced QoS allow scalable support for immersive XR and real-time automation. Advances in spectral efficiency and satellite support are boosting uplink speeds above 500 Mbps and expanding coverage to remote areas.

Andrea Brambilla of Nokia noted that 5G Advanced supports digital twins, private networks, and AI-driven transformation. Pei Hou of T-Mobile said it builds on 5G Standalone to prepare for a sustainable shift to 6G.

The paper urges updated policies on AI governance, spectrum sharing, and IoT standards to ensure global interoperability. Strategic takeaways include AI, automation, and energy savings as key to long-term innovation and monetisation across the public and private sectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How agentic AI is transforming cybersecurity

Cybersecurity is gaining a new teammate—one that never sleeps and acts independently. Agentic AI doesn’t wait for instructions. It detects threats, investigates, and responds in real time. This new class of AI is beginning to change the way we approach cyber defence.

Unlike traditional AI systems, Agentic AI operates with autonomy. It sets objectives, adapts to environments, and self-corrects without waiting for human input. In cybersecurity, this means instant detection and response, beyond simple automation.

With networks more complex than ever, security teams are stretched thin. Agentic AI offers relief by executing actions like isolating compromised systems or rewriting firewall rules. This technology promises to ease alert fatigue and keep up with evasive threats.
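The autonomous detect-decide-act behaviour described above can be pictured as a simple loop. The sketch below is purely illustrative (not any vendor's product); the `Alert` class, `decide_action` function, and severity thresholds are all hypothetical assumptions chosen for the example.

```python
# Minimal, hypothetical sketch of an agentic detect -> decide -> act loop.
# All names and thresholds here are illustrative assumptions, not a real product API.

from dataclasses import dataclass


@dataclass
class Alert:
    host: str
    severity: int  # 1 (low) to 10 (critical)


def decide_action(alert: Alert) -> str:
    """Choose a response autonomously, without waiting for human input."""
    if alert.severity >= 8:
        return f"isolate {alert.host}"        # quarantine a compromised system
    if alert.severity >= 5:
        return f"block-traffic {alert.host}"  # e.g. tighten firewall rules
    return f"log {alert.host}"                # low risk: record and keep monitoring


def respond(alerts: list[Alert]) -> list[str]:
    """Process a batch of alerts and return the actions taken."""
    return [decide_action(a) for a in alerts]


actions = respond([Alert("db01", 9), Alert("web02", 6), Alert("dev03", 2)])
print(actions)  # highest-severity host gets isolated, mid-severity gets blocked
```

In a real deployment the decision step would draw on learned models and telemetry rather than fixed thresholds, and—as the article notes—human checks would gate the most disruptive actions.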

A 2025 Deloitte report says 25% of GenAI-using firms will pilot Agentic AI this year. SailPoint found that 98% of organisations will expand AI agent use in the next 12 months. But rapid adoption also raises concern—96% of tech workers see AI agents as security risks.

The integration of AI agents is expanding to cloud, endpoints, and even physical security. Yet with new power comes new vulnerabilities—from adversaries mimicking AI behaviour to the risk of excessive automation without human checks.

Key challenges include ethical bias, unpredictable errors, and uncertain regulation. In sectors like healthcare and finance, oversight and governance must keep pace. The solution lies in balanced control and continuous human-AI collaboration.

Cybersecurity careers are shifting in response. Hybrid roles such as AI Security Analysts and Threat Intelligence Automation Architects are emerging. To stay relevant, professionals must bridge AI knowledge with security architecture.

Agentic AI is redefining cybersecurity. It boosts speed and intelligence but demands new skills and strong leadership. Adaptation is essential for those who wish to thrive in tomorrow’s AI-driven security landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SatanLock ends operation amid ransomware ecosystem turmoil

SatanLock, a ransomware group active since April 2025, has announced it is shutting down. The group quickly gained notoriety, claiming 67 victims on its now-defunct dark web leak site.

Cybersecurity firm Check Point says more than 65% of these victims had already appeared on other ransomware leak pages, suggesting the group may have used shared infrastructure or tried to hijack previously compromised networks.

Such tactics reflect growing disorder within the ransomware ecosystem, where victim double-posting is on the rise. SatanLock may have been part of a broader criminal network, as it shared ties with families such as Babuk-Bjorka and GD Lockersec.

A shutdown message was posted on the gang’s Telegram channel and leak page, announcing plans to leak all stolen data. The reason for the sudden closure has not been disclosed.

Another group, Hunters International, announced its disbandment just days earlier.

Unlike SatanLock, Hunters offered free decryption keys to its victims in a parting gesture.

These back-to-back exits signal possible pressure from law enforcement, rivals, or internal collapse in the ransomware world. Analysts are watching closely to see whether this trend continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IGF 2025: Africa charts a sovereign path for AI governance

African leaders at the Internet Governance Forum (IGF) 2025 in Oslo called for urgent action to build sovereign and ethical AI systems tailored to local needs. Hosted by the German Federal Ministry for Economic Cooperation and Development (BMZ), the session brought together voices from government, civil society, and private enterprises.

Moderated by Ashana Kalemera, Programmes Manager at CIPESA, the discussion focused on ensuring AI supports democratic governance in Africa. ‘We must ensure AI reflects our realities,’ Kalemera said, emphasising fairness, transparency, and inclusion as guiding principles.

Neema Iyer, Executive Director of Pollicy, warned that AI can harm governance through surveillance, disinformation, and political manipulation. ‘Civil society must act as watchdogs and storytellers,’ she said, urging public interest impact assessments and grassroots education.

Representing South Africa, Mlindi Mashologu stressed the need for transparent governance frameworks rooted in constitutional values. ‘Policies must be inclusive,’ he said, highlighting explainability, data bias removal, and citizen oversight as essential components of trustworthy AI.

Lacina Koné, CEO of Smart Africa, called for urgent action to avoid digital dependency. ‘We cannot be passively optimistic. Africa must be intentional,’ he stated. Over 1,000 African startups rely on foreign AI models, creating sovereignty risks.

Koné emphasised that Africa should focus on beneficial AI, not the most powerful. He highlighted agriculture, healthcare, and education as sectors where local AI could be transformative. ‘It’s about opportunity for the many, not just the few,’ he said.

From Mauritania, Matchiane Soueid Ahmed shared her country’s experience developing a national AI strategy. Challenges include poor rural infrastructure, technical capacity gaps, and lack of institutional coordination. ‘Sovereignty is not just territorial—it’s digital too,’ she noted.

Shikoh Gitau, CEO of KALA in Kenya, brought a private sector perspective. ‘We must move from paper to pavement,’ she said. Her team runs an AI literacy campaign across six countries, training teachers directly through their communities.

Gitau stressed the importance of enabling environments and blended financing. ‘Governments should provide space, and private firms must raise awareness,’ she said. She also questioned imported frameworks: ‘What definition of democracy are we applying?’

Audience members from Gambia, Ghana, and Liberia raised key questions about harmonisation, youth fears over job loss and AI readiness. Koné responded that Smart Africa is benchmarking national strategies and promoting convergence without erasing national sovereignty.

Though 19 African countries have published AI strategies, speakers noted that implementation remains slow. Practical action—such as infrastructure upgrades, talent development, and public-private collaboration—is vital to bring these frameworks to life.

The panel underscored the need to build AI systems prioritising inclusion, utility, and human rights. Investments in digital literacy, ethics boards, and regulatory sandboxes were cited as key tools for democratic AI governance.

Kalemera concluded, ‘It’s not yet Uhuru for AI in Africa—but with the right investments and partnerships, the future is promising.’ The session reflected cautious optimism and a strong desire for Africa to shape its AI destiny.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Internet Governance Forum marks 20 years of reshaping global digital policy

The 2025 Internet Governance Forum (IGF), held in Norway, offered a deep and wide-ranging reflection on the IGF’s 20-year journey in shaping digital governance and its prospects for the future.

Bringing together voices from governments, civil society, the technical community, business, and academia, the session celebrated the IGF’s unique role in institutionalising a multistakeholder approach to internet policymaking, particularly through inclusive and non-binding dialogue.

Moderated by Avri Doria, who has been with the IGF since its inception, the session focused on how the forum has influenced individuals, governments, and institutions across the globe. Doria described the IGF as a critical learning platform and a ‘home for evolving objectives’ that has helped connect people with vastly different viewpoints over the decades.

Professor Bitange Ndemo, Ambassador of Kenya to the European Union, reflected on his early scepticism, admitting that stakeholder consultation initially felt ‘painful’ for policymakers unfamiliar with collaborative approaches.

Over time, however, it proved ‘much, much easier’ for implementation and policy acceptance. ‘Thank God it went the IGF way,’ he said, emphasising how early IGF discussions guided Kenya and much of Africa in building digital infrastructure from the ground up.

Hans Petter Holen, Managing Director of RIPE NCC, underlined the importance of the IGF as a space where ‘technical realities meet policy aspirations’. He called for a permanent IGF mandate, stressing that uncertainty over its future limits its ability to shape digital governance effectively.

Renata Mielli, Chair of the Internet Steering Committee of Brazil (CGI.br), spoke about how IGF-inspired dialogue was key to shaping Brazil’s Internet Civil Rights Framework and Data Protection Law. ‘We are not talking about an event or a body, but an ecosystem,’ she said, advocating for the IGF to become the focal point for implementing the UN Global Digital Compact.

Funke Opeke, founder of MainOne in Nigeria, credited the IGF with helping drive West Africa’s digital transformation. ‘When we launched our submarine cable in 2010, penetration was close to 10%. Now it’s near 50%,’ she noted, urging continued support for inclusion and access in the Global South.

Qusai Al Shatti, from the Arab IGF, highlighted how the forum helped embed multistakeholder dialogue into governance across the Arab world, calling the IGF ‘the most successful outcome of WSIS’.

From the civil society perspective, Chat Garcia Ramilo of the Association for Progressive Communications (APC) described the IGF as a platform ‘to listen deeply, to speak, and, more importantly, to act’. She stressed the forum’s role in amplifying marginalised voices and pushing human rights and gender issues to the forefront of global internet policy.

Luca Belli of FGV Law School in Brazil echoed the need for better visibility of the IGF’s successes. Despite running four dynamic coalitions, he expressed frustration that many contributions go unnoticed. ‘We’re not good at celebrating success,’ he remarked.

Isabelle Lois, Vice Chair of the UN Commission on Science and Technology for Development (CSTD), emphasised the need to ‘connect the IGF to the wider WSIS architecture’ and ensure its outcomes influence broader UN digital frameworks.

Other voices joined online and from the floor, including Dr Robinson Sibbe of Digital Footprints Nigeria, who praised the IGF for contextualising cybersecurity challenges, and Emily Taylor, a UK researcher, who noted that the IGF had helped lay the groundwork for key initiatives like the IANA transition and the proliferation of internet exchange points across Africa.

Youth participants like Jasmine Maffei from Hong Kong and Piu from Myanmar stressed the IGF’s openness and accessibility. They called for their voices to be formally recognised within the multistakeholder model.

Veteran internet governance leader Markus Kummer reminded the room that the IGF’s ability to build trust and foster dialogue across divides enabled global cooperation during crucial events like the IANA transition.

Despite the celebratory tone, speakers repeatedly stressed three urgent needs: a permanent IGF mandate, stronger integration with global digital governance efforts such as the WSIS and Global Digital Compact, and broader inclusion of youth and underrepresented regions.

As the forum entered its third decade, many speakers agreed that the IGF’s legacy lies not in its meetings or declarations but in the relationships, trust, and governance culture it has helped create. The message from Norway was clear: in a fragmented and rapidly changing digital world, the IGF is more vital than ever—and its future must be secured.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Child safety online in 2025: Global leaders demand stronger rules

At the 20th Internet Governance Forum in Lillestrøm, Norway, global leaders, technology firms, and child rights advocates gathered to address the growing risks children face from algorithm-driven digital platforms.

The high-level session, Ensuring Child Security in the Age of Algorithms, explored the impact of engagement-based algorithmic systems on children’s mental health, cultural identity, and digital well-being.

Shivanee Thapa, Senior News Editor at Nepal Television and moderator of the session, opened with a personal note on the urgency of the issue, calling it ‘too urgent, too complex, and too personal.’

She outlined the session’s three focus areas: identifying algorithmic risks, reimagining child-centred digital systems, and defining accountability for all stakeholders.


Leanda Barrington-Leach, Executive Director of the Five Rights Foundation, delivered a powerful opening, sharing alarming data: ‘Half of children feel addicted to the internet, and more than three-quarters encounter disturbing content.’

She criticised tech platforms for prioritising engagement and profit over child safety, warning that children can stumble from harmless searches to harmful content in a matter of clicks.

‘The digital world is 100% human-engineered. It can be optimised for good just as easily as for bad,’ she said.

Norway is pushing for age limits on social media and implementing phone bans in classrooms, according to Minister of Digitalisation and Public Governance Karianne Tung.

‘Children are not commodities,’ she said. ‘We must build platforms that respect their rights and wellbeing.’

Salima Bah, Sierra Leone’s Minister of Science, Technology, and Innovation, raised concerns about cultural erasure in algorithmic design. ‘These systems often fail to reflect African identities and values,’ she warned, noting that a significant portion of internet traffic in Sierra Leone flows through TikTok.

Bah emphasised the need for inclusive regulation that works for regions with different digital access levels.

From the European Commission, Thibaut Kleiner, Director for Future Networks at DG Connect, pointed to the Digital Services Act as a robust regulatory model.

He challenged the assumption of children as ‘digital natives’ and called for stronger age verification systems. ‘Children use apps but often don’t understand how they work — this makes them especially vulnerable,’ he said.

Representatives from major platforms described their approaches to online safety. Christine Grahn, Head of Public Policy at TikTok Europe, emphasised safety-by-design features such as private default settings for minors and the Global Youth Council.

‘We show up, we listen, and we act,’ she stated, describing TikTok’s ban on beauty filters that alter appearance as a response to youth feedback.

Emily Yu, Policy Senior Director at Roblox, discussed the platform’s Trust by Design programme and its global teen council.

‘We aim to innovate while keeping safety and privacy at the core,’ she said, noting that Roblox emphasises discoverability over personalised content for young users.

Thomas Davin, Director of Innovation at UNICEF, underscored the long-term health and societal costs of algorithmic harm, describing it as a public health crisis.

‘We are at risk of losing the concept of truth itself. Children increasingly believe what algorithms feed them,’ he warned, stressing the need for more research on screen time’s effect on neurodevelopment.

The panel agreed that protecting children online requires more than regulation alone. Co-regulation, international cooperation, and inclusion of children’s voices were cited as essential.

Davin called for partnerships that enable companies to innovate responsibly. At the same time, Grahn described a successful campaign in Sweden to help teens avoid criminal exploitation through cross-sector collaboration.

Tung concluded with a rallying message: ‘Looking back 10 or 20 years from now, I want to know I stood on the children’s side.’

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Onnuri Church probes hack after broadcast hijacked by North Korean flag

A North Korean flag briefly appeared during a live-streamed worship service from one of Seoul’s largest Presbyterian churches, prompting an urgent investigation into what church officials are calling a cyberattack.

The incident occurred Wednesday morning during an early service at Onnuri Church’s Seobinggo campus in Yongsan, South Korea.

While Pastor Park Jong-gil was delivering his sermon, the broadcast suddenly cut to a full-screen image of the flag of North Korea, accompanied by unidentified background music. His audio was muted during the disruption, which lasted around 20 seconds.

The unexpected clip appeared on the church’s official YouTube channel and was quickly captured by viewers, who began sharing it across online platforms and communities.

On Thursday, Onnuri Church issued a public apology on its website and confirmed it was treating the event as a deliberate cyber intrusion.

‘An unplanned video was transmitted during the livestream of our early morning worship on 18 June. We believe this resulted from a hacking incident,’ the statement read. ‘An internal investigation is underway, and we are taking immediate measures to identify the source and prevent future breaches.’

A church official told Yonhap News Agency that the incident had been reported to the relevant authorities, and no demands or threats had been received regarding the breach. The investigation continues as the church works with authorities to determine the origin and intent of the attack.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Oakley unveils smart glasses featuring Meta technology

Meta has partnered with Oakley to launch a new line of smart glasses designed for active lifestyles. The flagship model, Oakley Meta HSTN, will be available for preorder from 11 July for $499.

Additional Oakley models featuring Meta’s innovative technology are set to launch later in the summer, starting at $399.

https://twitter.com/1Kapisch/status/1936045567626617315

The glasses include a front-facing camera, open-ear speakers, and microphones embedded in the frame, much like the Meta Ray-Bans. When paired with a smartphone, users can listen to music, take calls, and interact with Meta AI.

With built-in cameras and microphones, Meta AI can also describe surroundings, answer visual questions, and translate languages.

With their sleek, sports-ready design and IPX4 water resistance, the glasses are geared toward athletes. They offer 8 hours of battery life—twice that of the Meta Ray-Bans—and come with a charging case that extends usage to 48 hours. Video capture quality has also improved, now supporting 3K resolution.


Customers can choose from five frame and lens combinations, with prescription lenses available at an added cost. Colours include warm grey, black, brown smoke, and clear, while lens options include Oakley’s PRIZM and Transitions.

The $499 limited-edition version features gold accents and gold PRIZM lenses. Sales will cover major markets across North America, Europe, and Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!


UK health sector adopts AI while legacy tech lags

The UK’s healthcare sector has rapidly embraced AI, with adoption rising from 47% in 2024 to 94% in 2025, according to SOTI’s new report ‘Healthcare’s Digital Dilemma’.

AI is no longer confined to administrative tasks, as 52% of healthcare professionals now use it for diagnosis and 57% to personalise treatments. SOTI’s Stefan Spendrup said AI is improving how care is delivered and helping clinicians make more accurate, patient-specific decisions.

However, outdated systems continue to hamper progress. Nearly all UK health IT leaders report challenges from legacy infrastructure, Internet of Things (IoT) tech and telehealth tools.

While connected devices are widely used to support patients remotely, 73% rely on outdated, unintegrated systems, significantly higher than the global average of 65%.

These systems limit interoperability and heighten security risks, with 64% experiencing regular tech failures and 43% citing network vulnerabilities.

The strain on IT teams is evident. Nearly half report being unable to deploy or manage new devices efficiently, and more than half struggle to offer remote support or access detailed diagnostics. Time lost to troubleshooting remains a common frustration.

The UK appears more affected by these challenges than other countries surveyed, indicating a pressing need to modernise infrastructure instead of continuing to patch ageing technology.

While data security remains the top IT concern in UK healthcare, fewer IT teams now rank it as a priority, falling from 33% in 2024 to 24% in 2025, even as reported data breaches rose sharply from 71% to 84%.

Spendrup warned that innovation risks being undermined unless the sector rebalances priorities, with more focus on securing systems and replacing legacy tools instead of delaying necessary upgrades.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brazilian telcos to push back on network fee ban

Brazilian telecom operators strongly oppose a bill that would ban charging network fees to big tech companies, arguing that these companies consume most of the network traffic, about 80% of mobile and 55% of fixed usage. The telcos propose a compromise where big techs either pay for usage above a set threshold or contribute a portion of their revenues to help fund network infrastructure expansion.

While internet companies claim they already invest heavily in infrastructure such as submarine cables and content delivery networks, telcos view the bill as unconstitutional economic intervention but prefer to reach a negotiated agreement rather than pursue legal battles. In addition, telcos are advocating for the renewal of existing tax exemptions on Internet of Things (IoT) devices and connectivity fees, which are set to expire in 2025.

These exemptions have supported significant growth in IoT applications across sectors like banking and agribusiness, with non-human connections such as sensors and payment machines now driving mobile network growth more than traditional phone lines. Although the federal government aims to reduce broad tax breaks, Congress’s outlook favours maintaining these IoT incentives to sustain connectivity expansion.

Discussions are also underway about expanding the regulatory scope of Brazil’s telecom watchdog, Anatel, to cover additional digital infrastructure elements such as DNS services, internet exchange points, content delivery networks, and cloud platforms. That potential expansion would require amendments to Brazil’s internet civil rights and telecommunications frameworks, reflecting evolving priorities in managing the country’s digital infrastructure and services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!