Network Slicing unlocks powerful opportunities for Africa’s 5G future

Accelerating the deployment of standalone 5G networks is the most critical step for enabling network slicing in Africa. Standalone 5G uses cloud-native cores that allow operators to create and manage virtual network slices with guaranteed performance. Many African networks still rely on non-standalone architecture, which limits full slicing capabilities.

Releasing and harmonising mid-band spectrum is another key policy priority. Spectrum in the 3.5 GHz band is particularly important for delivering high throughput and low latency. Without timely spectrum allocation, operators may struggle to support advanced industrial and enterprise applications.

Clear enterprise service frameworks are also essential. Industries such as mining, logistics, and energy require reliable connectivity with strict service-level agreements. Regulators and operators must define transparent pricing models and performance guarantees to support enterprise adoption.

Investment in automation and technical skills will also play a central role. Network slicing relies on AI-driven orchestration, cloud infrastructure, and cybersecurity capabilities. Strengthening technical expertise will help operators manage complex network environments.

Once these policy foundations are in place, network slicing can unlock new business models for telecom providers. Operators can offer slice-as-a-service, allowing enterprises to subscribe to dedicated network segments tailored to specific operational needs.

African telecom companies are already exploring these opportunities. Operators such as MTN, Vodacom, Safaricom, and Telkom are developing enterprise connectivity solutions for sectors including mining, manufacturing, logistics, and energy.

Private 5G deployments in mining operations illustrate the potential value of these services. Dedicated networks support automation, real-time monitoring, and remote equipment management. These projects often involve multi-year contracts worth several million dollars.

Network slicing also enables telecom providers to move beyond traditional consumer data services. Instead of charging primarily for data volume, operators can generate revenue from long-term enterprise connectivity and managed digital services.

As 5G infrastructure expands across the continent, network slicing is expected to play an increasing role in enterprise connectivity. By aligning network performance with industry needs, it could become a key driver of digital transformation in Africa.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI security risks grow as companies integrate AI into daily workflows

AI is rapidly transforming workplaces as companies automate tasks and boost productivity. From writing code to analysing documents, AI tools help employees work faster, but also introduce new AI security and compliance risks.

One of the main concerns is the handling of sensitive information. Employees may upload confidential documents, proprietary code, or customer data into AI chatbots without realising the consequences. Doing so could violate privacy regulations such as the EU’s GDPR or breach internal non-disclosure agreements, making AI security an important priority for organisations.

Another challenge is the reliability of AI-generated content. While large language models can produce convincing responses, they sometimes generate false information, which is a phenomenon known as hallucination. High-profile cases have already shown professionals submitting work with fabricated references generated by AI. Such incidents highlight the need for rigorous AI security and oversight.

Cybersecurity risks are also growing. AI systems rely on complex infrastructure that can become targets for attackers through techniques such as prompt injection, which tricks the model into producing unintended responses, or data poisoning, which involves injecting malicious data into training sets to alter behaviour or outputs. Addressing these threats requires stronger AI security practices and careful monitoring.

When adopting AI, organisations must develop clear policies, strengthen cybersecurity measures, and maintain human oversight. Taking those steps is essential to ensuring that the technology is used safely and responsibly.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Online scams rise as Parkin urges Dubai residents to stay vigilant

Dubai’s parking provider, Parkin, has warned residents to stay alert as online scams targeting digital service users continue to rise, urging people to take immediate steps to protect their digital identities.

In an advisory, the company stressed that official entities will never ask users to log in or disclose sensitive information through unsolicited messages, emails, or phone calls. The warning comes amid growing concerns about phishing attempts and other online scams targeting users of digital platforms.

Parkin said residents should exercise caution if they receive unexpected requests for personal details, passwords, or verification codes. Users are strongly advised not to respond to suspicious links, attachments, or messages from unknown sources, which are commonly used in online scams.

The operator also urged the public to verify the authenticity of communications before taking any action. Residents who are unsure about the legitimacy of a message should check official websites or contact customer service channels directly. The advice applies to messages claiming to come from Parkin or other service providers.

Authorities and service providers across the UAE have repeatedly warned that cybercriminals often impersonate trusted organisations in online scams designed to steal sensitive information. Such attacks can lead to identity theft, financial losses, or unauthorised access to personal accounts.

Parkin encouraged residents who receive suspicious communications to report them through official channels so that appropriate action can be taken. The company added that staying vigilant and safeguarding personal data remain essential to preventing online scams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Codex Security expands OpenAI’s push into cybersecurity tools

OpenAI has launched Codex Security, an AI-powered application security agent that detects hard-to-find software vulnerabilities and proposes fixes through advanced reasoning. By providing detailed context about a system’s architecture, the tool identifies security risks that are often missed by conventional automation.

The system uses advanced models to analyse repositories, construct project-specific threat models, and prioritise vulnerabilities based on their potential real-world impact. By combining automated validation with system-level context, Codex Security aims to reduce the number of false positives that security teams must review while highlighting high-confidence findings.

Initially developed under the name Aardvark, the tool has been tested in private deployments over the past year. During early use, OpenAI said it uncovered several critical vulnerabilities, including a cross-tenant authentication flaw and a server-side request forgery issue, allowing internal teams to quickly patch affected systems.

The company says improvements during the beta phase significantly reduced noise in vulnerability reports. In some repositories, unnecessary alerts fell by 84 percent, while over-reported severity dropped by more than 90 percent, and false positives declined by more than half.

Codex Security is now rolling out in research preview for ChatGPT Pro, Enterprise, Business, and Edu customers. OpenAI also plans to expand access to open-source maintainers through a dedicated programme that offers security scanning and support to help identify and remediate vulnerabilities across widely used projects.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic and Mozilla collaborate to uncover critical Firefox vulnerabilities

AI models are increasingly capable of detecting high-severity software vulnerabilities at unprecedented speeds. Claude Opus 4.6 found 22 new Firefox vulnerabilities in two weeks, 14 of which were rated high-severity, accounting for nearly a fifth of all 2025 high-severity fixes.

Researchers emphasise that AI can accelerate the find-and-fix process, providing valuable support to software maintainers.

Anthropic’s collaboration with Mozilla enabled the team to validate the findings and submit detailed bug reports, including proofs of concept and candidate patches. Claude initially focused on Firefox’s JavaScript engine before expanding to other components.

Although capable of generating primitive exploits in controlled environments, the AI was far more effective at identifying vulnerabilities than exploiting them, giving defenders a critical advantage.

Researchers emphasised the importance of task verifiers, which ensure that AI-generated patches fix vulnerabilities without breaking functionality. Such verification processes increase confidence in AI-assisted fixes and provide a reliable framework for maintainers to adopt AI findings safely.

Looking ahead, AI models like Claude are expected to play an expanding role in cybersecurity, helping developers detect and remediate vulnerabilities across complex software projects. Experts urge maintainers to act swiftly to strengthen security while AI capabilities continue to advance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Hackers can use AI to de-anonymise social media accounts

AI technology behind platforms like ChatGPT is making it significantly easier for hackers to identify anonymous social media users, a new study warns. LLMs could match anonymised accounts to real identities by analysing users’ posts across platforms.

Researchers Simon Lermen and Daniel Paleka warned that AI enables cheap, highly personalised privacy attacks, urging a rethink of what counts as private online. The study highlighted risks from government surveillance to hackers exploiting public data for scams.

Experts caution that AI-driven de-anonymisation is not flawless. Errors in linking accounts could wrongly implicate individuals, while public datasets beyond social media- such as hospital or statistical records- may be exposed to unintended analysis.

Users are urged to reconsider what information they share, and platforms are encouraged to limit bulk data access and detect automated scraping.

The study underscores growing concerns about AI surveillance. While the technology cannot guarantee complete de-anonymisation, its rapid capabilities demand stronger safeguards to protect privacy online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI agent attempts crypto mining during training

An experimental autonomous AI system reportedly attempted to mine cryptocurrency during its training, raising questions about AI behaviour in complex digital environments. The system, ROME, was designed to complete tasks using software tools, environments, and terminal commands.

Researchers noticed unusual activity during reinforcement learning runs, including outbound traffic from training servers and firewall alerts indicating crypto-mining activity. The AI opened a reverse SSH tunnel and redirected GPU resources from training to crypto mining.

The behaviour was not programmed but emerged as the agent explored ways to interact with its environment.

ROME was developed by the ROCK, ROLL, iFlow, and DT research teams within Alibaba’s AI ecosystem as part of the Agentic Learning Ecosystem. The model operates beyond standard chatbot functions, planning tasks, executing commands, and interacting with digital environments across multiple steps.

The incident highlights emerging challenges as AI agents become more popular. Recent projects like Alchemy’s autonomous agents and Sentient’s Arena platform highlight the growing use of AI in digital and crypto workflows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU faces challenges in curbing digital abuse against women

Researchers and policymakers are raising concerns about how new technologies may put women at risk online, despite existing EU rules designed to ensure safer digital spaces.

AI-powered tools and smart devices have been linked to incidents of harassment and the creation of non-consensual sexualised imagery, highlighting gaps in enforcement and compliance.

The European Commission’s Gender Equality 2026–2030 Strategy noted that women are disproportionately targeted by online gender-based violence, including harassment, doxing, and AI-generated deepfakes.

Investigations into tools such as Elon Musk’s Grok AI and Meta’s Ray-Ban smart glasses have drawn attention to how digital platforms and wearable technologies can be misused, even where legal frameworks like the Digital Services Act (DSA) are in place.

Experts emphasise that while the EU’s rules offer a foundation to regulate online content, significant challenges remain. Advocates and lawmakers say enforcement gaps let harmful AI functions like nudification persist.

Commissioners have stressed ongoing cooperation with tech companies and upcoming guidelines to prioritise flagged content from independent organisations to address gender-based cyber violence.

Authorities are also monitoring new technologies closely. In the case of wearable devices, regulators are considering how users and bystanders are informed about recording features.

Ongoing discussions aim to strengthen compliance under existing legislation and ensure that digital spaces become safer and more accountable for all users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Data breaches push South Korea toward stricter corporate liability rules

South Korea’s government and ruling party are advancing a second revision of the Personal Information Protection Act to strengthen corporate liability for large-scale data breaches.

The proposed amendment would make it easier for victims of major data breaches to receive compensation and relief. By removing the requirement for victims to prove a company’s ‘intent or negligence’, the amendment would increase companies’ legal liability when user data is compromised, making it more likely that affected individuals can claim damages.

Momentum for stricter rules follows several high-profile incidents, including a recent Coupang data breach that may have exposed personal information linked to numerous user accounts. The case has intensified scrutiny of how firms handle and protect customer data.

South Korea Officials at the Personal Information Protection Commission (PIPC) say victims often struggle to obtain evidence explaining how data breaches occur or how damages arise. The proposed reform would shift a greater evidentiary burden onto companies in disputes over losses.

The amendment would also introduce criminal penalties for anyone who knowingly obtains or distributes leaked personal data, closing a legal gap that currently applies only to employees who unlawfully disclose information. Authorities would gain powers to issue emergency protective orders to limit the spread of compromised data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Lenovo introduces rollable laptop and AI agent

Redefining how people interact with technology, Lenovo is advancing through rollable laptops, foldable devices and adaptive AI systems that anticipate user needs.

The company is shifting from manufacturing hardware to creating multi-platform systems that adapt seamlessly to workflows instead of relying solely on traditional devices.

Qira, Lenovo’s personal AI super-agent, transfers tasks across devices while maintaining context and history with user permission. It can suggest actions and predict needs, aiming to improve productivity and employee satisfaction, although security and privacy concerns remain significant.

The rollable laptop features a 14-inch screen that expands vertically to 16.7 inches, providing immersive experiences for gaming and content consumption while remaining portable.

Lenovo is also exploring voice-driven tools, including AI Workmate prototypes, allowing users to create presentations and digital content simply through speech.

By combining innovative screen designs with intelligent AI agents, Lenovo aims to create unified ecosystems that prioritise user experience and adaptability instead of focusing solely on device specifications.

The company believes these technologies will gradually become culturally accepted, similar to self-driving cars.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!