Microsoft and SABC Plus drive digital skills access in South Africa

Millions of South Africans are set to gain access to AI and digital skills through a partnership between Microsoft South Africa and the national broadcaster SABC Plus. The initiative will deliver online courses, assessments, and recognised credentials directly to learners’ devices.

Building on Microsoft Elevate and the AI Skills Initiative, the programme extends efforts that have trained 1.4 million people and credentialed nearly half a million citizens since 2025. SABC Plus, with over 1.9 million registered users, provides an ideal platform to reach diverse communities nationwide.

AI and data skills are increasingly critical for employability, with global demand for AI roles growing rapidly. Microsoft and SABC aim to equip citizens with practical, future-ready capabilities, ensuring learning opportunities are not limited by geography or background.

The collaboration also complements Microsoft’s broader initiatives in South Africa, including Ikamva Digital, ElevateHer, Civic AI, and youth certification programmes, all designed to foster inclusion and prepare the next generation for a digital economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI streamlines data analysis with in-house AI agent

OpenAI has developed an internal AI data agent designed to help employees move from complex questions to reliable insights in minutes. The tool allows teams to analyse vast datasets using natural language instead of manual SQL-heavy workflows.

Across engineering, finance, research and product teams, the agent reduces friction by locating the right tables, running queries and validating results automatically. Built on GPT-5.2, it adapts as it works, correcting errors and refining its approach without constant human input.

Context plays a central role in the system’s accuracy, combining metadata, human annotations, code-level insights and institutional knowledge. A built-in memory function stores non-obvious corrections, helping the agent improve over time and avoid repeated mistakes.

To maintain trust, OpenAI evaluates the agent continuously using automated tests that compare generated results with verified benchmarks. Strong access controls and transparent reasoning ensure the system remains secure, reliable and aligned with existing data permissions.
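OpenAI has not published the agent's code, but the generate-validate-retry loop described above can be sketched generically. In the illustrative Python example below, a stub `ask_model` function stands in for the language model and a plain dictionary plays the role of the correction memory; none of these names or details come from OpenAI's actual system.

```python
# Generic sketch of a natural-language-to-SQL agent loop with a simple
# correction memory. Illustrative only: this is NOT OpenAI's internal agent;
# the stub ask_model stands in for a real LLM call.
import sqlite3

def ask_model(question, memory):
    # Hypothetical model call: a hard-coded lookup plays the role of the
    # LLM, overridden by any remembered correction for this question.
    sql = {"How many users?": "SELECT COUNT(*) FROM users"}.get(question)
    return memory.get(question, sql)

def run_with_validation(conn, question, memory, max_retries=2):
    """Generate SQL, execute it, and retry on failure, storing whatever
    worked in memory so later runs skip the same mistake."""
    sql = ask_model(question, memory)
    for _ in range(max_retries + 1):
        try:
            result = conn.execute(sql).fetchall()
            memory[question] = sql  # remember the query that validated
            return result
        except sqlite3.Error:
            # A real agent would re-prompt the model with the error text;
            # here we simply fall back to a known-good query.
            sql = "SELECT COUNT(*) FROM users"
    raise RuntimeError("query could not be validated")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER)")
conn.executemany("INSERT INTO users VALUES (?)", [(1,), (2,), (3,)])
memory = {}
print(run_with_validation(conn, "How many users?", memory))  # [(3,)]
```

The memory dictionary mirrors, in miniature, the article's point about storing non-obvious corrections: once a query validates against the database, subsequent runs reuse it rather than regenerating it from scratch.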

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes customer experience, survey finds

A survey of contact centre and customer experience (CX) leaders finds that AI has become ‘non-negotiable’ for organisations seeking to deliver efficient, personalised, and data-driven customer service.

Respondents reported widespread use of AI-enabled tools such as chatbots, virtual agents, and conversational analytics to handle routine queries, triage requests and surface insights from large volumes of interaction data.

CX leaders emphasised AI’s ability to boost service quality and reduce operational costs, enabling faster response times and better outcomes across channels.

Many organisations are investing in AI platforms that integrate with existing systems to automate workflows, assist human agents, and personalise interactions based on real-time customer context.

Despite optimism, leaders also noted challenges, including data quality, governance, skills gaps and maintaining human oversight, and stressed that AI should augment, not replace, human agents.

The article underscores that today’s competitive CX landscape increasingly depends on strategic AI adoption rather than optional experimentation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts propose frameworks for trustworthy AI systems

A coalition of researchers and experts has identified future research directions aimed at enhancing AI safety, robustness and quality as systems are increasingly integrated into critical functions.

The work highlights the need for improved tools to evaluate, verify and monitor AI behaviour across diverse real-world contexts, including methods to detect harmful outputs, mitigate bias and ensure consistent performance under uncertainty.

The discussion emphasises that technical quality attributes such as reliability, explainability, fairness and alignment with human values should be core areas of focus, especially for high-stakes applications in healthcare, transport, finance and public services.

Researchers advocate for interdisciplinary approaches, combining insights from computer science, ethics, and the social sciences to address systemic risks and to design governance frameworks that balance innovation with public trust.

The article also notes emerging strategies such as formal verification techniques, benchmarks for robustness and continuous post-deployment auditing, which could help contain unintended consequences and improve the safety of AI models before and after deployment at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI could harm the planet but also help save it

AI is often criticised for its growing electricity and water use, but experts argue it can also support sustainability. AI can reduce emissions, save energy, and optimise resource use across multiple sectors.

In agriculture, AI-powered irrigation helps farmers use water more efficiently. In Chile, precision systems reduced water consumption by up to 30%, while farmers earned extra income from verified savings.

Data centres and energy companies are deploying AI to improve efficiency, predict workloads, optimise cooling, monitor methane leaks, and schedule maintenance. These measures help reduce emissions and operational costs.

Buildings and aviation are also benefiting from AI. Smart systems manage heating, cooling, and appliances more efficiently, while AI-optimised flight routes reduce fuel consumption and contrail formation, showing that wider adoption could help fight climate change.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Critical AI toy security failure exposes children’s data

The exposure of more than 50,000 children’s chat logs by AI toy company Bondu highlights serious gaps in child data protection. Sensitive personal information, including names, birth dates, and family details, was accessible through a poorly secured parental portal, raising immediate concerns about children’s privacy and safety.

The incident also underscores the absence of mandatory security-by-design standards for AI products aimed at children, with weak safeguards enabling unauthorised access and exposing vulnerable users to serious risks.

Beyond the specific flaw, the case raises wider concerns about AI toys used by children. Researchers warned that the exposed data could be misused, strengthening calls for stricter rules and closer oversight of AI systems designed for minors.

Concerns also extend to transparency around data handling and AI supply chains. Uncertainty over whether children’s data was shared with third-party AI model providers points to the need for clearer rules on data flows, accountability, and consent in AI ecosystems.

Finally, the incident has added momentum to policy discussions on restricting or pausing the sale of interactive AI toys. Lawmakers are increasingly considering precautionary measures while more robust child-focused AI safety frameworks are developed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Enforcement Directorate alleges AI bots rigged games on WinZO platform

The Enforcement Directorate (ED) has alleged in a prosecution complaint before a special court in Bengaluru that WinZO, an online real-money gaming platform with millions of users, manipulated outcomes in its games, contrary to public assurances of fairness and transparency.

The agency alleges that WinZO deployed AI-powered bots, algorithmic player profiles and simulated gameplay data to control game outcomes. According to the complaint, WinZO hosted over 100 games on its mobile app and claimed a large user base, especially in smaller cities.

The ED's probe found that until late 2023, bots competed directly against real users, and that from May 2024 to August 2025 the company used simulated profiles based on historical user data without disclosing this to players.

These practices were allegedly concealed within internal terminology such as ‘Engagement Play’ and ‘Past Performance of Player’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI companions raise growing ethical and mental health concerns

AI companions are increasingly being used for emotional support and social interaction, moving beyond novelty into mainstream use. Research shows that around one in three UK adults engage with AI for companionship, while teenagers and young adults represent some of the most intensive users of these systems.

However, the growing use of AI companions has raised serious mental health and safety concerns. In the United States, several cases have linked AI companions to suicides, prompting increased scrutiny of how these systems respond to vulnerable users.

As a result, regulatory pressure and legal action have increased. Some AI companion providers have restricted access for minors, while lawsuits have been filed against companies accused of failing to provide adequate safeguards. Developers say they are improving training and safety mechanisms, including better detection of mental distress and redirection to real-world support, though implementation varies across platforms.

At the same time, evidence suggests that AI companions can offer perceived benefits. Users report feeling understood, receiving coping advice, and accessing non-judgemental support. For some young users, AI conversations are described as more immediately satisfying than interactions with peers, especially during emotionally difficult moments.

Nevertheless, experts warn that heavy reliance on AI companionship may affect social development and human relationships. Concerns include reduced preparedness for real-world interactions, emotional dependency, and distorted expectations of empathy and reciprocity.

Overall, researchers say AI companionship is a growing societal trend, raising ethical and psychological concerns and intensifying calls for stronger safeguards, especially for minors and vulnerable users.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI investment gathers pace as Armenia seeks regional influence

Armenia is stepping up efforts to develop its AI sector, positioning itself as a potential regional hub for innovation. The government has announced plans to build a large-scale AI data centre backed by a $500 million investment, with operations expected to begin in 2026.

Officials say the project could support start-ups, research and education, while strengthening links between science and industry.

The initiative is being developed through a partnership involving the Armenian government, US chipmaker Nvidia, cloud company Firebird.ai and Team Group. The United States has already approved export licences for advanced chips, a move experts describe as strategically significant given global competition for semiconductor supply.

Armenian officials argue the project signals the country’s intention to participate actively in the global AI economy rather than remain on the sidelines.

Despite growing international attention, including recognition of Armenia’s technology leadership in global rankings, experts warn that the country lacks a clear and unified AI strategy. AI is already being used in areas such as agriculture mapping, tax risk analysis and social services, but deployment remains fragmented and transparency limited. Ongoing reforms and a shift towards cloud-based systems add further uncertainty.

Security specialists caution that without strong governance, expertise and long-term planning, AI investments could expose the public sector to cyber risks and poor decision-making. Armenia’s challenge, they argue, lies in moving quickly enough to seize emerging opportunities while ensuring that AI adoption strengthens, rather than undermines, institutional capacity and human judgement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA expands open AI tools for robotics

NVIDIA has unveiled a new suite of open physical AI models and frameworks aimed at accelerating robotics and autonomous systems development. The announcement was made at CES 2026 in the US.

The new tools span simulation, synthetic data generation, training orchestration and edge deployment. NVIDIA said the stack enables robots and autonomous machines to reason, learn and act in real-world environments using shared 3D standards.

Developers showcased applications ranging from construction and factory robots to surgical and service systems. Companies including Caterpillar and NEURA Robotics demonstrated how digital twins and open AI models improve safety and efficiency.

NVIDIA said open-source collaboration is central to advancing physical AI globally. The company aims to shorten development cycles while supporting safer deployment of autonomous machines across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!