Fragmented rules complicate South Africa's green tech transfer

South Africa is betting on green technology to drive development while cutting emissions. Yet overlapping laws and strategies create a complex, sometimes conflicting environment for investors and innovators. Analysts warn that this fragmentation slows both climate action and the just transition.

Flagship measures, such as the Climate Change Act and the Just Energy Transition Investment Plan, anchor long-term goals. The government aims to mobilise around R1.5 trillion, building on an initial $8.5 billion in catalytic finance pledged by international partners.

Funding targets power generation, new energy vehicles and green hydrogen, with private capital expected to follow. Projects under the Renewable Energy Independent Power Producer Procurement Programme showcase successful public-private partnerships that have attracted significant foreign and domestic investment.

Localisation rules, special economic zones and tariff tweaks seek to build manufacturing capacity and transfer skills. Critics argue that strict content quotas and data localisation can delay projects and deter prospective investors.

Observers say harmonised policies, clearer incentives and stronger coordination across sectors are essential for effective green technology transfer. Greater collaboration between the South African government, businesses, and universities could translate promising pilots into climate-resilient industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Online platforms face new EU duties on child protection

EU member states have endorsed a negotiating position on new rules to counter child sexual abuse online. The plan introduces duties for digital services to prevent the spread of abusive material. It also creates an EU Centre to coordinate enforcement and support national authorities.

Service providers must assess how their platforms could be misused and apply mitigation measures. These may include reporting tools, stronger privacy defaults for minors, and controls over shared content. National authorities will review these steps and can order additional action where needed.

A three-tier risk system will categorise services as high, medium, or low risk. High-risk platforms may be required to help develop protective technologies. Providers that fail to comply with obligations could face financial penalties under the regulation.

Victims will be able to request the removal or disabling of abusive material depicting them. The EU Centre will verify provider responses and maintain a database to manage reports. It will also share relevant information with Europol and law enforcement bodies.

The Council supports extending voluntary scanning for abusive content beyond its current expiry. Negotiations with the European Parliament will now begin on the final text. The Parliament adopted its position in 2023 and will help decide the Centre’s location.

Virginia sets new limits on AI chatbots for minors

Lawmakers in Virginia are preparing fresh efforts to regulate AI as concerns grow over its influence on minors and vulnerable users.

Legislators will return in January with a set of proposals focused on limiting the capabilities of chatbots, curbing deepfakes and restricting automated ticket-buying systems. The push follows a series of failed attempts last year to define high-risk AI systems and expand protections for consumers.

Delegate Michelle Maldonado aims to introduce measures that restrict what conversational agents can say in therapeutic interactions, preventing them from mimicking emotional support.

Her plans follow the well-publicised case of a sixteen-year-old who discussed suicidal thoughts with a chatbot before taking his own life. She argues that young people rely heavily on these tools and need stronger safeguards that recognise dangerous language and redirect users towards human help.

Maldonado will also revive a previous bill on high-risk AI, refining it to address particular sectors rather than broad categories.

Delegate Cliff Hayes is preparing legislation to require labels for synthetic media and to block AI systems from buying event tickets in bulk, preventing automated tools from distorting prices.

Hayes already secured a law preventing predictions from AI tools from being the sole basis for criminal justice decisions. He warns that the technology has advanced too quickly for policy to remain passive and urges a balance between innovation and protection.

The proposals come as the state continues to evaluate its regulatory environment under an executive order issued by Governor Glenn Youngkin.

The order directs AI systems to scan the state code for unnecessary or conflicting rules, encouraging streamlined governance instead of strict statutory frameworks. Observers argue that human oversight remains essential as legislators search for common ground on how far to extend regulatory control.

Australia strengthens parent support for new social media age rules

Yesterday, Australia entered a new phase of its online safety framework after the introduction of the Social Media Minimum Age policy.

eSafety has established a new Parent Advisory Group to support families as the country transitions to enhanced safeguards for young people. The group held its first meeting, with the Commissioner underlining the need for practical and accessible guidance for carers.

The initiative brings together twelve organisations representing a broad cross-section of communities in Australia, including First Nations families, culturally diverse groups, parents of children with disability and households in regional areas.

Their role is to help eSafety refine its approach, so parents can navigate social platforms with greater confidence, rather than feeling unsupported during rapid regulatory change.

The group will advise on parent engagement, offer evidence-informed insights and test updated resources such as the redeveloped Online Safety Parent Guide.

Their advice will aim to ensure materials remain relevant, inclusive and able to reach priority communities that often miss out on official communications.

Members will serve voluntarily until June 2026 and will work with eSafety to improve distribution networks and strengthen the national conversation on digital literacy. Their collective expertise is expected to shape guidance that reflects real family experiences instead of abstract policy expectations.

Ecuador and Latin America expand skills in ethical AI with UNESCO training

UNESCO is strengthening capacities in AI ethics and regulation across Ecuador and Latin America through two newly launched courses. The initiatives aim to enhance digital governance and ensure the ethical use of AI in the region.

The first course, ‘Regulation of Artificial Intelligence: A View from and towards Latin America,’ is taking place virtually from 19 to 28 November 2025.

Organised by UNESCO’s Social and Human Sciences Sector in coordination with UNESCO-Chile and CTS Lab at FLACSO Ecuador, the programme involves 30 senior officials from key institutions, including the Ombudsman’s Office and the Superintendency for Personal Data Protection.

Participants are trained on AI ethical principles, risks, and opportunities, guided by UNESCO’s 2021 Recommendation on the Ethics of AI.

The ‘Ethical Use of AI’ course starts next week for telecom and electoral officials. The 20-hour hybrid programme teaches officials to use UNESCO’s Readiness Assessment Methodology (RAM) to assess readiness and plan ethical AI strategies.

UNESCO aims to train 60 officials and strengthen AI ethics and regulatory frameworks in Ecuador and Chile. The programmes reflect a broader commitment to building inclusive, human-rights-oriented digital governance in Latin America.

Character AI blocks teen chat and introduces new interactive Stories feature

A new feature called ‘Stories’ from Character.AI allows users under 18 to create interactive fiction with their favourite characters. The move replaces open-ended chatbot access, which has been entirely restricted for minors amid concerns over mental health risks.

Open-ended AI chatbots can initiate conversations at any time, raising worries about overuse and addiction among younger users.

Several lawsuits against AI companies have highlighted the dangers, prompting Character.AI to phase out access for minors and introduce a guided, safety-focused alternative.

Industry observers say the Stories feature offers a safer environment for teens to engage with AI characters while continuing to explore creative content.

The decision aligns with recent AI regulations in California and ongoing US federal proposals to limit minors’ exposure to interactive AI companions.

UAE strengthens digital transformation with Sharjah’s new integration committee

Sharjah is advancing its digital transformation efforts following the issuance of a new decree that established the Higher Committee for Digital Integration. The Crown Prince formed the body to strengthen oversight and guide government entities as the emirate seeks more coordinated progress.

The committee will report directly to the Executive Council and will be led by Sheikh Saud bin Sultan Al Qasimi from the Sharjah Digital Department.

Senior officials from several departments in the UAE will join him to enhance cooperation across the government, rather than leaving agencies to pursue separate digital plans.

Their combined expertise is expected to support stronger governance and reduce risks linked to large-scale transformation.

The committee's mandate covers strategic oversight, approval of key policies, alignment with national objectives and careful monitoring of digital projects.

The members will intervene when challenges arise, oversee investments and help resolve disputes so the emirate can maintain momentum instead of facing delays caused by fragmented decision-making.

Membership runs for two years, with the option of extension. The committee will continue its work until a successor group is formed and will provide regular reports on progress, challenges and proposed solutions to the Executive Council.

AI is accelerating the transition to clean energy

AI is playing an increasingly vital role in the transition to clean energy. It helps optimise power grid operations, plan infrastructure investments, and accelerate the discovery of novel materials for energy generation, storage, and conversion.

While energy-hungry data centres can increase electricity demand, AI applications are helping reduce energy consumption across buildings, transport, and industry.

On electric grids, AI algorithms enhance efficiency, integrate renewable energy sources, and predict maintenance needs to prevent power outages. Grid operators can use AI to forecast supply and demand, optimise energy storage, and manage resources in real time.

Technologies such as smart thermostats, electric vehicle batteries, and AI-managed data centres provide additional flexibility to balance peak demand and supply.

AI also aids long-term planning by helping utilities forecast future infrastructure needs amid growing renewable deployment and climate-related risks. Additionally, AI accelerates the discovery of materials for energy technologies.

At MIT, researchers use AI-guided experiments and robotics to design and test new materials, significantly shortening development times from decades to years.

Through research, modelling, and collaboration, AI is being applied to fusion reactor management, solar cell optimisation, and energy-efficient data centre design. MIT Energy Initiative programmes unite academics, industry, and policymakers to harness AI for a resilient and sustainable energy future.

New alliance between Samsung and SK Telecom accelerates 6G innovation

Samsung Electronics and SK Telecom have taken a significant step toward shaping next-generation connectivity after signing an agreement to develop essential 6G technologies.

Their partnership centres on AI-based radio access networks, with both companies aiming to secure an early lead as global competition intensifies.

Research teams from Samsung and SK Telecom will build and test key components, including AI-based channel estimation, distributed MIMO and AI-driven schedulers.

AI models will refine signals in real time to improve accuracy, rather than relying on conventional estimation methods. Meanwhile, distributed MIMO will enable multiple antennas to cooperate for reliable, high-speed communication across diverse environments.

The companies believe that AI-enabled schedulers and core networks will manage data flows more efficiently as the number of devices continues to rise.

Their collaboration also extends into the AI-RAN Alliance, where a jointly proposed channel estimation technology has already been accepted as a formal work item, strengthening their shared role in shaping industry standards.

Samsung continues to promote 6G research through its Advanced Communications Research Centre, and recent demonstrations at major industry events highlight the growing momentum behind AI-RAN technology.

Both organisations expect their work to accelerate the transition toward a hyperconnected 6G future, rather than allowing competing ecosystems to dominate early development.

AI and anonymity intensify online violence against women

Digital violence against women is rising sharply, fuelled by AI, online anonymity, and weak legal protections, leaving millions exposed.

UN Women warns that abuse on digital platforms often spills into real life, threatening women’s safety, livelihoods, and ability to participate freely in public life.

Public figures, journalists, and activists are increasingly targeted with deepfakes, coordinated harassment campaigns, and gendered disinformation designed to silence and intimidate.

One in four women journalists report receiving online death threats, highlighting the urgent scale and severity of the problem.

Experts call for stronger laws, safer digital platforms, and more women in technology to address AI-driven abuse effectively. Investments in education, digital literacy, and culture-change programmes are also vital to challenge toxic online communities and ensure digital spaces promote equality rather than harm.
