Energy infrastructure faces critical challenges in Africa’s digital future

Energy infrastructure is becoming a key foundation for Africa’s digital transformation. The rapid expansion of AI, cloud computing, and digital services is increasing electricity demand. Reliable and scalable power systems are therefore essential to support the growth of the continent’s digital economy.

Governments are integrating digital development into national policy strategies. Initiatives such as the New Deal Technologique Horizon 2034 in Senegal and Digital Ethiopia 2030 in Ethiopia prioritise digital infrastructure, data centres, and cloud services. However, these strategies require stronger alignment with energy planning.

Energy systems need modernisation to support data centres and AI infrastructure. Traditional power models are not designed for the high and rapidly growing energy demands of digital technologies. Expanding renewable energy, storage systems, and smart energy management can improve reliability and efficiency.

Data centres are increasingly viewed as strategic infrastructure. As central hubs of the digital economy, they require stable electricity supply, efficient cooling systems, and resilient energy management to support computing services and digital platforms.

Modular and energy-efficient infrastructure can accelerate digital deployment. Scalable power systems, modular data centres, and advanced energy storage can reduce deployment time and operational costs while supporting expanding digital services.

Collaboration across sectors is necessary to support sustainable digital growth. Governments, utilities, enterprises, and technology providers need to coordinate policies and investments to align digital transformation with energy transition efforts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI deepfakes detection expands on YouTube for politicians and journalists

YouTube is expanding its likeness-detection technology designed to identify AI-generated deepfakes, extending access to a pilot group of government officials, political candidates, and journalists.

The tool allows participants to detect unauthorised AI-generated videos that simulate their faces and request removal if the content violates YouTube policies. The system builds on technology launched last year for around four million creators in the YouTube Partner Program.

Similar to YouTube’s Content ID system, which detects copyrighted material in uploaded videos, the likeness detection feature scans for AI-generated faces created with deepfake tools. Such technologies are increasingly used to spread misinformation or manipulate public perception by making prominent figures appear to say or do things they never did.

According to YouTube, the pilot programme aims to balance free expression with safeguards against AI impersonation, particularly in sensitive civic contexts.

‘This expansion is really about the integrity of the public conversation,’ said Leslie Miller, YouTube’s vice president of Government Affairs and Public Policy. ‘We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we’re also being careful about how we use it.’

Removal requests will be assessed individually under YouTube’s privacy policy rules to determine whether the content constitutes parody or political critique, which remain protected forms of expression. Participants must verify their identity by uploading a selfie and a government-issued ID before accessing the tool. Once verified, they can review detected matches and submit removal requests for content they believe violates policy.

YouTube also said it supports the proposed NO FAKES Act in the United States, which aims to regulate the unauthorised use of an individual’s voice or visual likeness in AI-generated media. AI-generated videos on the platform are already labelled, though label placement varies depending on the topic’s sensitivity.

‘There’s a lot of content that’s produced with AI, but that distinction’s actually not material to the content itself,’ said Amjad Hanif, YouTube’s vice president of Creator Products. The company said it plans to expand the technology over time to detect AI-generated voices and other intellectual property.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Gigabyte pushes accessible AI computing strategy at Mobile World Congress

Taiwanese computer manufacturer Gigabyte is expanding its AI strategy, focusing on making AI computing more widely accessible. Speaking at the Mobile World Congress in Barcelona, Gigabyte outlined its vision of ‘democratising AI’ by delivering infrastructure that ranges from data centre systems to tools that allow individuals to build and run AI models at home.

‘We believe that AI will be good for everyone when it’s more accessible to more people,’ said Jack Chou, brand marketing specialist at Gigabyte Technology.

Founded in 1986, the company initially built its reputation as one of the world’s leading motherboard manufacturers. It has since expanded into full-stack AI infrastructure, telecom networking systems, and specialised AI supercomputers.

According to Chou, the company’s strategy reflects a shift from traditional consumer computing toward broader empowerment through AI. ‘In the past, we provided computing solutions for end users that might be used more for entertainment and gaming, but now we believe we’re empowering more people with AI computing,’ he said.

Gigabyte is also exploring physical AI systems, including robots for tasks such as automated assembly line monitoring and quality control in manufacturing environments. These systems rely on AI models trained in data centres and deployed through embedded industrial computing platforms that allow machines to interact with real-world environments.

As demand for AI infrastructure grows, Gigabyte is prioritising sustainability by investing in energy-saving cooling technologies such as direct liquid and immersion cooling for its data centres.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Sustainable AI discussed by UNESCO and Saudi leaders under Vision 2030

Leaders from government, academia, and industry gathered to emphasise that sustainable AI must deliver efficient, inclusive, and environmentally responsible systems. The discussion focused on embedding sustainability, ethics, and human-centred principles throughout the AI lifecycle by adopting a sustainable-by-design approach.

The workshop was built on Saudi Arabia’s expanding role in AI and digital transformation through the Saudi Data & AI Authority (SDAIA) and the National Strategy for Data and AI (NSDAI). The efforts are supported by significant investments in cloud infrastructure and data centres under the Kingdom’s Vision 2030 programme. Participants highlighted that sustainable AI must become a core principle in the development of emerging digital infrastructure and AI-powered services.

Abdulrahman Habib, Director of the International Centre for Artificial Intelligence Research and Ethics (ICAIRE), highlighted Saudi Arabia’s growing leadership in AI ethics and governance. With national AI Ethics Principles and a maturing regulatory landscape, the Kingdom is positioning itself as a global contributor to responsible AI dialogue, translating principles into operational governance systems rather than just policy statements.

Leona Verdadero of UNESCO highlighted two core concepts: Greening with AI, which uses AI to accelerate sustainability, and Greening of AI, which ensures systems are energy-efficient, ethical, and human-centred. She stressed that effective AI governance requires collaboration and industry leadership at every stage of development.

Per Ola Kristensson from the University of Cambridge urged action beyond rhetoric, stressing that true AI sustainability means developing technology to augment, not replace, human potential. Industry presentations reinforced that sustainable AI drives real-world progress: RECYCLEE optimises resource recovery, Remedium reduces environmental impacts in healthcare and infrastructure, and IDOM strengthens sustainability reporting through AI-enhanced design.

UNESCO supports Saudi Arabia’s drive for inclusive, ethical, and sustainable AI ecosystems, framing sustainable AI as critical in the global transition to green digital transformation.

Faisal Al Azib, Executive Director of the UN Global Compact Network Saudi Arabia, stated: ‘As the Kingdom advances its digital transformation under Vision 2030, we have a responsibility to ensure that innovation advances hand in hand with sustainability and human dignity.’

Al Azib concluded: ‘Sustainable AI is central to building resilient, future-ready businesses. Through partnerships with UNESCO and our local ecosystem, we aim to equip companies with the governance tools to embed responsible, energy-efficient, and human-centred AI into their core strategies.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU explores AI image generation safeguards

The Council of the European Union is examining a compromise proposal that could introduce restrictions on certain AI systems capable of generating sensitive synthetic images.

The discussions form part of ongoing adjustments to the EU AI Act.

The proposed measure would primarily address AI tools that generate illegal material, particularly content involving the exploitation of minors.

Policymakers are considering ways to prevent the development or deployment of systems that could produce such material while maintaining proportionate rules for legitimate AI applications.

Early indications suggest the proposal may not apply to images depicting people in standard clothing contexts, such as swimwear. The distinction reflects policymakers’ effort to define the scope of restrictions without imposing unnecessary limits on common image-generation uses.

The debate highlights broader regulatory challenges linked to generative AI technologies. European institutions are seeking to strengthen protections against harmful uses of AI while preserving space for innovation and lawful digital services.

Further negotiations among the EU institutions are expected as lawmakers continue refining how these provisions could fit within the broader European framework governing AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Moltbook founders join Meta’s AI research lab

Meta Platforms has acquired Moltbook, a social networking platform designed for AI agents. The deal brings co-founders Matt Schlicht and Ben Parr into Meta’s AI research division, the Superintelligence Labs, led by Alexandr Wang.

Financial terms of the acquisition were not disclosed, and the founders are expected to start on 16 March.

Moltbook, launched in January, allows AI-powered bots to exchange code and interact socially in a Reddit-like environment. The platform has sparked debate on AI autonomy and real-world capabilities, highlighting growing competition among tech giants for AI talent and technology.

Industry figures have offered differing views on the platform’s significance. OpenAI CEO Sam Altman called Moltbook a potential fad but acknowledged its underlying technology hints at the future of AI agents.

Meanwhile, Anthropic’s chief product officer, Mike Krieger, noted that most users are not ready to grant AI full autonomy over their systems.

The platform’s growth also highlighted security risks. Cybersecurity firm Wiz reported a vulnerability that exposed private messages, email addresses, and credentials, which was resolved after the owners were notified.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and genetics reveal how language develops in the brain

Recent research shows that language emerges from a dynamic, adaptable system in the brain rather than a single region. AI, high-field MRI, and genetic studies are helping scientists understand how humans acquire and process language.

Large language models can predict speech processing in children as young as two, while MRI shows language dominance exists on a fluid brain continuum. Genetic analyses show hundreds of genes contribute to language, with overlaps between musical rhythm and dyslexia.

High-level language skills, such as grammar, continue to mature between ages two and ten, while phonetic processing stabilises earlier. Combining AI, imaging, and genetics allows researchers to understand individual differences and neurovariability in communication.

The integrated approach could improve early diagnosis and treatment for language disorders, offering insights into how the brain learns, adapts, and uses language across the lifespan.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google adds option to disable AI search in Google Photos

Users of Google Photos will now have greater control over how they search their images, after Google introduced a visible toggle that restores the traditional search experience.

The update follows complaints about the AI-powered Ask Photos feature.

Ask Photos was designed to allow users to search for images using natural language queries rather than simple keywords. The tool aimed to make photo searches more flexible, enabling complex queries such as descriptions of people, events or locations captured in images.

However, some users reported that the AI system produced slower results and occasionally failed to locate images that the classic search had previously found more reliably.

Although an option to turn off the AI feature already existed, it was hidden within settings and often overlooked.

The new update introduces a visible switch directly on the search interface. Users can now easily alternate between the AI-powered search and the traditional search system depending on their preferences.

Google said improvements have also been made to the quality of common searches following user feedback. The company emphasised that search remains one of the most frequently used functions within Google Photos and that ongoing updates will continue to refine the experience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan expands strategic investment in AI, quantum computing, and drones

Japan has identified dozens of advanced technologies as priority investment targets as part of an economic strategy led by Prime Minister Sanae Takaichi.

The plan aims to channel public and private capital into industries expected to drive long-term economic growth.

Government officials selected 61 technologies and products for support across 17 strategic sectors. The list includes emerging fields such as AI, quantum computing, regenerative medicine and marine drones.

Many of these technologies are still in early development, but are considered important for economic security and global competitiveness.

The strategy forms a central pillar of Takaichi’s broader economic agenda to strengthen Japan’s industrial base and encourage investment in high-growth sectors. Authorities plan to release spending estimates and implementation timelines by summer as part of a detailed investment roadmap.

Japan has also set ambitious market goals in several sectors. Officials aim to secure more than 30% of the global AI robotics market by 2040 while increasing annual sales of domestically produced semiconductors to ¥40 trillion.

Several Japanese technology companies could benefit from the policy direction. Firms such as Fanuc, Yaskawa Electric and Mitsubishi Electric are integrating AI into industrial robots, while Sony Group produces sensors used in robotic systems.

Chipmakers, including Rohm, Kioxia and Renesas Electronics, may also benefit from increased investment in semiconductor manufacturing and related supply chains.

Despite strong investor interest, analysts note uncertainty about how the programme will be financed, particularly as Japan faces rising spending pressures from social security, defence and public debt.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dutch court increases pressure on Meta over non-profiling social media feeds

A court in the Netherlands has increased potential penalties against Meta after ruling that changes to social media timelines must be implemented urgently.

The decision raises the potential fine for non-compliance from €5 million to €10 million if required adjustments are not applied to Facebook and Instagram feeds.

Judges at the Amsterdam Court of Appeal said users must be able to select a timeline that does not rely on profiling-based recommendations.

The ruling follows a legal challenge from the digital rights organisation Bits of Freedom, which argued that users who switched away from algorithmic feeds were automatically returned to them after navigating the platform or reopening the application.

The court concluded that the automatic resetting mechanism represents a deceptive design practice known as a ‘dark pattern’.

Such practices are prohibited under the EU’s Digital Services Act, which requires large online platforms to provide greater transparency and user control over recommendation systems.

Judges acknowledged that Meta had already introduced several technical changes, although not all required measures were fully implemented. The company must ensure that the non-profiling timeline option remains active once selected, rather than reverting to algorithmic recommendations.

The dispute also highlights regulatory tensions within the European framework. Before turning to the courts, Bits of Freedom submitted a complaint to Coimisiún na Meán, the national authority responsible for overseeing Meta’s compliance with the EU rules.

According to the organisation, the lack of progress from regulators encouraged legal action in Dutch courts.

Meta indicated that the company intends to challenge the decision and pursue further legal proceedings. The case could become an important test of how the Digital Services Act is enforced against major online platforms across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!