Nigeria sets sights on top 50 AI-ready nations

Nigeria has pledged to become one of the top 50 AI-ready nations, according to presidential adviser Hadiza Usman. Speaking in Abuja at a colloquium on AI policy, she said the country needs strong leadership, investment, and partnerships to meet its goals.

She stressed that policies must address Nigeria’s unique challenges rather than simply replicate foreign models. The government, she added, will create opportunities for collaboration with local institutions and international partners.

The Nigeria Deposit Insurance Corporation reinforced its support, noting that technology should protect depositors without stifling innovation.

Private sector voices said AI could transform healthcare, agriculture, and public services if policies are designed with inclusion and trust in mind.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

3D-printed ion traps could accelerate quantum computer scaling

Quantum computers may soon grow more powerful through 3D printing, with researchers building miniaturised ion traps to improve scalability and performance.

Ion traps, which confine ions and control their quantum states, play a central role in ion-based qubits. Researchers at UC Berkeley created 3D-printed traps just a few hundred microns wide, which captured ions up to ten times more efficiently than conventional versions.

The new traps also reduced waiting times, allowing ions to be ready for use more quickly once the system is activated. Hartmut Häffner, who led the study, said the approach could enable scaling to far larger numbers of qubits while boosting speed.

3D printing offers flexibility not possible with chip-style manufacturing, allowing for more complex shapes and designs. Team members say they are already working on new iterations, with future versions expected to integrate optical components such as miniaturised lasers.

Experts argue that this method could address the challenges of low yield, high costs, and poor reproducibility in current ion-trap manufacturing, paving the way for scalable quantum computing and applications in other fields, including mass spectrometry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ITU warns global Internet access by 2030 could cost nearly USD 2.8 trillion

Universal Internet connectivity by 2030 could cost up to $2.8 trillion, according to a joint blueprint from the International Telecommunication Union (ITU) and Saudi Arabia’s Communications, Space and Technology Commission (CST). It urges global cooperation to connect the one-third of humanity still offline.

The largest share, up to $1.7 trillion, would be allocated to expanding broadband through fibre, wireless, and satellite networks. Nearly $1 trillion is needed for affordability measures, alongside $152 billion for digital skills programmes.

ITU Secretary-General Doreen Bogdan-Martin emphasised that connectivity is essential for access to education, employment, and vital services. She noted the stark divide between high-income countries, where 93% of people are online, and low-income states, where only 27% use the Internet.

The study shows costs have risen fivefold since ITU’s 2020 Connecting Humanity report, reflecting both higher demand and widening divides. Haytham Al-Ohali from Saudi Arabia said the figures underscore the urgency of investment and knowledge sharing to achieve meaningful connectivity.

The report recommends new business models and stronger cooperation between governments, industry, and civil society. Proposed measures include using schools as Internet gateways, boosting Africa’s energy infrastructure, and improving localised data collection to accelerate digital inclusion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI boss Sam Altman fuels debate over dead internet theory

Sam Altman, chief executive of OpenAI, has suggested that the so-called ‘dead internet theory’ may hold some truth. The idea, long dismissed as a conspiracy theory, claims much of the online world is now dominated by computer-generated content rather than real people.

Altman noted on X that he had not previously taken the theory seriously but believed there were now many accounts run by large language models.

His remark drew criticism from users who argued that OpenAI itself had helped create the problem by releasing ChatGPT in 2022, which triggered a surge of automated content.

The spread of AI systems has intensified debate over whether online spaces are increasingly filled with artificially generated voices.

Some observers also linked Altman’s comments to his work on World Network, formerly Worldcoin, a project founded in 2019 to verify human identity online through biometric scans. That initiative has been promoted as a potential safeguard against the growing influence of AI-driven systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Cloud study shows AI agents driving global business growth

A new Google Cloud study indicates that more than half of global enterprises are already using AI agents, with many reporting consistent revenue growth and faster return on investment.

The research, based on a survey of 3,466 executives across 24 countries, suggests agentic AI is moving from trial projects to large-scale deployment.

The findings by Google Cloud reveal that 52% of executives said their organisations actively use AI agents, while 39% reported launching more than ten agents. A group of early adopters, representing 13% of respondents, has gone further by dedicating at least half of their future AI budgets to agentic AI.

These companies are embedding agents across operations and are more likely to report returns in customer service, marketing, cybersecurity and software development.

The report also highlights how industries are tailoring adoption. Financial services focus on fraud detection, retail uses agents for quality control, and telecom operators apply them for network automation.

Regional variations are notable: European companies prioritise tech support, Latin American firms lean on marketing, while Asia-Pacific enterprises emphasise customer service.

Although enthusiasm is strong, challenges remain. Executives cited data privacy, security and integration with existing systems as key concerns.

Google Cloud executives said that early adopters are not only automating tasks but also reshaping business processes, with 2025 expected to mark a shift towards embedding AI intelligence directly into operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fintech CISO says AI is reshaping cybersecurity skills

Financial services firms are adapting rapidly to the rise of AI in cybersecurity, according to David Ramirez, CISO at Broadridge. He said AI is changing the balance between attackers and defenders while also reshaping the skills security teams require.

On the defensive side, AI is already streamlining governance, risk management and compliance tasks, while also speeding up incident detection and training. He highlighted its growing role in areas like access management and data loss prevention.

He also stressed the importance of aligning cyber strategy with business goals and improving board-level visibility. While AI tools are advancing quickly, he urged CISOs not to lose sight of risk assessments and fundamentals in building resilient systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI set to spend $10bn on Broadcom AI chips

OpenAI has reportedly placed a $10bn order with Broadcom to mass-produce custom AI chips, due for shipment in 2026. Sources told the Financial Times that the move would help reduce OpenAI’s dependence on Nvidia, its primary supplier.

Sam Altman recently said OpenAI will use ‘well over 1m GPUs’ by the end of 2025, highlighting the company’s accelerating demand for computing power. In contrast, Elon Musk’s xAI is expected to double its Nvidia Hopper GPUs to around 200,000.

Broadcom confirmed a large custom chip order during its latest earnings call, without naming the buyer. The company’s AI revenue rose 63% to $5.2bn, chip sales grew 57% to $9.1bn, and shares gained nearly 5%.

The new order is expected to be for internal use rather than external customers. Industry observers suggest that OpenAI’s decision signals a strategic shift, allowing the ChatGPT maker to secure supply for its AI expansion while diversifying beyond Nvidia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hollywood’s Warner Bros. Discovery challenges an AI firm over copyright claims

Warner Bros. Discovery has filed a lawsuit against AI company Midjourney, accusing it of large-scale infringement of its intellectual property. The move follows similar actions by Disney and Universal, signalling growing pressure from major studios on AI image and video generators.

The filing includes examples of Midjourney-produced images featuring DC Comics, Looney Tunes and Rick and Morty characters. Warner Bros. Discovery argues that such output undermines its business model, which relies heavily on licensed images and merchandise.

The studio also claims Midjourney profits from copyright-protected works through its subscription services and the ‘Midjourney TV’ platform.

A central question in the case is whether AI-generated material that reproduces copyrighted characters constitutes infringement under US law. Courts have yet to rule on the issue, making the outcome uncertain.

Warner Bros. Discovery is also challenging how Midjourney trains its models, pointing to past statements from company executives suggesting vast quantities of material were indiscriminately collected to build its systems.

With three major Hollywood studios now pursuing lawsuits, the outcome of these cases could establish a precedent for how courts treat AI-generated content.

Warner Bros. Discovery seeks damages that could reach $150,000 per infringed work, or Midjourney’s profits linked to the alleged violations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EASA survey reveals cautious optimism over aviation AI ethics

The European Union Aviation Safety Agency (EASA) has published survey results probing the ethical outlook of aviation professionals on AI deployment, released during its AI Days event in Cologne.

The AI Days conference gathered nearly 200 on-site attendees from across the globe, with even more participating online.

The survey measured acceptance, trust and comfort across eight hypothetical AI use cases, yielding an average acceptance score of 4.4 out of 7. Despite growing interest, two-thirds of respondents rejected at least one scenario.

Their key concerns included limitations of AI performance, privacy and data protection, accountability, safety risks and the potential for workforce de-skilling. A clear majority called for stronger regulation and oversight by EASA and national authorities.

In a keynote address, Christine Berg of the European Commission highlighted that AI in aviation is already delivering practical benefits, from optimising air traffic flow to predictive maintenance, while emphasising the need for explainable, reliable and certifiable systems under the EU AI Act.

Survey findings will feed into EASA’s AI Roadmap and prompt public consultations as the agency advances policy and regulatory frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Wikipedia publishes guide to spot AI-generated entries

Wikipedia editors have published a guide titled ‘Signs of AI Writing’ to support readers and contributors in detecting AI-generated content across the encyclopedia.

The field guide distils key linguistic and formatting traits commonly found in AI output, such as overblown symbolism, promotional tone, repetitive transitions, rule-of-three phrasing and editorial commentary that breaks Wikipedia’s standards.

The initiative stems from the community’s ongoing effort to contain AI-generated content, which has grown enough to warrant a dedicated project, WikiProject AI Cleanup.

Volunteers have adopted measures such as speedy deletion to remove suspicious entries quickly and have tagged over 500 articles for review.

While the guide aims to strengthen detection, editors caution that it should not be treated as a shortcut but should complement human judgement, oversight, and trusted community processes. Such layered scrutiny helps preserve Wikipedia’s reputation for reliability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!