Google and Cassava expand Gemini access in Africa

Google announced a partnership with Cassava Technologies to widen access to Gemini across Africa. The deal includes data-free Gemini usage for eligible users coordinated through Cassava’s network partners. The initiative aims to address affordability and adoption barriers for mobile users.

A six-month trial of the Google AI Plus plan is part of the package. Benefits include access to more capable Gemini models and added cloud storage. Regional tech outlets have confirmed the core details of the offer.

Education features were highlighted, including NotebookLM for study aids and Gemini in Docs for writing support. Google said the offer aims to help students, teachers, and creators work without worrying about data usage. Reports highlight a focus on youth and skills development.

Cassava’s role aligns with broader investments in AI infrastructure and services across the continent; recent announcements reference model exchanges and planned AI facilities that support regional development. Observers see momentum behind accessible AI tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5 outperformed by a Chinese startup model

A Chinese company has stunned the AI world after its new open-source model outperformed OpenAI’s GPT-5 and Anthropic’s Claude Sonnet 4.5 in key benchmarks.

Moonshot AI’s Kimi K2 Thinking model achieved the best reasoning and coding scores yet, shaking confidence in American dominance over advanced AI systems.

The Beijing-based startup, backed by Alibaba and Tencent, released Kimi K2 Thinking on 6 November. It scored 44.9 percent in Humanity’s Last Exam and 60.2 percent in BrowseComp, both surpassing leading US models.

Analysts dubbed it another ‘DeepSeek moment’, echoing China’s earlier success in breaking AI cost barriers.

Moonshot AI trained the trillion-parameter system for just US$4.6 million (roughly a tenth of GPT-5’s reported training cost), using a Mixture-of-Experts structure and advanced quantisation for faster generation.
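
The efficiency claim comes down to sparse activation: in a Mixture-of-Experts layer, a router sends each token to only a few expert sub-networks, so most of a trillion-parameter model’s weights stay idle on any given step. The toy PyTorch sketch below illustrates only that routing idea; it is not Kimi K2’s actual architecture, and every size, name, and value in it is invented for illustration.

```python
# Toy sketch of Mixture-of-Experts (MoE) routing. Illustrative only: this is
# not Moonshot AI's Kimi K2 implementation, and all sizes here are made up.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)   # scores every expert per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                                 # x: (n_tokens, d_model)
        gate = F.softmax(self.router(x), dim=-1)          # routing probabilities
        weights, chosen = gate.topk(self.top_k, dim=-1)   # keep only the top-k experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e               # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out  # only top_k of n_experts experts ran for each token

layer = ToyMoELayer()
tokens = torch.randn(16, 64)                              # 16 dummy token embeddings
print(layer(tokens).shape)                                # torch.Size([16, 64])
```

The same logic is why sparse models can carry very large total parameter counts while keeping per-token compute, and therefore training and serving cost, comparatively low; quantisation then shrinks the memory footprint of the weights that are actually loaded.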

The fully open-weight model, released under a Modified MIT License, adds commercial flexibility and intensifies competition with US labs.

Industry observers called it a turning point. Hugging Face’s Thomas Wolf said the achievement shows how open-source models can now rival closed systems.

Researchers from the Allen Institute for AI noted that Chinese innovation is narrowing the gap faster than expected, driven by efficiency and high-quality training data rather than raw computing power.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MK1 joins AMD to accelerate enterprise AI and reasoning technologies

AMD has completed the acquisition of MK1, a California-based company specialising in high-speed inference and reasoning-based AI technologies.

The move marks a significant step in AMD’s strategy to strengthen AI performance and efficiency across hardware and software layers. MK1’s Flywheel and comprehension engines are designed to make the most of AMD’s Instinct GPUs, offering scalable, accurate, and cost-efficient AI reasoning.

The MK1 team will join the AMD Artificial Intelligence Group, where their expertise will advance AMD’s enterprise AI software stack and inference capabilities.

Handling over one trillion tokens daily, MK1’s systems are already deployed at scale, providing traceable and efficient AI solutions for complex business processes.

By combining MK1’s advanced AI software innovation with AMD’s compute power, the acquisition enhances AMD’s position in the enterprise and generative AI markets, supporting its goal of delivering accessible, high-performance AI solutions globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Joint quantum partnership unites Canada and Denmark for global research leadership

Canada and Denmark have signed a joint statement to deepen collaboration in quantum research and innovation.

The agreement, announced at the European Quantum Technologies Conference 2025 in Copenhagen, reflects both countries’ commitment to advancing quantum science responsibly while promoting shared values of openness, ethics and excellence.

Under the partnership, the two nations will enhance research and development ties, encourage open data sharing, and cultivate a skilled talent pipeline. They also aim to boost global competitiveness in quantum technologies, fostering new opportunities for market expansion and secure supply chains.

Canadian Minister Mélanie Joly highlighted that the cooperation showcases a shared ambition to accelerate progress in health care, clean energy and defence.

Denmark’s Minister for Higher Education and Science, Christina Egelund, described Canada as a vital partner in scientific innovation, while Canada’s Minister Evan Solomon stressed the agreement’s role in empowering researchers to deliver breakthroughs that shape the future of quantum technologies.

Both Canada and Denmark are recognised as global leaders in quantum science, working together through initiatives such as the NATO Transatlantic Quantum Community.

The partnership supports Canada’s National Quantum Strategy, launched in 2023, and reinforces the two countries’ shared goal of driving innovation for sustainable growth and collective security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Denmark’s new chat control plan raises fresh privacy concerns

Denmark has proposed an updated version of the EU’s controversial ‘chat control’ regulation, shifting from mandatory to voluntary scanning of private messages. Former MEP Patrick Breyer has warned, however, that the revision still threatens Europeans’ right to private communication.

Under the new plan, messaging providers could choose to scan chats for illegal material, but without a clear requirement for court orders. Breyer argued that this sidesteps the European Parliament’s position, which insists on judicial authorisation before any access to communications.

He also criticised the proposal for banning under-16s from using messaging apps like WhatsApp and Telegram, claiming such restrictions would prove ineffective and easily bypassed. In addition, the plan would effectively outlaw anonymous communication, requiring users to verify their identities through IDs.

Privacy advocates say the Danish proposal could set a dangerous precedent by eroding fundamental digital rights. Civil society groups have urged EU lawmakers to reject measures that compromise secure, anonymous communication essential for journalists and whistleblowers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta invests $600 billion to expand AI data centres across the US

Meta is launching a $600 billion investment in the US to expand its AI infrastructure, aiming to boost innovation, job creation, and sustainability.

Instead of outsourcing development, the company is building its new generation of AI data centres domestically, reinforcing America’s leadership in technology and supporting local economies.

Since 2010, Meta’s data centre projects have supported more than 30,000 skilled trade jobs and 5,000 operational roles, generating $20 billion in business for US subcontractors. These facilities are designed to power Meta’s AI ambitions while driving regional economic growth.

The company emphasises responsible development by investing heavily in renewable energy and water efficiency. Its projects have added 15 gigawatts of new energy to US power grids, upgraded local infrastructure, and helped restore water systems in surrounding communities.

Meta aims to become fully water positive by 2030.

Beyond infrastructure, Meta has channelled $58 million into community grants for schools, nonprofits, and local initiatives, including STEM education and veteran training programmes.

As AI grows increasingly central to digital progress, Meta’s continued investment in sustainable, community-focused data centres underscores its vision for a connected, intelligent future built within the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Appfigures revises iOS estimates as Sora’s Android launch leaps ahead

Sora’s Android launch outpaced its iOS debut, garnering an estimated 470,000 first-day installs across seven markets, according to Appfigures. Broader regional availability, plus the end of invite-only access in top markets, boosted uptake.

OpenAI’s iOS rollout was limited to the US and Canada via invitations, which capped early growth despite strong momentum. The iOS app nevertheless surpassed one million installs in its first week and still ranks highly in the US App Store’s Top Free chart.

Revised Appfigures modelling puts day-one iOS installs at ~110,000 (up from 56,000), with ~69,300 from the US. On Android, availability spans the US, Canada, Japan, South Korea, Taiwan, Thailand, and Vietnam. First-day Android installs in the US reached ~296,000, showing sustained demand beyond the iOS launch.

Sora allows users to generate videos from text prompts and animate themselves or friends via ‘Cameos’, sharing the results in a TikTok-style vertical feed. Engagement features for creation and discovery are driving word of mouth and repeat use across both platforms.

Competition in mobile AI video and assistants is intensifying, with Meta AI expanding its app in Europe on the same day. Market share will hinge on geographic reach, feature velocity, creator tools, and distribution via app store charts and social feeds.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ByteDance cuts use of Claude after Anthropic blocks China access

An escalating tech clash has emerged between ByteDance and Anthropic over AI access and service restrictions. ByteDance has halted use of Anthropic’s Claude model on its infrastructure after the US firm imposed access limitations for Chinese users.

The suspension follows Anthropic’s move to restrict China-linked deployments and aligns with broader geopolitical tensions in the AI sector. ByteDance reportedly said it would now rely on domestic alternatives, signalling a strategic pivot away from western-based AI models.

Industry watchers view the dispute as a marker of how major tech firms are navigating export controls, national security concerns and sovereignty in AI. Observers warn the rift may prompt accelerated investment in home-grown AI ecosystems by Chinese companies.

While neither company has detailed all operational impacts, the episode highlights AI’s fraught position at the intersection of technology and geopolitics. US market reaction may hinge on whether other firms follow suit or partnerships are redefined around regional access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian government highlights geopolitical risks to critical infrastructure

According to the federal government’s latest Critical Infrastructure Annual Risk Review, Australia’s critical infrastructure is increasingly vulnerable due to global geopolitical uncertainty, supply chain vulnerabilities, and advancements in technology.

The report, released by the Department of Home Affairs, states that geopolitical tensions and instability are affecting all sectors essential to national functioning, such as energy, healthcare, banking, aviation and the digital systems supporting them.

It notes that operational environments are becoming increasingly uncertain both domestically and internationally, requiring new approaches to risk management.

The review highlights a combination of pressures, including cyber threats, supply chain disruptions, climate-related risks and the potential for physical sabotage. It also points to challenges linked to “malicious insiders”, geostrategic shifts and declining public trust in institutions.

According to the report, Australia’s involvement in international policy discussions has, at times, exposed it to possible retaliation from foreign actors through activities ranging from grey zone operations to preparations for state-sponsored sabotage.

It further notes that the effects of overseas conflicts have influenced domestic sentiment and social cohesion, contributing to risks such as ideologically driven vandalism, politically motivated violence and lone-actor extremism.

To address these challenges, the government emphasises the need for adaptable risk management strategies that reflect shifting dependencies, short- and long-term supply chain issues and ongoing geopolitical tensions.

The report divides priority risks into two categories: those considered most plausible and those deemed most harmful. Among the most plausible are extreme-impact cyber incidents and geopolitically driven supply chain disruption.

The most damaging risks include disrupted fuel supplies, major cyber incidents and state-sponsored sabotage. The review notes that because critical sectors are increasingly interdependent, disruption in one area could have cascading impacts on others.

Australia currently imports 61 percent of its fuel from the Middle East, with shipments transiting maritime routes that are vulnerable to regional tensions. Many global shipping routes also pass through the Taiwan Strait, where conflict would significantly affect supply chains.

Home Affairs Minister Tony Burke said the review aims to increase understanding of the risks facing Australia’s essential services and inform efforts to enhance resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The rise of large language models and the question of ownership

The divide defining AI’s future through large language models

What are large language models? Large language models (LLMs) are advanced AI systems that can understand and generate various types of content, including human-like text, images, video, audio, and more.

The development of these large language models has reshaped AI from a specialised field into a social, economic, and political phenomenon. Systems such as GPT, Claude, Gemini, and Llama have become fundamental infrastructures for information processing, creative work, and automation.

Their rapid rise has generated an intense debate about who should control the most powerful linguistic tools ever built.

The distinction between open source and closed source models has become one of the defining divides in contemporary technology that will, undoubtedly, shape our societies.

Open source models such as Meta’s Llama 3, Mistral, and Falcon offer public access to their code or weights, allowing developers to experiment, improve, and deploy them freely.
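
To make ‘public access to weights’ concrete, the sketch below loads an openly published model with the Hugging Face transformers library and generates text locally. It is a minimal illustration rather than a recommendation: the repository name is just an example, and some open models (Llama 3 among them) require accepting a licence on the Hugging Face Hub before the weights can be downloaded.

```python
# Minimal sketch: running an open-weight model locally with Hugging Face
# transformers. The repository below is an example; swap in any open model
# whose licence you have accepted. device_map="auto" needs the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"   # example open-weight repository

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain the difference between open-weight and closed AI models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights sit on the user’s own hardware, they can be fine-tuned, audited, or run offline without the provider’s involvement.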

Closed source models, exemplified by OpenAI’s GPT series, Anthropic’s Claude, or Google’s Gemini, restrict access, keeping architectures and data proprietary.

Such a tension is not merely technical. It embodies two competing visions of knowledge production. One is oriented toward collective benefit and transparency, and the other toward commercial exclusivity and security of intellectual property.

The core question is whether language models should be treated as a global public good or as privately owned technologies governed by corporate rights. The answer to such a question carries implications for innovation, fairness, safety, and even democratic governance.

Innovation and market power in the AI economy

From an economic perspective, open and closed source models represent opposing approaches to innovation. Open models accelerate experimentation and lower entry barriers for small companies, researchers, and governments that lack access to massive computing resources.

They enable localised applications in diverse languages, sectors, and cultural contexts. Their openness supports decentralised innovation ecosystems similar to what Linux did for operating systems.

Closed models, however, maintain higher levels of quality control and often outperform open ones due to the scale of data and computing power behind them. Companies like OpenAI and Google argue that their proprietary control ensures security, prevents misuse, and finances further research.

The closed model thus creates a self-reinforcing cycle. Access to large datasets and computing leads to better models, which attract more revenue, which in turn funds even larger models.

The outcome has been the consolidation of AI power within a handful of corporations. Microsoft, Google, OpenAI, Meta, and a few start-ups have become the new gatekeepers of linguistic intelligence.

Such concentration raises concerns about market dominance, competitive exclusion, and digital dependency. Smaller economies and independent developers risk being relegated to consumers of foreign-made AI products, instead of being active participants in the creation of digital knowledge.

As such, open source LLMs represent a counterweight to Big Tech’s dominance. They allow local innovation and reduce dependency, especially for countries seeking technological sovereignty.

Yet open access also brings new risks, as the same tools that enable democratisation can be exploited for disinformation, deepfakes, or cybercrime.

Ethical and social aspects of openness

The ethical question surrounding LLMs is not limited to who can use them, but also to how they are trained. Closed models often rely on opaque datasets scraped from the internet, including copyrighted material and personal information.

Without transparency, it is impossible to assess whether training data respects privacy, consent, or intellectual property rights. Open source models, by contrast, offer partial visibility into their architecture and data curation processes, enabling community oversight and ethical scrutiny.

However, we have to keep in mind that openness does not automatically ensure fairness. Many open models still depend on large-scale web data that reproduce existing biases, stereotypes, and inequalities.

Open access also increases the risk of malicious content, such as generating hate speech, misinformation, or automated propaganda. The balance between openness and safety has therefore become one of the most delicate ethical frontiers in AI governance.

Socially, open LLMs can empower education, research, and digital participation. They allow low-resource languages to be modelled, minority groups to build culturally aligned systems, and academic researchers to experiment without licensing restrictions.

They represent a vision of AI as a collaborative human project rather than a proprietary service.

Yet they also redistribute responsibility: when anyone can deploy a powerful model, accountability becomes diffuse. The challenge lies in preserving the benefits of openness while establishing shared norms for responsible use.

The legal and intellectual property dilemma

Intellectual property law was not designed for systems that learn from millions of copyrighted works without direct authorisation.

Closed source developers defend their models as transformative works under fair use doctrines, while content creators demand compensation or licensing mechanisms.

The dispute has already reached courts, as artists, authors, and media organisations sue AI companies for unauthorised use of their material.

Open source further complicates the picture. When model weights are released freely, the question arises of who holds responsibility for derivative works and whether open access violates existing copyrights.

Some open licences now include clauses prohibiting harmful or unlawful use, blurring the line between openness and control. Legal scholars argue that a new framework is needed to govern machine learning datasets and outputs, one that recognises both the collective nature of data and the individual rights embedded in it.

At stake is not only financial compensation but the broader question of data ownership in the digital age. We need to ask ourselves: if data is the raw material of intelligence, should it remain the property of a few corporations or be treated as a shared global resource?

Economic equity and access to computational power

Even the most open model requires massive computational infrastructure to train and run effectively. Access to GPUs, cloud resources, and data pipelines remains concentrated among the same corporations that dominate the closed model ecosystem.

Thus, openness in code does not necessarily translate into openness in practice.

Developing nations, universities, and public institutions often lack the financial and technical means to exploit open models at scale. Such an asymmetry creates a form of digital neo-dependency: the code is public, but the hardware is private.

For AI to function as a genuine global public good, investments in open computing infrastructure, public datasets, and shared research facilities are essential. Initiatives such as the EU’s AI-on-demand platform or the UN’s efforts for inclusive digital development reflect attempts to build such foundations.

The economic stakes extend beyond access to infrastructure. LLMs are becoming the backbone of new productivity tools, from customer service bots to automated research assistants.

Whoever controls them will shape the future division of digital labour. Open models could allow local companies to retain more economic value and cultural autonomy, while closed models risk deepening global inequalities.

Governance, regulation, and the search for balance

Governments face the difficult task of regulating a technology that evolves faster than policy. For example, the EU AI Act, US executive orders on trustworthy AI, and China’s generative AI regulations all address questions of transparency, accountability, and safety.

Yet few explicitly differentiate between open and closed models.

The open source community resists excessive regulation, arguing that heavy compliance requirements could suffocate innovation and concentrate power even further in large corporations that can afford legal compliance.

On the other hand, policymakers worry that uncontrolled distribution of powerful models could facilitate malicious use. The emerging consensus suggests that regulation should focus not on the source model itself but on the context of its deployment and the potential harms it may cause.

An additional governance question concerns international cooperation. AI’s global nature demands coordination on safety standards, data sharing, and intellectual property reform.

The absence of such alignment risks a fragmented world where closed models dominate wealthy regions while open ones, potentially less safe, spread elsewhere. Finding equilibrium requires mutual trust and shared principles for responsible innovation.

The cultural and cognitive dimension of openness

Beyond technical and legal debates, the divide between open and closed models reflects competing cultural values. Open source embodies the ideals of transparency, collaboration, and communal ownership of knowledge.

Closed source represents discipline, control, and the pursuit of profit-driven excellence. Both cultures have contributed to technological progress, and both have drawbacks.

From a cognitive perspective, open LLMs can enhance human learning by enabling broader experimentation, while closed ones can limit exploration to predefined interfaces. Yet too much openness may also encourage cognitive offloading, where users rely on AI systems without developing independent judgment.

Therefore, societies must cultivate digital literacy alongside technical accessibility, ensuring that AI supports human reasoning rather than replaces it.

The way societies integrate LLMs will influence how people perceive knowledge, authority, and creativity. When language itself becomes a product of machines, questions about authenticity, originality, and intellectual labour take on new meaning.

Whether open or closed, models shape collective understanding of truth, expression, and imagination for our societies.

Toward a hybrid future

The polarisation we are presenting here, between open and closed approaches, may be unsustainable in the long run. A hybrid model is emerging, where partially open architectures coexist with protected components.

Companies like Meta release open weights but restrict commercial use, while others provide APIs for experimentation without revealing the underlying code. Such hybrid frameworks aim to combine accountability with safety and commercial viability with transparency.

The future equilibrium is likely to depend on international collaboration and new institutional models. Public–private partnerships, cooperative licensing, and global research consortia could ensure that LLM development serves both the public interest and corporate sustainability.

A system of layered access (where different levels of openness correspond to specific responsibilities) may become the standard.

Ultimately, the choice between open and closed models reflects humanity’s broader negotiation between collective welfare and private gain.

Just as the internet or many other emerging technologies evolved through the tension between openness and commercialisation, the future of language models will be defined by how societies manage the boundary between shared knowledge and proprietary intelligence.

So, in conclusion, the debate between open and closed source LLMs is not merely technical.

As we have already mentioned, it embodies the broader conflict between public good and private control, between the democratisation of intelligence and the concentration of digital power.

Open models promote transparency, innovation, and inclusivity, but pose challenges in terms of safety, legality, and accountability. Closed models offer stability, quality, and economic incentive, yet risk monopolising a resource that is crucial to continued human progress.

Finding equilibrium requires rethinking the governance of knowledge itself. Language models should neither be owned solely by corporations nor be released without responsibility. They should be governed as shared infrastructures of thought, supported by transparent institutions and equitable access to computing power.

Only through such a balance can AI evolve as a force that strengthens, rather than divides, our societies and improves our daily lives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!