Africa launches world’s largest tokenised economy worth $5.5 billion

Global Settlement Network (GSN) and Diacente Group have partnered to establish Africa’s most advanced tokenised economy, valued at $5.5 billion in real-world infrastructure. The collaboration digitises assets across food production, minerals, renewable energy, and trade.

The initiative aims to create an inclusive, efficient economic system, leveraging blockchain to enhance emerging markets’ global participation.

Uganda leads with its first Central Bank Digital Currency (CBDC) pilot, deployed on GSN’s permissioned blockchain and backed by treasury bonds. Agro-processing hubs, mining operations, and solar plants underpin the tokenisation effort.

Fully compliant with KYC and AML regulations, the digital shilling enables over 40 million users to transact securely via smartphones and USSD, fostering financial inclusion across East Africa.

The project supports Uganda’s Vision 2040 and the African Union’s Agenda 2063, aligning with the goals of the African Continental Free Trade Area. Leaders project one million jobs and $10 billion in annual exports.

Ryan Kirkley, GSN co-founder, calls it a ‘programmable economy grounded in real assets,’ while Diacente’s Edgar Agaba emphasises attracting investment and empowering local industries through transparent, tech-driven systems.

The partnership sets a precedent for emerging markets, reducing reliance on intermediaries and unlocking global capital. Tokenisation integrated with national development drives sustainable growth, offering a scalable model for digital economies based on real infrastructure and regulatory collaboration.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Machine learning helps prevent disruptions in fusion devices

Researchers at MIT have developed a predictive model that could make fusion power plants more reliable and safe. The approach uses machine learning and physics-based simulations to predict plasma instabilities and prevent damage during tokamak shutdowns.

Experimental tokamaks use strong magnets to contain plasma hotter than the sun’s core. They often face challenges in safely ramping down plasma currents that circulate at extreme speeds and temperatures.

The model was trained and tested on data from the Swiss TCV tokamak. By combining neural networks with physics simulations, the team achieved accurate predictions using only a few plasma pulses, reducing costs and compensating for limited experimental data.

The system can now generate practical ‘trajectories’ for controllers to adjust magnets and temperatures, helping to safely manage plasma during shutdowns.

Researchers say the method could be particularly important as fusion devices scale up to grid-level energy production. High-energy plasmas in larger reactors pose greater risks, and uncontrolled terminations could damage the machine.

The new model allows operators to carefully balance rampdowns, avoiding disruptions and ensuring safer, more efficient operation.

Work on the predictive model is part of a wider collaboration with Commonwealth Fusion Systems and is supported by the EUROfusion Consortium and Swiss research institutions. Scientists see it as a crucial step toward making fusion a practical, reliable, and sustainable energy source.


MIT AI reveals how antibiotic targets Crohn’s bacteria

MIT and McMaster researchers used AI to map how a narrow-spectrum antibiotic attacks harmful gut bacteria. Enterololin targets E. coli linked to Crohn’s flares while preserving most of the microbiome, providing a precise alternative to broad-spectrum antibiotics.

AI accelerated the process of identifying the drug’s mechanism of action, reducing a task that usually takes years to just months.

The team used DiffDock, a generative AI tool developed at MIT, to predict how enterololin binds to a protein complex called LolCDE in E. coli. Laboratory experiments, including mutant evolution, RNA sequencing, and CRISPR knockdowns, confirmed the AI predictions.

The method demonstrates how AI can provide mechanistic insights, guide experiments, and speed up early-stage antibiotic development.

Enterololin improved recovery and preserved the microbiome in mouse models compared with conventional treatments. Researchers aim to develop derivatives against resistant pathogens like Klebsiella pneumoniae, with early work underway at spinout company Stoked Bio.

The study highlights broader implications for precision antibiotics, which could treat infections without disrupting beneficial microbes. AI-driven mechanism mapping could speed up drug discovery, cut costs, and help tackle antimicrobial resistance.


Google expands Gemini-powered AI Search across the globe

Google has expanded its AI Mode in Search, adding support for more than 35 new languages and reaching 40 more countries and territories. The rollout extends access across Europe and other regions, bringing AI Mode to over 200 countries and territories worldwide.

The update aims to make AI-powered Search more accessible globally, enabling users to ask questions and access information in their native language.

AI Mode is powered by Google’s latest Gemini models, which deliver advanced reasoning and multimodal understanding. These capabilities help the system grasp the subtleties of local languages and provide relevant, context-aware answers, making AI Mode genuinely useful across diverse regions.

According to Google, people using AI Mode tend to explore topics in far greater depth, with queries nearly three times longer than traditional searches. The enhanced experience will continue to roll out globally over the coming week.


The global struggle to regulate children’s social media use

Finding equilibrium in children’s use of social media

Social media has become a defining part of modern childhood. Platforms like Instagram, TikTok, Snapchat and YouTube offer connection, entertainment and information at an unprecedented scale.

Yet concerns have grown about their impact on children’s mental health, education, privacy and safety. Governments, parents and civil society increasingly debate whether children should access these spaces freely, with restrictions, or not at all.

The discussion is no longer abstract. Across the world, policymakers are moving beyond voluntary codes to legal requirements, some proposing age thresholds or even outright bans for minors.

Supporters argue that children face psychological harm and exploitation online, while critics caution that heavy restrictions can undermine rights, fail to solve root problems and create new risks.

The global conversation is now at a turning point, where choices about social media regulation will shape the next generation’s digital environment.

Why social media is both a lifeline and a threat for youth

The influence of social media on children is double-edged. On the one side, these platforms enable creativity, allow marginalised voices to be heard, and provide educational content. During the pandemic, digital networks offered a lifeline of social interaction when schools were closed.


Children and teens can build communities around shared interests, learn new skills, and sometimes even gain economic opportunities through digital platforms.

On the other side, research has linked heavy use of social media with increased rates of anxiety, depression, disrupted sleep and body image issues among young users. Recommendation algorithms often push sensational or harmful content, reinforcing vulnerabilities rather than mitigating them.

Cyberbullying, exposure to adult material, and risks of predatory contact are persistent challenges. Instead of strengthening resilience, platforms often prioritise engagement metrics that exploit children’s attention and emotional responses.

The scale of the issue is enormous. Vast numbers of children around the world own smartphones before the age of 12. With digital life inseparable from daily routines, even well-meaning parents struggle to set boundaries.

Governments face pressure to intervene, but approaches vary widely, reflecting different cultural norms, levels of trust in technology firms, and political attitudes toward child protection.

The Australian approach

Australia is at the forefront of regulation. In recent years, the country has passed strong online safety laws, led by its eSafety Commissioner. These rules include mandatory age verification for certain online services and obligations for platforms to design products with child safety in mind.

Most notably, Australia has signalled its willingness to explore outright bans on general social media access for children under 16. The government has pointed to mounting evidence of harm, from cyberbullying to mental health concerns, and has emphasised the need for early intervention.


Instead of leaving responsibility entirely to parents, the state argues that platforms themselves must redesign the way they serve children.

Critics highlight several problems. Age verification requires identity checks, which can endanger privacy and create surveillance risks. Bans may also drive children to use less-regulated spaces or fake their ages, undermining the intended protections.

Others argue that focusing only on prohibition overlooks the need for broader digital literacy education. Yet Australia’s regulatory leadership has sparked a wider debate, prompting other countries to reconsider their own approaches.

Greece’s strong position

Last week, Greece reignited the global debate with its own strong position on restricting youth access to social media.

Speaking at the United Nations General Assembly during an event hosted by Australia on digital child safety, PM Kyriakos Mitsotakis said his government was prepared to consider banning social media for children under 16.


Mitsotakis warned that societies are conducting the ‘largest uncontrolled experiment on children’s minds’ by allowing unrestricted access to social media platforms. He cautioned that while the long-term effects of the experiment remain uncertain, they are unlikely to be positive.

Additionally, the prime minister pointed to domestic initiatives already underway, such as the ban on mobile phones in schools, which he claimed has already transformed the educational experience.

Mitsotakis acknowledged the difficulties of enforcing such regulations but insisted that complexity cannot be an excuse for inaction.

Around the world, similar conversations are gaining traction. Let’s review some of them.

National initiatives across the globe

UK

The UK introduced its Online Safety Act in 2023, one of the most comprehensive frameworks for regulating online platforms. Under the law, companies must assess risks to children and demonstrate how they mitigate harms.

Age assurance is required for certain services, including those hosting pornography or content promoting suicide or self-harm. While not an outright ban, the framework places a heavy responsibility on platforms to restrict harmful material and tailor their products to younger users.

EU

The EU has not introduced a specific social media ban, but its Digital Services Act requires major platforms to conduct systemic risk assessments, including risks to minors.

However, the European Commission has signalled that it may support stricter measures on youth access to social media, keeping the option of a bloc-wide ban under review.

Commission President Ursula von der Leyen has recently endorsed the idea of a ‘digital majority age’ and pledged to gather experts by year’s end to consider possible actions.

The Commission has pointed to the Digital Services Act as a strong baseline but argued that evolving risks demand continued vigilance.

Companies must show regulators how algorithms affect young people and must offer transparency about their moderation practices.

In parallel, several EU states are piloting age verification measures for access to certain platforms. France, for example, has debated requiring parental consent for children under 15 to use social media.

USA

The USA lacks a single nationwide law, but several states are acting independently, although state-level restrictions face First Amendment challenges that have reached the Supreme Court.

Florida, Texas, Utah, and Arkansas have passed laws requiring parental consent for minors to access social media, while others are considering restrictions.

The federal government has debated child online safety legislation, although political divides have slowed progress. Instead of a ban, American initiatives often blend parental rights, consumer protection, and platform accountability.

Canada

The Canadian government has introduced Bill C-63, the Online Harms Act, aiming to strengthen online child protection and limit the spread of harmful content.

Justice Minister Arif Virani said the legislation would ensure platforms take greater responsibility for reducing risks and preventing the amplification of content that incites hate, violence, or self-harm.

The framework would apply to platforms, including livestreaming and adult content services.


They would be obliged to remove material that sexually exploits children or shares intimate content without consent, while also adopting safety measures to limit exposure to harmful content such as bullying, terrorism, and extremist propaganda.

However, the legislation does not impose a complete social media ban for minors.

China

China’s cyberspace regulator has proposed restrictions on children’s smartphone use. The draft rules limit use to a maximum of two hours daily for those under 18, with stricter limits for younger age groups.

The Cyberspace Administration of China (CAC) said devices should include ‘minor mode’ programmes, blocking internet access for children between 10 p.m. and 6 a.m.

Teenagers aged 16 to 18 would be allowed two hours a day, those between eight and 16 just one hour, and those under eight no more than 40 minutes.

Parents, however, could opt out of the restrictions if they wish.

India

In January, India proposed new rules to tighten controls on children’s access to social media, sparking a debate over parental empowerment and privacy risks.

The draft rules would require parental consent before minors create accounts on social media, e-commerce, or gaming platforms.

Verification would rely on identity documents or age data already held by providers.

Supporters argue the measures will give parents greater oversight and protect children from risks such as cyberbullying, harmful content, and online exploitation.

Singapore

PM Lawrence Wong has warned of the risks of excessive screen time while stressing that children must also be empowered to use technology responsibly. The ultimate goal is the right balance between safety and digital literacy.

In addition, researchers suggest schools should not ban devices out of fear but teach children how to manage them, likening digital literacy to learning how to swim safely. Such a strategy highlights that no single solution fits all societies.

Balancing rights and risks

Bans and restrictions raise fundamental rights issues. Children have the right to access information, to express themselves, and to participate in culture and society.

Overly strict bans can exclude them from opportunities that their peers elsewhere enjoy. Critics argue that bans may create inequalities between children whose families find workarounds and those who comply.


At the same time, the rights to health, safety and privacy must also be protected. The difficulty lies in striking a balance. Advocates of stronger regulation argue that platforms have failed to self-regulate effectively, and that states must step in.

Opponents argue that bans may create unintended harms and encourage authoritarian tendencies, with governments using child safety as a pretext for broader control of online spaces.

Instead of choosing one path, some propose hybrid approaches: stronger rules for design and data collection, combined with investment in education and digital resilience. Such approaches aim to prepare children to navigate online risks while making platforms less exploitative.

The future of social media and child protection

Looking forward, the global landscape is unlikely to converge on a single model. Some countries will favour bans and strict controls, others will emphasise parental empowerment, and still others will prioritise platform accountability.

What is clear is that the status quo is no longer acceptable to policymakers or to many parents.

Technological solutions will also evolve. Advances in privacy-preserving age verification may ease some concerns, although sceptics warn that surveillance risks will remain. At the same time, platforms may voluntarily redesign products for younger audiences, either to comply with regulations or to preserve trust.

Ultimately, the challenge is not whether to regulate, but how. Instead of focusing solely on prohibition, governments and societies may need to build layered protections: legal safeguards, technological checks, educational investments and cultural change.

If these can align, children may inherit a safer digital world that still allows them to learn, connect and create. If they cannot, the risks of exclusion or exploitation will remain unresolved.


In conclusion, the debate over banning or restricting social media for children reflects broader tensions between freedom, safety, privacy, and responsibility. Around the globe, governments are experimenting with different balances of control and empowerment.

Australia, as we have already shown, represents one of the boldest approaches, while others, from the UK and Greece to China and Singapore, are testing different variations.

What unites them is the recognition that children cannot simply be left alone in a digital ecosystem designed for profit rather than protection.

The next decade will determine whether societies can craft a sustainable balance, where technology serves the needs of the young instead of exploiting them.

In the end, protecting children online is our duty as human beings and responsible citizens.


Denmark moves to ban social media for under-15s amid child safety concerns

Joining the broader trend, Denmark plans to ban children under 15 from using social media, Prime Minister Mette Frederiksen announced during her address to parliament on Tuesday.

Describing platforms as having ‘stolen our children’s childhood’, she said the government must act to protect young people from the growing harms of digital dependency.

Frederiksen urged lawmakers to ‘tighten the law’ to ensure greater child safety online, adding that parents could still grant consent for children aged 13 and above to have social media accounts.

Although the proposal is not yet part of the government’s legislative agenda, it builds on a 2024 citizen initiative that called for banning platforms such as TikTok, Snapchat and Instagram.

The prime minister’s comments reflect Denmark’s broader push within the EU to require age verification systems for online platforms.

Her statement comes amid a wider debate across Europe over children’s digital well-being and the responsibilities of tech companies in verifying user age and safeguarding minors.


Study shows how AI can uncover hidden biological mechanisms

Researchers in China have used AI to reveal how different species independently develop similar traits when adapting to shared environments. The study focuses on echolocation in bats and toothed whales, two distantly related groups that evolved the ability independently.

A team at the Institute of Zoology, Chinese Academy of Sciences, found that high-order protein features are crucial to adaptive convergence. Convergent evolution is the independent emergence of similar traits across species, often under similar ecological pressures.

Led by Zou Zhengting, the researchers developed a framework called ACEP, which utilises a pre-trained protein language model to analyse amino acid sequences. This method reveals hidden structural and functional information in proteins, shedding light on how traits are formed at the molecular level.

The findings, published in the Proceedings of the National Academy of Sciences, reveal how AI can detect deep biological patterns behind convergent evolution. The study demonstrates how combining AI with protein analysis provides powerful tools for understanding complex evolutionary mechanisms.

Zou said the work deepens the understanding of life’s evolutionary laws and highlights the growing role of AI in biology. The team in China hopes this approach can be applied to other evolutionary questions, broadening the use of AI in decoding life’s hidden patterns.


Study explores AI’s role in future-proofing buildings

AI could help design buildings that are resilient to both climate extremes and infectious disease threats, according to new research. The study, conducted in collaboration with Charles Darwin University, examines the application of AI in smart buildings, with a focus on energy efficiency and management.

Buildings account for a substantial share of global carbon emissions and energy consumption, but reducing that consumption remains challenging and costly. The study highlights how AI can enhance ventilation and thermal comfort, overcoming the limitations of static HVAC systems that affect sustainability and health.

Researchers propose adaptive thermal control systems that respond in real time to occupancy, outdoor conditions, and internal heat. Machine learning can optimise temperature and airflow to balance comfort, energy efficiency, and infection control.

A new framework enables designers and facility managers to simulate thermal scenarios and assess their impact on the risk of airborne transmission. It is modular and adaptable to different building types, offering a quantitative basis for future regulatory standards.

The study was led by Mohammadreza Haghighat of the University of Tehran, alongside CDU’s Ehsan Mohammadi Savadkoohi. Future work will integrate real-time sensor data to strengthen building resilience against future climate and health threats.


Spotify links with ChatGPT to enhance personalised listening experiences

Spotify and OpenAI have brought music and podcast discovery into ChatGPT conversations. Free and Premium users can now link their Spotify accounts to ChatGPT and receive personalised recommendations directly within chat.

Once connected, users can prompt ChatGPT with queries like ‘play something mellow for reading’ or ‘recommend a science podcast’, and Spotify will surface results inline. Tapping a track or episode directs the user to the Spotify app for playback.

Spotify emphasises that this feature is optional and user consent is required. No audio or video content from Spotify will be shared with OpenAI for model training purposes.

Free users will draw from Spotify’s existing playlists (such as Discover Weekly or New Music Friday), while Premium users will gain access to more refined, bespoke suggestions based on richer prompts.

The integration is available in English across 145 countries and works on desktop and mobile for ChatGPT Free, Plus and Pro users.


Anthropic launches Bengaluru office to drive responsible AI in India

AI firm Anthropic, the company behind the Claude AI chatbot, is opening its first office in India, choosing Bengaluru as its base.

The move follows OpenAI’s recent expansion into New Delhi, underlining India’s growing importance as a hub for AI development and adoption.

CEO Dario Amodei said India’s combination of vast technical talent and the government’s commitment to equitable AI progress makes it an ideal location.

The Bengaluru office will focus on developing AI solutions tailored to India’s education, healthcare, and agriculture sectors.

Amodei is visiting India to strengthen ties with enterprises, nonprofits, and startups and promote responsible AI use that is aligned with India’s digital growth strategy.

Following its Tokyo launch, Anthropic plans further expansion in the Indo-Pacific region later in the year.

Chief Commercial Officer Paul Smith noted the rising demand among Indian companies for trustworthy, scalable AI systems. Anthropic’s Claude models are already accessible in India through its API, Amazon Bedrock, and Google Cloud Vertex AI.

The company serves more than 300,000 businesses worldwide, with nearly 80 percent of usage outside the US.

India has become the second-largest market for Claude, with developers using it for tasks such as mobile UI design and web app debugging.

Anthropic is also enhancing Claude’s multilingual capabilities in major Indic languages, including Hindi, Bengali, and Tamil, to support education and public sector projects.
