Reddit challenges Australia’s teen social media ban

The US social media company, Reddit, has launched legal action in Australia as the country enforces the world’s first mandatory minimum age for social media access.

Reddit argues that banning users under 16 prevents younger Australians from taking part in political debate, instead of empowering them to learn how to navigate public discussion.

Lawyers representing the company argue that the rule undermines the implied freedom of political communication and could restrict future voters from understanding the issues that will shape national elections.

Australia’s ban took effect on December 10 and requires major platforms to block underage users or face penalties that can reach nearly 50 million Australian dollars.

Companies are relying on age inference and age estimation technologies to meet the obligation, although many have warned that the policy raises privacy concerns in addition to limiting online expression.
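In practice, age assurance usually combines an estimated age with a confidence buffer, so that borderline estimates are escalated to a stronger verification step rather than accepted or rejected outright. The sketch below illustrates that pattern; the threshold, buffer value and function name are illustrative assumptions, not drawn from any platform's actual system.

```python
# Illustrative age-assurance gate: estimates below the legal minimum are
# blocked, estimates within a buffer zone trigger stronger verification.
MINIMUM_AGE = 16
BUFFER_YEARS = 2  # hypothetical safety margin around the threshold

def gate_signup(estimated_age: float) -> str:
    """Decide a signup outcome from an age-estimation result."""
    if estimated_age < MINIMUM_AGE:
        return "block"
    if estimated_age < MINIMUM_AGE + BUFFER_YEARS:
        # Too close to the cut-off to trust estimation alone.
        return "escalate_to_verification"
    return "allow"

print(gate_signup(14.2))  # block
print(gate_signup(16.5))  # escalate_to_verification
print(gate_signup(21.0))  # allow
```

The buffer is what makes estimation workable at scale: it concentrates the privacy-invasive verification step on the small group of users near the cut-off, which is also where the privacy concerns flagged by platforms arise.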

The government maintains that the law is designed to reduce harm for younger users and has confirmed that the list of prohibited platforms may expand as new safety issues emerge.

Reddit’s filing names the Commonwealth of Australia and Communications Minister Anika Wells. The minister’s office says the government intends to defend the law and will prioritise the protection of young Australians, rather than allowing open access to high-risk platforms.

The platform’s challenge follows another case brought by an internet rights group that claims the legislation represents an unfair restriction on free speech.

A separate list identifies services that remain open for younger users, such as Roblox, Pinterest and YouTube Kids. At the same time, platforms including Instagram, TikTok, Snapchat, Reddit and X are blocked for those under 16.

The case is expected to shape future digital access rights in Australia, as online communities become increasingly central to political education and civic engagement among emerging voters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US approaches universal 5G as global adoption surges

New data from Omdia and 5G Americas showed rapid global growth in wireless connectivity during the third quarter of 2025, with nearly three billion 5G connections worldwide.

North America remained the most advanced region in terms of adoption, with 5G penetration approaching 100 percent of its population.

The US alone recorded 341 million 5G connections, one of the highest per-capita adoption rates in the world and far above the global average.
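The near-saturation claim follows from simple arithmetic: 341 million connections against a US population of roughly 340 million gives a penetration rate close to 100 percent (the population figure is an approximation, and one person can hold several connections). A quick check:

```python
# Back-of-envelope 5G penetration: connections per head of population.
connections = 341_000_000  # US 5G connections, Q3 2025 (from the report)
population = 340_000_000   # approximate US population (assumption)

penetration = connections / population
print(f"{penetration:.0%}")  # roughly 100%
```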

Analysts noted that strong device availability and sustained investment continue to reinforce the region’s leadership. Enhanced features such as improved uplink performance and integrated sensing are expected to accelerate the shift towards early 5G-Advanced capabilities.

Growth in cellular IoT also remained robust. North America supported more than 270 million connected devices and is forecast to reach nearly half a billion by 2030 as sectors such as manufacturing and utilities expand their use of connected systems.

AI is becoming central to these deployments by managing traffic, automating operations and enabling more innovative industrial applications.

Future adoption is set to intensify, with global 5G connections projected to surpass 8.6 billion by 2030.

Rising interest in fixed wireless access is driving multi-device usage, offering high-speed connectivity for households and small firms instead of relying solely on fibre networks that remain patchy in many areas.

Globally, fixed wireless access has reached more than 78 million connections, with strong annual growth. Analysts believe that expanding infrastructure will support demand for low-latency connectivity, and the addition of satellite-based systems is expected to extend coverage to remote locations.

By mid-November 2025, operators had launched 379 commercial 5G networks worldwide, including 17 in North America. A similar number of LTE networks operated across the region.

Industry observers said that expanding terrestrial and non-terrestrial networks will form a layered architecture that strengthens resilience, supports emergency response and improves service continuity across land, sea and air.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

RightsX Summit 2025: Governing technology through human rights

Human Rights Day takes place on 10 December each year to commemorate the Universal Declaration of Human Rights (UDHR), adopted by the UN in 1948. It functions as a reminder of shared international commitments to dignity, equality and freedom, and seeks to reaffirm the relevance of these principles to contemporary challenges.

In 2025, the theme ‘Human Rights: Our Everyday Essentials’ aimed to reconnect people with how rights shape daily life, emphasising that rights remain both positive and practical foundations for individual and collective well-being.

Image of Eleanor Roosevelt

Human Rights Day also serves as a moment for reflection and action. In a world shaped by rapid technological change, geopolitical instability and social inequalities, the day encourages institutions, governments and civil society to coordinate on priorities that respond to contemporary threats and opportunities.

In this context, the RightsX Summit was strategically scheduled. By centring discussions on human rights, technology, data and innovation around Human Rights Day, the event reinforced that digital governance issues are central to rights protection in the twenty-first century. The alignment elevated technology from a technical topic to a political and ethical concern within human rights debates.

The RightsX Summit 2025

The summit brought together governments, the UN system, civil society, private sector partners and innovators to explore how technology can advance human rights in the digital age. Its aim was to produce practical insights, solution-focused dialogues and discussions that could inform a future human rights toolbox shaped by technology, data, foresight and partnerships.

Central themes included AI, data governance, predictive analytics, digital security, privacy and other emerging technologies. Discussions analysed how these tools can be responsibly used to anticipate risks, improve monitoring, and support evidence-based decision-making in complex rights contexts.

The summit also examined the challenge of aligning technological deployment with internationally recognised human rights norms, exploring the mechanisms by which innovation can reinforce equity, justice and accountability in digital governance.

The summit emphasised that technological innovation is inseparable from global leadership in human rights. Aligning emerging tools with established norms was highlighted as critical to ensure that digital systems do not exacerbate existing inequalities or create new risks.

Stakeholders were encouraged to consider not only technical capabilities but also the broader social, legal and ethical frameworks within which technology operates.

The 30x30x30 Campaign

The 30x30x30 initiative represents an ambitious attempt to operationalise human rights through innovation. Its objective is to deliver 30 human rights innovations for 30 communities by 2030, aligned with the 30 articles of the UDHR.

The campaign emphasises multistakeholder collaboration by uniting countries, companies and communities as co-creators of solutions that are both technologically robust and socially sensitive. A distinctive feature of 30x30x30 is its focus on scalable, real-world tools that address complex rights challenges.

Examples include AI-based platforms for real-time monitoring, disaster tracking systems, digital storytelling tools and technologies for cyber peace. These tools are intended to serve both institutional responders and local communities, demonstrating how technology can amplify human agency in rights contexts.

The campaign also highlights the interdependence of innovation and human rights. Traditional approaches alone cannot address multidimensional crises such as climate displacement, conflict, or systemic inequality, and innovation without human-rights grounding risks reinforcing existing disparities.

‘Innovation is Political’

Volker Türk, UN High Commissioner for Human Rights, emphasised that ‘innovation is political’. He noted that the development and deployment of technology shape who benefits and how, and that decisions regarding access, governance and application of technological tools carry significant implications for equity, justice and human dignity.

This framing highlights the importance of integrating human rights considerations into innovation policy. By situating human rights at the centre of technological development, the summit promoted governance approaches that ensure innovation contributes positively to societal outcomes.

It encouraged multistakeholder responsibility, including governments, companies and civil society, to guide technology in ways that respect and advance human rights.

Human Rights Data Exchange (HRDx)

HRDx is a proposed global platform intended to improve the ethical management of human rights data. It focuses on creating systems where information is governed responsibly, ensuring that privacy, security and protection of personal data are central to its operation.

The platform underlines that managing data is not only a technical issue but also a matter of governance and ethics. By prioritising transparency, accountability and data protection, it aims to provide a framework that supports the responsible use of information without compromising human rights.

Through these principles, HRDx highlights the importance of embedding ethical oversight into technological tools. Its success relies on maintaining the balance between utilising data to inform decision-making and upholding the rights and dignity of individuals. That approach ensures that technology can contribute to human rights protection while adhering to rigorous ethical standards.

Trustworthy AI in human rights

AI offers significant opportunities to enhance human rights monitoring and protection. For example, AI can help to analyse large datasets to detect trends, anticipate crises, and identify violations of fundamental freedoms. Predictive analytics can support human rights foresight, enabling early interventions to prevent conflicts, trafficking, or discrimination.

At the same time, trust in AI for decision-making remains a significant challenge. AI systems trained on biased or unrepresentative data can produce discriminatory outcomes, undermine privacy and erode public trust.

These risks are especially acute in applications where algorithmic decisions affect access to services or determine individual liberties. That requires governance frameworks that ensure transparency, accountability and ethical oversight.

In the human rights context, trustworthy AI means designing systems that are explainable, auditable and accountable. Human oversight remains essential, particularly in decisions with serious implications for individuals’ rights.

The Summit highlighted the importance of integrating human rights principles such as non-discrimination, equality and procedural fairness into AI development and deployment processes.

Ethics, Accountability and Governance

Aligning technology with human rights necessitates robust ethical frameworks, effective governance, and transparent accountability. Digital systems must uphold fairness, transparency, inclusivity, and human dignity throughout their lifecycle, from design to deployment and ongoing operation.

Human rights impact assessments at the design stage help identify potential risks and guide responsible development. Engaging users and affected communities ensures technologies meet real needs.

Continuous monitoring and audits maintain compliance with ethical standards and highlight areas for improvement.

Effective governance ensures responsibilities are clearly defined, decisions are transparent, and corrective actions can be taken when rights are compromised. By combining ethical principles with robust governance and accountability, technology can actively protect and support human rights.

Future pathways for rights-centred innovation

Image of UN Human Rights Council

The integration of human rights into technology represents a long-term project. Establishing frameworks that embed accountability, transparency and ethical oversight ensures that emerging tools enhance freedom, equality and justice.

Digital transformation, when guided by human rights, creates opportunities to address complex challenges. RightsX 2025 demonstrated that innovation, governance and ethical foresight can converge to shape a digital ecosystem that safeguards human dignity while fostering progress.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU survey shows strong public backing for digital literacy in schools

A new Eurobarometer survey finds that Europeans want digital skills to hold the same status in schools as reading, mathematics and science.

Citizens view digital competence as essential for learning, future employment and informed participation in public life.

Nine in ten respondents believe that schools should guide pupils on how to handle the harmful effects of digital technologies on their mental health and well-being, rather than treating such issues as secondary concerns.

Most Europeans also support a more structured approach to online information. Eight in ten say digital literacy helps them avoid misinformation, while nearly nine in ten want teachers to be fully prepared to show students how to recognise false content.

A majority continues to favour restrictions on smartphones in schools, yet an even larger share supports the use of digital tools specifically designed for learning.

More than half find that AI brings both opportunities and risks for classrooms, which they believe should be examined in greater depth.

Almost half want the EU to shape standards for the use of educational technologies, including rules on AI and data protection.

The findings will inform the European Commission’s 2030 Roadmap on digital education and skills, scheduled for release next year as part of the Union of Skills initiative.

The survey, carried out across all member states, reflects a growing expectation that digital education should become a central pillar of Europe's teaching systems, rather than an optional enhancement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India moves toward mandatory AI royalty regime

India is weighing a sweeping copyright framework that would require AI companies to pay royalties for training on copyrighted works, under a hybrid mandatory blanket licence branded the 'One Nation, One Licence, One Payment' model.

A new Copyright Royalties Collective for AI Training, or CRCAT, would collect payments from developers and distribute money to creators. AI firms would have to rely only on lawfully accessed material and file detailed summaries of training datasets, including data types and sources.

The panel is expected to favour flat, revenue-linked percentages on global earnings from commercial AI systems, reviewed roughly every three years and open to legal challenge in court.
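A flat, revenue-linked percentage is straightforward to model: the collective would apply a fixed rate to a developer's global earnings from commercial AI systems and distribute the resulting pool to rights holders. The sketch below is purely illustrative; the 2% rate, revenue figure and pro-rata distribution rule are assumptions, since the panel's actual parameters have not been set.

```python
# Hypothetical revenue-linked blanket-licence royalty, distributed pro rata
# by each creator's share of licensed training material (illustrative only).
ROYALTY_RATE = 0.02  # assumed 2% of global AI revenue; the real rate is undecided

def royalty_pool(global_revenue: float, rate: float = ROYALTY_RATE) -> float:
    """Total royalties owed under a flat, revenue-linked percentage."""
    return global_revenue * rate

def distribute(pool: float, shares: dict[str, float]) -> dict[str, float]:
    """Split the pool pro rata across creators by their relative shares."""
    total = sum(shares.values())
    return {creator: pool * weight / total for creator, weight in shares.items()}

pool = royalty_pool(1_000_000_000)  # $1bn hypothetical revenue -> $20m pool
payouts = distribute(pool, {"author_a": 3, "label_b": 1})
print(pool, payouts)
```

The periodic review the panel envisages would amount to adjusting `ROYALTY_RATE` roughly every three years, with the figure itself open to challenge in court.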

Obligations would apply retroactively to AI developers that have already trained profitable models on copyright-protected material, framed by Indian policymakers as a corrective measure for the creative ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India expands job access with AI-powered worker platforms

India is reshaping support for its vast informal workforce through e-Shram, a national database built to connect millions of people to social security and better job prospects.

The database works together with the National Career Service portal, and both systems run on Microsoft Azure.

AI tools are now improving access to stable employment by offering skills analysis, resume generation and personalised career pathways.

The original aim of e-Shram was to create a reliable record of informal workers after the pandemic exposed major gaps in welfare coverage. Engineers had to build a platform capable of registering hundreds of millions of people while safeguarding sensitive data.

Azure’s scalable infrastructure allowed the system to process high transaction volumes and maintain strong security protocols. Support reached remote areas through a network of service centres, helped further by Bhashini, an AI language service offering real-time translation in 22 Indian languages.

More than 310 million workers are now registered and linked to programmes providing accident insurance, medical subsidies and housing assistance. The integration with NCS has opened paths to regulated work, often with health insurance or retirement savings.

Workers receive guidance on improving employability, while new features such as AI chatbots and location-focused job searches aim to help those in smaller cities gain equal access to opportunities.

India is using the combined platforms to plan future labour policies, manage skill development and support international mobility for trained workers.

Officials also hope the digital systems will reduce reliance on job brokers and strengthen safe recruitment, including abroad through links with the eMigrate portal.

The government has already presented the platforms to international partners and is preparing to offer them as digital public infrastructure for other countries seeking similar reforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Adobe brings its leading creative tools straight into ChatGPT

Yesterday, Adobe opened a new chapter for digital creativity by introducing Photoshop, Adobe Express and Adobe Acrobat inside ChatGPT.

The integration gives 800 million weekly users direct access to trusted creative and productivity tools through a conversational interface. Adobe aims to make creative work easier for newcomers by linking its technology to simple written instructions.

Photoshop inside ChatGPT offers selective edits, tone adjustments and creative effects, while Adobe Express brings quick design templates and animation features to people who want polished content without switching between applications.

Acrobat adds powerful document controls, allowing users to organise, edit or redact PDFs inside the chat. Each action blends conversation with Adobe’s familiar toolsets, giving users either simple text-driven commands or fine control through intuitive sliders.

The launch reflects Adobe’s broader investment in agentic AI and its Model Context Protocol. Earlier releases such as Acrobat Studio and AI Assistants for Photoshop and Adobe Express signalled Adobe’s ambition to expand conversational creative experiences.

Adobe also plans to extend an upcoming Firefly AI Assistant across multiple apps to support faster movement from an idea to a finished design.

All three apps are now available to ChatGPT users on desktop, web and iOS, with Android support expanding soon. Adobe positions the integration as an entry point for new audiences who may later move into the full desktop versions for deeper control.

The company expects the partnership to widen access to creative expression by letting anyone edit images, produce designs or transform documents simply by describing what they want to achieve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Three in ten US teens now use AI chatbots every day, survey finds

According to new data from the Pew Research Center, roughly 64% of US teens (aged 13–17) say they have used an AI chatbot; about three in ten (≈ 30%) report daily use. Among those teens, the leading chatbot is ChatGPT (used by 59%), followed by Gemini (23%) and Meta AI (20%).

The widespread adoption raises growing safety and welfare concerns. As teenagers increasingly rely on AI for information, companionship or emotional support, critics point to potential risks, including exposure to biased content, misinformation, or emotionally manipulative interactions, particularly among vulnerable youth.

Legal action has already followed, with families of at least two minors suing AI-developer companies after alleged harmful advice from chatbots.

Demographic patterns reveal that Black and Hispanic teens report higher daily usage rates (around 33-35%) compared to their White peers (≈ 22%). Daily use is also more common among older teens (15–17) than younger ones.

For policymakers and digital governance stakeholders, the findings add urgency to calls for AI-specific safeguarding frameworks, especially where young people are concerned. As AI tools become embedded in adolescent life, ensuring transparency, responsible design, and robust oversight will be critical to preventing unintended harms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian families receive eSafety support as the social media age limit takes effect

Australia introduced a minimum age requirement of 16 for social media accounts this week, marking a significant shift in its online safety framework.

The eSafety Commissioner has begun monitoring compliance, offering a protective buffer for young people as they develop digital skills and resilience. Platforms now face stricter oversight, with potential penalties for systemic breaches, and age assurance requirements for both new and current users.

Authorities stress that the new age rule forms part of a broader effort aimed at promoting safer online environments, rather than relying on isolated interventions. Australia’s online safety programmes continue to combine regulation, education and industry engagement.

Families and educators are encouraged to utilise the resources on the eSafety website, which now features information hubs that explain the changes, how age assurance works, and what young people can expect during the transition.

Regional and rural communities in Australia are receiving targeted support, acknowledging that the change may affect them more sharply due to limited local services and higher reliance on online platforms.

Tailored guidance, conversation prompts, and step-by-step materials have been produced in partnership with national mental health organisations.

Young people are reminded that they retain access to group messaging tools, gaming services and video conferencing apps while they await eligibility for full social media accounts.

eSafety officials underline that the new limit introduces a delay rather than a ban. The aim is to reduce exposure to persuasive design and potential harm while encouraging stronger digital literacy, emotional resilience and critical thinking.

Ongoing webinars and on-demand sessions provide additional support as the enforcement phase progresses.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK partners with DeepMind to boost AI innovation

The UK Department for Science, Innovation and Technology (DSIT) has entered a strategic partnership with Google DeepMind to advance AI across public services, research, and security.

The non-legally binding memorandum of understanding outlines a shared commitment to responsible AI development, while enhancing national readiness for transformative technologies.

The collaboration will explore AI solutions for public services, including education, government departments, and the Incubator for AI (i.AI). Google DeepMind may provide engineering support and develop AI tools, including a government-focused version of Gemini aligned with the national curriculum.

Researchers will gain priority access to DeepMind’s AI models, including AlphaEvolve, AlphaGenome, and WeatherNext, with joint initiatives supporting automated R&D and lab facilities in the UK. The partnership seeks to accelerate innovation in strategically important areas such as fusion energy.

AI security will be strengthened through the UK AI Security Institute, which will share model insights, address emerging risks, and enhance national cyber preparedness. The MoU is voluntary, spans 36 months, and ensures compliance with data privacy laws, including UK GDPR.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!