Snapchat has blocked more than 415,000 Australian accounts after the national ban on under-16s began, marking a rapid escalation in the country’s effort to restrict children’s access to major platforms.
The company relied on a mix of self-reported ages and age-detection technologies to identify users who appeared to be under 16.
The platform warned that age-verification technology still has serious shortcomings, leaving room for teenagers to bypass safeguards rather than ensuring reliable compliance.
Facial estimation tools remain accurate only within a narrow range, meaning some young people may slip through while older users risk losing access. Snapchat also noted the likelihood that teenagers will shift towards less regulated messaging apps.
The eSafety commissioner has focused regulatory pressure on the 10 largest platforms, although all services with Australian users are expected to assess whether they fall under the new requirements.
Officials have acknowledged that the technology needs improvement and that reliability issues, such as the absence of a liveness check, contributed to false results.
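The trade-off described above follows directly from the error margin of facial age estimation. A minimal sketch, assuming a hypothetical symmetric accuracy margin (the threshold, margin and `decide` function below are illustrative, not Snapchat's actual system), shows why some under-16s can slip through while users just over 16 risk being flagged:

```python
# Hypothetical sketch, NOT Snapchat's actual system: how an age estimate
# with a known error margin forces a trade-off between letting under-16s
# through and wrongly flagging adults.

THRESHOLD = 16   # legal cut-off in years
MARGIN = 2.0     # assumed +/- accuracy of the estimator, in years

def decide(estimated_age: float) -> str:
    """Classify an account based on an uncertain age estimate.

    Returns 'block' when even the upper bound of the estimate is under 16,
    'allow' when even the lower bound is 16 or over, and 'recheck' when the
    margin straddles the threshold, so another signal (e.g. an ID document
    or a liveness check) would be needed.
    """
    if estimated_age + MARGIN < THRESHOLD:
        return "block"    # confidently under 16
    if estimated_age - MARGIN >= THRESHOLD:
        return "allow"    # confidently 16 or over
    return "recheck"      # uncertain band around the threshold

if __name__ == "__main__":
    for age in (12.0, 15.0, 17.5, 19.0):
        print(age, decide(age))
```

Note that a 17.5-year-old falls into the uncertain band under these assumed numbers, which is the over-blocking risk the platform described; narrowing the margin shrinks that band but increases the chance a 15-year-old is confidently allowed.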
Hamad Bin Khalifa University has unveiled the UNESCO Chair on Digital Technologies and Human Behaviour to strengthen global understanding of how emerging tools shape society.
The initiative, located in the College of Science and Engineering in Qatar, will examine the relationship between digital adoption and human behaviour, focusing on digital well-being, ethical design and healthier online environments.
The Chair is set to address issues such as internet addiction, cyberbullying and misinformation through research and policy-oriented work.
By promoting dialogue among international organisations, governments and academic institutions, the programme aims to support the more responsible development of digital technologies rather than approaches that overlook societal impact.
HBKU’s long-standing emphasis on ethical innovation formed the foundation for the new initiative. The launch event brought together experts from several disciplines to discuss behavioural change driven by AI, mobile computing and social media.
An expert panel considered how GenAI can improve daily life while also increasing dependency, encouraging users to shift towards a more intentional and balanced relationship with AI systems.
UNESCO underlined the importance of linking scientific research with practical policymaking to guide institutions and communities.
The Chair is expected to strengthen cooperation across sectors and support progress on global development goals by ensuring digital transformation remains aligned with human dignity, social cohesion and inclusive growth.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI is increasingly being used to answer questions about faith, morality, and suffering, not just everyday tasks. As AI systems become more persuasive, religious leaders are raising concerns about the authority people may assign to machine-generated guidance.
Within this context, Catholic outlet EWTN Vatican examined Magisterium AI, a platform designed to reference official Church teaching rather than produce independent moral interpretations. Its creators say responses are grounded directly in doctrinal sources.
Founder Matthew Sanders argues mainstream AI models are not built for theological accuracy. He warns that while machines sound convincing, they should never be treated as moral authorities without grounding in Church teaching.
Church leaders have also highlighted broader ethical risks associated with AI, particularly regarding human dignity and emotional dependency. Recent Vatican discussions stressed the need for education and safeguards.
Supporters say faith-based AI tools can help navigate complex religious texts responsibly. Critics remain cautious, arguing spiritual formation should remain rooted in human guidance.
Anthropic engineers are increasingly relying on AI to write the code behind the company’s products, with senior staff now delegating nearly all programming tasks to AI systems.
Claude Code lead Boris Cherny said he has not written any software by hand for more than two months, with all recent updates generated by Anthropic’s own models. Similar practices are reportedly spreading across internal teams.
Company leadership has previously suggested AI could soon handle most software engineering work from start to finish, marking a shift in how digital products are built and maintained.
The adoption of AI coding tools has accelerated across the technology sector, with firms citing major productivity gains and faster development cycles as automation expands.
Industry observers note the transition may reshape hiring practices and entry-level engineering roles, as AI increasingly performs core implementation tasks previously handled by human developers.
European technology leaders are increasingly questioning the long-held assumption that information technology operates outside politics, amid growing concerns about reliance on US cloud providers and digital infrastructure.
At HiPEAC 2026, Nextcloud chief executive Frank Karlitschek argued that software has become an instrument of power, warning that Europe’s dependence on American technology firms exposes organisations to legal uncertainty, rising costs, and geopolitical pressure.
He highlighted conflicts between EU privacy rules and US surveillance laws, predicting continued instability around cross-border data transfers and renewed risks of services becoming legally restricted.
Beyond regulation, Karlitschek pointed to monopoly power among major cloud providers, linking recent price increases to limited competition and warning that vendor lock-in strategies make switching increasingly difficult for European organisations.
He presented open-source and locally controlled cloud systems as a path toward digital sovereignty, urging stronger enforcement of EU competition rules alongside investment in decentralised, federated technology models.
Jason Stockwood, the UK investment minister, has suggested that a universal basic income could help protect workers as AI reshapes the labour market.
He argued that rapid advances in automation will cause disruptive shifts across several sectors, meaning the country must explore safety mechanisms rather than allowing sudden job losses to deepen inequality. He added that workers will need long-term retraining pathways as roles disappear.
Concern about the economic impact of AI continues to intensify.
Research by Morgan Stanley indicates that the UK is losing more jobs than it is creating because of automation and is being affected more severely than other major economies.
Warnings from London’s mayor, Sadiq Khan, and senior global business figures, including JP Morgan chief executive Jamie Dimon, point to the risk of mass unemployment unless governments and companies step in with support.
Stockwood confirmed that a universal basic income is not part of formal government policy, although he said people inside government are discussing the idea.
He took up his post in September after a long career in the technology sector, including senior roles at Match.com, Lastminute.com and Travelocity, as well as leading Simply Business through a significant sale.
Additionally, Stockwood said he no longer pushes for stronger wealth-tax measures, but he criticised wealthy individuals who seek to minimise their contributions to public finances. He suggested that those who prioritise tax avoidance lack commitment to their communities and the country’s long-term success.
Meta plans to nearly double its AI investment in 2026, according to its latest earnings report. Spending is expected to reach between $115bn and $135bn as the company expands large-scale infrastructure.
Mark Zuckerberg said the investment will focus on US data centres needed to train advanced AI models. The strategy is designed to support long-term AI development across Meta’s platforms.
Zuckerberg described 2026 as a pivotal year for AI, with Meta working on multiple products rather than a single launch. Testing is reportedly underway on new models intended to succeed the Llama family.
Meta said building proprietary AI models allows greater control over future products. The company positioned AI as a tool for personal empowerment, setting its approach apart from more centralised automation strategies.
Swiss technology and privacy expert Anna Zeiter is leading the development of W Social, a new European-built social media network designed as an alternative to X. The project aims to reduce reliance on US tech and strengthen European digital sovereignty.
W Social will require users to verify their identity and provide a photo to ensure genuine human accounts, tackling the fake profiles and bot-driven disinformation that critics link to existing platforms. Zeiter said the W stands for ‘We’, as well as for values and verification.
The platform’s infrastructure will be hosted in Europe under strict EU data protection laws, with decentralised storage and offices planned in Berlin and Paris. Early support comes from European political and tech figures, signalling interest beyond Silicon Valley.
W Social could launch a beta version as early as February, with broader public access planned by year-end. Backers hope the network will foster more positive dialogue and provide a European alternative to US-based social media influence.
EU member states are preparing to open formal discussions on the risks posed by AI-powered deepfakes and their use in cyberattacks, following an initiative by the current Council presidency.
The talks are intended to assess how synthetic media may undermine democratic processes and public trust across the bloc.
According to sources, capitals will also begin coordinated exchanges on the proposed Democracy Shield, a framework aimed at strengthening resilience against foreign interference and digitally enabled manipulation.
Deepfakes are increasingly viewed as a cross-cutting threat, combining disinformation, cyber operations and influence campaigns.
The timeline set out by the presidency foresees structured discussions among national experts before escalating the issue to the ministerial level. The approach reflects growing concern that existing cyber and media rules are insufficient to address rapidly advancing AI-generated content.
The initiative signals a broader shift within the Council towards treating deepfakes not only as a content moderation challenge, but as a security risk with implications for elections, governance and institutional stability.
Anthropic chief executive Dario Amodei has issued a stark warning that superhuman AI could inflict civilisation-level damage unless governments and industry act far more quickly and seriously.
In a forthcoming essay, Amodei argues humanity is approaching a critical transition that will test whether political, social and technological systems are mature enough to handle unprecedented power.
Amodei believes AI systems will soon outperform humans across nearly every field, describing a future ‘country of geniuses in a data centre’ capable of autonomous and continuous creation.
He warns that such systems could rival nation-states in influence, accelerating economic disruption while placing extraordinary power in the hands of a small number of actors.
Among the gravest dangers, Amodei highlights mass displacement of white-collar jobs, rising biological security risks and the empowerment of authoritarian governments through advanced surveillance and control.
He also cautions that AI companies themselves pose systemic risks due to their control over frontier models, infrastructure and user attention at a global scale.
Despite the severity of his concerns, Amodei maintains cautious optimism, arguing that meaningful governance, transparency and public engagement could still steer AI development towards beneficial outcomes.
Without urgent action, however, he warns that financial incentives and political complacency may override restraint during the most consequential technological shift humanity has faced.