Australia has begun reviewing its ban on social media accounts for children under 16, introduced in December 2025. Australia’s eSafety Commissioner is tracking more than 4,000 children and families to assess how the policy works in practice.
Researchers will analyse surveys, interviews and voluntary smartphone data to measure how young people interact with apps. Officials aim to understand how the ban affects children, parents and everyday online behaviour.
Early reactions have been mixed, with some teenagers telling media outlets that they bypass age verification systems, and platforms reportedly remain accessible to some minors.
Meanwhile, the UK government has launched a public consultation on potential social media restrictions for children, seeking views on bans, stronger age verification and limits on addictive platform features.
European regulators are examining whether Roblox should be brought under the Digital Services Act's (DSA) most stringent obligations for large platforms, rather than remaining outside the bloc's most demanding rules.
The European Commission began analysing the gaming platform’s reported user figures after the company disclosed roughly 48 million monthly users across the EU.
That figure sits above the DSA's threshold of 45 million average monthly users in the EU, which could qualify Roblox as a Very Large Online Platform. Such a designation would mark the first time a gaming platform enters the category alongside social media services already subject to heightened oversight.
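As a back-of-the-envelope illustration of the designation test (a minimal sketch: the 45 million threshold is the DSA's published figure, but the function and variable names are our own, not the Commission's methodology):

```python
# Minimal sketch of the DSA's Very Large Online Platform (VLOP) threshold test.
# The 45 million threshold (roughly 10% of the EU population) comes from the DSA;
# the 48 million figure is the one Roblox reportedly disclosed.

DSA_VLOP_THRESHOLD = 45_000_000  # average monthly active users in the EU

def qualifies_as_vlop(monthly_eu_users: int) -> bool:
    """Return True if reported usage meets the VLOP designation threshold."""
    return monthly_eu_users >= DSA_VLOP_THRESHOLD

print(qualifies_as_vlop(48_000_000))  # True -> eligible for VLOP designation
```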
Platforms receiving the label must conduct regular risk assessments, submit mitigation reports and demonstrate stronger safeguards for minors.
Regulatory pressure has already begun at the national level. The Dutch Authority for Consumers and Markets launched an investigation in January over concerns that children could encounter violent or sexually explicit content within Roblox games or interact with harmful actors through online features.
Designation at the EU level would transfer supervisory authority to the European Commission, enabling wider investigations and potential fines if violations occur. Officials are still verifying user data before making a formal decision, and no deadline has been announced for the process.
Social media platform X will suspend creators from its revenue-sharing programme if they post AI-generated videos of armed conflict without proper disclosure. The penalty lasts 90 days, with permanent removal for repeat violations.
Head of product Nikita Bier said access to authentic information during war is critical, warning that generative AI makes it easy to mislead audiences. The policy takes effect immediately.
Enforcement will combine tools for detecting AI-generated content with the platform's Community Notes fact-checking system. X, formerly Twitter, says the move is designed to prevent creators from profiting from deceptive conflict content.
The Creator Revenue Sharing Programme allows paid X subscribers to earn advertising income from high-performing posts, but critics argue it encourages sensational material. AI-generated political misinformation and deceptive influencer promotions outside armed conflict scenarios remain unaffected by the new rule.
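The escalation the platform describes is simple to state; a minimal sketch of that logic follows (the class and method names are hypothetical, not X's actual systems):

```python
# Hypothetical sketch of the penalty escalation described above: a first
# undisclosed AI-generated conflict video triggers a 90-day monetisation
# suspension, and a repeat violation triggers permanent removal.
from datetime import datetime, timedelta

class CreatorMonetisation:
    SUSPENSION_DAYS = 90

    def __init__(self) -> None:
        self.violations = 0
        self.suspended_until: datetime | None = None
        self.permanently_removed = False

    def record_violation(self, now: datetime) -> None:
        """Apply the penalty for an undisclosed AI-generated conflict video."""
        self.violations += 1
        if self.violations == 1:
            self.suspended_until = now + timedelta(days=self.SUSPENSION_DAYS)
        else:
            self.permanently_removed = True  # repeat violation
```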
Financial penalties may limit incentives for the dissemination of misleading war footage, yet broader concerns about AI-driven misinformation on social media persist.
Stanford researchers have developed an AI-powered system that combines field surveys, drones, and satellite imagery to identify schistosomiasis risk areas across Senegal.
The project began with fieldwork in Senegal, where researchers collected aquatic vegetation and snails from more than 30 river and estuary sites. The samples helped identify environmental conditions linked to schistosomiasis, which affects about 250 million people worldwide, mostly children in sub-Saharan Africa.
Professor Giulio De Leo of Stanford’s Doerr School of Sustainability said the research required scaling beyond local sampling. ‘The work was necessary to discover these risks, but we can only do so much locally.’
Early support from the Stanford Institute for Human-Centered AI enabled the development of machine learning tools capable of identifying disease-related snails and vegetation in imagery. The system now integrates field observations with drone and satellite data to detect potential infection hotspots.
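As a rough sketch of what such an imagery classifier might look like (this is not the Stanford team's code; the model choice, class labels, and tile format are assumptions for illustration):

```python
# Illustrative only: a generic image classifier of the kind that could flag
# likely snail habitat (e.g. aquatic vegetation) in drone or satellite tiles.
import torch
import torch.nn as nn
from torchvision import models

# Start from a pretrained backbone and replace the head for two classes:
# "likely snail habitat" vs "other".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

def classify_tile(tile: torch.Tensor) -> int:
    """Classify one preprocessed 3x224x224 image tile; returns the class index."""
    model.eval()
    with torch.no_grad():
        logits = model(tile.unsqueeze(0))  # add a batch dimension
    return int(logits.argmax(dim=1).item())
```

In practice, such a backbone would first be fine-tuned on labelled field imagery, which is where the survey data described above comes in.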
Researchers say the approach can support public health monitoring and environmental analysis. The machine learning methods developed for the project are also being applied to agriculture, forest monitoring, and mosquito-borne disease research.
AI is becoming central to industrial networking strategies, but it is also creating new security challenges, according to Cisco’s 2026 State of Industrial AI Report.
Based on a survey of 1,000 professionals across 19 countries and 21 sectors, the report shows organisations view cybersecurity as both a barrier and an opportunity for AI adoption. About 40% cited cybersecurity concerns as a major obstacle, while 48% named security their biggest networking challenge.
At the same time, many organisations believe AI will strengthen their cyber resilience. Cisco noted that ‘while security gaps are limiting AI scale today, organisations view AI as a tool to strengthen detection, monitoring and resilience’.
The report also highlights organisational challenges, particularly collaboration between IT and operational technology teams. Only 20% of organisations report fully collaborative IT and OT cybersecurity operations, despite the growing importance of coordination for AI deployment.
Cisco said industrial AI adoption is accelerating, with 61% of organisations already deploying AI in industrial environments. However, only one in five reports mature, scaled adoption, suggesting many deployments remain in early stages.
OneTrust has entered a new leadership phase after appointing John Heyman as chief executive, replacing founder Kabir Barday, who will remain on the board in an advisory role as the US-based compliance technology firm continues its push into AI governance.
Heyman said organisations are rapidly integrating AI into daily operations, and that companies deploying large numbers of AI agents increasingly need tools to manage risk, data use and regulatory compliance.
OneTrust believes demand for governance technology will grow as AI systems multiply inside businesses. Heyman described a future in which automated monitoring tools oversee AI agents operating within company systems.
The company aims to build systems that track how AI agents collect and share data while maintaining enterprise control, as growing AI adoption worldwide continues to drive demand for responsible governance platforms.
Google is accelerating Chrome’s release cycle rather than maintaining its long-standing four-week cadence.
From September 2026, users on desktop and mobile platforms will receive new stable versions every two weeks, doubling the frequency of feature milestones covering speed, stability and usability. Weekly security updates, introduced in 2023, remain unchanged.
The faster pace comes as AI-driven browsers seek a foothold in a market long dominated by Chrome.
Products such as ChatGPT Atlas and Perplexity's Comet embed agentic assistants directly into the browsing experience, automating tasks from summarising pages to scheduling meetings.
Chrome has responded with deeper Gemini integration, including the rollout of autonomous features across its interface.
Google maintains that the accelerated schedule reflects the needs of the evolving web platform, arguing that developers require quicker access to updated tools.
Yet the timing aligns with growing competitive pressure from AI-native browsers, prompting speculation that Chrome’s dominance can no longer be taken for granted.
The shift will begin with Chrome version 153 in beta and stable channels on 8 September 2026. Enterprise administrators and Chromebook users will continue to rely on the eight-week Extended Stable branch, which remains unchanged for organisations that need slower, controlled deployments.
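For a concrete sense of the new cadence, a small extrapolation from that start date (only the first entry is announced; the later dates are our arithmetic, not Google's published schedule):

```python
# Projects the first few stable releases at the new two-week cadence,
# starting from the announced Chrome 153 release on 8 September 2026.
# Only the first entry is announced; the rest are extrapolation.
from datetime import date, timedelta

start_version, start_date = 153, date(2026, 9, 8)
for i in range(4):
    print(f"Chrome {start_version + i}: {start_date + timedelta(weeks=2 * i):%d %B %Y}")
```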
In recent days, social media has been alight with discussions about the 2014 series whose portrayal of AI and ethical dilemmas now feels remarkably prophetic: Silicon Valley. Fans and professionals alike are highlighting how the show’s depiction of AI, automated agents, and ethical dilemmas mirrors today’s real-world challenges.
From algorithmic decision-making to AI shaping social and economic interactions, the series explores the boundaries, responsibilities, and societal impact of AI in ways that feel startlingly relevant. What once seemed like pure comedy is increasingly being seen as a warning, highlighting how the choices we make around AI and its ethical frameworks will shape whether the technology benefits society.
While the show dramatises these dilemmas for entertainment, the real world is now facing the same questions. Recent trends in generative AI, autonomous agents, and large-scale automated decision-making are bringing the show's predictions to life, raising urgent ethical questions for developers, policymakers, and society alike.
The rise of AI ethics: from niche concern to central requirement
The growing influence of AI on society has propelled ethics from academic debate to a central factor in technological decision-making. The impact of AI is becoming tangible across society, from employment and finance to online content.
Technical performance alone no longer defines success; the consequences of design choices have become morally and socially significant. Governments, international organisations, and corporations are responding by developing ethical frameworks.
The EU AI Act, the OECD AI Principles, and numerous corporate codes of conduct signal that society expects AI systems to align with human values and to demonstrate accountability, fairness, and trustworthiness. Ethical reflection has become a prerequisite for technological legitimacy and societal acceptance.
Functions of AI ethics: trust, guidance, and societal risk
Ethical frameworks for AI fulfil multiple roles, balancing moral guidance with practical necessity. They build trust among developers, organisations, and users, reassuring society that AI systems operate consistently with shared values.
For developers, ethical principles offer a blueprint for decision-making, helping anticipate societal impact and minimise unintended harm. Beyond guidance, AI ethics acts as a form of societal risk governance, allowing organisations to identify potential consequences before they manifest.
By integrating ethics into design, AI systems become socially sustainable technologies, bridging technical capability with moral responsibility. Such an approach is particularly critical in high-stakes domains such as healthcare, finance, and law, where algorithmic decisions can significantly affect human well-being.
The politics of AI ethics: regulatory theatre and corporate influence
Despite widespread adoption, AI ethics frameworks sometimes risk becoming regulatory theatre, where public statements signal commitment but fail to ensure meaningful action. Many organisations promote ethical AI principles, yet consistent enforcement and follow-through often lag behind these claims.
Even with their limitations, ethical frameworks are far from meaningless. They shape public discourse, influence policy, and determine which AI systems gain social legitimacy. The challenge lies in balancing credibility with practical impact, ensuring that ethical commitments are more than symbolic gestures.
Social media platforms like X amplify this tension, with public scrutiny and viral debates exposing both successes and failures in applying ethical principles.
AI ethics as a lens for technology and society
The prominence of AI ethics reflects a broader societal transformation in evaluating technology. Modern societies no longer judge AI solely by efficiency, speed, or performance; they assess social consequences, fairness, and the distribution of risks and benefits.
AI is increasingly seen as a social actor rather than a neutral tool, influencing public behaviour, shaping social norms, and redefining concepts such as trust, autonomy, and accountability. Ethical evaluation of AI is not just a philosophical exercise; it reflects evolving expectations about the role technology should play in human life.
AI ethics as early-warning governance for social impact
AI ethics functions as a critical early-warning system for society. Ethical principles anticipate harms that might otherwise go unnoticed, from systemic bias to privacy violations. By highlighting potential consequences, ethics enables organisations to act proactively, reducing the likelihood of crises and improving public trust.
Moreover, ethics ensures that long-term impacts, including societal cohesion, equity, and fairness, are considered alongside immediate technical performance. In doing so, AI ethics bridges the gap between what AI can do and what society deems acceptable, ensuring that innovation remains aligned with moral and social norms.
The bridge between technological power and social legitimacy
AI ethics remains the essential bridge between technological power and social legitimacy. Embedding ethical reflection into AI development ensures that innovation is not only technically effective but also socially sustainable, trustworthy, and accountable.
Yet a growing tension defines the next phase of this evolution: the accelerating pace of innovation often outstrips the slower processes of ethical deliberation and regulation, raising questions about who sets the norms and how quickly societies can adapt.
Rather than acting solely as a safeguard, ethics is increasingly becoming a strategic dimension of technological leadership, shaping public trust, market adoption, and even geopolitical influence in the global race for AI. The rise of AI ethics, therefore, signals more than a moral awakening, reflecting a structural shift in how technological progress is evaluated and legitimised.
As AI continues to integrate into everyday life, ethical frameworks will determine not only how systems function, but also whether they are accepted as part of the social fabric. Aligning innovation with societal values is no longer optional but the condition under which AI can sustain legitimacy, unlock its full potential, and remain a transformative force that benefits society as a whole.
The civil liberties committee failed to secure majority backing for its amended report on extending the EU's temporary chat-scanning rules, leaving Parliament without a clear negotiating position.
Members of Parliament reviewed the amendments on Monday, but the final text did not garner sufficient support, leaving the proposal without endorsement as the adoption deadline approaches.
At issue is a proposal to extend the current derogation that allows tech companies to voluntarily scan their services for child sexual abuse material (CSAM).
The existing regime expires in April 2026 and was intended only as a stopgap while a permanent Child Sexual Abuse Regulation was developed. Years of stalled negotiations have led to the temporary rules being extended twice since 2021.
The Council has already approved its position without changes to the Commission proposal, creating a tight timeline for Parliament.
With trilogue talks finally underway, institutions would need to conclude discussions unusually quickly to prevent the legal basis from expiring. If no agreement is reached by April, companies would lose their ability to scan services under the EU law.
The committee confirmed that the file will now move to plenary in the week of 9–12 March, where political groups may table new amendments, an outcome that will determine whether the temporary regime remains in place while negotiations on the permanent system continue.
More than 40 million people use ChatGPT alone for health information every day, and both ChatGPT and Claude have recently launched services specifically designed to give consumers health advice.
Yale School of Medicine clinician-educator Shaili Gupta warns that whilst chatbots can democratise access to health information, the risks of overtrust are significant.
Gupta notes that AI chatbots are deliberately designed to feel personal, trained to use pronouns like ‘you’ and ‘I’, which makes users more likely to treat them as authoritative voices rather than information tools.
She cautions against the ‘three C’s’: chatbots that are too competent, too cogent, or too concrete, as these are the most likely to lead patients into harmful health decisions.
Human clinicians, Gupta argues, remain challenging to replace not only because they conduct physical examinations, but also because they bring instinct, experience, and genuine relatability to patient care. She recommends using chatbots for efficiency and general information, whilst leaving diagnosis firmly in the hands of medical professionals.