Australia begins a landmark study on social media minimum age

Australia's eSafety Commissioner has launched a major evaluation of the Social Media Minimum Age to understand how platforms are applying the requirement and what effects it is having on children, young people and families.

The study aims to deliver robust evidence about both intended and unintended impacts as the national debate on youth, wellbeing and digital environments intensifies.

Over more than two years, the research will follow over four thousand children and families in Australia, combining surveys, interviews, group discussions and privacy-protected smartphone tracking.

Administrative data from national literacy assessments and health systems will be linked to deepen understanding of online behaviour, wellbeing and exposure to risk. All research materials are publicly available through the Open Science Framework to maintain transparency.

The project is led by eSafety’s Research and Evaluation team in partnership with the Stanford University Social Media Lab and an Academic Advisory Group of specialists in mental health, youth development and digital technologies.

Young people themselves are shaping the study through the eSafety Youth Council, ensuring that the interpretation reflects lived experience rather than external assumptions. Full ethics approval underpins the methodology, which meets strict standards of integrity and privacy.

Findings will be released from late 2026 onward, with early reports analysing the experiences of children under sixteen.

The results will inform a legislative review conducted by Australia’s Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts.

eSafety expects the evaluation to become a major evidence source for policymakers, researchers and communities as the global conversation on minors and social media regulation continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Financial crime risks are reshaped by the rise of autonomous AI agents

Autonomous AI agents are transforming finance by executing transactions independently and speeding up workflows in digital assets and programmable finance. Software can manage wallets and move funds across blockchains in seconds, narrowing detection windows.

AI agents don’t create new crimes but increase speed and complexity, making accountability essential. Responsibility rests with developers, operators, and beneficiaries, with investigators tracing control, configuration, and economic benefit to determine liability.

Weak oversight or misconfigured rules can lead to significant compliance and enforcement consequences.

Investigations face new challenges as autonomous agents operate across multiple blockchains, decentralised exchanges, and global jurisdictions.

Real-time analytics and automated tracing are essential to link transactions to accountable actors before funds move. Governance architecture and monitoring systems increasingly serve as evidence in regulatory or criminal actions.

Institutions and law enforcement are using AI monitoring, anomaly detection, and automated containment systems. Autonomous AI also carries implications for sanctions compliance and national security, underscoring the need for human oversight alongside automation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT Health under fire after study finds major failures in emergency detection

A new evaluation of ChatGPT Health has raised major safety concerns after researchers found it frequently failed to recognise urgent medical emergencies.

The independent study, published in Nature Medicine, reported that the system under-triaged more than half of the clinical scenarios tested, giving advice that could have delayed life-saving treatment.

The research team, led by Ashwin Ramaswamy, created sixty patient simulations ranging from minor illnesses to life-threatening conditions.

Three doctors agreed on the appropriate urgency for each case, and their judgements were then compared with the model's responses. The AI performed adequately in straightforward emergencies such as strokes, yet frequently minimised danger in more complex presentations, including severe asthma and diabetic crises.

Experts also warned that ChatGPT Health struggled to detect suicidal ideation reliably. Minor changes to scenario details, such as adding normal lab results, caused safeguards to disappear entirely.

Critics, including health-misinformation researcher Alex Ruani, described the behaviour as dangerously inconsistent and capable of creating a false sense of security.

OpenAI said the study did not reflect typical real-world use but acknowledged the need for continued research and improvement.

Policy specialists argue that the findings underline the need for clear safety standards, external audits and stronger transparency requirements for AI systems operating in sensitive medical contexts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Galaxy S26 series brings powerful AI and privacy features

Samsung Electronics has unveiled the Galaxy S26 series, featuring advanced AI experiences, powerful performance, and an industry-leading camera system designed to simplify everyday smartphone tasks.

The series, which includes the Galaxy S26, S26+, and S26 Ultra, handles complex processes in the background, allowing users to focus on results rather than device operations.

The Galaxy S26 Ultra introduces the world’s first built-in Privacy Display, a redesigned chipset, and improved thermal management. Together, these upgrades enhance AI performance, graphics, and CPU efficiency, while ensuring faster, cooler, and more reliable operation throughout the day.

Photography and videography are also upgraded with wider apertures, Nightography Video, Super Steady video, and AI-powered editing tools that make professional-quality content accessible to all users.

Galaxy AI streamlines daily experiences by proactively suggesting actions, organising information, and automating tasks. Features such as Now Nudge, Now Brief, Circle to Search, and upgraded Bixby allow users to interact naturally with their devices.

Integrated AI agents, including Gemini and Perplexity, support multi-step tasks across apps, from booking services to advanced searches, all with minimal input.

Samsung has embedded multiple layers of security and privacy in the Galaxy S26 series. From AI-powered Call Screening and Privacy Alerts to Knox Vault, Knox Matrix, and post-quantum cryptography, users can control data access and protect personal information.

With long-term security updates, seamless software, and Galaxy Buds4 integration, the S26 series aims to combine performance, convenience, and safety in a single, intuitive device.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scotland considers new offence for AI intimate images

The Scottish government has launched a consultation proposing a specific criminal offence for creating AI-generated intimate images without consent. Existing Scots law covers the sharing of such images, but ministers say gaps remain around their creation.

The consultation also seeks views on criminalising digital tools designed solely to produce intimate images and videos. Ministers aim to address harms linked to emerging AI technologies affecting women and girls across Scotland.

Additional proposals include a statutory aggravation where domestic abuse involves a pregnant woman, requiring courts to treat such cases more seriously at sentencing. Measures to strengthen protections against spiking offences are also under review.

Justice Secretary Angela Constance said responses would inform future action to reduce violence against women and girls. The consultation also considers changes to non-harassment orders and examines whether further laws on non-fatal strangulation are needed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI misuse in online scams involving OpenAI models

OpenAI has reported new instances of its models being exploited in online scams and coordinated information campaigns. The company detailed actions to remove offending accounts and strengthen safeguards, highlighting misuse in fraud and deceptive content creation.

Several cases involved romance and ‘task’ scams, in which AI-generated messages built emotional engagement before requesting payment. One network, dubbed ‘Operation Date Bait,’ used chatbots to promote a fictitious dating service targeting young men in Indonesia.

Another, ‘Operation False Witness,’ saw actors posing as legal professionals to solicit advance fees for non-existent recovery services.

The report also outlined coordinated campaigns leveraging AI to produce articles, social media posts, and comments on geopolitical topics. In ‘Operation Trolling Stone,’ AI-generated content on a Russian arrest in Argentina was shared widely in multiple languages to mimic grassroots engagement.

OpenAI stressed that while AI was sometimes used to generate content, engagement was largely driven by the reach and size of the accounts involved.

The company continues monitoring misuse and collaborates with partners and authorities to curb fraudulent or deceptive activity. Systems have been updated to decline policy-violating requests, and OpenAI noted that not all suspicious content online was generated using its tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Vietnamese AI firm Namitech provides live translation support at Nikkei Digital Forum in Asia 2026

At the Nikkei Digital Forum in Asia 2026, held in Vietnam, local technology company Namitech showcased its AI-powered translation platform to deliver real-time language support for delegates, speakers and international attendees.

The system automatically translated speeches and discussions across languages such as Vietnamese, English and Japanese, enhancing accessibility and communication in a multilingual business context.

Namitech’s AI solution combines speech-to-text, natural language processing and translation models to provide near-instant interpretation, reducing reliance on traditional human interpreters and lowering language barriers at high-profile forums.

Organisers and participants highlighted the convenience and effectiveness of the tool, noting smoother engagement for non-native speakers and more inclusive participation.

The deployment reflects broader regional interest in AI-driven language technologies to support business, diplomacy and cross-border collaboration in Asia.

It also highlights Vietnam’s growing role in domestic AI development and its integration into international platforms, aligning with efforts to adopt digital tools that facilitate global dialogue and economic integration.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy orders Amazon to stop processing sensitive employee data after privacy ruling

The Italian data protection authority has ordered Amazon Italia Logistics to halt processing of sensitive employee data after investigators found that the company gathered details ranging from health conditions to union involvement.

Information about workers’ private lives and family members had also been collected, often retained for a decade through internal tracking systems, well beyond what labour rules in Italy allow.

Regulators discovered that some data originated from cameras positioned near restrooms and staff break areas, a practice that breached EU privacy standards.

The watchdog concluded that the company’s monitoring went far beyond what employers are permitted to compile when assessing staff performance or workplace needs.

Amazon responded by stressing that protecting employee information remains a priority and said that internal rules and training programmes are designed to ensure compliance. The company added that any findings from the Italian authority would prompt a review of its procedures instead of being dismissed.

The order arrives as Amazon attempts to regain its lobbying badges at the European Parliament.

Access was suspended in 2024 after senior representatives declined to attend hearings on warehouse working conditions, and opposition from MEPs continues to place pressure on Parliament President Roberta Metsola to reject reinstatement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU moves to enforce digital fairness rules with stronger consumer oversight

Regulatory scrutiny of the EU’s digital fairness framework is set to begin on 1 July as the European Commission moves to tighten its supervision of online platforms.

The initiative forms part of a broader effort to ensure stronger consumer protection across digital markets, with officials signalling stricter oversight of commercial practices that disadvantage users.

The Commission is preparing a major upgrade of its consumer protection framework, expected by December 2026.

The reforms aim to reinforce enforcement tools under the Unfair Commercial Practices Directive and the Consumer Protection Cooperation Regulation, allowing regulators to intervene more effectively when platforms breach fairness standards.

Michael McGrath, Commissioner for Democracy, Justice and Rule of Law, has highlighted the need for greater transparency and accountability as digital markets expand rapidly.

The forthcoming scrutiny focuses on ensuring that platforms respect transparency obligations, avoid manipulating users and provide fair conditions in online transactions.

Regulators seek to replace fragmented enforcement with a more coordinated model that reflects the increasingly cross-border nature of digital commerce.

Stronger consumer safeguards are becoming central to the digital agenda of the EU.

The next phase of reforms is expected to streamline investigations across member states and deliver more predictable outcomes for affected consumers, offering steadier enforcement instead of reactive measures taken after violations escalate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta AI flood of unusable abuse tips overwhelms US investigators

Investigators in the US say that AI systems used by Meta are flooding child protection units with large volumes of unhelpful reports, draining resources rather than assisting ongoing cases.

Officers in the Internet Crimes Against Children network told a New Mexico court that most alerts generated by the company’s platforms lack essential evidence or contain material that is not criminal, leaving teams unable to progress investigations.

Meta rejects the claim that it prioritises profit, stressing its cooperation with law enforcement and highlighting rapid response times to emergency requests.

Its position is challenged by officers who say the volume of AI-generated alerts has doubled since 2024, particularly after the Report Act broadened reporting obligations.

They argue that adolescent conversations and incomplete data now form a sizeable portion of the alerts, while genuine cases of child sexual abuse material are becoming harder to detect.

Internal company documents disclosed at trial show Meta executives raising concerns as early as 2019 about the impact of end-to-end encryption on the firm’s ability to identify child exploitation and support investigators.

Child safety groups have long warned that encryption could limit early detection, even though Meta says it has introduced new tools designed to operate safely within encrypted environments.

The growing influx of unusable tips is taking a heavy toll on investigative teams. Officers in the US say each report must still be reviewed manually, despite the low likelihood of actionable evidence, and the backlog is sapping morale at a time when resources have not kept pace with demand.

They warn that meaningful cases risk being delayed as units struggle with a workload swollen by AI systems tuned to avoid regulatory penalties rather than investigative value.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!