New ISO 27701 update strengthens privacy compliance

The International Organization for Standardization has released a major update to ISO 27701, the global standard for managing privacy compliance programmes. The revised version, published in 2025, separates the Privacy Information Management System (PIMS) from ISO 27001.

The updated standard introduces detailed clauses defining how organisations should establish, implement and continually improve their PIMS. It places strong emphasis on leadership accountability, risk assessment, performance evaluation and continual improvement.

Annex A of the standard sets out new control tables for both data controllers and processors. The update also refines terminology and aligns more closely with the principles of the EU GDPR and UK GDPR, making it suitable for multinational organisations seeking a unified privacy management approach.

Experts say the revised ISO 27701 offers a flexible structure but should not be seen as a substitute for legal compliance. Instead, it provides a foundation for building stronger, auditable privacy frameworks that align global business operations with evolving regulatory standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Humanity AI launches $500M initiative to build a people-centred future

A coalition of ten leading philanthropic foundations has pledged $500 million over five years to ensure that AI evolves in ways that strengthen humanity rather than marginalise it.

The initiative, called Humanity AI, brings together organisations such as the Ford, MacArthur, Mellon, and Mozilla foundations to promote a people-driven vision for AI that enhances creativity, democracy, and security.

As AI increasingly shapes every aspect of daily life, the coalition seeks to place citizens at the centre of the conversation instead of leaving decisions to a few technology firms.

It plans to support new research, advocacy, and partnerships that safeguard democratic rights, protect creative ownership, and promote equitable access to education and employment.

The initiative also prioritises the ethical use of AI in safety and economic systems, ensuring innovation does not come at the expense of human welfare.

John Palfrey, president of the MacArthur Foundation, said Humanity AI aims to shift power back to the public by funding technologists and advocates committed to responsible innovation.

Michele Jawando of the Omidyar Network added that the future of AI should be designed by people collectively, not predetermined by algorithms or corporate agendas.

Rockefeller Philanthropy Advisors will oversee the fund, which begins issuing grants in 2026. Humanity AI invites additional partners to join in creating a future where people shape technology instead of being shaped by it.

Adult erotica tests OpenAI’s safety claims

OpenAI will loosen some ChatGPT rules, letting users make replies friendlier and allowing erotica for verified adults. OpenAI CEO Sam Altman framed the shift as ‘treat adult users like adults’, tied to stricter age-gating. The move follows months of new guardrails against sycophancy and harmful dynamics.

The change arrives after reports of vulnerable users forming unhealthy attachments to earlier models. OpenAI has since launched GPT-5 with reduced sycophancy and behaviour routing, plus safeguards for minors and a mental-health council. Critics question whether evidence justifies loosening limits so soon.

Erotic role-play can boost engagement, raising concerns that at-risk users may stay online longer. Access will be restricted to verified adults via age prediction and, if contested, ID checks. That trade-off intensifies privacy tensions around document uploads and potential errors.

It is unclear whether permissive policies will extend to voice, image, or video features, or how regional laws will apply to them. OpenAI says it is not ‘usage-maxxing’ but balancing utility with safety. Observers note that ambitions to reach a billion users heighten moderation pressures.

Supporters cite overdue flexibility for consenting adults and more natural conversation. Opponents warn normalising intimate AI may outpace evidence on mental-health impacts. Age checks can fail, and vulnerable users may slip through without robust oversight.

Mozilla integrates Perplexity AI into Firefox’s search features

Mozilla has announced that it is integrating Perplexity’s AI answer engine into Firefox as a choice available in the browser’s search options.

The feature had already been piloted in markets including the US, UK and Germany. Now Firefox is bringing the option to desktop users globally, with mobile rollout expected in the coming months.

When enabled, Perplexity AI provides conversational search: instead of just showing a list of web pages, it returns direct answers with citations. Users can activate it via the unified search button in the address bar or by configuring their default search engine settings.

Mozilla says the integration reflects positive feedback from early users and signals a desire to give people more choice in how they get information. The company also notes that Perplexity ‘doesn’t share or sell users’ personal data,’ which aligns with Mozilla’s privacy principles.

Firefox also continues to evolve other browser features. One is profiles, now broadly available, which lets users maintain separate browser setups (for example, work vs home). The browser is also experimenting with visual search features using Google Lens for users who keep Google as their default provider.

OpenAI forms Expert Council to guide well-being in AI

OpenAI has announced the establishment of an Expert Council on Well-Being and AI to help it shape ChatGPT, Sora and other products in ways that promote healthier interactions and better emotional support.

The council comprises eight distinguished figures from psychology, psychiatry, human-computer interaction, developmental science and clinical practice.

Members include David Bickham (Digital Wellness Lab, Harvard), Munmun De Choudhury (Georgia Tech), Tracy Dennis-Tiwary (Hunter College), Sara Johansen (Stanford), Andrew K. Przybylski (University of Oxford), David Mohr (Northwestern), Robert K. Ross (public health) and Mathilde Cerioli (everyone.AI).

OpenAI says this new body will meet regularly with internal teams to examine how AI should function in ‘complex or sensitive situations,’ advise on guardrails, and explore what constitutes well-being in human-AI interaction. For example, the council already influenced how parental controls and user-teen distress notifications were prioritised.

OpenAI emphasises that it remains accountable for its decisions, but commits to ongoing learning through this council, the Global Physician Network, policymakers and experts. The company notes that different age groups, especially teenagers, use AI tools differently, hence the need for tailored insights.

Students design app to support teen mental health

Six students from Blythe Bridge High School in Staffordshire are developing an app to help reduce mental health stigma among young people. Their project, called Mindful Mondays, won a national competition organised by the Oli Leigh Trust, a suicide prevention charity.

The app aims to create a safe and supportive space where teenagers can talk anonymously about their mental health while completing small challenges designed to improve wellbeing. The team hopes it will encourage open conversations and promote positive habits in schools.

Student Sophie Hodgkinson said many young people struggle in silence due to stigma, while teammate Tilly Hyatt added that young creators understand their peers’ challenges better than adults. Their teacher praised the project as a positive step in addressing one of the biggest issues facing schools.

The Oli Leigh Trust said it hopes the app will inspire further innovation led by young people, empowering students to take an active role in supporting each other’s mental health. Development of Mindful Mondays is now under way in the UK.

Teenagers turn to AI for learning but struggle to spot false information

A new Oxford University Press (OUP) report has found that most teenagers are using AI for schoolwork but many cannot tell when the information it gives them is false. More than 2,000 students aged 13 to 18 took part in the survey.

Around eight in ten pupils admitted using AI for homework or revision, often treating it as a digital tutor. However, many are simply copying material without being able to check its accuracy.

Assistant headteacher Dan Williams noted that even teachers sometimes struggle to identify AI-generated content, particularly in videos.

Despite concerns about misinformation, most pupils view AI positively. Nine in ten said they had benefited from using it, particularly in improving creative writing, problem-solving and critical thinking.

To support schools, OUP has launched an AI and Education Hub to help teachers develop confidence with the technology, while the Department for Education has released guidance on using AI safely in classrooms.

Microsoft finds 71% of UK workers use unapproved AI tools on the job

A new Microsoft survey has revealed that nearly three in four employees in the UK use AI tools at work without company approval.

The practice, referred to as ‘shadow AI’, involves workers relying on unapproved systems such as ChatGPT to complete routine tasks. Microsoft warned that unauthorised AI use could expose businesses to data leaks, non-compliance risks, and cyber attacks.

The survey, carried out by Censuswide, questioned over 2,000 employees across different sectors. Seventy-one per cent admitted to using AI tools outside official policies, often because they were already familiar with them in their personal lives.

Many reported using such tools to respond to emails, prepare presentations, and perform financial or administrative tasks, saving almost eight hours of work each week.

Microsoft said only enterprise-grade AI systems can provide the privacy and security organisations require. Darren Hardman, Microsoft’s UK and Ireland chief executive, urged companies to ensure workplace AI tools are designed for professional use rather than consumer convenience.

He emphasised that secure integration can allow firms to benefit from AI’s productivity gains while protecting sensitive data.

The study estimated that AI technology saves 12.1 billion working hours annually across the UK, equivalent to about £208 billion in employee time. Workers reported using the time gained through AI to improve work-life balance, learn new skills, and focus on higher-value projects.
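As a back-of-envelope check on those headline figures (using only the totals reported above, not Microsoft’s underlying methodology), the implied average value of each hour saved works out to roughly £17:

```python
# Rough sanity check on the survey's headline estimates.
hours_saved_per_year = 12.1e9   # annual working hours saved across the UK
value_of_time_gbp = 208e9       # estimated value of that time in pounds

# Implied average value of one saved working hour
implied_hourly_value = value_of_time_gbp / hours_saved_per_year
print(f"Implied value per hour saved: £{implied_hourly_value:.2f}")
```

That figure is broadly in line with average UK hourly earnings, which suggests the £208 billion estimate is simply the saved hours priced at a typical wage rate.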

Teen content on Instagram now guided by PG-13 standards

Instagram is aligning its Teen Accounts with PG-13 movie standards, aiming to ensure that users under 18 only see age-appropriate material. Teens will automatically be placed in a 13+ setting and will need parental permission to change it.

Parents who want tighter supervision can activate a new ‘Limited Content’ mode that filters out even more material and restricts comments and AI interactions.

The company reviewed its policies to match familiar parental guidelines, further limiting exposure to content with strong language, risky stunts, or references to substances. Teens will also be blocked from following accounts that share inappropriate content or contain suggestive names and bios.

Searches for sensitive terms such as ‘gore’ or ‘alcohol’ will no longer return results, and the same restrictions will extend to Explore, Reels, and AI chat experiences.

Instagram worked with thousands of parents worldwide to shape these policies, collecting more than three million content ratings to refine its protections. Surveys show strong parental support, with most saying the PG-13 system makes it easier to understand what their teens are likely to see online.

The updates begin rolling out in the US, UK, Australia, and Canada and will expand globally by the end of the year.

Researchers expose weak satellite security with cheap equipment

Scientists in the US have shown how easy it is to intercept private messages and military information from satellites using equipment costing less than €500.

Researchers from the University of California, San Diego and the University of Maryland scanned internet traffic from 39 geostationary satellites and 411 transponders over seven months.

They discovered unencrypted data, including phone numbers, text messages, and browsing history from networks such as T-Mobile, TelMex, and AT&T, as well as sensitive military communications from the US and Mexico.

The researchers used everyday tools such as TV satellite dishes to collect and decode the signals, proving that anyone with a basic setup and a clear view of the sky could potentially access unprotected data.

They said there is a ‘clear mismatch’ between how satellite users assume their data is secured and how it is handled in reality. Despite the industry’s standard practice of encrypting communications, many transmissions were left exposed.

Companies often avoid stronger encryption because it increases costs and reduces bandwidth efficiency. The researchers noted that firms such as Panasonic could lose up to 30 per cent in revenue if all data were encrypted.

While intercepting satellite data still requires technical skill and precise equipment alignment, the study highlights how affordable tools can reveal serious weaknesses in global satellite security.
