Adult erotica tests OpenAI’s safety claims

OpenAI will loosen some ChatGPT rules, letting users make replies friendlier and allowing erotica for verified adults. OpenAI chief executive Sam Altman framed the shift as a push to ‘treat adult users like adults’, tied to stricter age-gating. The move follows months of new guardrails against sycophancy and harmful dynamics.

The change arrives after reports of vulnerable users forming unhealthy attachments to earlier models. OpenAI has since launched GPT-5 with reduced sycophancy and behaviour routing, plus safeguards for minors and a mental-health council. Critics question whether evidence justifies loosening limits so soon.

Erotic role-play can boost engagement, raising concerns that at-risk users may stay online longer. Access will be restricted to verified adults via age prediction and, if contested, ID checks. That trade-off heightens privacy concerns around document uploads and the risk of misclassification.

It is unclear whether permissive policies will extend to voice, image, or video features, or how regional laws will apply to them. OpenAI says it is not ‘usage-maxxing’ but balancing utility with safety. Observers note that ambitions to reach a billion users heighten moderation pressures.

Supporters cite overdue flexibility for consenting adults and more natural conversation. Opponents warn normalising intimate AI may outpace evidence on mental-health impacts. Age checks can fail, and vulnerable users may slip through without robust oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Wider AI applications take centre stage at Japan’s CEATEC electronics show

At this year’s CEATEC exhibition in Japan, more companies and research institutions are promoting AI applications that stretch well beyond traditional factory or industrial automation.

Innovations on display suggest an increasing emphasis on ‘AI as companion’ systems: tools that help, advise, or augment human abilities in everyday settings.

Fujitsu’s showcase is a strong example. The company is using AI skeleton recognition and agent-based analysis to help people improve movement, whether for sports performance (such as refining a golf swing) or for healthcare settings. These systems give live feedback, coach form, and offer suggestions, all in real time.

Other exhibits combine sensor tech, vision, and AI in consumer-friendly ways. For example, smart fridge compartments that monitor produce, earbuds or glasses that recognise real-world context (a flyer in a shop, say) and suggest recipes, or wearable systems that adapt to your motion.

These are not lab demos; they are meant for direct, everyday interaction. Rising numbers of startups and university groups at CEATEC underscore Japan’s push toward embedding AI deeply in daily life.

The ‘AI for All’ theme and ‘Partner Parks’ at the show reflect a movement toward socially oriented technologies centred on health, convenience, and personalisation. Japan seems to be leaning into AI not just for productivity gains but for lifestyle and well-being enhancements.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI forms Expert Council to guide well-being in AI

OpenAI has announced the establishment of an Expert Council on Well-Being and AI to help it shape ChatGPT, Sora and other products in ways that promote healthier interactions and better emotional support.

The council comprises eight distinguished figures from psychology, psychiatry, human-computer interaction, developmental science and clinical practice.

Members include David Bickham (Digital Wellness Lab, Harvard), Munmun De Choudhury (Georgia Tech), Tracy Dennis-Tiwary (Hunter College), Sara Johansen (Stanford), Andrew K. Przybylski (University of Oxford), David Mohr (Northwestern), Robert K. Ross (public health) and Mathilde Cerioli (everyone.AI).

OpenAI says this new body will meet regularly with internal teams to examine how AI should function in ‘complex or sensitive situations,’ advise on guardrails, and explore what constitutes well-being in human-AI interaction. The council has already influenced how parental controls and notifications to parents of teens in distress were prioritised.

OpenAI emphasises that it remains accountable for its decisions, but commits to ongoing learning through this council, the Global Physician Network, policymakers and experts. The company notes that different age groups, especially teenagers, use AI tools differently, hence the need for tailored insights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US seizes $15 billion crypto from Cambodia fraud ring

US federal prosecutors have seized $15 billion in cryptocurrency tied to a large-scale ‘pig butchering’ investment scam linked to forced labour compounds in Cambodia. Officials said it marks the biggest crypto forfeiture in Justice Department history.

Authorities charged Chinese-born businessman Chen Zhi, founder of the Prince Group, with money laundering and wire fraud. Chen allegedly used the conglomerate as cover for criminal operations that laundered billions through fake crypto investments. He remains at large.

Investigators say Chen and his associates operated at least ten forced labour compounds in Cambodia, where trafficked workers managed thousands of fake social media accounts to lure targets into fraudulent investment schemes.

The US Treasury also imposed sanctions on dozens of Prince Group affiliates, calling them transnational criminal organisations. FBI officials said the scam is part of a wider wave of crypto fraud across Southeast Asia, urging anyone targeted by online investment offers to contact authorities immediately.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Students design app to support teen mental health

Six students from Blythe Bridge High School in Staffordshire are developing an app to help reduce mental health stigma among young people. Their project, called Mindful Mondays, was chosen as the winner of a national competition organised by the suicide prevention charity the Oli Leigh Trust.

The app aims to create a safe and supportive space where teenagers can talk anonymously about their mental health while completing small challenges designed to improve wellbeing. The team hopes it will encourage open conversations and promote positive habits in schools.

Student Sophie Hodgkinson said many young people struggle in silence due to stigma, while teammate Tilly Hyatt added that young creators understand their peers’ challenges better than adults. Their teacher praised the project as a positive step in addressing one of the biggest issues facing schools.

The Oli Leigh Trust said it hopes the app will inspire further innovation led by young people, empowering students to take an active role in supporting each other’s mental health. Development of Mindful Mondays is now under way in the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

An awards win for McAfee’s consumer-first AI defence

McAfee won ‘Best Use of AI in Cybersecurity’ at the 2025 A.I. Awards for its Scam Detector. The tool, which McAfee says is the first to automate deepfake, email, and text-scam detection, anchors the company’s consumer-focused defence strategy. The award recognises its bid to counter fast-evolving online fraud.

Scams are at record levels, with one in three US residents reporting victimisation and average losses of $1,500. Threats now range from fake job offers and text messages to AI-generated deepfakes, increasing the pressure on tools that can act in real time across channels.

McAfee’s Scam Detector uses advanced AI to analyse text, email, and video, blocking dangerous links and flagging deepfakes before they cause harm. It is included with core McAfee plans and available on PC, mobile, and web, positioning it as a default layer for everyday protection.

Adoption has been rapid, with the product crossing one million users in its first months, according to the company. Judges praised its proactive protection and emphasis on accuracy and trust, citing its potential to restore user confidence as AI-enabled deception becomes more sophisticated.

McAfee frames the award as validation of its responsible, consumer-first AI strategy. The company says it will expand Scam Detector’s capabilities while partnering with the wider ecosystem to keep users a step ahead of emerging threats, both online and offline.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft finds 71% of UK workers use unapproved AI tools on the job

A new Microsoft survey has revealed that nearly three in four employees in the UK use AI tools at work without company approval.

The practice, known as ‘shadow AI’, involves workers relying on unapproved systems such as ChatGPT to complete routine tasks. Microsoft warned that unauthorised AI use could expose businesses to data leaks, non-compliance risks, and cyber attacks.

The survey, carried out by Censuswide, questioned over 2,000 employees across different sectors. Seventy-one per cent admitted to using AI tools outside official policies, often because they were already familiar with them in their personal lives.

Many reported using such tools to respond to emails, prepare presentations, and perform financial or administrative tasks, saving almost eight hours of work each week.

Microsoft said only enterprise-grade AI systems can provide the privacy and security organisations require. Darren Hardman, Microsoft’s UK and Ireland chief executive, urged companies to ensure workplace AI tools are designed for professional use rather than consumer convenience.

He emphasised that secure integration can allow firms to benefit from AI’s productivity gains while protecting sensitive data.

The study estimated that AI technology saves 12.1 billion working hours annually across the UK, equivalent to about £208 billion in employee time. Workers reported using the time gained through AI to improve work-life balance, learn new skills, and focus on higher-value projects.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teen content on Instagram now guided by PG-13 standards

Instagram is aligning its Teen Accounts with PG-13 movie standards, aiming to ensure that users under 18 only see age-appropriate material. Teens will automatically be placed in a 13+ setting and will need parental permission to change it.

Parents who want tighter supervision can activate a new ‘Limited Content’ mode that filters out even more material and restricts comments and AI interactions.

The company reviewed its policies to match familiar parental guidelines, further limiting exposure to content with strong language, risky stunts, or references to substances. Teens will also be blocked from following accounts that share inappropriate content or contain suggestive names and bios.

Searches for sensitive terms such as ‘gore’ or ‘alcohol’ will no longer return results, and the same restrictions will extend to Explore, Reels, and AI chat experiences.

Instagram worked with thousands of parents worldwide to shape these policies, collecting more than three million content ratings to refine its protections. Surveys show strong parental support, with most saying the PG-13 system makes it easier to understand what their teens are likely to see online.

The updates begin rolling out in the US, UK, Australia, and Canada and will expand globally by the end of the year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tokens-at-scale with Intel’s Crescent Island and Xe architecture

Intel has unveiled its ‘Crescent Island’ data-centre GPU at the OCP Global Summit, targeting real-time inference everywhere with high memory capacity and energy-efficient performance for agentic AI.

Sachin Katti said that scaling complex inference requires heterogeneous systems and an open, developer-first software stack; Intel positions its Xe architecture GPUs to deliver efficient headroom as token volumes surge.

Intel’s approach spans the AI PC, data centre, and edge, pairing Xeon 6 processors and GPUs with workload-centric orchestration to simplify deployment, scaling, and developer continuity.

Crescent Island is designed for air-cooled enterprise servers, optimised for power and cost, and tuned for inference with large memory capacity and bandwidth.

Key features include the Xe3P microarchitecture for performance-per-watt gains, 160GB LPDDR5X, broad data-type support for ‘tokens-as-a-service’, and a unified software stack proven on Arc Pro B-Series; customer sampling is slated for H2 2026.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Abu Dhabi deploys AI-first systems with NVIDIA and Oracle

Oracle and NVIDIA have joined forces to advance sovereign AI, supporting Abu Dhabi’s vision of becoming an AI-native government by 2027.

The partnership combines NVIDIA’s computing platforms with Oracle Cloud Infrastructure to create secure, high-performance systems that deliver next-generation citizen services, including multilingual AI assistants, automated notifications, and intelligent compliance solutions.

Abu Dhabi’s Government Digital Strategy 2025–2027, backed by an AED 13 billion investment, follows a phased ‘crawl, walk, run’ approach. The initiative has already gone live across 25 government entities, enabling over 15,000 daily users to access AI-accelerated services.

Generative AI applications are now integrated into human resources, procurement, and financial reporting, while advanced agentic AI and autonomous workflows will further enhance government-wide operations.

The strategy ensures full data sovereignty while driving innovation and efficiency across the public sector.

Partnerships with Deloitte and Core42 provide infrastructure and compliance support, while over 200 AI-powered capabilities are deployed to boost digital skills, economic growth, and employment opportunities.

By 2027, the initiative is expected to contribute more than AED 24 billion to Abu Dhabi’s GDP and create over 5,000 jobs, offering a global blueprint for AI-native government transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!