Adult erotica tests OpenAI’s safety claims

OpenAI will loosen some ChatGPT rules, letting users make replies friendlier and allowing erotica for verified adults. CEO Sam Altman framed the shift as a commitment to ‘treat adult users like adults’, tied to stricter age-gating. The move follows months of new guardrails against sycophancy and harmful dynamics.

The change arrives after reports of vulnerable users forming unhealthy attachments to earlier models. OpenAI has since launched GPT-5 with reduced sycophancy and behaviour routing, plus safeguards for minors and a mental-health council. Critics question whether evidence justifies loosening limits so soon.

Erotic role-play can boost engagement, raising concerns that at-risk users may stay online longer. Access will be restricted to verified adults via age prediction and, if contested, ID checks. That trade-off intensifies privacy concerns around document uploads and the risk of misclassification.

It is unclear whether permissive policies will extend to voice, image, or video features, or how regional laws will apply to them. OpenAI says it is not ‘usage-maxxing’ but balancing utility with safety. Observers note that ambitions to reach a billion users heighten moderation pressures.

Supporters cite overdue flexibility for consenting adults and more natural conversation. Opponents warn normalising intimate AI may outpace evidence on mental-health impacts. Age checks can fail, and vulnerable users may slip through without robust oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

An awards win for McAfee’s consumer-first AI defence

McAfee won ‘Best Use of AI in Cybersecurity’ at the 2025 A.I. Awards for its Scam Detector. The tool, which McAfee says is the first to automate deepfake, email, and text-scam detection, underscores a consumer-focused defence. The award recognises its bid to counter fast-evolving online fraud.

Scams are at record levels, with one in three US residents reporting victimisation and average losses of $1,500. Threats now range from fake job offers and text messages to AI-generated deepfakes, increasing the pressure on tools that can act in real time across channels.

McAfee’s Scam Detector uses advanced AI to analyse text, email, and video, blocking dangerous links and flagging deepfakes before they cause harm. It is included with core McAfee plans and available on PC, mobile, and web, positioning it as a default layer for everyday protection.

Adoption has been rapid, with the product crossing one million users in its first months, according to the company. Judges praised its proactive protection and emphasis on accuracy and trust, citing its potential to restore user confidence as AI-enabled deception becomes more sophisticated.

McAfee frames the award as validation of its responsible, consumer-first AI strategy. The company says it will expand Scam Detector’s capabilities while partnering with the wider ecosystem to keep users a step ahead of emerging threats, both online and offline.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

A common EU layer for age verification without a single age limit

Denmark will push for EU-wide age-verification rules to avoid a patchwork of national systems. As Council presidency, Copenhagen prioritises child protection online while keeping flexibility on national age limits. The aim is coordination without a single ‘digital majority’ age.

Ministers plan to give the European Commission a clear mandate for interoperable, privacy-preserving tools. An updated blueprint is being piloted in five states and aligns with the EU Digital Identity Wallet, which is due by the end of 2026. The goal is seamless, cross-border checks with minimal data exposure.

Copenhagen’s domestic agenda moves in parallel with a proposed ban on under-15 social media use. The government will consult national parties and EU partners on the scope and enforcement. Talks in Horsens, Denmark, signalled support for stronger safeguards and EU-level verification.

The emerging compromise separates ‘how to verify’ at the EU level from ‘what age to set’ at the national level. Proponents argue this avoids fragmentation while respecting domestic choices; critics warn implementation must minimise privacy risks and platform dependency.

Next steps include expanding pilots, formalising the Commission’s mandate, and publishing impact assessments. Clear standards on data minimisation, parental consent, and appeals will be vital. Affordable compliance for SMEs and independent oversight can sustain public trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ethernet wins in raw security, but Wi-Fi can compete with the right setup

How you connect to the internet matters: not just for speed, but for your privacy and security. That is the main takeaway from a recent Fox News report comparing Ethernet and Wi-Fi security.

At its core, Ethernet is inherently more secure in many scenarios because it requires physical access. Data travels along a cable directly to your router, reducing the risk of eavesdropping or signal interception mid-air.

Wi-Fi, by contrast, sends data through the air. That makes it more vulnerable, especially if a network uses weak passwords or outdated encryption standards. Attackers within signal range might exploit poorly secured networks.

But Ethernet isn’t a guaranteed fortress. The Fox article emphasises that security depends largely on your entire setup. A Wi-Fi network with strong encryption (ideally WPA3), robust passwords, regular firmware updates, and a well-configured router can approach the network security level of wired connections.
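The point about encryption standards (ideally WPA3) can be illustrated with a short sketch that ranks scanned networks by their advertised security protocol. Everything below is hypothetical: the network names, the `PROTOCOL_RANK` ordering, and the `weak_networks` helper are illustrative only, and real scan results would come from the operating system's Wi-Fi tooling rather than a hard-coded list.

```python
# Illustrative sketch (hypothetical data): flag Wi-Fi networks whose
# advertised security protocol falls below a chosen minimum.

# Rough ordering of common Wi-Fi security protocols, weakest first.
PROTOCOL_RANK = {"OPEN": 0, "WEP": 1, "WPA1": 2, "WPA2": 3, "WPA3": 4}

def weak_networks(scan_results, minimum="WPA2"):
    """Return SSIDs whose security falls below the given minimum protocol."""
    threshold = PROTOCOL_RANK[minimum]
    return [ssid for ssid, proto in scan_results
            if PROTOCOL_RANK.get(proto, 0) < threshold]

# Hypothetical scan results: (SSID, advertised protocol) pairs.
networks = [("HomeLan", "WPA3"), ("CoffeeShop", "OPEN"), ("Legacy", "WEP")]
print(weak_networks(networks))  # → ['CoffeeShop', 'Legacy']
```

The sketch captures only the comparison logic; in practice, the same idea underlies warnings that modern operating systems show when you join an open or WEP-protected network.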

Each device you connect (smartphones, smart home gadgets, IoT sensors) increases your network’s exposure. Wi-Fi amplifies that risk since more devices can join wirelessly. Ethernet limits the number of direct connection points, which reduces the attack surface.

In short, Ethernet gives you a baseline security advantage, but a well-secured Wi-Fi network can be quite robust. The critical factor is how carefully you manage your network settings and devices.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft ends support for Windows 10

Windows 10 support ends on Tuesday, 14 October 2025, and routine security patches and fixes will no longer be provided. Devices will face increased cyber risk without updates. Microsoft urges upgrades to Windows 11 where possible.

Windows powers more than 1.4 billion devices, with Windows 10 still widely used. UK consumer group Which? estimates 21 million users in the UK alone. Some plan to carry on regardless, citing cost, waste, and still-working hardware.

Upgrading to Windows 11 is free for eligible PCs via the Settings app. Others can enrol in Extended Security Updates (ESU), which deliver security fixes only until October 2026. ESU offers no technical support or feature updates.

Personal users in the European Economic Area can register for ESU at no charge. Elsewhere, some users may qualify for free ESU; otherwise it costs $30 or 1,000 Microsoft Rewards points. Businesses pay $61 per device for the first year.

Unsupported systems become easier targets for malware and scams, and some software may degrade over time. Organisations risk compliance issues running out-of-support platforms. Privacy-minded users may also dislike Windows 11’s tighter Microsoft account requirements.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Grok to get new AI video detection tools, Musk says

Elon Musk said Grok will analyse bitstreams for AI signatures and scan the web to verify the origins of videos. Grok itself added that it will detect subtle AI artefacts in compression and generation patterns that humans cannot see.

AI tools such as Grok Imagine and Sora are reshaping the internet by making realistic video generation accessible to anyone. The rise of deepfakes has alarmed users, who warn that high-quality fake videos could soon be indistinguishable from real footage.

A user on X expressed concern that leaders are not addressing the growing risks. Musk responded, revealing that his AI company xAI is developing Grok’s ability to detect AI-generated videos and trace their origins online.

The detection features aim to rebuild trust in digital media as AI-generated content spreads. Commentators have dubbed the flood of such content ‘AI slop’, raising concerns about misinformation and consent.

Concerns about deepfakes have grown since OpenAI launched the Sora app. A surge in deepfake content prompted OpenAI to tighten restrictions on cameo mode, allowing users to opt out of specific scenarios.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Age verification and online safety dominate EU ministers’ Horsens meeting

EU digital ministers are meeting in Horsens on 9–10 October to improve the protection of minors online. Age verification, child protection, and digital sovereignty are at the top of the agenda under the Danish EU Presidency.

The Informal Council Meeting on Telecommunications is hosted by the Ministry of Digital Affairs of Denmark and chaired by Caroline Stage. European Commission Executive Vice-President Henna Virkkunen is also attending to support discussions on shared priorities.

Ministers are considering measures to prevent children from accessing age-inappropriate platforms and reduce exposure to harmful features like addictive designs and adult content. Stronger safeguards across digital services are being discussed.

The talks also focus on Europe’s technological independence. Ministers aim to enhance the EU’s digital competitiveness and sovereignty while setting a clear direction ahead of the Commission’s upcoming Digital Fairness Act proposal.

A joint declaration, ‘The Jutland Declaration’, is expected as an outcome. It will highlight the need for stronger EU-level measures and effective age verification to create a safer online environment for children.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ID data from 70,000 Discord users exposed in third-party breach

Discord has confirmed that official ID images belonging to around 70,000 users may have been exposed in a cyberattack targeting a third-party service provider. The platform itself was not breached, but hackers targeted a company involved in age verification processes.

The leaked data may include personal information, partial credit card details, and conversations with Discord’s customer service agents. No full credit card numbers, passwords, or activity beyond support interactions were affected. Impacted users have been contacted, and law enforcement is investigating.

The platform has revoked the support provider’s access to its systems and has not named the third party involved. Zendesk, a customer service software supplier to Discord, said its own systems were not compromised and denied being the source of the breach.

Discord has rejected claims circulating online that the breach was larger than reported, calling them part of an attempted extortion. The company stated it would not comply with demands from the attackers. Cybercriminals often sell personal information on illicit markets for use in scams.

ID numbers and official documents are especially valuable because, unlike credit card details, they rarely change. Discord previously tightened its age-verification measures following concerns over the misuse of some servers to distribute illegal material.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Discord incident highlights growing vendor security risks

A September breach at one of Discord’s customer service vendors has exposed user data, highlighting the growing cybersecurity risks associated with third-party providers. Attackers exploited vulnerabilities in the external platform, but Discord’s core systems were not compromised.

Exposed information includes usernames, email addresses, phone numbers, and partial payment details, such as the last four digits of credit card numbers. No full card numbers, passwords, or messages were accessed, which limited the scope of the incident compared to more severe breaches.

Discord revoked the vendor’s system access, launched an investigation, and engaged law enforcement and forensic experts. Only users who contacted support were affected. Individuals impacted are being notified by email and advised to remain vigilant for potential scams.

The incident underscores the growing risk of supply chain attacks, where external service providers become weak points in otherwise well-secured organisations. As companies rely more on vendors, attackers are increasingly targeting these indirect pathways.

Cybersecurity analysts warn that third-party breaches are on the rise amid increasingly sophisticated phishing and AI-enabled scams. Strengthening vendor oversight, improving internal training, and maintaining clear communication with users are seen as essential next steps.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!