OpenAI outlines advertising plans for ChatGPT access

US AI firm OpenAI has announced plans to test advertising within ChatGPT as part of a broader effort to widen access to advanced AI tools.

The initiative focuses on supporting the free version and the low-cost ChatGPT Go subscription, while paid tiers such as Plus, Pro, Business, and Enterprise will continue without advertisements.

According to the company, advertisements will remain clearly separated from ChatGPT responses and will never influence the answers users receive.

Responses will continue to be optimised for usefulness instead of commercial outcomes, with OpenAI emphasising that trust and perceived neutrality remain central to the product’s value.

User privacy forms a core pillar of the approach. Conversations will stay private, data will not be sold to advertisers, and users will retain the ability to disable ad personalisation or remove advertising-related data at any time.

During early trials, ads will not appear for accounts linked to users under 18, nor within sensitive or regulated areas such as health, mental wellbeing, or politics.

OpenAI describes advertising as a complementary revenue stream rather than a replacement for subscriptions.

The company argues that a diversified model can help keep advanced intelligence accessible to a wider population, while maintaining long-term incentives aligned with user trust and product quality.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New ETSI standard defines cybersecurity rules for AI systems

ETSI has released ETSI EN 304 223, a new European Standard establishing baseline cybersecurity requirements for AI systems.

Approved by national standards bodies, the framework becomes the first globally applicable EN focused specifically on securing AI, extending its relevance beyond European markets.

The standard recognises that AI introduces security risks not found in traditional software. Threats such as data poisoning, indirect prompt injection and vulnerabilities linked to complex data management demand tailored defences instead of conventional approaches alone.

ETSI EN 304 223 combines established cybersecurity practices with targeted measures designed for the distinctive characteristics of AI models and systems.

Adopting a full lifecycle perspective, the ETSI framework defines thirteen principles across secure design, development, deployment, maintenance and end of life.

Alignment with internationally recognised AI lifecycle models supports interoperability and consistent implementation across existing regulatory and technical ecosystems.

ETSI EN 304 223 is intended for organisations across the AI supply chain, including vendors, integrators and operators, and covers systems based on deep neural networks, including generative AI.

Further guidance is expected through ETSI TR 104 159, which will focus on generative AI risks such as deepfakes, misinformation, confidentiality concerns and intellectual property protection.


EU lawmakers push limits on AI nudity apps

More than 50 EU lawmakers have called on the European Commission to clarify whether AI-powered nudification apps are prohibited under existing EU legislation, citing concerns about online harm and legal uncertainty.

The request follows public scrutiny of Grok, the chatbot owned by xAI, which was found to generate manipulated intimate images involving women and minors.

Lawmakers argue that such systems enable gender-based online violence and the production of child sexual abuse material, rather than serving legitimate creative uses.

In their letter, lawmakers questioned whether current provisions under the EU AI Act sufficiently address nudification tools or whether additional prohibitions are required. They also warned that enforcement focused only on the largest online platforms risks leaving similar applications operating elsewhere.

While EU authorities have taken steps under the Digital Services Act to assess platform responsibilities, lawmakers stressed the need for broader regulatory clarity and consistent application across the digital market.

Further political debate on the issue is expected in the coming days.


Britain’s transport future tied to AI investment

AI is expected to play an increasingly important role in improving Britain’s road and rail networks. MPs highlighted its potential during a transport-focused industry summit in Parliament.

The Transport Select Committee chair welcomed government investment in AI and infrastructure. Road maintenance, connectivity and reduced delays were cited as priorities for economic growth.

UK industry leaders showcased AI tools that autonomously detect and repair potholes. Businesses said more intelligent systems could improve reliability while cutting costs and disruption.

Experts warned that stronger cybersecurity must accompany AI deployment. Safeguards are needed to protect critical transport infrastructure from external threats and misuse.


Belgian hospital AZ Monica hit by cyberattack

A cyberattack hit AZ Monica hospital in Belgium, forcing the shutdown of all servers, cancellation of scheduled procedures, and transfer of critical patients. The hospital network, with campuses in Antwerp and Deurne, provides acute, outpatient, and specialised care to the local population.

The attack was detected at 6:32 a.m., prompting staff to disconnect systems proactively. While urgent care continues, non-urgent consultations and surgeries have been postponed due to restricted access to the digital medical record.

Seven critical patients were safely transferred with Red Cross support.

Authorities and hospital officials have launched an investigation, notifying police and prosecutors. Details of the attack remain unclear, and unverified reports of a ransom demand have not been confirmed.

The hospital emphasised that patient safety and continuity of care are top priorities.

Cyberattacks on hospitals can severely disrupt medical services, delay urgent treatments, and put patients’ lives at risk, highlighting the growing vulnerability of healthcare systems to digital threats.


Microsoft disrupts global RedVDS cybercrime network

Microsoft has launched a joint legal action in the US and the UK to dismantle RedVDS, a subscription service supplying criminals with disposable virtual computers for large-scale fraud. The operation, conducted with German authorities and Europol, seized key domains and shut down the RedVDS marketplace.

RedVDS enabled sophisticated attacks, including business email compromise and real estate payment diversion schemes. Since March 2025, it has caused about $40 million in losses in the United States, hitting organisations such as H2-Pharma and Gatehouse Dock Condominium Association.

Globally, over 191,000 organisations have been impacted by RedVDS-enabled fraud, often combined with AI-generated emails and multimedia impersonation.

Microsoft emphasises that targeting the infrastructure, rather than individual attackers, is key. International cooperation disrupted servers and payment networks supporting RedVDS and helped identify those responsible.

Users are advised to verify payment requests, use multifactor authentication, and report suspicious activity to reduce risk.

The civil action marks the 35th case by Microsoft’s Digital Crimes Unit, reflecting a sustained commitment to dismantling online fraud networks. As cybercrime evolves, Microsoft and partners aim to block criminals and protect people and organisations globally.


Government IT vulnerabilities revealed by UK public sector cyberattack

A UK public sector cyberattack on Kensington and Chelsea Council has exposed the growing vulnerability of government organisations to data breaches. The council stated that personal details linked to hundreds of thousands of residents may have been compromised after attackers targeted the shared IT infrastructure.

Security experts warn that interconnected systems, while cost-efficient, create systemic risks. Dray Agha, senior manager of security operations at Huntress, said a single breach can quickly spread across partner organisations, disrupting essential services and exposing sensitive information.

Public sector bodies remain attractive targets due to ageing infrastructure and the volume of personal data they hold. Records such as names, addresses, national ID numbers, health information, and login credentials can be exploited for fraud, identity theft, and large-scale scams.

Gregg Hardie, public sector regional vice president at SailPoint, noted that attackers often employ simple, high-volume tactics rather than sophisticated techniques. Compromised credentials allow criminals to blend into regular activity and remain undetected for long periods before launching disruptive attacks.

Hardie said stronger identity security and continuous monitoring are essential to prevent minor intrusions from escalating. Investing in resilient, segmented systems could help reduce the impact of future cyberattacks on the UK public sector and protect critical operations.


One-click vulnerability in Telegram bypasses VPN and proxy protection

A newly identified vulnerability in Telegram’s mobile apps allows attackers to reveal users’ real IP addresses with a single click. The flaw, known as a ‘one-click IP leak’, can expose location and network details even when VPNs or proxies are enabled.

The issue comes from Telegram’s automatic proxy testing process. When a user clicks a disguised proxy link, the app initiates a direct connection request that bypasses all privacy protections and reveals the device’s real IP address.

Cybersecurity researcher @0x6rss demonstrated an attack on X, showing that a single click is enough to log a victim’s real IP address. The request behaves similarly to known Windows NTLM leaks, where background authentication attempts expose identifying information without explicit user consent.

Attackers can embed malicious proxy links in chats or channels, masking them as standard usernames. Once clicked, Telegram silently runs the proxy test, bypasses VPN or SOCKS5 protections, and sends the device’s real IP address to the attacker’s server, enabling tracking, surveillance, or doxxing.

Both Android and iOS versions are affected, putting millions of privacy-focused users at risk. Researchers recommend avoiding unknown links, turning off automatic proxy detection where possible, and using firewall tools to block outbound proxy tests. Telegram has not publicly confirmed a fix.


Betterment confirms data breach after social engineering attack

Fintech investment platform Betterment has confirmed a data breach after hackers gained unauthorised access to parts of its internal systems and exposed personal customer information.

The incident occurred on 9 January and involved a social engineering attack connected to third-party platforms used for marketing and operational purposes.

The company said the compromised data included customer names, email and postal addresses, phone numbers and dates of birth.

No passwords or account login credentials were accessed, according to Betterment, which stressed that customer investment accounts were not breached.

Using the limited system access, attackers sent fraudulent notifications to some users promoting a crypto-related scam.

Customers were advised to ignore the messages rather than engage with them, while Betterment moved quickly to revoke the unauthorised access and begin a formal investigation with external cybersecurity support.

Betterment has not disclosed how many users were affected and has yet to provide further technical details. Representatives did not respond to requests for comment at the time of publication, while the company said outreach to impacted customers remains ongoing.


Australia raises concerns over AI misuse on X

The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.

Although overall report numbers remain low, Australian authorities have observed an increase over recent weeks.

The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.

X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.

eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.

Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.

Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.

Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.
