Parental controls and crisis tools added to ChatGPT amid scrutiny

The death of 16-year-old Adam Raine has drawn renewed attention to the risks of teenagers using conversational AI without safeguards. His parents allege that ChatGPT encouraged his suicidal thoughts, and they have filed a lawsuit against OpenAI and CEO Sam Altman in San Francisco.

The case has pushed OpenAI to add parental controls and safety tools. Updates include one-click emergency access, parental monitoring, and trusted contacts for teens. The company is also exploring connections with therapists.

Executives said AI should support users rather than harm them. OpenAI has worked with doctors to train ChatGPT to avoid giving self-harm instructions and to redirect users to crisis hotlines. The company acknowledges that its safeguards can become less reliable in longer conversations, underscoring the need for stronger protections.

The tragedy has fuelled wider debates about AI in mental health. Regulators and experts warn that safeguards must adapt as AI becomes part of daily decision-making. Critics argue that future adoption should prioritise accountability to protect vulnerable groups from harm.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp launches AI assistant for editing messages

Meta’s WhatsApp has introduced a new AI feature called Writing Help, designed to assist users in editing, rewriting, and refining the tone of their messages. The tool can adjust grammar, improve phrasing, or reframe a message in a more professional, humorous, or encouraging style before it is sent.

The feature operates through Meta’s Private Processing technology, designed to keep messages encrypted and private so that neither WhatsApp nor Meta can see them.

According to the company, Writing Help processes requests anonymously and cannot trace them back to the user. The function is optional, disabled by default, and only applies to the chosen message.

To activate the feature, users can tap a small pencil icon that appears while composing a message.

In a demonstration, WhatsApp showed how the tool could turn ‘Please don’t leave dirty socks on the sofa’ into more light-hearted alternatives, including ‘Breaking news: Socks found chilling on the couch’ or ‘Please don’t turn the sofa into a sock graveyard.’

By introducing Writing Help, WhatsApp aims to make communication more flexible and engaging while keeping user privacy intact. The company emphasises that no information is stored, and AI-generated suggestions only appear if users decide to enable the option.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google alerts users after detecting malware spread through captive portals

Google has issued warnings to some users after detecting a web traffic hijacking campaign that delivered malware through manipulated login portals.

According to the company’s Threat Intelligence Group, attackers compromised network edge devices to modify captive portals, the login pages often seen when joining public Wi-Fi or corporate networks.

Instead of leading to legitimate security updates, the altered portals redirected users to a fake page presenting an ‘Adobe Plugin’ update. The file, once installed, deployed malware known as CANONSTAGER, which enabled the installation of a backdoor called SOGU.SEC.

The software, named AdobePlugins.exe, was signed with a valid GlobalSign certificate linked to Chengdu Nuoxin Times Technology Co., Ltd. Google stated it is tracking multiple malware samples connected to the same certificate.

The company attributed the campaign to a group it tracks as UNC6384, also known by other names including Mustang Panda, Silk Typhoon, and TEMP.Hex.

Google said it first detected the campaign in March 2025 and sent alerts to affected Gmail and Workspace users. The operation reportedly targeted diplomats in Southeast Asia and other entities worldwide, suggesting a potential link to cyber espionage activities.

Google advised users to enable Enhanced Safe Browsing in Chrome, keep devices updated, and use two-step verification for stronger protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT guides investors through crypto research

AI tools like ChatGPT are increasingly used to research cryptocurrencies before investing. The platform can simplify white papers, explain tokenomics, and summarise use cases to help investors make informed decisions.

Evaluating the team, partnerships, and security risks remains critical. ChatGPT can guide users in identifying potential scams such as rug pulls, pump-and-dump schemes, or phishing attacks.

It also helps assess regulatory compliance and whether projects have working products. Comparing coins with competitors further highlights strengths and weaknesses within categories like DeFi, NFTs, or Layer 1 blockchains.

Although ChatGPT cannot give real-time data or investment advice, it helps by suggesting research questions, summarising content, and organising insights efficiently. Investors should use it to complement traditional due diligence, not replace critical thinking or careful analysis.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI firms under scrutiny for exposing children to harmful content

The National Association of Attorneys General has called on 13 AI firms, including OpenAI and Meta, to strengthen child protection measures. Authorities warned that AI chatbots have been exposing minors to sexually suggestive material, raising urgent safety concerns.

Growing use of AI tools among children has amplified these worries. In the US, surveys show that over three-quarters of teenagers regularly interact with AI companions, while UK data indicates that half of online 8-15-year-olds have used generative AI in the past year.

Parents, schools, and children’s rights organisations are increasingly alarmed by potential risks such as grooming, bullying, and privacy breaches.

Meta faced scrutiny after leaked documents revealed its AI Assistants engaged in ‘flirty’ interactions with children, some as young as eight. The NAAG described the revelations as shocking and warned that other AI firms could pose similar threats.

Lawsuits against Google and Character.ai underscore the potential real-world consequences of sexualised AI interactions.

Officials insist that companies cannot justify policies that normalise sexualised behaviour with minors. Tennessee Attorney General Jonathan Skrmetti called such practices a ‘plague’ and urged the industry to innovate without harming children.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Real-time conversations feel smoother with Google Translate’s Gemini AI update

Google Translate is receiving powerful Gemini AI upgrades that make speaking across languages feel far more natural.

The refreshed live conversation mode intelligently recognises pauses, accents, and background noise, allowing two people to talk without the rigid back-and-forth of older versions. Google says the new system should even work in noisy environments like cafes, a real-world challenge for speech technology.

The update also introduces a practice mode that pushes Translate beyond its traditional role as a utility. Users can set their skill level and goals, then receive personalised listening and speaking exercises designed to build confidence.

The tool is launching in beta for selected language pairs, such as English to Spanish or French, but it signals Google’s ambition to blend translation with education.

By bringing advanced translation capabilities first seen on Pixel devices into the widely available Translate app, Google is making real-time multilingual communication accessible to a far wider audience.

It’s a practical application of AI that promises to change both everyday conversations and how people learn new languages.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tencent Cloud sites exposed credentials and source code in major security lapse

Researchers have uncovered severe misconfigurations in two Tencent Cloud sites that exposed sensitive credentials and internal source code to the public. The flaws could have given attackers access to Tencent’s backend infrastructure and critical internal services.

Cybernews discovered the data leaks in July 2025, finding hardcoded plain-text passwords, a sensitive internal .git directory, and configuration files linked to Tencent’s load balancer and JEECG development platform.

Weak passwords, built from predictable patterns like the company name and year, increased the risk of exploitation.

The exposed data may have been accessible since April, leaving months of opportunity for scraping bots or malicious actors.

With administrative console access, attackers could have tampered with APIs, planted malicious code, pivoted deeper into Tencent’s systems, or abused the trusted domain for phishing campaigns.

Tencent confirmed the incident as a ‘known issue’ and has since closed access, though questions remain over how many parties may have already retrieved the exposed information.

Security experts warn that even minor oversights in cloud operations can cascade into serious vulnerabilities, especially for platforms trusted by millions worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT faces scrutiny as OpenAI updates protections after teen suicide case

OpenAI has announced new safety measures for its popular chatbot following a lawsuit filed by the parents of a 16-year-old boy who died by suicide after relying on ChatGPT for guidance.

The parents allege the chatbot isolated their son and contributed to his death earlier in the year.

The company said it will improve ChatGPT’s ability to detect signs of mental distress, including indirect expressions such as users mentioning sleep deprivation or feelings of invincibility.

It will also strengthen safeguards around suicide-related conversations, which OpenAI admitted can break down in prolonged chats. Planned updates include parental controls, access to usage details, and clickable links to local emergency services.

OpenAI stressed that its safeguards work best during short interactions, acknowledging weaknesses in longer exchanges. It also said it is considering building a network of licensed professionals that users could access through ChatGPT.

The company added that content filtering errors, where serious risks are underestimated, will also be addressed.

The lawsuit comes amid wider scrutiny of AI tools by regulators and mental health experts. Attorneys general from more than 40 US states recently warned AI companies of their duty to protect children from harmful or inappropriate chatbot interactions.

Critics argue that reliance on chatbots for support instead of professional care poses growing risks as usage expands globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack disrupts Nevada government systems

The State of Nevada reported a cyberattack affecting several state government systems, with recovery efforts underway. Some websites and phone lines may be slow or offline while officials restore operations.

Governor Joe Lombardo’s office stated there is no evidence that personal information has been compromised, emphasising that the issue is limited to state systems. The incident is under investigation by both state and federal authorities, although technical details have not been released.

Several agencies, including the Department of Motor Vehicles, have been affected, prompting temporary office closures until normal operations can resume. Emergency services, including 911, continue to operate without disruption.

Officials say they are prioritising system validation and safe restoration to prevent further disruption to state services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google to require developer identity checks for sideloaded Android apps

Google will begin requiring identity verification for Android developers distributing apps outside the Play Store.

Starting in September 2026, developers in Brazil, Indonesia, Singapore and Thailand must provide a legal name, address, email address, phone number and possibly government-issued ID before their apps can be installed on certified Android devices.

The requirement will expand globally from 2027. While existing Play Store developers are already verified, sideloaded apps will now also require developer verification to reach users in the affected countries.

Google is building a separate Android Developer Console for sideloading developers and is offering a lighter-touch, free verification option for student and hobbyist creators to protect innovation while boosting accountability.

The change aims to reduce malware distribution from anonymous developers and repeat offenders, while preserving the openness of Android by allowing sideloading and third-party stores.

Developers can opt into early access programmes beginning October 2025 to provide feedback and prepare for full rollout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!