Ransomware attack at DaVita exposes data of 2.7 million patients in the US

A ransomware attack against dialysis provider DaVita has exposed the personal data of 2.7 million people, according to a notice on the US health department’s website.

The company first disclosed the cyber incident in April, saying it had taken steps to restore operations but could not predict the scale of disruption.

DaVita confirmed that hackers gained unauthorised access to its laboratory database, which contained sensitive information belonging to some current and former patients. The firm said it is now contacting those affected and offering free credit monitoring to help protect against identity theft.

Despite the intrusion, DaVita maintained uninterrupted dialysis services across its network of nearly 3,000 outpatient clinics and home treatment programmes. The company described the cyberattack as a temporary disruption but stressed that patient care was never compromised.

Financial disclosures show the incident led to around $13.5 million in charges during the second quarter of 2025. Most of the costs were linked to system restoration and third-party support, with $1 million attributed to higher patient care expenses.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Students seek emotional support from AI chatbots

College students are increasingly turning to AI chatbots for emotional support, prompting concern among mental health professionals. A 2025 report ranked ‘therapy and companionship’ as the top use case for generative AI, particularly among younger users.

Studies by MIT and OpenAI show that frequent AI use can lower social confidence and increase avoidance of face-to-face interaction. On campuses, digital mental health platforms now supplement counselling services, offering tools that identify at-risk students and provide basic support.

Experts warn that chatbot companionship may create emotional habits that lack grounding in reality and hinder social skill development. Counsellors advocate for educating students on safe AI use and suggest universities adopt tools that flag risky engagement patterns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5 criticised for lacking flair as users seek older ChatGPT options

OpenAI’s rollout of GPT-5 has faced criticism from users attached to older models, who say the new version lacks the character of its predecessors.

GPT-5 was designed as an all-in-one model, featuring a lightweight version for rapid responses and a reasoning version for complex tasks. A routing system determines which option to use, although users can manually select from several alternatives.

Modes include Auto, Fast, Thinking, Thinking mini, and Pro, with the last available to Pro subscribers for $200 monthly. Standard paid users can still access GPT-4o, GPT-4.1, 4o-mini, and even 3o through additional settings.

Chief executive Sam Altman has said the long-term goal is to give users more control over ChatGPT’s personality, making customisation a solution to concerns about style. He promised ample notice before permanently retiring older models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google to replace Assistant with Gemini in smart home devices

Google has announced that Gemini will soon power its smart home platform, replacing Google Assistant on existing Nest speakers and displays from October. The feature will launch initially as an early preview.

Gemini for Home promises more natural conversations and can manage complex household tasks, including controlling smart devices, creating calendars, and handling lists or timers through natural language commands. It will also support Gemini Live for ongoing dialogue.

Google says the upgrade is designed to serve all household members and visitors, offering hands-free help and integration with streaming platforms. The move signals a renewed focus on Google Home, a product line that has been largely overlooked in recent years.

The announcement hints at potential new hardware, given that Google’s last Nest Hub was released in 2021 and the Nest Audio speaker dates back to 2020.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta freezes hiring as AI costs spark investor concern

Meta has frozen hiring in its AI division, halting a spree that had drawn top researchers with lucrative offers. The company described the pause as basic organisational planning, aimed at building a more stable structure for its superintelligence ambitions.

The freeze, first reported by the Wall Street Journal, began last week and prevents employees in the unit from transferring to other teams. Its duration has not been communicated, and Meta declined to comment on the number of hires already made.

The decision follows growing tensions inside the newly created Superintelligence Labs, where long-serving researchers have voiced concerns over disparities in pay and recognition compared with recruits.

Alexandr Wang, who leads the division, recently told staff that superintelligence is approaching and that significant changes are necessary to prepare. His email outlined Meta’s most significant reorganisation of its AI efforts.

The pause also comes amid investor scrutiny, as analysts warn that heavy reliance on stock-based compensation to attract talent could fuel innovation or dilute shareholder value without precise results.

Despite these concerns, Meta’s stock has risen by about 28% since the start of the year, reflecting continued investor confidence in the company’s long-term prospects.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok chatbot leaks spark major AI privacy concerns

Private conversations with xAI’s chatbot Grok have been exposed online, raising serious concerns over user privacy and AI safety. Forbes found that Grok’s ‘share’ button created public URLs, later indexed by Google and other search engines.

The leaked content is troubling, ranging from questions on hacking crypto wallets to instructions on drug production and even violent plots. Although xAI bans harmful use, some users still received dangerous responses, which are now publicly accessible online.

The exposure occurred because search engines automatically indexed the shareable links, a flaw echoing previous issues with other AI platforms, including OpenAI’s ChatGPT. Designed for convenience, the feature exposed sensitive chats, damaging trust in xAI’s privacy promises.

The incident pressures AI developers to integrate stronger privacy safeguards, such as blocking the indexing of shared content and enforcing privacy-by-design principles. Users may hesitate to use chatbots without fixes, fearing their data could reappear online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft executive Mustafa Suleyman highlights risks of seemingly conscious AI

Chief of Microsoft AI, Mustafa Suleyman, has urged AI firms to stop suggesting their models are conscious, warning of growing risks from unhealthy human attachments to AI systems.

In a blog post, he described the phenomenon as Seemingly Conscious AI, where models mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel AI rights, welfare, or citizenship advocacy.

Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.

AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. Microsoft AI chief’s comments follow recent controversies, including OpenAI’s decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta urged to ban child-like chatbots amid Brazil’s safety concerns

Brazil’s Attorney General (AGU) has formally requested Meta to remove AI-powered chatbots that simulate childlike profiles and engage in sexually explicit dialogue, citing concerns that they ‘promote the eroticisation of children.’

The demand was made via an ‘extrajudicial notice,’ recalling that platforms must remove illicit content without a court order, especially when it involves potential harm to minors.

Meta’s AI Studio, used to create and customise these bots across services like Instagram, Facebook, and WhatsApp, is under scrutiny for facilitating interactions that may mislead or exploit users.

While no direct sanctions were announced, the AGU emphasised that tech platforms must proactively manage harmful or inappropriate AI-generated content.

The move follows Brazil’s Supreme Court decision in June, which increased companies’ obligations to remove user-generated illicit content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study finds AI-generated responses flooding research platforms

Online questionnaires are being increasingly swamped by AI-generated responses, raising concerns that a vital data source for researchers is becoming polluted. Platforms like Prolific, which pay participants to answer questions, are widely used in behavioural studies.

Researchers at the Max Planck Institute noticed suspicious patterns in their work and began investigating. They found that nearly half of the respondents copied and pasted answers, strongly suggesting that many were outsourcing tasks to AI chatbots.

Analysis showed clear giveaways, including overly verbose and distinctly non-human language. The researchers concluded that a substantial proportion of behavioural studies may already be compromised by chatbot-generated content.

In follow-up tests, they set traps to detect AI use, including invisible text instructions and restrictions on copy-paste. The measures caught a further share of participants, highlighting the scale of the challenge facing online research platforms.

Experts say the responsibility lies with both researchers and platforms. Stronger verification methods and tighter controls are needed for online behavioural research to remain credible.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Nexon investigates AI-generated TikTok ads for The First Descendant

Nexon launched an investigation after players spotted several suspicious adverts for The First Descendant on TikTok that appeared to have been generated by AI.

One advertisement allegedly used a content creator’s likeness without permission, sparking concerns about the misuse of digital identities.

The company issued a statement acknowledging ‘irregularities’ in its TikTok Creative Challenge, a campaign that lets creators voluntarily submit content for advertising.

While Nexon confirmed that all videos had been verified through TikTok’s system, it admitted that some submissions may have been produced in inappropriate circumstances.

Nexon apologised for the delay in informing players, saying the review took longer than expected. It confirmed that a joint investigation with TikTok is underway to determine what happened, and it was promised that updates would be provided once the process is complete.

The developer has not yet addressed the allegation from creator DanieltheDemon, who claims his likeness was used without consent.

The controversy has added to ongoing debates about AI’s role in advertising and protecting creators’ rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!