Episource data breach impacts patients at Sharp Healthcare

Episource, a UnitedHealth Group-owned health analytics firm, has confirmed that patient data was compromised during a ransomware attack earlier this year.

The breach affected customers including Sharp Healthcare and Sharp Community Medical Group, which have started notifying impacted patients. Although electronic health records and patient portals remained untouched, sensitive data such as health plan details, diagnoses, and test results was exposed.

The cyberattack, which occurred between 27 January and 6 February, involved unauthorised access to Episource’s internal systems.

A forensic investigation verified that cybercriminals viewed and copied files containing personal information, including insurance plan data, treatment plans, and medical imaging. Financial details and payment card data, however, were mostly unaffected.

Sharp Healthcare confirmed that it was informed of the breach on 24 April and has since worked closely with Episource to identify which patients were impacted.

Compromised information may include names, addresses, insurance ID numbers, doctors’ names, prescribed medications, and other protected health data.

The breach follows a troubling trend of ransomware attacks targeting healthcare-related businesses, including Change Healthcare in 2024, which disrupted services for months. Comparitech reports at least three confirmed ransomware attacks on healthcare firms already in 2025, with 24 more suspected.

Given the scale of patient data involved, experts warn of growing risks tied to third-party healthcare service providers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches AI voice chat in Search app for Android and iOS

Google has started rolling out its new ‘Search Live in AI Mode’ for the Google app on Android and iOS, offering users the ability to have seamless voice-based conversations with Search.

Currently available only in the US for those signed up to the AI Mode experiment in Labs, the feature was previewed at last month’s Google I/O conference.

The tool uses a specially adapted version of Google’s Gemini AI model, fine-tuned to deliver smarter voice interactions. It combines the model’s capabilities with Google Search’s information infrastructure to provide real-time spoken responses.

Using a technique called ‘query fan-out’, the system retrieves a wide range of web content, helping users discover more varied and relevant information.
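The fan-out idea (issuing several related sub-queries in parallel and merging the unique results) can be illustrated with a minimal sketch. Note this is an assumption-laden toy, not Google's actual implementation: the `search_web` backend and the sub-query list are hypothetical stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

def search_web(query):
    # Hypothetical stand-in for a real search backend.
    fake_index = {
        "pack a suitcase": ["rolling clothes guide"],
        "prevent wrinkles when travelling": ["garment folder tips"],
        "wrinkle-free fabrics": ["travel fabric roundup"],
    }
    return fake_index.get(query, [])

def fan_out(sub_queries):
    """Run every sub-query in parallel, then merge results, dropping duplicates."""
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(search_web, sub_queries)
    merged = []
    for results in result_lists:
        for item in results:
            if item not in merged:
                merged.append(item)
    return merged

# One spoken question expands into several narrower sub-queries.
links = fan_out([
    "pack a suitcase",
    "prevent wrinkles when travelling",
    "wrinkle-free fabrics",
])
print(links)
```

The key design point is that breadth comes from the expansion step: one conversational query becomes several targeted ones, so the merged result list covers more of the web than any single query would.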

The new mode is particularly useful when multitasking or on the go. Users can tap a ‘Live’ icon in the Google app and ask spoken queries like how to keep clothes from wrinkling in a suitcase.

Follow-up questions are handled just as naturally, and related links are displayed on-screen, letting users read more without breaking their flow.

To use the feature, users can tap a sparkle-shaped waveform icon under the Search bar or next to the search field. Once activated, a full-screen interface appears with voice control options and a scrolling list of relevant links.

Even with the phone locked or other apps open, the feature keeps running. A mute button, transcript view, and voice style settings—named Cassini, Cosmo, Neso, and Terra—offer additional control over the experience.

Ryuk ransomware hacker extradited to US after arrest in Ukraine

A key member of the infamous Ryuk ransomware gang has been extradited to the US after his arrest in Kyiv, Ukraine.

The 33-year-old man was detained in April 2025 at the request of the FBI and arrived in the US on 18 June to face multiple charges.

The suspect played a critical role within Ryuk by gaining initial access to corporate networks, which he then passed on to accomplices who stole data and launched ransomware attacks.

Ukrainian authorities identified him during a larger investigation into ransomware groups like LockerGoga, Dharma, Hive, and MegaCortex that targeted companies across Europe and North America.

According to Ukraine’s National Police, forensic analysis revealed that the man was responsible for locating security flaws in enterprise networks.

Information gathered by the hacker allowed others in the gang to infiltrate systems, steal data, and deploy ransomware payloads that disrupted various industries, including healthcare, during the COVID pandemic.

Ryuk operated from 2018 until mid-2020 before rebranding as the notorious Conti gang, which later fractured into several smaller but still active groups. Researchers estimate that Ryuk alone collected over $150 million in ransom payments before shutting down.

New Meta smart glasses target sports enthusiasts

Meta is set to launch a new pair of AI-powered smart glasses under the Oakley brand, targeting sports users. Scheduled for release on 20 June, the glasses mark an expansion of Meta’s partnership with eyewear giant EssilorLuxottica.

Oakley’s sporty design and outdoor functionality make the brand ideal for active users, a market Meta aims to capture with this launch. The glasses will feature a central camera and likely retail for around $360.

This follows the success of Meta’s Ray-Ban smart glasses, which include AI assistant integration and hands-free visual capture. Over two million pairs have been sold since 2023, according to EssilorLuxottica’s CEO.

Meta CEO Mark Zuckerberg continues to push smart eyewear as a long-term replacement for smartphones. With high-fashion Prada smart glasses also in development, Meta is betting on wearable tech becoming the next frontier in computing.

Canva rolls out text-to-video tool for creators

Canva has launched a new tool powered by Google’s Veo 3 model, allowing users to generate short cinematic video clips using simple text prompts. Known as ‘Create a Video Clip’, the feature produces eight-second videos with sound directly inside the Canva platform.

This marks one of the first commercial uses of Veo 3, which debuted last month. The AI tool is available to Canva Pro, Teams, Enterprise and Nonprofit users, who can generate up to five clips per month initially.

Danny Wu, Canva’s head of AI products, said the feature simplifies video creation with synchronised dialogue, sound effects and editing options. Users can integrate the clips into presentations, social media designs or other formats via Canva’s built-in video editor.

Canva is also extending the tool to users of Leonardo.Ai, a related image generation service. The feature is protected by Canva Shield, a content moderation and indemnity framework aimed at enterprise-level security and trust.

ChatGPT now supports MCP for business data access, but safety risks remain

OpenAI has officially enabled support for Anthropic’s Model Context Protocol (MCP) in ChatGPT, allowing businesses to connect their internal tools directly to the chatbot through Deep Research.

The development enables employees to retrieve company data from previously siloed systems, offering real-time access to documents and search results via custom-built MCP servers.

Adopting MCP — an open industry protocol recently embraced by OpenAI, Google and Microsoft — opens new possibilities and presents security risks.

OpenAI advises users to avoid third-party MCP servers unless hosted by the official service provider, warning that unverified connections may carry prompt injections or hidden malicious directives. Users are urged to report suspicious activity and avoid exposing sensitive data during integration.

To connect tools, developers must set up an MCP server and create a tailored connector within ChatGPT, complete with detailed instructions. The feature is now live for ChatGPT Enterprise, Team and Edu users, who can share the connector across their workspace as a trusted data source.
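Under the hood, MCP is built on JSON-RPC 2.0: the client sends a `tools/call` request naming a tool and its arguments, and the server dispatches it and returns a result. The stdlib-only sketch below shows only that request/response shape; the `document_search` tool and its data are hypothetical, and a real server would be built with the official MCP SDK rather than hand-rolled like this.

```python
import json

# Hypothetical internal tool the server exposes to the model.
def document_search(query: str) -> list:
    docs = {
        "q3 revenue": ["finance/q3-report.pdf"],
        "onboarding": ["hr/onboarding.md"],
    }
    return docs.get(query, [])

TOOLS = {"document_search": document_search}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC 2.0 'tools/call'-style request to a registered tool."""
    req = json.loads(raw)
    tool = TOOLS.get(req["params"]["name"])
    if tool is None:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "unknown tool"}})
    result = tool(**req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# A client (here, the chatbot) asks the server to run one of its tools.
response = handle_request(json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "document_search", "arguments": {"query": "q3 revenue"}},
}))
print(response)
```

This shape is also why OpenAI’s warning about unverified servers matters: the model acts on whatever the server returns, so a malicious server can smuggle hidden instructions back in its results.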

Is AI distorting our view of the Milky Way’s black hole?

A new AI model has created a fresh image of Sagittarius A*, the supermassive black hole at the centre of our galaxy, suggesting it is spinning close to its maximum speed.

The model was trained on noisy data from the Event Horizon Telescope, a globe-spanning network of radio telescopes, using information once dismissed due to atmospheric interference.

Researchers believe this AI-enhanced image shows the black hole’s rotational axis pointing towards Earth, offering potential insights into how radiation and matter behave near such cosmic giants.

By drawing on data previously considered unusable, scientists hope to improve our understanding of black hole dynamics.

However, not all physicists are confident in the results.

Nobel Prize-winning astrophysicist Reinhard Genzel has voiced concern over the reliability of models built on compromised data, stressing that AI should not be treated as a miracle fix. He warned that the new image might be distorted due to the poor quality of its underlying information.

The researchers plan to test their model against newer and more reliable data to address these concerns. Their goal is to refine the AI further and provide more accurate simulations of black holes in the future.

Deepfake technology fuels new harassment risks

AI-generated media poses a growing threat that is reshaping workplace harassment in the US, with deepfakes used to impersonate colleagues and circulate fabricated explicit content. Recent studies found that, as of 2023, almost all deepfakes were sexually explicit, most often targeting women.

Organisations risk liability under existing laws if deepfake incidents create hostile work environments. New legislation like the TAKE IT DOWN Act and Florida’s Brooke’s Law now mandates rapid removal of non-consensual intimate imagery.

Employers are also bracing for proposed rules requiring strict authentication of AI-generated evidence in legal proceedings. Industry experts advise an urgent review of harassment and acceptable use policies, clear incident response plans and targeted training for HR, legal and IT teams.

Protective measures include auditing insurance coverage for synthetic media claims and staying abreast of evolving state and federal regulations. Forward-looking employers already embed deepfake awareness into their harassment prevention and cybersecurity training to safeguard workplace dignity.

Meta AI adds pop-up warning after users share sensitive info

Meta has introduced a new pop-up in its Meta AI app, alerting users that any prompts they share may be made public. While AI chat interactions are rarely private by design, many users appeared unaware that their conversations could be published for others to see.

The Discovery feed in the Meta AI app had previously featured conversations that included intimate details—such as break-up confessions, attempts at self-diagnosis, and private photo edits.

According to multiple reports last week, these were often shared unknowingly by users who may not have realised the implications of the app’s sharing functions. Mashable confirmed this by finding such examples directly in the feed.

Now, when a user taps the ‘Share’ button on a Meta AI conversation, a new warning appears: ‘Prompts you post are public and visible to everyone. Your prompts may be suggested by Meta on other Meta apps. Avoid sharing personal or sensitive information.’ A ‘Post to feed’ button then appears below.

Although the sharing step has always required users to confirm, Business Insider reports that the feature wasn’t clearly explained—leading some users to publish their conversations unintentionally. The new alert aims to clarify that process.

As of this week, Meta AI’s Discovery feed features mostly AI-generated images and more generic prompts, often from official Meta accounts. For users concerned about privacy, there is an option in the app’s settings to opt out of the Discovery feed altogether.

Still, experts advise against entering personal or sensitive information into AI chatbots, including Meta AI. Adjusting privacy settings and avoiding the ‘Share’ feature are the best ways to protect your data.

Social media overtakes TV as main news source in the US

Social media and video platforms have officially overtaken traditional television and news websites as the primary way Americans consume news, according to new research from the Reuters Institute. Over half of respondents (54%) now turn to platforms like Facebook, YouTube, and X (formerly Twitter) for their news, surpassing TV (50%) and dedicated news websites or apps (48%).

The study highlights the growing dominance of personality-driven news, particularly through social video, with figures like podcaster Joe Rogan reaching nearly a quarter of the population weekly. That shift poses serious challenges for traditional media outlets as more users gravitate toward influencers and creators who present news in a casual or partisan style.

There is concern, however, about the accuracy of this new media landscape. Nearly half of global respondents identify online influencers as major sources of false or misleading information, on par with politicians.

At the same time, populist leaders are increasingly using sympathetic online hosts to bypass tough questions from mainstream journalists, often spreading unchecked narratives. The report also notes a rise in AI tools for news consumption, especially among Gen Z, though public trust in AI’s ability to deliver reliable news remains low.

Despite the rise of alternative platforms like Threads and Mastodon, they’ve struggled to gain traction. Even as user habits change, one constant remains: people still value reliable news sources, even if they turn to them less often.
