Meta AI adds pop-up warning after users share sensitive info

Meta has introduced a new pop-up in its Meta AI app, alerting users that any prompts they share may be made public. While AI chat interactions are rarely private by design, many users appeared unaware that their conversations could be published for others to see.

The Discovery feed in the Meta AI app had previously featured conversations that included intimate details—such as break-up confessions, attempts at self-diagnosis, and private photo edits.

According to multiple reports last week, these were often shared unknowingly by users who may not have realised the implications of the app’s sharing functions. Mashable confirmed this by finding such examples directly in the feed.

Now, when a user taps the ‘Share’ button on a Meta AI conversation, a new warning appears: ‘Prompts you post are public and visible to everyone. Your prompts may be suggested by Meta on other Meta apps. Avoid sharing personal or sensitive information.’ A ‘Post to feed’ button then appears below.

Although the sharing step has always required users to confirm, Business Insider reports that the feature wasn’t clearly explained—leading some users to publish their conversations unintentionally. The new alert aims to clarify that process.

As of this week, Meta AI’s Discovery feed features mostly AI-generated images and more generic prompts, often from official Meta accounts. For users concerned about privacy, there is an option in the app’s settings to opt out of the Discovery feed altogether.

Still, experts advise against entering personal or sensitive information into AI chatbots, including Meta AI. Adjusting privacy settings and avoiding the ‘Share’ feature are the best ways to protect your data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Social media overtakes TV as main news source in the US

Social media and video platforms have officially overtaken traditional television and news websites as the primary way Americans consume news, according to new research from the Reuters Institute. Over half of respondents (54%) now turn to platforms like Facebook, YouTube, and X (formerly Twitter) for their news, surpassing TV (50%) and dedicated news websites or apps (48%).

The study highlights the growing dominance of personality-driven news, particularly through social video, with figures like podcaster Joe Rogan reaching nearly a quarter of the population weekly. That shift poses serious challenges for traditional media outlets as more users gravitate toward influencers and creators who present news in a casual or partisan style.

There is concern, however, about the accuracy of this new media landscape. Nearly half of global respondents identify online influencers as major sources of false or misleading information, on par with politicians.

At the same time, populist leaders are increasingly using sympathetic online hosts to bypass tough questions from mainstream journalists, often spreading unchecked narratives. The report also notes a rise in AI tools for news consumption, especially among Gen Z, though public trust in AI’s ability to deliver reliable news remains low.

Alternative platforms such as Threads and Mastodon, meanwhile, have struggled to gain traction. Even as user habits change, one constant remains: people still value reliable news sources, even if they turn to them less often.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Workplace deepfake abuse: What employers must know

Deepfake technology—AI-generated videos, images, and audio—has entered the workplace in alarming ways.

Once difficult to produce, deepfakes are now widely accessible and are being used to harass, impersonate, or intimidate employees. These synthetic media attacks can cause deep psychological harm, damage reputations, and expose employers to serious legal risks.

While US federal law hasn’t yet caught up, new legislation like the Take It Down Act and Florida’s Brooke’s Law requires platforms to remove non-consensual deepfake content within 48 hours.

Meanwhile, employers could face claims under existing workplace laws if they fail to act on deepfake harassment. Inaction may lead to lawsuits for creating a hostile environment or for negligent oversight.

Most workplace policies still don’t mention synthetic media, and that omission creates blind spots, especially during investigations, where fake images or audio could wrongly influence decisions.

Employers need to shift how they assess evidence and protect both the accused and the accuser fairly. It’s time to update handbooks, train staff, and build clear response plans that cover digital impersonation and deepfake abuse.

By treating deepfakes as a modern form of harassment instead of just a tech issue, organisations can respond faster, protect staff, and maintain trust. Proactive training, updated policies, and legal awareness will be crucial to workplace safety in the age of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anubis ransomware threatens permanent data loss

A new ransomware threat known as Anubis is making waves in the cybersecurity world, combining file encryption with aggressive monetisation tactics and a rare file-wiping feature that prevents data recovery.

Victims discover their files renamed with the .anubis extension and are presented with a ransom note warning that stolen data will be leaked unless payment is made.

What sets Anubis apart is its ability to permanently erase file contents using a command that overwrites them with zero-byte shells. Although the filenames remain, the data inside is lost forever, rendering recovery impossible.
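
To make the effect concrete, here is a minimal defensive sketch (the backup path is a hypothetical example, not anything from the researchers' report) that flags files which keep their names but have been reduced to zero bytes during a routine backup check:

```python
import os

def find_wiped_files(root):
    """Walk a directory tree and report files whose contents have been
    reduced to zero bytes, e.g. after a destructive wipe pass."""
    wiped = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getsize(path) == 0:
                    wiped.append(path)
            except OSError:
                continue  # file vanished or unreadable; skip it
    return wiped

if __name__ == "__main__":
    # Hypothetical backup mount point; adjust to your environment.
    for path in find_wiped_files("/mnt/backups"):
        print("possible wiped file:", path)
```

A check like this only detects damage after the fact, which is why strong backup practices remain the primary safeguard.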

Researchers have flagged the destructive feature as highly unusual for ransomware; such wiper behaviour is more typical of cyberespionage than of financially motivated attacks.

The malware also attempts to change the victim’s desktop wallpaper to reinforce the impact, although in current samples, the image file was missing. Anubis spreads through phishing emails and uses tactics like command-line scripting and stolen tokens to escalate privileges and evade defences.

It operates on a ransomware-as-a-service model, meaning less-skilled cybercriminals can rent and use it easily.

Security experts urge organisations to treat Anubis as more than a typical ransomware threat. Besides strong backup practices, firms are advised to improve email security, limit user privileges, and train staff to spot phishing attempts.

As attackers look to profit from stolen access and unrecoverable destruction, prevention becomes the only true line of defence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT and generative AI have polluted the internet — and may have broken themselves

The explosion of generative AI tools like ChatGPT has flooded the internet with low-quality, AI-generated content, making it harder for future models to learn from authentic human knowledge.

As AI continues to train on increasingly polluted data, a loop forms in which AI imitates already machine-made content, leading to a steady drop in originality and usefulness. The worrying trend is referred to as ‘model collapse’.

To illustrate the risk, researchers compare clean pre-AI data to ‘low-background steel’ — a rare kind of steel produced before the first nuclear weapons tests in 1945, which remains vital for specific medical and scientific uses.

Just as modern steel became contaminated by radiation, modern data is being tainted by artificial content. Cambridge researcher Maurice Chiodo notes that pre-2022 data is now seen as ‘safe, fine, clean’, while everything after is considered ‘dirty’.

A key concern is that techniques like retrieval-augmented generation, which allow AI to pull real-time data from the internet, risk spreading even more flawed content. Some research already shows that it leads to more ‘unsafe’ outputs.
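
For readers unfamiliar with the technique, the sketch below shows the basic retrieval-augmented generation pattern in schematic form: retrieve snippets from a corpus and prepend them to the model’s prompt. The toy corpus and keyword-overlap scoring are illustrative assumptions; real systems use vector search and an actual model, but the point stands that whatever the retriever pulls in, polluted or not, shapes the answer.

```python
# Schematic RAG loop: retrieve text snippets relevant to a query, then
# prepend them to the prompt a language model sees. Retrieval quality
# decides what the model repeats, so a polluted corpus propagates errors.

CORPUS = [
    "Low-background steel was produced before atmospheric nuclear tests began in 1945.",
    "Model collapse describes AI models degrading when trained on AI-generated data.",
    "Retrieval-augmented generation injects retrieved documents into the prompt.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; production systems use vector search."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt that would be sent to the model."""
    context = "\n".join(retrieve(query, CORPUS))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What is model collapse in AI?"))
```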

If developers rely on such polluted data, scaling models by adding more information becomes far less effective, potentially hitting a wall in progress.

Chiodo argues that future AI development could be severely limited without a clean data reserve. He and his colleagues urge the introduction of clear labelling and tighter controls on AI content.

However, industry resistance to regulation might make meaningful reform difficult, raising doubts about whether the pollution can be reversed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI health tools need clinicians to prevent serious risks, Oxford study warns

The University of Oxford has warned that AI in healthcare, primarily through chatbots, should not operate without human oversight.

Researchers found that relying solely on AI for medical self-assessment could worsen patient outcomes instead of improving access to care. The study highlights how these tools, while fast and data-driven, fall short in delivering the judgement and empathy that only trained professionals can offer.

The findings raise alarm about the growing dependence on AI to fill gaps caused by doctor shortages and rising costs. Chatbots are often seen as scalable solutions, but without rigorous human-in-the-loop validation, they risk providing misleading or inconsistent information, particularly to vulnerable groups.

Rather than helping, they might increase health disparities by delaying diagnosis or giving patients false reassurance.

Experts are calling for safer, hybrid approaches that embed clinicians into the design and ongoing use of AI tools. The Oxford researchers stress that continuous testing, ethical safeguards and clear protocols must be in place.

Instead of replacing clinical judgement, AI should support it. The future of digital healthcare hinges not just on innovation but on responsibility and partnership between technology and human care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India urges preference for state telecom providers

The Department of Telecommunications (DoT) in India has introduced a policy urging all state governments and Union Territories to prioritise state-run telecom operators Bharat Sanchar Nigam Ltd (BSNL) and Mahanagar Telephone Nigam Ltd (MTNL) for their communication needs. Although not legally binding, that policy directive emphasises data security as a key reason for favouring these public sector providers.

The DoT Secretary underscored the increasing competitiveness of BSNL and MTNL, noting that BSNL now manages MTNL’s operations and will set up a dedicated nodal point to cater to state governments efficiently. The move marks a significant strategic shift toward promoting state-owned telecom companies in government communications.

The policy has raised concerns among private telecom companies, who fear losing valuable government contracts to BSNL and MTNL. Private providers currently hold over 92% of the market’s revenue, and government contracts are especially important for smaller ISPs with tight margins. Diverting these contracts could significantly hurt their financial stability.

BSNL and MTNL were initially created to operate independently and compete fairly with private firms. The new policy favouring them risks undermining that independence and disrupting the competitive balance of India’s telecom sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NSA and allies set AI data security standards

The National Security Agency (NSA), in partnership with cybersecurity agencies from the UK, Australia, New Zealand, and others, has released new guidance aimed at protecting the integrity of data used in AI systems.

The Cybersecurity Information Sheet (CSI), titled AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems, outlines emerging threats and sets out 10 recommendations for mitigating them.

The CSI builds on earlier joint guidance from 2024 and signals growing global urgency around safeguarding AI data instead of allowing systems to operate without scrutiny.

The report identifies three core risks across the AI lifecycle: tampered datasets in the supply chain, deliberately poisoned data intended to manipulate models, and data drift—where changes in data over time reduce performance or create new vulnerabilities.

These threats may erode accuracy and trust in AI systems, particularly in sensitive areas like defence, cybersecurity, and critical infrastructure, where even small failures could have far-reaching consequences.

To reduce these risks, the CSI recommends a layered approach—starting with sourcing data from reliable origins and tracking provenance using digital credentials. It advises encrypting data at every stage, verifying integrity with cryptographic tools, and storing data securely in certified systems.

Additional measures include deploying zero trust architecture, using digital signatures for dataset updates, and applying access controls based on data classification instead of relying on broad administrative trust.
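
As a minimal sketch of the integrity-verification idea (the CSI does not prescribe code; the manifest format, file names, and the choice of SHA-256 here are illustrative assumptions):

```python
import hashlib
import json

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file and return its SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest_path: str) -> bool:
    """Compare each dataset file against the digest recorded at ingest time."""
    with open(manifest_path) as f:
        manifest = json.load(f)  # e.g. {"train.csv": "ab12...", ...}
    ok = True
    for path, expected in manifest.items():
        if sha256_of(path) != expected:
            print(f"INTEGRITY FAILURE: {path}")
            ok = False
    return ok

# Hypothetical usage: verify_manifest("dataset_manifest.json")
```

In practice, the manifest itself could also be digitally signed, so that both the datasets and the record of their digests are tamper-evident.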

The CSI also urges ongoing risk assessments using frameworks like NIST’s AI RMF, encouraging organisations to anticipate emerging challenges such as quantum threats and advanced data manipulation.

Privacy-preserving techniques, secure deletion protocols, and infrastructure controls round out the recommendations.

Rather than treating AI as a standalone tool, the guidance calls for embedding strong data governance and security throughout its lifecycle to prevent compromised systems from shaping critical outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake DeepSeek ads deliver ‘BrowserVenom’ malware to curious AI users

Cybercriminals are exploiting the surge in interest around local AI tools by spreading a new malware strain via Google ads.

According to antivirus firm Kaspersky, attackers use fake ads for DeepSeek’s R1 AI model to deliver ‘BrowserVenom,’ malware designed to intercept and manipulate a user’s internet traffic instead of merely infecting the device.

The attackers purchased ads appearing in Google search results for ‘deep seek r1.’ Users who clicked were redirected to a fake website—deepseek-platform[.]com—which mimicked the official DeepSeek site and offered a file named AI_Launcher_1.21.exe.

Kaspersky’s analysis of the site’s source code uncovered developer notes in Russian, suggesting the campaign is operated by Russian-speaking actors.

Once launched, the fake installer displayed a decoy installation screen for the R1 model, but silently deployed malware that altered browser configurations.

BrowserVenom rerouted web traffic through a proxy server controlled by the hackers, allowing them to decrypt browsing sessions and capture sensitive data, while evading most antivirus tools.

Kaspersky reports confirmed infections across multiple countries, including Brazil, Cuba, India, and South Africa.

The malicious domain has since been taken down. However, the incident highlights the dangers of downloading AI tools from unofficial sources. Open-source models like DeepSeek R1 require technical setup, typically involving multiple configuration steps, instead of a simple Windows installer.
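
For contrast, a legitimate local setup usually means installing Python packages and loading a published checkpoint in code, not running an opaque Windows installer from a look-alike domain. The sketch below assumes the Hugging Face transformers library and one of the smaller distilled R1 checkpoints; the exact model identifier and hardware requirements are assumptions, and full-size variants need far more resources.

```python
# Assumed setup step: pip install transformers accelerate torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed published checkpoint; larger variants need substantial hardware.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Explain what a proxy server does in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```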

As interest in running local AI grows, users should verify official domains and avoid shortcuts that could lead to malware.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of using its platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, often without consent. Meta said the company repeatedly attempted to bypass ad review systems to push harmful content, running ads with phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into ‘nudify’ apps, which are increasingly being used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising ads or rotating domain names after bans.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!