Perplexity AI bot now makes videos on X

Perplexity’s AI chatbot, now integrated with X (formerly Twitter), has introduced a feature that allows users to generate short AI-created videos with sound.

By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.

However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.

The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.

The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.

Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.

Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.

Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk wants Grok AI to replace historical facts

Elon Musk has revealed plans to retrain his Grok AI model by rewriting human knowledge, claiming current training datasets contain too much ‘garbage’ and unchecked errors.

He stated that Grok 3.5 would be designed for ‘advanced reasoning’ and tasked with correcting historical inaccuracies before using the revised corpus to retrain itself.

Musk, who has criticised other AI systems like ChatGPT for being ‘politically correct’ and biased, wants Grok to be ‘anti-woke’ instead.

His stance echoes his earlier approach to X, where he relaxed content moderation and introduced a Community Notes feature in response to the platform being flooded with misinformation and conspiracy theories after his takeover.

The proposal has drawn fierce criticism from academics and AI experts. Gary Marcus called the plan ‘straight out of 1984’, accusing Musk of rewriting history to suit personal beliefs.

Logic professor Bernardino Sassoli de’ Bianchi warned the idea posed a dangerous precedent where ideology overrides truth, calling it ‘narrative control, not innovation’.

Musk also urged users on X to submit ‘politically incorrect but factually true’ content to help train Grok.

The move quickly attracted falsehoods and debunked conspiracies, including Holocaust distortion, anti-vaccine claims and pseudoscientific racism, raising alarms about the real risks of curating AI data based on subjective ideas of truth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn users still hesitate to use AI writing tools

LinkedIn users have readily embraced AI in many areas, but one feature has not taken off as expected — AI-generated writing suggestions for posts.

CEO Ryan Roslansky admitted to Bloomberg that the tool’s popularity has fallen short, likely due to the platform’s professional nature and the risk of reputational damage.

Unlike casual platforms such as X or TikTok, LinkedIn posts often serve as an extension of users’ résumés. Roslansky explained that being called out for using AI-generated content on LinkedIn could damage someone’s career prospects, making users more cautious about automation.

LinkedIn has seen explosive growth in AI-related job demand and skills despite the hesitation around AI-assisted writing. The number of roles requiring AI knowledge has increased sixfold in the past year, while user profiles listing such skills have jumped twentyfold.

Roslansky also shared that he relies on AI when communicating with his boss, Microsoft CEO Satya Nadella. Before sending an email, he uses Copilot to ensure it reflects the polished, insightful tone he calls ‘Satya-smart.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hidden privacy risk: Meta AI app may make sensitive chats public

Meta’s new AI app raises privacy concerns as users unknowingly expose sensitive personal information to the public.

The app includes a Discover feed where anyone can view AI chats — even those involving health, legal or financial data. Many users have accidentally shared full résumés, private conversations and medical queries without realising they’re visible to others.

Despite this, Meta’s privacy warnings are minimal. On iPhones, there’s no clear indication during setup that chats will be made public unless manually changed in settings.

Android users see a brief, easily missed message. Even the ‘Post to Feed’ button is ambiguous, often misread as referring to a user’s private chat history rather than public content.

Users must navigate deep into the app’s settings to make chats private. There, they can restrict who sees their AI prompts, stop sharing on Facebook and Instagram, and delete previous interactions.

Critics argue the app’s lack of clarity burdens users, leaving many at risk of oversharing without realising it.

While Meta describes the Discover feed as a way to explore creative AI usage, the result has been a chaotic mix of deeply personal content and bizarre prompts.

Privacy experts warn that the situation mirrors Meta’s longstanding issues with user data. Users are advised to avoid sharing personal details with the AI entirely and immediately turn off all public sharing options.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France appeals porn site ruling based on EU legal grounds

The French government is challenging a recent decision by the Administrative Court of Paris that temporarily halted the enforcement of mandatory age verification on pornographic websites based in the EU. The court found France’s current approach potentially inconsistent with EU law, specifically the 2000 E-Commerce Directive, which upholds the ‘country-of-origin’ principle.

That rule limits an EU country’s authority to regulate online services hosted in another member state unless it follows a formal process involving both the host country and the European Commission. At the heart of the dispute is whether France correctly followed the required legal steps.

While French authorities say they notified the host countries of porn companies like Hammy Media (Xhamster) and Aylo (owner of Pornhub and others) and waited the mandated three months, legal experts argue that notifying the Commission is also essential. So far, there is no confirmation that this additional step was taken, which may weaken France’s legal standing.

Digital Minister Clara Chappaz reaffirmed the government’s commitment to enforcing age checks, calling it a ‘priority’ in a public statement. The ministry insists its rules align with the EU’s Audiovisual Media Services Directive.

However, the court’s ruling highlights broader tensions between France’s national digital regulations and overarching EU law. Similar legal challenges have already forced France to adjust parts of its digital, influencer, and cloud regulation frameworks in the past two years.

The appeal could have significant implications for age restrictions on adult content and how France asserts digital sovereignty within the EU. If the court upholds the suspension, other digital regulations based on national initiatives may also be vulnerable to legal scrutiny under EU principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

North Korea’s BlueNoroff uses deepfakes in Zoom calls to hack crypto workers

The North Korea-linked threat group BlueNoroff has been caught deploying deepfake Zoom meetings to target an employee at a cryptocurrency foundation, aiming to install malware on macOS systems.

According to cybersecurity firm Huntress, the attack began through a Telegram message that redirected the victim to a fake Zoom site. Over several weeks, the employee was lured into a group video call featuring AI-generated replicas of company executives.

When the employee encountered microphone issues during the meeting, the fake participants instructed them to download a Zoom extension, which instead executed a malicious AppleScript.

The script covertly fetched multiple payloads, installed Rosetta 2, and prompted for the system password while wiping command histories to hide forensic traces. Eight malicious binaries were uncovered on the compromised machine, including keyloggers, information stealers, and remote access tools.

BlueNoroff, also known as APT38 and part of the Lazarus Group, has a track record of targeting financial and blockchain organisations for monetary gain. The group’s past operations include the Bybit and Axie Infinity breaches.

Their campaigns often combine deep social engineering with sophisticated multi-stage malware tailored for macOS, with new tactics now mimicking audio and camera malfunctions to trick remote workers.

Cybersecurity analysts have noted that BlueNoroff has fractured into subgroups like TraderTraitor and CryptoCore, specialising in cryptocurrency theft.

Recent offshoot campaigns involve fake job interview portals and dual-platform malware, such as the Python-based PylangGhost and GolangGhost trojans, which harvest sensitive data from victims across operating systems.

The attackers have impersonated firms like Coinbase and Uniswap, mainly targeting users in India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated photo falsely claims to show a downed Israeli jet

Following Iranian state media claims that its forces shot down two Israeli fighter jets, an image circulated online falsely purporting to show the wreckage of an F-35.

The photo, which shows a large jet crash-landing in a desert, quickly spread across platforms like Threads and South Korean forums, including Aagag and Ruliweb. An Israeli official dismissed the shootdown claim as ‘fake news’.

The image’s caption in Korean read: ‘The F-35 shot down by Iran. Much bigger than I thought.’ However, a detailed AFP analysis found the photo contained several hallmarks of AI generation.

People near the aircraft appear the same size as buses, and one vehicle appears to merge with the road — visual anomalies common in synthetic images.

In addition to size distortions, the aircraft’s markings did not match those used on actual Israeli F-35s. Lockheed Martin specifications confirm the F-35 is just under 16 metres long, unlike the oversized version shown in the image.

Furthermore, the wing insignia in the image differed from the Israeli Air Force’s authentic emblem.

Amid escalating tensions between Iran and Israel, such misinformation continues to spread rapidly. Although AI-generated content is becoming more sophisticated, inconsistencies in scale, symbols, and composition remain key indicators of digital fabrication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France 24 partners with Mediagenix to streamline on-demand programming

Mediagenix has entered a collaboration with French international broadcaster France 24, operated by France Médias Monde, to support its content scheduling modernisation programme.

As part of the upgrade, France 24 will adopt Mediagenix’s AI-powered, cloud-based scheduling solution to manage content across its on-demand platforms. The system promises improved operational flexibility, enabling rapid adjustments to programming in response to major events and shifting editorial priorities.

Pamela David, Engineering Manager for TV and Systems Integration at France Médias Monde, said: ‘This partnership with Mediagenix is a critical part of equipping our France 24 channels with the best scheduling and content management solutions.’

‘The system gives our staff the ultimate flexibility to adjust schedules as major events happen and react to changing news priorities.’

Françoise Semin, Chief Commercial Officer at Mediagenix, added: ‘France Médias Monde is a truly global broadcaster. We are delighted to support France 24’s evolving scheduling needs with our award-winning solution.’

Training for France 24 staff will be provided by Lapins Bleus Formation, based in Paris, ahead of the system’s planned rollout next year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Episource data breach impacts patients at Sharp Healthcare

Episource, a UnitedHealth Group-owned health analytics firm, has confirmed that patient data was compromised during a ransomware attack earlier this year.

The breach affected customers including Sharp Healthcare and Sharp Community Medical Group, which have started notifying impacted patients. Although electronic health records and patient portals remained untouched, sensitive data such as health plan details, diagnoses and test results were exposed.

The cyberattack, which occurred between 27 January and 6 February, involved unauthorised access to Episource’s internal systems.

A forensic investigation verified that cybercriminals viewed and copied files containing personal information, including insurance plan data, treatment plans, and medical imaging. Financial details and payment card data, however, were mostly unaffected.

Sharp Healthcare confirmed that it was informed of the breach on 24 April and has since worked closely with Episource to identify which patients were impacted.

Compromised information may include names, addresses, insurance ID numbers, doctors’ names, prescribed medications, and other protected health data.

The breach follows a troubling trend of ransomware attacks targeting healthcare-related businesses, including Change Healthcare in 2024, which disrupted services for months. Comparitech reports at least three confirmed ransomware attacks on healthcare firms already in 2025, with 24 more suspected.

Given the scale of patient data involved, experts warn of growing risks tied to third-party healthcare service providers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches AI voice chat in Search app for Android and iOS

Google has started rolling out its new ‘Search Live in AI Mode’ for the Google app on Android and iOS, offering users the ability to have seamless voice-based conversations with Search.

Currently available only in the US for those signed up to the AI Mode experiment in Labs, the feature was previewed at last month’s Google I/O conference.

The tool uses a specially adapted version of Google’s Gemini AI model, fine-tuned to deliver smarter voice interactions. It combines the model’s capabilities with Google Search’s information infrastructure to provide real-time spoken responses.

Using a technique called ‘query fan-out’, the system retrieves a wide range of web content, helping users discover more varied and relevant information.
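The fan-out idea described above can be pictured as a simple pipeline: one question is expanded into several related sub-queries, each sub-query is run against a source, and the results are merged with duplicates removed. The toy sketch below illustrates only that general pattern; the corpus, sub-query list and function names are entirely hypothetical and bear no relation to Google’s actual implementation.

```python
# Toy illustration of a "query fan-out" retrieval pattern.
# Everything here (corpus, expansions, helper names) is invented
# for illustration; a real system would generate sub-queries with
# a language model and retrieve from a live index.

CORPUS = {
    "pack clothes suitcase": ["Roll garments to save space"],
    "prevent wrinkles travel": ["Use packing cubes", "Roll garments to save space"],
    "wrinkle-free fabrics": ["Choose knits and synthetics"],
}

def expand(query: str) -> list[str]:
    """Pretend sub-query generation for one hard-coded example query."""
    expansions = {
        "keep clothes from wrinkling in a suitcase": [
            "pack clothes suitcase",
            "prevent wrinkles travel",
            "wrinkle-free fabrics",
        ],
    }
    return expansions.get(query, [query])

def fan_out(query: str) -> list[str]:
    """Run every sub-query, then merge hits in order without duplicates."""
    merged: list[str] = []
    for sub in expand(query):
        for hit in CORPUS.get(sub, []):
            if hit not in merged:
                merged.append(hit)
    return merged
```

Because each sub-query targets a different facet of the question, the merged list covers more ground than any single query would, which is the effect the article attributes to the technique.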

The new mode is particularly useful when multitasking or on the go. Users can tap a ‘Live’ icon in the Google app and ask spoken queries like how to keep clothes from wrinkling in a suitcase.

Follow-up questions are handled just as naturally, and related links are displayed on-screen, letting users read more without breaking their flow.

To use the feature, users can tap a sparkle-shaped waveform icon under the Search bar or next to the search field. Once activated, a full-screen interface appears with voice control options and a scrolling list of relevant links.

Even with the phone locked or other apps open, the feature keeps running. A mute button, transcript view, and voice style settings—named Cassini, Cosmo, Neso, and Terra—offer additional control over the experience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!