Perplexity AI bot now makes videos on X

Perplexity’s AI chatbot, now integrated with X (formerly Twitter), has introduced a feature that allows users to generate short AI-created videos with sound.

By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.

However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.

The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.

The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.

Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.

Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.

Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk wants Grok AI to replace historical facts

Elon Musk has revealed plans to retrain his Grok AI model by rewriting human knowledge, claiming current training datasets contain too much ‘garbage’ and unchecked errors.

He stated that Grok 3.5 would be designed for ‘advanced reasoning’ and tasked with correcting historical inaccuracies before using the revised corpus to retrain itself.

Musk, who has criticised other AI systems like ChatGPT for being ‘politically correct’ and biased, wants Grok to be ‘anti-woke’ instead.

His stance echoes his earlier approach to X, where he relaxed content moderation and introduced a Community Notes feature in response to the platform being flooded with misinformation and conspiracy theories after his takeover.

The proposal has drawn fierce criticism from academics and AI experts. Gary Marcus called the plan ‘straight out of 1984’, accusing Musk of rewriting history to suit personal beliefs.

Logic professor Bernardino Sassoli de’ Bianchi warned that the idea set a dangerous precedent in which ideology overrides truth, calling it ‘narrative control, not innovation’.

Musk also urged users on X to submit ‘politically incorrect but factually true’ content to help train Grok.

The move quickly attracted falsehoods and debunked conspiracies, including Holocaust distortion, anti-vaccine claims and pseudoscientific racism, raising alarms about the real risks of curating AI data based on subjective ideas of truth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn users still hesitate to use AI writing tools

LinkedIn users have readily embraced AI in many areas, but one feature has not taken off as expected — AI-generated writing suggestions for posts.

CEO Ryan Roslansky admitted to Bloomberg that the tool’s popularity has fallen short, likely due to the platform’s professional nature and the risk of reputational damage.

Unlike casual platforms such as X or TikTok, LinkedIn posts often serve as an extension of users’ résumés. Roslansky explained that being called out for using AI-generated content on LinkedIn could damage someone’s career prospects, making users more cautious about automation.

Despite the hesitation around AI-assisted writing, LinkedIn has seen explosive growth in demand for AI-related jobs and skills. The number of roles requiring AI knowledge has increased sixfold in the past year, while user profiles listing such skills have jumped twentyfold.

Roslansky also shared that he relies on AI when communicating with his boss, Microsoft CEO Satya Nadella. Before sending an email, he uses Copilot to ensure it reflects the polished, insightful tone he calls ‘Satya-smart.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU adviser backs Android antitrust ruling against Google

An adviser to the Court of Justice of the European Union has supported the EU’s antitrust ruling against Google, recommending the dismissal of its appeal over a €4.1bn fine. The case concerns Google’s use of its Android mobile system to limit competition through pre-installed apps and contractual restrictions.

The original €4.34bn fine was imposed by the European Commission in 2018 and later reduced by the General Court.

Google then appealed to the EU’s top court, but Advocate-General Juliane Kokott concluded that Google’s practices gave it unfair market advantages.

Kokott rejected Google’s argument that its actions should be assessed against an equally efficient competitor, noting Google’s dominance in the Android ecosystem and the robust network effects it enjoys.

She argued that bundling Google Search and Chrome with the Play Store created barriers for competitors.

The final court ruling is expected in the coming months and could shape Google’s future regulatory obligations in Europe. Google has already incurred over €8 billion in EU antitrust fines across several investigations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp ad rollout in EU slower than global pace amid privacy scrutiny

Meta is gradually rolling out advertising features on WhatsApp globally, starting with the Updates tab, where users follow channels and may see sponsored content.

Although the global rollout remains on track, the Irish Data Protection Commission has indicated that a full rollout across the EU will not occur before 2026. The delay reflects ongoing regulatory scrutiny, particularly over privacy compliance.

Concerns have emerged regarding how user data from Meta platforms like Facebook, Instagram, and Messenger might be used to target ads on WhatsApp.

Privacy group NOYB had previously voiced criticism about such cross-platform data use. Meta, however, has responded that these concerns do not directly apply to the current WhatsApp ad model.

According to Meta, integrating WhatsApp with the Meta Account Center, which allows cross-app ad personalisation, is optional and off by default.

If users do not link their WhatsApp accounts, only limited data sourced from WhatsApp (such as city, language, followed channels, and ad interactions) will be used for ad targeting in the Updates tab.

Meta maintains that this approach aligns with EU privacy rules. Nonetheless, regulators are expected to carefully assess Meta’s implementation, especially in light of recent judgments against the company’s ‘pay or consent’ model under the Digital Markets Act.

Meta recently reduced the cost of its ad-free subscriptions in the EU, signalling a willingness to adapt, but the company continues to prioritise personalised advertising globally as part of its long-term strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Android users can identify songs with Gemini

Google has updated its Gemini AI app for Android with a voice-activated song search feature, bringing back a capability once available through Google Assistant. Users can now simply ask Gemini ‘What song is this?’ to trigger the music recognition tool.

Once activated, Gemini launches Google’s Song Search using a full-screen interface that listens to ambient audio and displays a pulsing animation. If a match is found, results are shown via Google Search with the track’s details.

The feature improves on Gemini’s earlier version, which only suggested using external apps for music ID. It offers a streamlined alternative to Shazam for Android users, though the interface resets after each use.

Currently, this feature is exclusive to Android devices and not yet available for iPhone users. By integrating this tool, Google continues to unify useful voice functions under its Gemini AI platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok denies buying Trump memecoins after bribe claims

TikTok has strongly denied accusations by US congressman Brad Sherman that its owners purchased $300 million worth of Trump memecoins. Responding via its official policy account on X, the company labelled the claims false and misleading.

Sherman alleged that the memecoin purchase was effectively a bribe to influence Donald Trump’s stance on banning TikTok in the US.

However, the accusations appear based on a report involving GD Culture Group, a Nasdaq-listed company with no direct connection to TikTok or its parent ByteDance.

GD Culture reportedly announced plans to buy Trump coins and Bitcoin while using TikTok to distribute AI-enhanced content. Despite this, no financial link between the firm and Trump or TikTok has been confirmed.

The timing of the claim coincides with Trump’s third delay in enforcing the TikTok ban, raising further political speculation. Sherman, a long-time crypto critic, also said that Trump’s crypto ventures threaten the US dollar’s dominance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers gain control of Tesla charger through firmware downgrade

Tesla’s popular Wall Connector home EV charger was compromised at the January 2025 Pwn2Own Automotive competition, revealing how attackers could gain full control via the charging cable.

The Tesla Wall Connector Gen 3, a widely deployed residential AC charger delivering up to 22 kW, was exploited through a novel attack that used the physical charging connector as the main entry point.

The vulnerability allowed researchers to execute arbitrary code, potentially giving access to private networks in homes, hotels, or businesses.

Researchers from Synacktiv discovered that Tesla vehicles can update the Wall Connector’s firmware via the charging cable using a proprietary, undocumented protocol.

By simulating a Tesla car and exploiting Single-Wire CAN (SWCAN) communications over the Control Pilot line, the team downgraded the firmware to an older version with exposed debug features.

Using a custom USB-CAN adapter and a Raspberry Pi to emulate vehicle behaviour, they accessed the device’s setup Wi-Fi credentials and triggered a buffer overflow in the debug shell, ultimately gaining remote code execution.
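The proprietary protocol itself has not been published, but a minimal sketch below illustrates the general shape of such vehicle emulation from the Raspberry Pi side, assuming the custom USB-CAN adapter is exposed as a SocketCAN interface named can0 and using purely illustrative frame IDs and payloads; it is not the researchers’ actual tooling.

```python
# Illustrative sketch only: the frame IDs, payloads and flow below are
# placeholders, not Tesla's actual (undocumented) charging-cable protocol.
import can

# Raspberry Pi side, assuming the custom USB-CAN adapter appears as the
# SocketCAN interface "can0" (brought up beforehand with `ip link`).
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Pretend to be a vehicle opening a firmware exchange over the Control Pilot
# line; a real attack would have to replay the proprietary handshake.
hello = can.Message(arbitration_id=0x7E0,            # hypothetical "vehicle" ID
                    data=bytes([0x01, 0x00, 0x02]),  # hypothetical payload
                    is_extended_id=False)
bus.send(hello)

# Watch the charger's replies while the negotiation (and downgrade) proceeds.
for _ in range(20):
    reply = bus.recv(timeout=1.0)
    if reply is not None:
        print(f"charger -> {reply.arbitration_id:#05x}: {reply.data.hex()}")

bus.shutdown()
```

Note that SWCAN is a single-wire physical layer, so an off-the-shelf CAN adapter would still need a suitable transceiver wired to the Control Pilot pin; that is presumably what the team’s custom adapter provided.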

The demonstration ended with a visual cue — the charger’s LED blinking — but the broader implication is access to internal networks and potential lateral movement across connected systems.

Tesla has since addressed the vulnerability by introducing anti-downgrade measures in newer firmware versions. The Pwn2Own event remains instrumental in exposing critical flaws in automotive and EV infrastructure, pushing manufacturers toward stronger security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India’s Gen Z founders go viral with AI and robotics ‘Hacker House’ in Bengaluru

A viral video has captured the imagination of tech enthusiasts by offering a rare look inside a ‘Hacker House’ in Bengaluru’s HSR Layout, where a group of Gen Z Indian founders are quietly shaping the future of AI and robotics.

Spearheaded by Localhost, the initiative provides young developers aged 16 to 22 with funding, workspace, and a collaborative environment to rapidly build real-world tech products — no media hype, just raw innovation.

The video, shared by Canadian entrepreneur Caleb Friesen, shows teenage coders intensely focused on their projects. From AI-powered noise-cancelling systems and assistive robots to innovative real estate and podcasting tools, each room in the shared house hums with creativity.

The youngest, 16-year-old Harish, stands out for his deep focus, while Suhas Sumukh, who leads the Bengaluru chapter, acts as both a guide and mentor.

Rather than pitch decks and polished PR, what resonated online was the authenticity and dedication. Caleb’s walk-through showed residents too engrossed in their work to acknowledge his arrival.

Viewers responded with admiration, calling it a rare glimpse into ‘the real future of Indian tech’. The video has since crossed 1.4 million views, sparking global curiosity.

At the heart of the movement is Localhost, founded by Kei Hayashi, which helps young developers build fast and learn faster.

As demand grows for similar hacker houses in Mumbai, Delhi, and Hyderabad, the initiative may start a new chapter for India’s startup ecosystem — fuelled by focus, snacks, and a poster of Steve Jobs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hidden privacy risk: Meta AI app may make sensitive chats public

Meta’s new AI app raises privacy concerns as users unknowingly expose sensitive personal information to the public.

The app includes a Discover feed where anyone can view AI chats — even those involving health, legal or financial data. Many users have accidentally shared full resumes, private conversations and medical queries without realising they’re visible to others.

Despite this, Meta’s privacy warnings are minimal. On iPhones, there’s no clear indication during setup that chats will be made public unless manually changed in settings.

Android users see a brief, easily missed message. Even the ‘Post to Feed’ button is ambiguous, often mistaken as referring to a user’s private chat history rather than public content.

Users must navigate deep into the app’s settings to make chats private. There, they can restrict who sees their AI prompts, stop sharing on Facebook and Instagram, and delete previous interactions.

Critics argue the app’s lack of clarity burdens users, leaving many at risk of oversharing without realising it.

While Meta describes the Discover feed as a way to explore creative AI usage, the result has been a chaotic mix of deeply personal content and bizarre prompts.

Privacy experts warn that the situation mirrors Meta’s longstanding issues with user data. Users are advised to avoid sharing personal details with the AI entirely and immediately turn off all public sharing options.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!