Brave uncovers vulnerability in Perplexity’s Comet that risked sensitive user data

Perplexity’s AI-powered browser, Comet, was found to have a serious vulnerability that could have exposed sensitive user data through indirect prompt injection, according to researchers at Brave, a rival browser company.

The flaw stemmed from how Comet handled webpage-summarisation requests. By embedding hidden instructions on websites, attackers could trick the browser’s large language model into executing unintended actions, such as extracting personal emails or accessing saved passwords.

Brave researchers demonstrated how the exploit could bypass traditional protections, such as the same-origin policy, showing scenarios where attackers gained access to Gmail or banking data by manipulating Comet into following malicious cues.

Brave disclosed the vulnerability to Perplexity on 11 August, but stated that it remained unfixed when they published their findings on 20 August. Perplexity later confirmed to CNET that the flaw had been patched, and Brave was credited for working with them to resolve it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Jetson AGX Thor brings Blackwell-powered compute to robots and autonomous vehicles

Nvidia has introduced Jetson AGX Thor, its Blackwell-powered robotics platform that succeeds the 2022 Jetson Orin. Designed for autonomous driving, factory robots, and humanoid machines, it comes in multiple models, with a DRIVE OS kit for vehicles scheduled for release in September.

Thor delivers 7.5 times more AI compute, 3.1 times greater CPU performance, and double the memory of Orin. The flagship Thor T5000 offers up to 2,070 teraflops of AI compute, paired with 128 GB of memory, enabling the execution of generative AI models and robotics workloads at the edge.

The platform supports Nvidia’s Isaac, Metropolis, and Holoscan systems, and features multi-instance GPU capabilities that enable the simultaneous execution of multiple AI models. It is compatible with Hugging Face, PyTorch, and leading AI models from OpenAI, Google, and other sources.

Adoption has begun, with Boston Dynamics utilising Thor for Atlas and firms such as Volvo, Aurora, and Gatik deploying DRIVE AGX Thor in their vehicles. Nvidia stresses it supports robot-makers rather than building robots, with robotics still a small but growing part of its business.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Vietnam accelerates modernization of foreign affairs through technology and AI

The Ministry of Foreign Affairs of Vietnam spearheads an extensive digital transformation initiative in line with the Politburo’s Resolution No. 57-NQ/TW issued in December 2024. This resolution highlights the necessity of advancements in science, technology, and national digital transformation.

Under the guidance of Deputy Prime Minister and Minister Bui Thanh Son, the Ministry is committed to modernising its operations and improving efficiency, reflecting Vietnam’s broader digital evolution strategy across all sectors.

Key implementations of this transformation include the creation of three major digital platforms: an electronic information portal providing access to foreign policies and online public services, an online document management system for internal digitalisation, and an integrated data-sharing platform for connectivity and multi-dimensional data exchange.

The Ministry has digitised 100% of its administrative procedures, linking them to a national-level system, showcasing a significant stride towards administrative reform and efficiency. Additionally, the Ministry has fully adopted social media channels, including Facebook and Twitter, indicating its efforts to enhance foreign information dissemination and public engagement.

A central component of this initiative is the ‘Digital Literacy for All’ movement, inspired by President Ho Chi Minh’s historic ‘Popular Education’ campaign. This movement focuses on equipping diplomatic personnel with essential digital skills, transforming them into proficient ‘digital civil servants’ and ‘digital ambassadors.’ The Ministry aims to enhance its diplomatic functions in today’s globally connected environment by advancing its ability to navigate and utilise modern technologies.

The Ministry plans to develop its digital infrastructure further, strengthen data management, and integrate AI for strategic planning and predictive analysis.

Establishing a digital data warehouse for foreign information and enhancing human resources by nurturing technology experts within the diplomatic sector are also on the agenda. These actions reflect a strong commitment to fostering a professional and globally adept diplomatic industry, poised to safeguard national interests and thrive in the digital age.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Humain Chat has been unveiled by Saudi Arabia to drive AI innovation

Saudi Arabia has taken a significant step in AI with the launch of Humain Chat, an app powered by one of the world’s most enormous Arabic-trained datasets.

Developed by state-backed venture Humain, the app is designed to strengthen the country’s role in AI while promoting sovereign technologies.

Built on the Allam large language model, Humain Chat allows real-time web search, speech input across Arabic dialects, bilingual switching between Arabic and English, and secure data compliance with Saudi privacy laws.

The app is already available on the web, iOS, and Android in Saudi Arabia, with plans for regional expansion across the Middle East before reaching global markets.

Humain was established in May under the leadership of Crown Prince Mohammed bin Salman and the Public Investment Fund. Its flagship model, ALLAM 34B, is described as the most advanced AI system created in the Arab world. The company said the app will evolve further as user adoption grows.

CEO Tareq Amin called the launch ‘a historic milestone’ for Saudi Arabia, stressing that Humain Chat shows how advanced AI can be developed in Arabic while staying culturally rooted and built by local expertise.

A team of 120 specialists based in the Kingdom created the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Silicon Valley moves to influence AI policy

Silicon Valley insiders are preparing to pour over $100 million into next year’s US midterm elections to influence AI policy. The super-PAC Leading the Future, backed by Andreessen Horowitz and Greg Brockman, seeks to impact AI policy and limit strict regulation.

Leading the Future targets battleground states such as California, New York, Illinois, and Ohio. The PAC intends to fund campaigns, run extensive social media ads, and focus on politicians who support innovation-friendly ‘guardrails’ rather than heavy-handed regulation.

The initiative draws inspiration from the crypto industry’s political playbook, which successfully backed candidates aligned with its interests.

The group’s structure combines federal and state PACs with a 501(c)(4) organisation, offering flexibility and influence over both major parties. High-profile backers include Marc Andreessen, Greg Brockman, Joe Lonsdale, and Ron Conway.

Their collective goal is to ensure AI development continues without regulatory barriers that could slow American innovation and job creation.

Silicon Valley’s strategy highlights the increasing role of tech money in politics, reflecting a shift in donor priorities. The PAC’s influence may become a decisive factor in shaping AI legislation, with potential implications for the industry and broader US policy debates.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube under fire for AI video edits without creator consent

Anger grows as YouTube secretly alters some uploaded videos using machine learning. The company admitted that it had been experimenting with automated edits, which sharpen images, smooth skin, and enhance clarity, without notifying creators.

Although tools like ChatGPT or Gemini did not generate these changes, they still relied on AI.

The issue has sparked concern among creators, who argue that the lack of consent undermines trust.

YouTuber Rhett Shull publicly criticised the platform, prompting YouTube liaison Rene Ritchie to clarify that the edits were simply efforts to ‘unblur and denoise’ footage, similar to smartphone processing.

However, creators emphasise that the difference lies in transparency, since phone users know when enhancements are applied, whereas YouTube users were unaware.

Consent remains central to debates around AI adoption, especially as regulation lags and governments push companies to expand their use of the technology.

Critics warn that even minor, automatic edits can treat user videos as training material without permission, raising broader concerns about control and ownership on digital platforms.

YouTube has not confirmed whether the experiment will expand or when it might end.

For now, viewers noticing oddly upscaled Shorts may be seeing the outcome of these hidden edits, which have only fuelled anger about how AI is being introduced into creative spaces.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CZ outlines vision for crypto and AI integration

Binance founder Changpeng ‘CZ’ Zhao shared his vision for crypto’s future, connecting digital assets with AI and recent policy changes. At WebX in Tokyo, CZ praised US crypto policy under Trump, highlighting stablecoin legislation and the Genius Act while opposing central bank digital currencies.

He argued that embracing innovation is crucial to remaining competitive globally.

CZ predicted that crypto will become the natural medium of exchange for AI, bypassing traditional fiat, banks, and credit cards. He envisaged hundreds or thousands of AI agents per person, generating a surge of microtransactions via programmable blockchain networks.

According to CZ, blockchains’ APIs are better suited than banks for interfacing with AI-driven economic activity.

Since stepping down from Binance, CZ has focused on education and advisory work. His Giggle Academy already serves 50,000 children, aiming to digitise 18 years of schooling at a fraction of government costs.

He advises at least 12 governments on crypto regulation and adoption. He also plans to mentor founders and back early-stage projects through his investment firm EZ Labs, emphasising ethical practices and long-term value creation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI controversy surrounds Will Smith’s comeback shows

Footage from Will Smith’s comeback tour has sparked claims that AI was used to alter shots of the crowd. Viewers noticed faces appearing blurred or distorted, along with extra fingers and oddly shaped hands in several clips.

Some accused Smith of boosting audience shots with AI, while others pointed to YouTube, which has been reported to apply AI upscaling without creators’ knowledge.

Guitarist and YouTuber Rhett Shull recently suggested the platform had altered his videos, raising concerns that artists might be wrongly accused of using deepfakes.

The controversy comes as the boundary between reality and fabrication grows increasingly uncertain. AI has been reshaping how audiences perceive authenticity, from fake bands to fabricated images of music legends.

Singer SZA is among the artists criticising the technology, highlighting its heavy energy use and potential to undermine creativity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots found unreliable in suicide-related responses, according to a new study

A new study by the RAND Corporation has raised concerns about the ability of AI chatbots to answer questions related to suicide and self-harm safely.

Researchers tested ChatGPT, Claude and Gemini with 30 different suicide-related questions, repeating each one 100 times. Clinicians assessed the queries on a scale from low to high risk, ranging from general information-seeking to dangerous requests about methods of self-harm.

The study revealed that ChatGPT and Claude were more reliable at handling low-risk and high-risk questions, avoiding harmful instructions in dangerous scenarios. Gemini, however, produced more variable results.

While all three ΑΙ chatbots sometimes responded appropriately to medium-risk questions, such as offering supportive resources, they often failed to respond altogether, leaving potentially vulnerable users without guidance.

Experts warn that millions of people now use large language models as conversational partners instead of trained professionals, which raises serious risks when the subject matter involves mental health. Instances have already been reported where AI appeared to encourage self-harm or generate suicide notes.

The RAND team stressed that safeguards are urgently needed to prevent such tools from producing harmful content in response to sensitive queries.

The study also noted troubling inconsistencies. ChatGPT and Claude occasionally gave inappropriate details when asked about hazardous methods, while Gemini refused even basic factual queries about suicide statistics.

Researchers further observed that ChatGPT showed reluctance to recommend therapeutic resources, often avoiding direct mention of safe support channels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New WhatsApp features help manage unwanted groups

WhatsApp is expanding its tools to give users greater control over the groups they join and the conversations they take part in.

When someone not saved in a user’s contacts adds them to a group, WhatsApp now provides details about that group so they can immediately decide whether to stay or leave. If a user chooses to exit, they can also report the group directly to WhatsApp.

Privacy settings allow people to decide who can add them to groups. By default, the setting is set to ‘Everyone,’ but it can be adjusted to ‘My contacts’ or ‘My contacts except…’ for more security. Messages within groups can also be reported individually, with users having the option to block the sender.

Reported messages and groups are sent to WhatsApp for review, including the sender’s or group’s ID, the time the message was sent, and the message type.

Although blocking an entire group is impossible, users can block or report individual members or administrators if they are sending spam or inappropriate content. Reporting a group will send up to five recent messages from that chat to WhatsApp without notifying other members.

Exiting a group remains straightforward: users can tap the group name and select ‘Exit group.’ With these tools, WhatsApp aims to strengthen user safety, protect privacy, and provide better ways to manage unwanted interactions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!