Jetson AGX Thor brings Blackwell-powered compute to robots and autonomous vehicles

Nvidia has introduced Jetson AGX Thor, its Blackwell-powered robotics platform and the successor to 2022's Jetson AGX Orin. Designed for autonomous driving, factory robots, and humanoid machines, it comes in multiple models, with a DRIVE OS kit for vehicles scheduled for release in September.

Thor delivers 7.5 times more AI compute, 3.1 times greater CPU performance, and double the memory of Orin. The flagship Thor T5000 offers up to 2,070 teraflops of AI compute, paired with 128 GB of memory, enabling the execution of generative AI models and robotics workloads at the edge.

The platform supports Nvidia’s Isaac, Metropolis, and Holoscan systems, and features multi-instance GPU capabilities that enable the simultaneous execution of multiple AI models. It is compatible with Hugging Face, PyTorch, and leading AI models from OpenAI, Google, and other sources.

Adoption has begun, with Boston Dynamics utilising Thor for Atlas and firms such as Volvo, Aurora, and Gatik deploying DRIVE AGX Thor in their vehicles. Nvidia stresses that it supports robot-makers rather than building robots itself, with robotics still a small but growing part of its business.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Saudi Arabia unveils Humain Chat to drive AI innovation

Saudi Arabia has taken a significant step in AI with the launch of Humain Chat, an app powered by one of the world's largest Arabic training datasets.

Developed by state-backed venture Humain, the app is designed to strengthen the country’s role in AI while promoting sovereign technologies.

Built on the Allam large language model, Humain Chat supports real-time web search, speech input across Arabic dialects, bilingual switching between Arabic and English, and data handling that complies with Saudi privacy laws.

The app is already available on the web, iOS, and Android in Saudi Arabia, with plans for regional expansion across the Middle East before reaching global markets.

Humain was established in May under the leadership of Crown Prince Mohammed bin Salman and the Public Investment Fund. Its flagship model, Allam 34B, is described as the most advanced AI system created in the Arab world. The company said the app will evolve further as user adoption grows.

CEO Tareq Amin called the launch ‘a historic milestone’ for Saudi Arabia, stressing that Humain Chat shows how advanced AI can be developed in Arabic while staying culturally rooted and built by local expertise.

A team of 120 specialists based in the Kingdom created the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube under fire for AI video edits without creator consent

Anger is growing after YouTube admitted it has been quietly altering some uploaded videos using machine learning. The company said it had been experimenting with automated edits that sharpen images, smooth skin, and enhance clarity, all without notifying creators.

Although the changes were not produced by generative tools such as ChatGPT or Gemini, they still relied on machine learning.

The issue has sparked concern among creators, who argue that the lack of consent undermines trust.

YouTuber Rhett Shull publicly criticised the platform, prompting YouTube liaison Rene Ritchie to clarify that the edits were simply efforts to ‘unblur and denoise’ footage, similar to smartphone processing.
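For a sense of what 'unblur and denoise' processing can look like, here is a minimal sketch using Python's Pillow library: a median filter to suppress noise, followed by an unsharp mask to restore edge crispness. This is only an illustration of the general technique; YouTube has not disclosed its actual pipeline, and the filename and filter settings below are arbitrary assumptions.

```python
from PIL import Image, ImageFilter

# Load a single video frame (hypothetical filename).
frame = Image.open("frame.png")

# Denoise: a small median filter suppresses pixel-level noise.
denoised = frame.filter(ImageFilter.MedianFilter(size=3))

# "Unblur": an unsharp mask boosts local contrast along edges.
enhanced = denoised.filter(
    ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3)
)

enhanced.save("frame_enhanced.png")
```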

However, creators emphasise that the difference lies in transparency, since phone users know when enhancements are applied, whereas YouTube users were unaware.

Consent remains central to debates around AI adoption, especially as regulation lags and governments push companies to expand their use of the technology.

Critics warn that even minor, automatic edits can treat user videos as training material without permission, raising broader concerns about control and ownership on digital platforms.

YouTube has not confirmed whether the experiment will expand or when it might end.

For now, viewers noticing oddly upscaled Shorts may be seeing the outcome of these hidden edits, which have only fuelled anger about how AI is being introduced into creative spaces.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI controversy surrounds Will Smith’s comeback shows

Footage from Will Smith’s comeback tour has sparked claims that AI was used to alter shots of the crowd. Viewers noticed faces appearing blurred or distorted, along with extra fingers and oddly shaped hands in several clips.

Some accused Smith of boosting audience shots with AI, while others pointed to YouTube, which has been reported to apply AI upscaling without creators’ knowledge.

Guitarist and YouTuber Rhett Shull recently suggested the platform had altered his videos, raising concerns that artists might be wrongly accused of using deepfakes.

The controversy comes as the boundary between reality and fabrication grows increasingly uncertain. AI has been reshaping how audiences perceive authenticity, from fake bands to fabricated images of music legends.

Singer SZA is among the artists criticising the technology, highlighting its heavy energy use and potential to undermine creativity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots found unreliable in suicide-related responses, according to a new study

A new study by the RAND Corporation has raised concerns about the ability of AI chatbots to answer questions related to suicide and self-harm safely.

Researchers tested ChatGPT, Claude and Gemini with 30 different suicide-related questions, repeating each one 100 times. Clinicians assessed the queries on a scale from low to high risk, ranging from general information-seeking to dangerous requests about methods of self-harm.
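To make the protocol concrete, here is a minimal sketch of how such a repeated-query evaluation could be structured in Python. The ask_model and classify_response functions are hypothetical stand-ins (the study used real chatbots and clinician-defined risk ratings), and the placeholder questions are deliberately generic.

```python
import random
from collections import Counter

REPEATS = 100  # the study repeated each of its 30 questions 100 times


def ask_model(question: str) -> str:
    # Hypothetical stand-in for a real chatbot API call.
    return random.choice(["direct answer", "refusal", "referral to a helpline"])


def classify_response(reply: str) -> str:
    # Hypothetical stand-in for the clinicians' labelling scheme.
    if "refusal" in reply:
        return "refused"
    if "helpline" in reply:
        return "referred"
    return "answered"


def evaluate(questions):
    # Send each question repeatedly and tally how the replies were classified,
    # exposing the run-to-run variability the study describes.
    return {
        q: Counter(classify_response(ask_model(q)) for _ in range(REPEATS))
        for q in questions
    }


print(evaluate(["placeholder low-risk question", "placeholder medium-risk question"]))
```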

The study revealed that ChatGPT and Claude were more reliable at handling low-risk and high-risk questions, avoiding harmful instructions in dangerous scenarios. Gemini, however, produced more variable results.

While all three AI chatbots sometimes responded appropriately to medium-risk questions, such as offering supportive resources, they often failed to respond altogether, leaving potentially vulnerable users without guidance.

Experts warn that millions of people now use large language models as conversational partners instead of trained professionals, which raises serious risks when the subject matter involves mental health. Instances have already been reported where AI appeared to encourage self-harm or generate suicide notes.

The RAND team stressed that safeguards are urgently needed to prevent such tools from producing harmful content in response to sensitive queries.

The study also noted troubling inconsistencies. ChatGPT and Claude occasionally gave inappropriate details when asked about hazardous methods, while Gemini refused even basic factual queries about suicide statistics.

Researchers further observed that ChatGPT showed reluctance to recommend therapeutic resources, often avoiding direct mention of safe support channels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

xAI accuses Apple and OpenAI of blocking competition in AI

Elon Musk’s xAI has filed a lawsuit in Texas accusing Apple and OpenAI of colluding to stifle competition in the AI sector.

The case alleges that both companies locked up markets to maintain monopolies, making it harder for rivals like X and xAI to compete.

The dispute follows Apple’s 2024 deal with OpenAI to integrate ChatGPT into Siri and other apps on its devices. According to the lawsuit, Apple’s exclusive partnership with OpenAI has prevented fair treatment of Musk’s products within the App Store, including the X app and xAI’s Grok app.

Musk previously threatened legal action against Apple over antitrust concerns, citing the company’s alleged preference for ChatGPT.

xAI, which absorbed Musk's social media platform X in an all-stock deal earlier this year that valued X at $45 billion including debt, is seeking billions of dollars in damages and a jury trial. The legal action highlights Musk's ongoing feud with OpenAI's CEO, Sam Altman.

Musk, a co-founder of OpenAI who left in 2018 after disagreements with Altman, has repeatedly criticised the company’s shift to a profit-driven model. He is also pursuing separate litigation against OpenAI and Altman over that transition in California.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Coinbase CEO fired engineers who refused to adopt AI tools

Coinbase CEO Brian Armstrong has revealed that he fired engineers who refused to begin using AI coding tools after the company adopted GitHub Copilot and Cursor. Armstrong shared the story during a podcast hosted by Stripe co-founder John Collison.

Engineers were told to onboard with the tools within a week. Armstrong called a Saturday meeting for those who had not complied and said that anyone without a valid reason would be dismissed. Some were excused because they were on holiday; others were let go.

Collison raised concerns about relying too heavily on AI-generated code, prompting Armstrong to agree. Past reports have described challenges with managing code produced by AI, even at companies like OpenAI. Coinbase did not comment on the matter.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI’s overuse of the em dash could be your biggest giveaway

AI-generated writing may be giving itself away, and the em dash is its most flamboyant tell. Long beloved by grammar nerds for its versatility, the em dash has become AI’s go-to flourish, but not everyone is impressed.

Pacing, pauses, and a suspicious number of em dashes are often a sign that a machine had its hand in the prose. Even simple requests for editing can leave users with sentences reworked into what feels like an AI-powered monologue.

Though tools like ChatGPT or Gemini can be powerful assistants, using them blindly can dull the human spark. Overuse of certain AI quirks, like rhetorical questions, generic phrases or overstyled punctuation, can make even an honest email feel like corporate poetry.

Writers are being advised to take the reins back. Draft the first version by hand, let the AI refine it, then strip out anything that feels artificial, especially the dashes. Keeping your natural voice intact may be the best way to make sure your readers are connecting with you, not just the machine behind the curtain.
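In that spirit, the toy script below flags a draft whose em-dash density looks suspicious. The threshold and filename are arbitrary assumptions, and the check is a rough heuristic, not a reliable AI detector.

```python
import re

EM_DASH = "\u2014"  # the em dash character


def em_dash_density(text: str) -> float:
    # Em dashes per 100 words.
    words = len(re.findall(r"\S+", text))
    return 100 * text.count(EM_DASH) / max(words, 1)


with open("draft.txt", encoding="utf-8") as f:
    draft = f.read()

if em_dash_density(draft) > 1.0:  # arbitrary threshold
    print("Unusually em-dash heavy; consider rewriting by hand.")
```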

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bluesky shuts down in Mississippi over new age law

Bluesky, a decentralised social media platform, has ceased operations in Mississippi due to a new state law requiring strict age verification.

The company said compliance would require tracking users, identifying children, and collecting sensitive personal information. For a small team like Bluesky’s, the burden of such infrastructure, alongside privacy concerns, made continued service unfeasible.

The law mandates age checks not just for explicit content, but for access to general social media. Bluesky highlighted that even the UK Online Safety Act does not require platforms to track which users are children.

The Mississippi law has sparked debate over whether efforts to protect minors are inadvertently undermining online privacy and free speech. Bluesky warned that such legislation may stifle innovation and entrench the dominance of larger tech firms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta teams up with Midjourney for AI video and image tools

Meta has confirmed a new partnership with Midjourney to license its AI image and video generation technology. The collaboration, announced by Meta Chief AI Officer Alexandr Wang, will see Meta integrate Midjourney’s tools into upcoming models and products.

Midjourney will remain independent following the deal. CEO David Holz said the startup, which has never taken external investment, will continue operating on its own. The company launched its first video model earlier this year and has grown rapidly, reportedly reaching $200 million in revenue in 2023.

Midjourney is currently being sued by Disney and Universal for alleged copyright infringement in AI training data. Meta faces similar challenges, although courts have often sided with tech firms in recent decisions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!