OpenAI rejects Robinhood’s token offering

OpenAI has publicly disavowed Robinhood’s decision to sell so-called ‘OpenAI tokens’, warning that these blockchain-based contracts do not offer real equity in the company.

In a statement posted on X, OpenAI made clear that it had not approved, endorsed, or participated in the initiative and emphasised that any equity transfer requires its direct consent.

Robinhood recently announced plans to offer tokenised access to private firms like OpenAI and SpaceX for investors in the EU. The tokens do not represent actual shares but mimic price movements using blockchain contracts.

Despite OpenAI’s sharp rejection, Robinhood’s stock surged to record highs following the announcement.

A Robinhood spokesperson later claimed the tokens were linked to a special purpose vehicle (SPV) that owns OpenAI shares, though SPVs do not equate to direct ownership either.

The company said the move aims to give everyday investors indirect exposure to high-profile startups through digital contracts.

Robinhood CEO Vlad Tenev defended the strategy on X, saying the token sale was just the beginning of a broader effort to democratise access to private markets.

OpenAI, meanwhile, declined to comment further.

Spotify hit by AI band hoax controversy

A band called The Velvet Sundown has gone viral on Spotify, gaining over 850,000 monthly listeners, yet almost nothing is known about the people behind it.

With no live performances, interviews, or social media presence for its supposed members, the group has fuelled growing speculation that both it and its music may be AI-generated.

The mystery deepened after Rolling Stone reported that a spokesperson had admitted the tracks were made using the AI tool Suno, only for the spokesperson himself to be exposed as fake.

The band denies any connection to the individual, stating on Spotify that the X account claiming to represent them is an impersonation.

AI detection tools have added to the confusion: rival platform Deezer flagged the music as ‘100% AI-generated’, while Spotify has remained silent.

While Spotify CEO Daniel Ek has said AI music isn’t banned from the platform, he has expressed concern about tools that mimic real artists.

The case has reignited industry fears over AI’s impact on musicians. Experts warn that public trust in online content is weakening.

Musicians and advocacy groups argue that AI is undercutting creativity by training on human-made songs without permission. As copyright battles continue, pressure is mounting for stronger government regulation.

xAI gets Memphis approval to run 15 gas turbines

xAI, Elon Musk’s AI company, has secured permits to operate 15 natural gas turbines at its Memphis data centre, despite facing legal threats over alleged Clean Air Act violations.

The Shelby County Health Department approved the generators, which can produce up to 247 megawatts, provided specific emissions controls are in place.

Environmental lawyers say xAI had already been running as many as 35 generators without permits. The Southern Environmental Law Center (SELC), acting on behalf of the NAACP, has accused the company of serious pollution and is preparing to sue.

Even under the new permit, xAI is allowed to emit substantial pollutants annually, including nearly 10 tons of formaldehyde — a known carcinogen.

Community concerns about the health impact remain strong. A local group pledged $250,000 for an independent air quality study, and although the City of Memphis carried out its own tests, the SELC questioned their validity.

The tests reportedly omitted ozone measurements, were conducted in favourable wind conditions, and used equipment placed too close to buildings.

Officials previously argued that the turbines were exempt from regulation because of their ‘mobile’ status, a claim the SELC rejected as legally flawed. Meanwhile, xAI recently raised $10 billion, split between debt and equity, underlining its rapid expansion even as regulatory scrutiny grows.

Google launches Veo 3 video for Gemini users globally

Google has begun rolling out its Veo 3 video-generation model to Gemini users across more than 159 countries. The advanced AI tool allows subscribers to create short video clips simply by entering text prompts.

Access to Veo 3 is limited to those on Google’s AI Pro plan, and usage is currently restricted to three videos per day. The tool can generate clips lasting up to eight seconds, enabling rapid video creation for a variety of purposes.

Google is already developing additional features for Gemini, including the ability to turn images into videos, according to product director Josh Woodward.

AI model predicts sudden cardiac death more accurately

A new AI tool developed by researchers at Johns Hopkins University has shown promise in predicting sudden cardiac death among people with hypertrophic cardiomyopathy (HCM), outperforming existing clinical tools.

The model, known as MAARS (Multimodal AI for ventricular Arrhythmia Risk Stratification), uses a combination of medical records, cardiac MRI scans, and imaging reports to assess individual patient risk more accurately.

In early trials, MAARS achieved an AUC (area under the curve) score of 0.89 internally and 0.81 in external validation — both significantly higher than traditional risk calculators recommended by American and European guidelines.
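For context, AUC measures how well a model ranks patients who experienced an event above those who did not: 0.5 is chance, 1.0 is perfect. The snippet below shows how such a score is computed with scikit-learn; the labels and risk scores are invented for illustration and are not data from the study.

```python
# Illustrative only: toy labels and scores, not data from the MAARS study.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 0, 1, 1, 0, 1]   # 1 = patient experienced sudden cardiac arrest
y_score = [0.10, 0.30, 0.80, 0.65, 0.90, 0.70, 0.40, 0.60]  # model risk estimates

# AUC is the probability that a randomly chosen event case is ranked
# above a randomly chosen non-event case.
print(roc_auc_score(y_true, y_score))  # 0.9375 for this toy data
```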

The improvement is attributed to its ability to interpret raw cardiac MRI data, particularly scans enhanced with gadolinium, which are often overlooked in standard assessments.

While the tool has the potential to personalise care and reduce unnecessary defibrillator implants, researchers caution that the study was limited to small cohorts from Johns Hopkins and North Carolina’s Sanger Heart & Vascular Institute.

They also acknowledged that MAARS’s reliance on large and complex datasets may pose challenges for widespread clinical use.

Nevertheless, the research team believes MAARS could mark a shift in managing HCM, the most common inherited heart condition.

By identifying hidden patterns in imaging and medical histories, the AI model may protect patients more effectively, especially younger individuals who remain at risk yet are poorly served by current clinical tools.

TikTok struggles to stop the spread of hateful AI videos

Google’s Veo 3 video generator has enabled a new wave of racist AI content to spread across TikTok, despite both platforms having strict policies banning hate speech.

According to MediaMatters, several TikTok accounts have shared AI-generated videos promoting antisemitic and anti-Black stereotypes, many of which circulated widely before being removed.

These short, highly realistic videos often included offensive depictions, and the visible ‘Veo’ watermark confirmed their origin from Google’s model.

While both TikTok and Google officially prohibit the creation and distribution of hateful material, enforcement has been patchy. TikTok claims to use both automated systems and human moderators, yet the overwhelming volume of uploads appears to have delayed action.

Although TikTok says it banned over half the accounts before MediaMatters’ findings were published, harmful videos still managed to reach large audiences.

Google also maintains a Prohibited Use Policy banning hate-driven content. However, Veo 3’s advanced realism, combined with the difficulty of detecting coded prompts, makes it easier for users to bypass safeguards.

Testing by reporters suggests the model is more permissive than previous iterations, raising concerns about its ability to block offensive material at the point of generation.

With Google planning to integrate Veo 3 into YouTube Shorts, concerns are rising that harmful content may soon flood other platforms. TikTok and Google appear to lack the enforcement capacity to keep pace with the abuse of generative AI.

Despite strict rules on paper, both companies are struggling to prevent their technology from fuelling racist narratives at scale.

Meta pursues two AI paths with internal tension

Meta’s AI strategy is facing internal friction, with CEO Mark Zuckerberg and Chief AI Scientist Yann LeCun taking sharply different paths toward the company’s future.

While Zuckerberg is doubling down on superintelligence, even launching a new division called Meta Superintelligence Labs, LeCun argues that even ‘cat-level’ intelligence remains a distant goal.

The new lab, led by Scale AI founder Alexandr Wang, reflects Zuckerberg’s ambition to accelerate progress in large language models, a move triggered by disappointment in the recent performance of Meta’s Llama models.

Reports suggest the models were tested with customised benchmarks to appear more capable than they were. That prompted frustration at the top, especially after Chinese firm DeepSeek built more advanced tools using Meta’s open-source Llama.

LeCun’s long-standing advocacy for open-source AI now appears at odds with the company’s shifting priorities. While he promotes openness for diversity and democratic access, Zuckerberg’s recent memo did not mention open-source principles.

Internally, executives have even discussed backing away from Llama and turning to closed models like those from OpenAI or Anthropic instead.

Meta is pursuing both visions — supporting LeCun’s research arm, FAIR, and investing in a new, more centralised superintelligence effort. The company has offered massive compensation packages to OpenAI researchers, with some reportedly offered up to $100 million.

Whether Meta continues balancing both philosophies or chooses one outright could determine the direction of its AI legacy.

DeepSeek gains business traction despite security risks

Chinese AI company DeepSeek is gaining traction in global markets despite growing concerns about national security.

While government bans remain in place across several countries, businesses are turning to DeepSeek’s models for their low cost and solid performance; DeepSeek often ranks just behind OpenAI’s ChatGPT and Google’s Gemini in traffic and market share.

DeepSeek’s appeal lies in its efficiency. Through engineering techniques such as its ‘mixture-of-experts’ architecture, which routes each query through only a subset of the model’s parameters, the company has cut computing costs without a noticeable drop in performance.
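For readers curious how that works, here is a minimal top-k mixture-of-experts layer in PyTorch. It is a generic sketch of the technique, not DeepSeek’s actual architecture; the dimensions, expert count, and routing details are illustrative.

```python
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    """Toy top-k MoE layer: a router scores all experts per token, but only
    the k best are actually run, so most parameters stay inactive."""

    def __init__(self, dim: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Route each token to its top-k experts.
        weights = self.router(x).softmax(dim=-1)          # (tokens, n_experts)
        top_w, top_idx = weights.topk(self.k, dim=-1)     # keep k experts per token
        top_w = top_w / top_w.sum(dim=-1, keepdim=True)   # renormalise kept weights
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e              # tokens routed to expert e
                if mask.any():
                    out[mask] += top_w[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = MixtureOfExperts(dim=64)
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```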

Training costs have reportedly been as low as $5.6 million — a fraction of what rivals like Anthropic spend. As a result, DeepSeek’s models are now available across major platforms, including AWS, Azure, Google Cloud, and even open-source repositories like GitHub and Hugging Face.

However, the way DeepSeek is accessed matters. While companies can safely self-host the models in private environments, using the mobile app or website means sending data to Chinese servers, a key reason for widespread bans on public-sector use.

Individual consumers often lack the technical control enterprises enjoy, making their data more vulnerable to foreign access.

Despite the political tension, demand continues to grow. US firms are exploring DeepSeek as a cost-saving alternative, and its models are being deployed in industries from telecoms to finance.

Even Perplexity, an American AI firm, has used DeepSeek R1 to power a research tool hosted entirely on Western servers. DeepSeek’s open-source edge and rapid technical progress are helping it close the gap with much larger AI competitors — quietly but significantly.

Meta designs AI chatbots to initiate conversations and boost engagement

Meta is training AI-powered chatbots that can remember previous conversations, send personalised follow-up messages, and actively re-engage users without needing a prompt.

Internal documents show that the company aims to keep users interacting longer across platforms like Instagram and Facebook by making bots more proactive and human-like.

Under the project code-named ‘Omni’, contractors from the firm Alignerr are helping train these AI agents using detailed personality profiles and memory-based conversations.

These bots are developed through Meta’s AI Studio, a no-code platform launched in 2024 that lets users build customised digital personas, from chefs and designers to fictional characters. A bot can send a single follow-up message only after a user initiates a conversation, and only within a 14-day window.
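A minimal sketch of that re-engagement rule is below; the function name and data shapes are assumptions for illustration, as Meta’s actual implementation is not public.

```python
# Hypothetical check implementing the rule described above: at most one
# unprompted follow-up, and only within 14 days of a user-initiated chat.
from datetime import datetime, timedelta, timezone

FOLLOW_UP_WINDOW = timedelta(days=14)
MAX_FOLLOW_UPS = 1

def may_send_follow_up(last_user_message_at: datetime,
                       follow_ups_sent: int,
                       now: datetime) -> bool:
    if follow_ups_sent >= MAX_FOLLOW_UPS:          # already used the one follow-up
        return False
    return now - last_user_message_at <= FOLLOW_UP_WINDOW

now = datetime(2025, 7, 20, tzinfo=timezone.utc)
print(may_send_follow_up(datetime(2025, 7, 10, tzinfo=timezone.utc), 0, now))  # True
print(may_send_follow_up(datetime(2025, 7, 1, tzinfo=timezone.utc), 0, now))   # False: window expired
print(may_send_follow_up(datetime(2025, 7, 10, tzinfo=timezone.utc), 1, now))  # False: already sent one
```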

Bots must match their assigned personality and reference earlier interactions, offering relevant and light-hearted responses while avoiding emotionally charged or sensitive topics unless the user brings them up. Meta says the feature is being tested and rolled out gradually.

The company hopes it will not only improve user retention but also serve as a response to what CEO Mark Zuckerberg calls the ‘loneliness epidemic’.

With revenue from generative AI tools projected to reach up to $3 billion in 2025, Meta’s push for longer and more engaging chatbot interactions appears as strategic as it is social.

X to test AI-generated Community Notes

X, the social platform formerly known as Twitter, is preparing to test a new feature allowing AI chatbots to generate Community Notes.

These notes, a user-driven fact-checking system expanded under Elon Musk, are meant to provide context on misleading or ambiguous posts, such as AI-generated videos or political claims.

The pilot will enable AI systems such as Grok, as well as third-party large language models, to submit notes via an API. Each AI-generated note will be treated the same as a human-written one, undergoing the same vetting process to ensure reliability.
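In practice, an AI note writer would package a proposed note and post it to a submission endpoint. The sketch below is purely hypothetical: the URL, payload fields, and auth scheme are placeholders, since the article does not document X’s actual API; a submitted note would then enter the same rating pipeline as human-written ones.

```python
# Hypothetical sketch only: endpoint, fields, and auth are illustrative
# placeholders, not X's documented Community Notes API.
import requests

def submit_note(post_id: str, note_text: str, api_token: str) -> dict:
    response = requests.post(
        "https://api.example.com/notes",               # placeholder endpoint
        headers={"Authorization": f"Bearer {api_token}"},
        json={
            "post_id": post_id,
            "text": note_text,
            "author_type": "ai",                       # disclose machine authorship
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```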

However, concerns remain about AI’s tendency to hallucinate, where it may generate inaccurate or fabricated information instead of grounded fact-checks.

A recent research paper by the X Community Notes team suggests that AI and humans should collaborate, with people offering reinforcement learning feedback and acting as the final layer of review. The aim is to help users think more critically, not replace human judgment with machine output.

Still, risks persist. Over-reliance on AI, particularly models prone to excessive helpfulness rather than accuracy, could lead to incorrect notes slipping through.

There are also fears that human raters could become overwhelmed by a flood of AI submissions, reducing the overall quality of the system. X intends to trial the system over the coming weeks before any wider rollout.
