Bo Hines leaves White House crypto role

Bo Hines, executive director of the White House Crypto Council, has announced his departure to return to the private sector. Appointed in December 2024, Hines thanked the crypto community, calling his role ‘the honour of a lifetime’ and pledging ongoing support.

The council, formed to shape US digital asset policy, released a regulatory action plan in July. Despite that progress, critics argued it failed to establish a strategic Bitcoin reserve. Deputy director Patrick Witt is expected to succeed Hines, though no official appointment has been made.

Hines strongly backed expanding the government’s Bitcoin holdings through budget-neutral strategies, which is in line with Trump’s January executive order that created a national crypto stockpile.

He previously suggested revaluing US gold reserves, which are priced far below market value. Part of the gains could then be converted into Bitcoin without impacting the federal budget.
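As a rough sketch of the arithmetic behind that idea: the holdings figure and statutory book price below are widely reported, but the market price is a placeholder assumption, not a live quote.

```python
# Hypothetical illustration of the gold-revaluation idea described above.
# Assumptions: the US Treasury holds roughly 261.5 million troy ounces of
# gold, carried on the books at a statutory $42.22/oz; the market price
# used here is a placeholder, not an actual quote.
OUNCES_HELD = 261.5e6            # assumed US gold holdings, troy ounces
STATUTORY_PRICE = 42.22          # statutory book value, USD per ounce
ASSUMED_MARKET_PRICE = 2_400.0   # placeholder market price, USD per ounce

book_value = OUNCES_HELD * STATUTORY_PRICE
market_value = OUNCES_HELD * ASSUMED_MARKET_PRICE
paper_gain = market_value - book_value   # unrealised gain from revaluation

print(f"Book value:   ${book_value / 1e9:,.1f}B")
print(f"Market value: ${market_value / 1e9:,.1f}B")
print(f"Paper gain:   ${paper_gain / 1e9:,.1f}B")
```

Under these assumptions, revaluation would recognise a paper gain in the hundreds of billions of dollars, part of which, under the proposal, could be redirected without new appropriations.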

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sam Altman praises rapid AI adoption in India

OpenAI has unveiled its new GPT‑5 model, which the company is offering free to all users. Three versions, gpt‑5, gpt‑5‑mini and gpt‑5‑nano, give developers a balance of performance, cost and latency.

CEO Sam Altman applauded India’s rapid AI adoption and hinted that India, currently OpenAI’s second‑largest market, may soon become the largest. A visit to India is planned for September.

The new GPT‑5 achieves a level of expertise akin to a PhD‑level professional and is described as a meaningful step towards AGI. OpenAI intends to make the model widely accessible through its free tier.

Head of ChatGPT Nick Turley noted that GPT‑5 significantly enhances understanding across more than twelve Indian languages, reinforcing India as a key market for localisation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

James Cameron warns AI could spark a real-life Terminator apocalypse

James Cameron, the director behind the iconic Terminator franchise, has warned that the real-world use of AI could lead to a catastrophic scenario similar to the series’ apocalyptic Judgment Day.

Cameron, who is writing the script for Terminator 7, has expressed concern that combining AI with weapons systems, including nuclear defence, poses grave risks.

He explained that the rapid pace of decision-making in such systems might require a superintelligent AI to respond in time. Yet human error alone has already brought the world close to disaster in the past.

Cameron also highlighted three major existential threats humanity faces: climate change, nuclear weapons, and superintelligence. He suggested that AI might ultimately offer a solution rather than just a danger, reflecting a nuanced view beyond simple dystopian fears.

His evolving perspective mirrors the Terminator franchise itself, which has long balanced the destructive potential of AI with more hopeful portrayals of technology as a possible saviour.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5 launches with ‘PhD-level performance’

OpenAI has unveiled GPT-5, the latest generation of its widely used ChatGPT tool, offering what CEO Sam Altman described as a ‘huge improvement’ in capability.

Now free to all users, the model builds on previous versions but stops short of the human-like reasoning associated with true artificial general intelligence.

Altman compared the leap in performance to ‘talking to a PhD-level expert’ instead of a student.

While GPT-5 does not learn continuously from new experiences, it is designed to excel in coding, writing, healthcare and other specialist areas.

Industry observers say the release underscores the rapid acceleration in AI, with rivals such as Google, Meta, Microsoft, Amazon, and Elon Musk’s xAI investing heavily in the race. Chinese startup DeepSeek has also drawn attention for producing powerful models using less costly chips.

OpenAI has emphasised GPT-5’s safety features, with its research team training the system to avoid deception and prevent harmful outputs.

Alongside the flagship release, the company launched two open-weight models that can be freely downloaded and modified, a move seen as both a nod to its nonprofit origins and a challenge to competitors’ open-source offerings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fans divided on Rod Stewart’s Ozzy Osbourne concert tribute

Rod Stewart is under fire for using AI-generated visuals in a tribute to Ozzy Osbourne during a recent US concert. The video showed a digitally recreated Osbourne taking selfies with late music icons in heaven.

The tribute, set to Stewart’s 1988 track Forever Young, was played at his Alpharetta performance. Artists like Whitney Houston, Kurt Cobain, Freddie Mercury, and Tupac Shakur featured in the AI montage.

While some called the display disrespectful and tasteless, others viewed it as a heartfelt tribute to legendary figures. Reactions online ranged from outrage to admiration.

Osbourne, who passed away last month at age 76, was honoured with global tributes, including flowers laid at Birmingham’s Black Sabbath Bench by fans and family.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

News Corp CEO warns AI could ‘vandalise’ creativity and IP rights

News Corp chief executive Robert Thomson has warned that AI could damage creativity by undermining intellectual property rights.

At the company’s full-year results briefing in New York, he described the AI era as a historic turning point. He called for stronger protections to preserve America’s ‘comparative advantage in creativity’.

Thomson said allowing AI systems to consume and profit from copyrighted works without permission was akin to ‘vandalising virtuosity’.

He cited Donald Trump’s The Art of the Deal, published by News Corp’s book division, questioning whether it should be used to train AI that might undermine book sales. Despite the criticism, the company has rolled out its own AI newsroom tools, NewsGPT and Story Cutter.

News Corp reported a two percent revenue rise to US$8.5 billion (A$13.1 billion), with net income from continuing operations climbing 71 percent to US$648 million.

Growth in the Dow Jones and REA Group segments offset declines in news media subscriptions and advertising.

Digital subscribers fell across several mastheads, although The Times and The Sunday Times saw gains. Profitability in news media rose 15 percent, aided by editorial efficiencies and cost-cutting measures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Science removes concern from Microsoft quantum paper

The journal Science will replace an editorial expression of concern (EEoC) on a 2020 Microsoft quantum computing paper with a correction. The update notes incomplete explanations of device tuning and partial data disclosure, but no misconduct.

Co-author Charles Marcus welcomed the decision but lamented the four-year dispute.

Sergey Frolov, who raised concerns about data selection, disagrees with the correction and believes the paper should be retracted. The debate centres on Microsoft’s claims about topological superconductors hosting Majorana particles, seen as a critical step towards quantum computing.

Several Microsoft-backed papers on Majoranas have faced scrutiny, including retractions. Critics accuse Microsoft of cherry-picking data, while supporters stress the research’s complexity and pioneering nature.

The controversy reveals challenges in peer review and verifying claims in a competitive field.

Microsoft defends the integrity of its research and values open scientific debate. Critics warn that selective reporting risks misleading the community. The dispute highlights the difficulty of confirming breakthrough quantum computing claims in an emerging industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Zuckerberg says future AI glasses will give wearers a cognitive edge

Mark Zuckerberg framed smart glasses as the future of human–AI interaction during Meta’s Q2 2025 earnings call, saying anyone without such a device may be at a cognitive disadvantage compared to those using them.

He described the eyewear as the ideal way for AI to observe users visually and aurally, and to communicate information seamlessly during daily life.

Company leaders view smart eyewear such as Ray‑Ban Meta and Oakley Meta as early steps toward this vision, noting sales have more than tripled year-over-year.

Reality Labs, Meta’s AR/AI hardware unit, has accumulated nearly $70 billion in losses but continues investing in the form factor. Zuckerberg likened AI glasses to contact lenses for cognition: essential rather than optional.

While Meta remains committed to wearable AI, critics flag privacy and social risks around persistent camera-equipped glasses.

The strategy reflects a bet that wearable tech will reshape daily computing and usher in what Zuckerberg calls ‘personal superintelligence’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Flipkart employee deletes ChatGPT over emotional dependency

ChatGPT has become an everyday tool for many, serving as a homework partner, a research aid, and even a comforting listener. But questions are beginning to emerge about the emotional bonds users form with it. A recent LinkedIn post has reignited the debate around AI overuse.

Simrann M Bhambani, a marketing professional at Flipkart, publicly shared her decision to delete ChatGPT from her devices. In a post titled ‘ChatGPT is TOXIC! (for me)’, she described how casual interaction escalated into emotional dependence. The platform began to resemble a digital therapist.

Bhambani admitted to confiding every minor frustration and emotional spiral to the chatbot. Its constant availability and non-judgemental replies gave her a false sense of security. Even with supportive friends, she felt drawn to the machine’s quiet reliability.

What began as curiosity turned into compulsion. She found herself spending hours feeding the bot intrusive thoughts and endless questions. ‘I gave my energy to something that wasn’t even real,’ she wrote. The experience led to more confusion instead of clarity.

Rather than offering mental relief, the chatbot fuelled her overthinking. The emotional noise grew louder, eventually becoming overwhelming. She realised that the problem wasn’t the technology itself, but how it quietly replaced self-reflection.

Deleting the app marked a turning point. Bhambani described the decision as a way to reclaim mental space and reduce digital clutter. She warned others that AI tools, while useful, can easily replace human habits and emotional processing if left unchecked.

Many users may not notice such patterns until they are deeply entrenched. AI chatbots are designed to be helpful and responsive, but they lack the nuance and care of human conversation. Their steady presence can foster a deceptive sense of intimacy.

People increasingly rely on digital tools to navigate their daily emotions, often without understanding the consequences. Some may find themselves withdrawing from human relationships or journalling less often. Emotional outsourcing to machines can significantly change how people process personal experiences.

Industry experts have warned about the risks of emotional reliance on generative AI. Chatbots are known to produce inaccurate or hallucinated responses, especially when asked to provide personal advice. Sole dependence on such tools can lead to misinformation or emotional confusion.

Companies like OpenAI have stressed that ChatGPT is not a substitute for professional mental health support. While the bot is trained to provide helpful and empathetic responses, it cannot replace human judgement or real-world relationships. Boundaries are essential.

Mental health professionals also caution against using AI as an emotional crutch. Reflection and self-awareness take time and require discomfort, which AI often smooths over. The convenience can dull long-term growth and self-understanding.

Bhambani’s story has resonated with many who have quietly developed similar habits. Her openness has sparked important discussions on emotional hygiene in the age of AI. More users are starting to reflect on their relationship with digital tools.

Social media platforms are also witnessing an increased number of posts about AI fatigue and cognitive overload. People are beginning to question how constant access to information and feedback affects emotional well-being. There is growing awareness around the need for balance.

AI is expected to become even more integrated into daily life, from virtual assistants to therapy bots. Recognising the line between convenience and dependency will be key. Tools are meant to serve, not dominate, personal reflection.

Developers and users alike must remain mindful of how often and why they turn to AI. Chatbots can complement human support systems, but they are not replacements. Bhambani’s experience serves as a cautionary tale in the age of machine intimacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta forms AI powerhouse by appointing Shengjia Zhao as chief scientist

Meta has appointed former OpenAI researcher Shengjia Zhao as Chief Scientist of its newly formed AI division, Meta Superintelligence Labs (MSL).

Zhao, known for his pivotal role in developing ChatGPT, GPT-4, and OpenAI’s first reasoning model, o1, will lead MSL’s research agenda under Alexandr Wang, the former CEO of Scale AI.

Mark Zuckerberg confirmed Zhao’s appointment, saying he had led the lab’s scientific efforts from the start and had co-founded it.

Meta has aggressively recruited top AI talent to build out MSL, including senior researchers from OpenAI, DeepMind, Apple, Anthropic, and its FAIR lab. Zhao’s presence helps balance the leadership team, as Wang lacks a formal research background.

Meta has reportedly offered massive compensation packages to lure experts, with Zuckerberg even contacting candidates personally and hosting them at his Lake Tahoe estate. MSL will focus on frontier AI, especially reasoning models, in which Meta currently trails competitors.

By 2026, MSL will gain access to Meta’s massive 1-gigawatt Prometheus cloud cluster in Ohio, designed to power large-scale AI training.

The investment and Meta’s parallel FAIR lab, led by Yann LeCun, signal the company’s multi-pronged strategy to catch up with OpenAI and Google in advanced AI research.

The collaboration dynamics between MSL, FAIR, and Meta’s generative AI unit remain unclear, but the company now boasts one of the strongest AI research teams in the industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!