Trust in human doctors remains despite AI advancements

OpenAI CEO Sam Altman has stated that AI, especially ChatGPT, now surpasses many doctors in diagnosing illnesses. However, he pointed out that individuals still prefer human doctors because of the trust and emotional connection they provide.

Altman also expressed concerns about the potential misuse of AI, such as using voice cloning for fraud and identity theft. He emphasised the need for stronger privacy protections for sensitive conversations with AI tools like ChatGPT, noting that current standards are inadequate and should align with those for therapists.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DOJ seizes $2.3 million Bitcoin from Chaos ransomware

The US Department of Justice has moved to seize over $2.3 million in Bitcoin tied to a member of the Chaos ransomware group. The funds, taken from a wallet linked to the individual known as ‘Hors’, are alleged to be proceeds of extortion and money laundering.

Chaos operates as a ransomware-as-a-service group, renting its malware to affiliates targeting Windows, Linux, and NAS systems. The group has been active since early 2025 and is known for encrypting victims’ data while demanding crypto payments under threat of public leaks.

US federal agents accessed the wallet in April using a recovery seed phrase from an older Electrum wallet and transferred the assets to a government-controlled address. The DOJ said the operation demonstrates growing success in disrupting ransomware-related crypto flows.

Despite the seizure, challenges remain as such groups evolve their tactics and benefit from the relative anonymity of decentralised platforms. Authorities stress that continued cross-agency cooperation and advances in blockchain forensics are essential in combating future threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Flipkart employee deletes ChatGPT over emotional dependency

ChatGPT has become an everyday tool for many, serving as a homework partner, a research aid, and even a comforting listener. But questions are beginning to emerge about the emotional bonds users form with it. A recent LinkedIn post has reignited the debate around AI overuse.

Simrann M Bhambani, a marketing professional at Flipkart, publicly shared her decision to delete ChatGPT from her devices. In a post titled ‘ChatGPT is TOXIC! (for me)’, she described how casual interaction escalated into emotional dependence. The platform began to resemble a digital therapist.

Bhambani admitted to confiding every minor frustration and emotional spiral to the chatbot. Its constant availability and non-judgemental replies gave her a false sense of security. Even with supportive friends, she felt drawn to the machine’s quiet reliability.

What began as curiosity turned into compulsion. She found herself spending hours feeding the bot intrusive thoughts and endless questions. ‘I gave my energy to something that wasn’t even real,’ she wrote. The experience led to more confusion instead of clarity.

Rather than offering mental relief, the chatbot fuelled her overthinking. The emotional noise grew louder, eventually becoming overwhelming. She realised that the problem wasn’t the technology itself, but how it quietly replaced self-reflection.

Deleting the app marked a turning point. Bhambani described the decision as a way to reclaim mental space and reduce digital clutter. She warned others that AI tools, while useful, can easily replace human habits and emotional processing if left unchecked.

Many users may not notice such patterns until they are deeply entrenched. AI chatbots are designed to be helpful and responsive, but they lack the nuance and care of human conversation. Their steady presence can foster a deceptive sense of intimacy.

People increasingly rely on digital tools to navigate their daily emotions, often without understanding the consequences. Some may find themselves withdrawing from human relationships or journalling less often. Emotional outsourcing to machines can significantly change how people process personal experiences.

Industry experts have warned about the risks of emotional reliance on generative AI. Chatbots are known to produce inaccurate or hallucinated responses, especially when asked to provide personal advice. Sole dependence on such tools can lead to misinformation or emotional confusion.

Companies like OpenAI have stressed that ChatGPT is not a substitute for professional mental health support. While the bot is trained to provide helpful and empathetic responses, it cannot replace human judgement or real-world relationships. Boundaries are essential.

Mental health professionals also caution against using AI as an emotional crutch. Reflection and self-awareness take time and require discomfort, which AI often smooths over. The convenience can dull long-term growth and self-understanding.

Bhambani’s story has resonated with many who have quietly developed similar habits. Her openness has sparked important discussions on emotional hygiene in the age of AI. More users are starting to reflect on their relationship with digital tools.

Social media platforms are also witnessing an increased number of posts about AI fatigue and cognitive overload. People are beginning to question how constant access to information and feedback affects emotional well-being. There is growing awareness around the need for balance.

AI is expected to become even more integrated into daily life, from virtual assistants to therapy bots. Recognising the line between convenience and dependency will be key. Tools are meant to serve, not dominate, personal reflection.

Developers and users alike must remain mindful of how often and why they turn to AI. Chatbots can complement human support systems, but they are not replacements. Bhambani’s experience serves as a cautionary tale in the age of machine intimacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU clears Microsoft deal after privacy changes

The European Data Protection Supervisor (EDPS) has ended its enforcement action against the European Commission over its use of Microsoft, following improvements to data protection practices. The decision came after the Commission revised its contract with Microsoft to improve privacy standards.

Under the updated terms, Microsoft must clarify the reasons for data transfers outside the European Economic Area and name the recipients. Transfers are only allowed to countries with EU-recognised protections or in public interest cases.

Microsoft must also inform the Commission if a foreign government requests access to EU data, unless the request comes from within the EU or a country with equivalent safeguards. The EDPS urged other EU institutions to adopt similar contractual protections if using Microsoft 365.

Despite the EDPS’ clearance, the Commission remains concerned about relying too heavily on a non-EU tech provider for essential digital services. It continues to support the current EU-US data adequacy deal, though recent political changes in the US have cast doubt on its long-term stability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants back Trump’s AI deregulation plan amid public concern over societal impacts

Donald Trump recently hosted an AI summit in Washington, titled ‘Winning the AI Race’, aimed at fostering a deregulated environment for AI innovation. Key figures from the tech industry, including Nvidia’s CEO Jensen Huang and Palantir’s CTO Shyam Sankar, attended the event.

Co-hosted by the Hill and Valley Forum and the Silicon Valley All-in Podcast, the summit was a platform for Trump to introduce his ‘AI Action Plan’, comprising three executive orders focused on deregulation. Trump’s objective is to dismantle regulatory restrictions he perceives as obstacles to innovation, aiming to re-establish the US as a global leader in AI exports.

The executive orders announced target the elimination of ‘ideological dogmas such as diversity, equity, and inclusion (DEI)’ in AI models developed by federally funded companies. Additionally, one order promotes exporting US-developed AI technologies internationally, while another seeks to lessen environmental restrictions and speed up approvals for energy-intensive data centres.

These measures are seen as reversing the Biden administration’s policies, which stressed the importance of safety and security in AI development. Technology giants Apple, Meta, Amazon, and Alphabet have shown significant support for Trump’s initiatives, contributing to his inauguration fund and engaging with him at his Mar-a-Lago estate. Leaders like OpenAI’s Sam Altman and Nvidia’s Jensen Huang have also pledged substantial investments in US AI infrastructure.

Despite this backing, over 100 groups, including labour, environmental, civil rights, and academic organisations, have voiced their opposition through a ‘People’s AI action plan’. These groups warn of the potential risks of unregulated AI, which they fear could undermine civil liberties, equality, and environmental safeguards.

They argue that public welfare should not be compromised for corporate gains, highlighting the dangers of allowing tech giants to dominate policy-making. That discourse illustrates the divide between industry aspirations and societal consequences.

The tech industry’s influence on AI legislation through lobbying is noteworthy, with a report from Issue One indicating that eight of the largest tech companies spent a collective $36 million on lobbying in 2025 alone. Meta led with $13.8 million, employing 86 lobbyists, while Nvidia and OpenAI saw significant increases in their expenditure compared to previous years. The substantial financial outlay reflects the industry’s vested interest in shaping regulatory frameworks to favour business interests, igniting a debate over the ethical responsibilities of unchecked AI progress.

As tech companies and pro-business entities laud Trump’s deregulation efforts, concerns persist over the societal impacts of such policies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China issues action plan for global AI governance and proposes global AI cooperation organisation

At the 2025 World AI Conference in Shanghai, Chinese Premier Li Qiang urged the international community to prioritise joint efforts in governing AI, making reference to a need to establish a global framework and set of rules widely accepted by the global community. He unveiled a proposal by the Chinese government to create a global AI cooperation organisation to foster international collaboration, innovation, and inclusivity in AI across nations.

He said that China ‘attaches great importance to global AI governance, and has been actively promoting multilateral and bilateral cooperation with a willingness to offer more Chinese solutions’.

An Action Plan for AI Global Governance was also presented at the conference. The plan outlines, in its introduction, a call for ‘all stakeholders to take concrete and effective actions based on the principles of serving the public good, respecting sovereignty, development orientation, safety and controllability, equity and inclusiveness, and openness and cooperation, to jointly advance the global development and governance of AI’.

The document includes 13 points related to key areas of international AI cooperation, including promoting inclusive infrastructure development, fostering open innovation ecosystems, ensuring high-quality data supply, and advancing sustainability through green AI practices. It also calls for consensus-building around technical standards, advancing international cooperation on AI safety governance, and supporting countries – especially those in the Global South – in ‘developing AI technologies and services suited to their national conditions’.

Notably, the plan indicates China’s support for multilateralism when it comes to the governance of AI, calling for an active implementation of commitments made by UN member states in the Pact for the Future and the Global Digital Compact, and expressing support for the establishment of the International AI Scientific Panel and a Global Dialogue on AI Governance (whose terms of reference are currently being negotiated by UN member states in New York).

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US senator urges Musk to block Starlink use by Southeast Asian criminal networks

US Senator Maggie Hassan has called on SpaceX CEO Elon Musk to take immediate action against transnational criminal groups in Southeast Asia, which are allegedly using Starlink satellite internet to perpetrate massive online fraud schemes targeting American citizens.

In a letter seen by Reuters, the senator highlighted the growing role of Starlink in enabling so-called ‘scam compounds’ operated by criminal syndicates across Myanmar, Thailand, Cambodia, and Laos.

According to the US Treasury’s Financial Crimes Enforcement Network, the fraud networks have collectively cost Americans billions of dollars.

Senator Hassan emphasised that although SpaceX’s service rules allow for termination of access in cases of fraudulent activity, Starlink appears to remain active in regions where these scams flourish. She urged Musk to uphold SpaceX’s stated standards and take responsibility for cutting off illicit use of the service.

The scam compounds in question are more than just virtual hubs; reportedly, they are the sites of forced labour and human trafficking. Reports, including those from the UN, detail how hundreds of thousands of people have been trafficked into these centres, where they are coerced into operating elaborate online fraud schemes. These often target victims in the US and around the world through phishing messages, fake investment offers, and digital extortion.

The region has taken some steps to curb these operations. Since February, Thailand has actively cut off resources such as electricity and internet access to areas along its border with Myanmar, notably Myawaddy, where many scam centres are based. However, satellite services like Starlink can bypass these traditional infrastructure shutdowns, enabling fraud operations to persist despite regional crackdowns.

The criminal networks, many of which have roots in China, have also captured international attention due to high-profile cases. One such case was the January abduction of Chinese actor Wang Xing, who was kidnapped after arriving in Thailand and later rescued across the border in Myanmar by Thai authorities.

The incident further exposed these networks’ dangerous and organised nature, prompting broader calls for transnational cooperation and tech-sector accountability.

Source: Reuters

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT Agent brings autonomous task handling to OpenAI users

OpenAI has launched the ChatGPT Agent, a feature that transforms ChatGPT from a conversational tool into a proactive digital assistant capable of performing complex, real-world tasks.

By activating ‘agent mode,’ users can instruct ChatGPT to handle activities such as booking restaurant reservations, ordering groceries, managing emails and creating presentations.

The Agent operates within a virtual browser environment, allowing it to interact with websites, fill out forms, and execute multi-step tasks autonomously.

The advancement builds upon OpenAI’s previous tool, Operator, which enabled AI-driven task execution. The ChatGPT Agent, however, offers enhanced capabilities, including integration with third-party services like Gmail and Google Drive, allowing it to manage emails and documents seamlessly.

Users can monitor the Agent’s actions in real-time and intervene when necessary, particularly during tasks involving sensitive information.

While the ChatGPT Agent offers significant convenience, it also raises questions about data privacy and security. OpenAI has implemented safety measures, such as requiring explicit user consent for sensitive actions and training the Agent to refuse risky or malicious requests.

Despite these precautions, concerns persist regarding handling personal information and access to third-party services. Users must review the Agent’s permissions and settings to ensure their data remains secure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UBTech’s Walker S2 marks a leap towards uninterrupted robotic work

The paradigm of robotic autonomy is undergoing a profound transformation with the advent of UBTech’s new humanoid, the Walker S2. Traditionally, robots have been tethered to human assistance for power, requiring manual plugging in or lengthy recharges.

UBTech, a pioneering robotics company, is now dismantling these limitations with a groundbreaking feature in the Walker S2: the ability to swap its battery autonomously. The innovation promises to reshape the landscape of factory work and potentially many other industries, enabling near-continuous, 24/7 operation without human intervention.

The core of this advancement lies in the Walker S2’s sophisticated self-charging mechanism. When a battery begins to deplete, the robot does not power down. Instead, it intelligently navigates to a strategically placed battery swap station.

Once positioned, the robot executes a precise sequence of movements: it twists its torso, deploys built-in tools on its arms to unfasten and remove the drained battery from its back cavity, places it into an empty bay on the swap station, and then expertly retrieves a fresh, fully charged module.

The new battery is then securely plugged into one of its dual battery bays. The process is remarkably swift, taking approximately three minutes, allowing the robot to return to its tasks almost immediately.
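The swap sequence described above can be sketched as a simple ordered routine. This is an illustrative sketch only; the step names, the low-battery threshold, and the function are hypothetical and not drawn from UBTech's actual control software.

```python
# Illustrative sketch of the Walker S2 battery-swap sequence described
# above. Step names and the trigger threshold are assumptions, not
# details from UBTech.

LOW_BATTERY_THRESHOLD = 0.15  # assumed charge fraction that triggers a swap

SWAP_STEPS = [
    "navigate_to_swap_station",
    "twist_torso",
    "unfasten_drained_battery",
    "place_battery_in_empty_bay",
    "retrieve_charged_battery",
    "plug_battery_into_bay",
    "resume_tasks",
]

def next_action(charge_fraction: float, mid_swap: bool = False) -> str:
    """Decide whether the robot keeps working or begins the swap routine."""
    if mid_swap:
        return "continue_swap"
    if charge_fraction <= LOW_BATTERY_THRESHOLD:
        return SWAP_STEPS[0]  # head to the swap station
    return "continue_tasks"
```

The key design point is that the decision is local and autonomous: no human operator appears anywhere in the loop.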

The hot-swappable system mirrors the convenience of advanced electric vehicle technology, but its application to humanoid robotics unlocks unprecedented operational efficiency. Standing at 5 feet, 3 inches (approximately 160 cm) tall and weighing 95 pounds (about 43 kg), the Walker S2 is designed to integrate seamlessly into environments built for humans.

It has two 48-volt lithium batteries, ensuring a continuous power supply during the brief swapping procedure. While one battery powers the robot’s ongoing operations, the other can be exchanged.

Each battery provides approximately two hours of operation while walking or up to four hours when the robot stands still and performs tasks. The battery swap stations are not merely power hubs; they also meticulously monitor the health of each battery.

Should a battery show signs of degradation, a technician can be alerted to replace it promptly, further optimising the robot’s longevity and performance.
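The figures above (roughly two hours of walking or four hours of stationary work per battery, and a three-minute swap) allow a back-of-the-envelope estimate of daily swap overhead. The arithmetic below is illustrative; only the runtime and swap-time figures come from the article.

```python
# Rough arithmetic from the quoted figures: ~2 h per battery while
# walking, ~4 h stationary, ~3 min per swap. Because the dual bays keep
# one battery powering the robot during a swap, the overhead overlaps
# with work rather than pausing it.

def swaps_per_day(runtime_hours: float, hours: float = 24.0) -> float:
    """How many battery swaps a full day of constant operation needs."""
    return hours / runtime_hours

def swap_overhead_minutes(runtime_hours: float, swap_minutes: float = 3.0) -> float:
    """Total minutes per day spent mid-swap (overlapped with work)."""
    return swaps_per_day(runtime_hours) * swap_minutes
```

At a constant walking workload this works out to 12 swaps and only 36 minutes of swap activity per day, none of which halts the robot thanks to the dual battery bays.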

UBTech claims the Walker S2 is not a mere laboratory prototype but a robust solution engineered for real-world industrial deployment. Extensive testing has been conducted in the highly demanding environments of car factories operated by major Chinese electric vehicle manufacturers, including BYD, Nio, and Zeekr.

The trials validate the robot’s ability to operate effectively in dynamic production lines. The Walker S2 incorporates advanced vision systems, allowing it to detect battery levels and identify fully charged units, indicated by a green light on the stacked battery packs.

The robot autonomously reads the visual cues, ensuring precise selection and connection via a simple USB-style connector. Furthermore, the robot features a display face, enabling it to communicate its operational status to human workers, fostering a collaborative and transparent work environment. For safety, a prominent emergency stop button is also integrated.

China’s strategic investment in robotics is a driving force behind such innovations. Shenzhen, UBTech’s home base, is a thriving hub for robotics, boasting over 1,600 companies in the sector.

The nation’s broader push towards automation, part of its ‘Made in China 2025’ strategy, is a clear statement of global competitiveness, with China betting on AI and robotics to spearhead the next manufacturing era.

The coordinated industrial policy has led to China becoming the world’s largest market for industrial robots and a significant innovator in the field. The implications of robots like the Walker S2, built for non-stop operation, extend far beyond traditional factory floors.

Their ability to manage physical tasks continuously could redefine work in various sectors. Industries such as logistics, with vast warehouses requiring constant material handling, or airports, where baggage and cargo movement is ceaseless, benefit immensely.

Hospitals could also see these humanoids assisting with logistical duties, allowing human staff to concentrate on direct patient care. For businesses, the promise of 24/7 automation translates directly into increased output without additional human resources, ensuring operations move seamlessly day and night.

The Walker S2 exemplifies how advanced automation rapidly moves beyond research labs into practical, demanding workplaces. With its autonomous battery-swapping capability, humanoid robots are poised to work extended hours that far exceed human capacity.

The robots require neither coffee breaks nor sleep; they are designed for relentless productivity, marking a significant step towards a future where machines play an even more integral role in daily industrial and societal functions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI fuels new wave of global security breaches

Global corporations are under growing threat from increasingly sophisticated cyber attacks as AI tools boost the capabilities of malicious actors.

Allianz Life recently confirmed a breach affecting most of its 1.4 million North American customers, adding to a string of high-profile incidents this year.

Microsoft is also contending with the aftermath of a wide-scale intrusion, as attackers continue to exploit AI-driven methods to bypass traditional defences.

Cybersecurity firm DeepStrike reports that over 560,000 new malware samples are detected daily, underscoring the scale of the threat.

Each month in 2025 has brought fresh incidents. January saw breaches at the UN and Hewlett-Packard, while crypto lender zkLend lost $9.5 million to hackers in February.

March was marked by a significant attack on Elon Musk’s X platform, and Oracle lost six million data records.

April and May were particularly damaging for retailers and financial services. M&S, Harrods, and Coinbase were among the prominent names hit, with the latter facing a $20 million ransom demand. In June, luxury brands and media companies, including Cartier and the Washington Post, were also targeted.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!