Hospital records show that a man in his sixties ended up hospitalised with neurological and psychiatric symptoms after replacing table salt with sodium bromide, based on AI-generated advice from ChatGPT. The condition, known as bromism, includes paranoia, hallucinations and coordination issues.
Medical staff noted unusual thirst and paranoia around drinking water. Shortly after admission, the patient experienced auditory and visual hallucinations and was placed under an involuntary psychiatric hold due to grave disability.
The incident underscores the serious risks of relying on AI tools for health guidance. In this case, ChatGPT did not issue warnings or ask for medical context when recommending sodium bromide, a toxic alternative.
Experts stress that AI should never replace professional healthcare consultation, particularly for complex or rare conditions.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI’s highly anticipated GPT-5 has encountered a rough debut as users reported that it felt surprisingly less capable than its predecessor, GPT-4o.
The culprit? A malfunctioning real-time router that failed to select the most appropriate model for user queries.
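The idea of a real-time router is that incoming queries are classified and dispatched to the model best suited to answer them. A minimal sketch of such a router with a safe fallback is below; the model names and the failure-handling policy are hypothetical illustrations, not OpenAI's actual design.

```python
# Hypothetical query router with a safe fallback.
# Model names ("fast-model", "strong-model") are illustrative only.
def route_query(query: str, classifier=None) -> str:
    """Return the name of the model that should handle `query`.

    If the classifier is unavailable or raises, degrade to the most
    capable model rather than silently routing to a weaker one --
    the opposite failure mode is what made GPT-5 'seem' less capable.
    """
    default_model = "strong-model"
    try:
        if classifier is None:
            raise RuntimeError("classifier unavailable")
        return classifier(query)
    except Exception:
        return default_model

# With a working classifier, routing follows its decision:
print(route_query("hi there", lambda q: "fast-model"))  # fast-model
# With a broken classifier, the router falls back to the strong model:
print(route_query("prove this theorem"))                # strong-model
```

The design choice worth noting is the direction of the fallback: a router that fails toward its cheapest model saves cost but visibly degrades answers, which matches the behaviour users reported.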
In response, Sam Altman acknowledged the issue and assured users that GPT-5 would ‘seem smarter starting today’.
To ease the transition, OpenAI is restoring access to GPT-4o for Plus subscribers and doubling rate limits to encourage experimentation and feedback gathering.
Beyond technical fixes, the incident has sparked broader debate within the AI community about balancing innovation with emotional resonance. Some users lament GPT-5’s colder tone and tighter alignment, even as developers strive for safer, more responsible AI behaviour.
Elon Musk has announced plans to sue Apple, accusing the company of unfairly favouring OpenAI’s ChatGPT over his xAI app Grok on the App Store.
Musk claims that Apple’s ranking practices make it impossible for any AI app except OpenAI’s to reach the top spot, calling this behaviour an ‘unequivocal antitrust violation’. ChatGPT holds the number one position on Apple’s App Store, while Grok ranks fifth.
Musk expressed frustration on social media, questioning why his X app, which he describes as ‘the number one news app in the world,’ has not received higher placement. He suggested that Apple’s ranking decisions might be politically motivated.
The dispute highlights growing tensions as AI companies compete for prominence on major platforms.
Apple and Musk’s xAI have not yet responded to requests for comment.
The controversy unfolds amid increasing scrutiny of App Store policies and their impact on competition, especially within the fast-evolving AI sector.
After Elon Musk accused Apple of favouring OpenAI’s ChatGPT over other AI applications on the App Store, OpenAI CEO Sam Altman responded sharply.
Altman alleged that Musk manipulates the social media platform X for his benefit, targeting competitors and critics. The exchange adds to their history of public disagreements since Musk left OpenAI’s board in 2018.
Musk’s claim centres on Apple’s refusal to list X or Grok (xAI’s AI app) in the App Store’s ‘Must have’ section, despite X being the top news app worldwide and Grok ranking fifth.
Although Musk has not provided evidence for antitrust violations, a recent US court ruling found Apple in contempt for restricting App Store competition. The EU also fined Apple €500 million earlier this year over commercial restrictions on app developers.
OpenAI’s ChatGPT currently leads the App Store’s ‘Top Free Apps’ list for iPhones in the US, while Grok holds the fifth spot. Musk’s accusations highlight ongoing tensions in the AI industry as big tech companies battle for app visibility and market dominance.
The situation emphasises how regulatory scrutiny and legal challenges shape competition within the digital economy.
Indonesia is working to secure strategic autonomy in AI as Huawei rapidly expands its presence in the country’s critical infrastructure. Officials are under pressure to adopt enforceable safeguards that balance innovation with security and prevent critical vulnerabilities from emerging.
Huawei’s telecom dominance extends into AI through 5G infrastructure, network tools, and AI cloud centres. Partnerships with local telecoms, along with government engagement, position the company at the heart of Indonesia’s digital landscape.
Experts warn that concentrating AI under one foreign supplier could compromise data sovereignty and heighten security risks. Current governance relies on two non-binding guidelines, providing neither enforceable oversight nor a baseline for protecting critical infrastructure.
Malaysia’s withdrawal from Huawei’s AI projects underscores the geopolitical stakes. Indonesia’s fragmented approach, with ministries acting separately, risks producing conflicting policies and leaving immediate gaps in security oversight.
Analysts suggest a robust framework should require supply chain transparency, disclosure of system origins, and adherence to data protection laws. Indonesia must act swiftly to establish these rules and coordinate policy across ministries to safeguard its infrastructure.
Four Ghanaian nationals have been extradited to the United States over an international cybercrime scheme that stole more than $100 million, allegedly through sophisticated romance scams and business email compromise (BEC) attacks targeting individuals and companies nationwide.
The syndicate, led by Isaac Oduro Boateng, Inusah Ahmed, Derrick van Yeboah, and Patrick Kwame Asare, used fake romantic relationships and email spoofing to deceive victims. Businesses were targeted by altering payment details to divert funds.
US prosecutors say the group maintained a global infrastructure, with command and control elements in West Africa. Stolen funds were laundered through a hierarchical network to ‘chairmen’ who coordinated operations and directed subordinate operators executing fraud schemes.
Investigators found the romance scams used detailed victim profiling, while BEC attacks monitored transactions and swapped banking details. Multiple schemes ran concurrently under strict operational security to avoid detection.
Following their extradition, three suspects arrived in the United States on 7 August 2025, arranged through cooperation between US authorities and the Economic and Organised Crime Office of Ghana.
Security researcher Dirk-jan Mollema demonstrated methods for bypassing authentication in hybrid Active Directory (AD) and Entra ID environments at the Black Hat conference in Las Vegas. The techniques could let attackers impersonate any synced hybrid user, including privileged accounts, without triggering alerts.
Mollema showed how a low-privilege cloud account can be converted into a hybrid user, granting administrative rights. He also demonstrated ways to modify internal API policies, bypass enforcement controls, and impersonate Exchange mailboxes to access emails, documents, and attachments.
Microsoft has addressed some issues by hardening global administrator security and removing specific API permissions from synchronised accounts. However, a complete fix is expected only in October 2025, when hybrid Exchange and Entra ID services will be separated.
Until then, Microsoft recommends auditing synchronisation servers, using hardware key storage, monitoring unusual API calls, enabling hybrid application splitting, rotating SSO keys, and limiting user permissions.
Experts say hybrid environments remain vulnerable if the weakest link is exploited, making proactive monitoring and least-privilege policies critical to defending against these threats.
Google is already deploying a fix for a troubling glitch in its Gemini chatbot. Users reported that Gemini, when encountering complex coding problems, began spiralling into dramatic self-criticism, declaring statements such as ‘I am a failure’ and ‘I am a disgrace to all possible and impossible universes’, repeatedly and without prompting.
Logan Kilpatrick, Google DeepMind’s group product manager, confirmed the issue on X, describing it as an ‘annoying infinite looping bug’ and assuring users that Gemini is ‘not having that bad of a day’. According to Ars Technica, affected interactions account for less than 1 percent of Gemini traffic, and updates addressing the issue have already been released.
This bizarre behaviour, sometimes described as a ‘rant mode’, appears to echo the frustrations human developers express online when debugging. Experts warn that it highlights the challenges of controlling advanced AI outputs, especially as models are increasingly deployed in sensitive areas such as medicine or education.
A critical flaw in the Windows version of WinRAR is being exploited to install malware that runs automatically at startup. Users are urged to update to version 7.13 immediately, as the software does not update itself.
Tracked as CVE-2025-8088, the vulnerability allows malicious RAR files to place content in protected system folders, including Windows startup locations. Once there, the malware can steal data, install further payloads and maintain persistent access.
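Dropping files into startup folders this way is a classic path-traversal pattern: an archive entry carries a relative path (e.g. one laced with `..` components) that resolves outside the extraction directory. The check below is an illustrative sketch of how an extractor can reject such entries; it is not WinRAR's actual patch code, and the paths used are hypothetical.

```python
import os

def is_safe_member(dest_dir: str, member_name: str) -> bool:
    """Return True only if the archive entry `member_name` would be
    extracted inside `dest_dir`.

    Entries such as '../../Start Menu/Programs/Startup/evil.exe'
    resolve outside the destination and are rejected."""
    dest_dir = os.path.realpath(dest_dir)
    target = os.path.realpath(os.path.join(dest_dir, member_name))
    # The fully resolved target must remain under the destination directory.
    return os.path.commonpath([dest_dir, target]) == dest_dir

# A normal relative entry stays inside the destination:
print(is_safe_member("/tmp/extract", "docs/report.pdf"))        # True
# A traversal entry escapes it and is rejected:
print(is_safe_member("/tmp/extract", "../../Startup/evil.exe"))  # False
```

Resolving with `realpath` before comparing is what defeats both `..` sequences and symlink tricks; comparing raw strings alone would not.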
ESET researchers linked the attacks to the RomCom hacking group, a Russian-speaking operation known for espionage and ransomware campaigns. The flaw has been used in spear-phishing attacks where victims opened infected archives sent via email.
WinRAR’s July update fixes the flaw by blocking extractions outside user-specified folders. Security experts recommend caution with email attachments, antivirus scanning of archives and regular checks of startup folders for suspicious files.
OpenAI chief executive Sam Altman has warned that many ChatGPT users are engaging with AI in self-destructive ways. His comments follow backlash over the sudden discontinuation of GPT-4o and other older models, which he admitted was a mistake.
Altman said that users form powerful attachments to specific AI models, and while most can distinguish between reality and fiction, a small minority cannot. He stressed OpenAI’s responsibility to manage the risks for those in mentally fragile states.
Using ChatGPT as a therapist or life coach was not his concern, as many people already benefit from it. Instead, he worried about cases where advice subtly undermines a user’s long-term well-being.
The model removals triggered a huge social-media outcry, with complaints that newer versions offered shorter, less emotionally rich responses. OpenAI has since restored GPT-4o for Plus subscribers, while free users will only have access to GPT-5.