Microsoft has limited certain Chinese companies’ access to its early warning system for cybersecurity vulnerabilities following suspicions about their involvement in recent SharePoint hacking attempts.
The decision restricts the sharing of proof-of-concept exploit code, which mimics genuine malicious software. While valuable for cybersecurity professionals testing and hardening their systems, such code can also be repurposed by attackers.
The restrictions follow Microsoft’s observation of exploitation attempts targeting SharePoint servers in July. Concerns arose that a member of the Microsoft Active Protections Program (MAPP), which gives vetted security vendors advance notice of vulnerabilities, may have repurposed those early warnings for offensive activity.
Microsoft maintains that it regularly reviews participants and suspends those violating contracts, including prohibitions on participating in cyber attacks.
Beijing has denied involvement in the hacking, while Microsoft has refrained from disclosing which companies were affected or details of the ongoing investigation.
Analysts note that balancing collaboration with international security partners and preventing information misuse remains a key challenge for global cybersecurity programmes.
College students are increasingly turning to AI chatbots for emotional support, prompting concern among mental health professionals. A 2025 report ranked ‘therapy and companionship’ as the top use case for generative AI, particularly among younger users.
Studies by MIT and OpenAI show that frequent AI use can lower social confidence and increase avoidance of face-to-face interaction. On campuses, digital mental health platforms now supplement counselling services, offering tools that identify at-risk students and provide basic support.
Experts warn that chatbot companionship may create emotional habits that lack grounding in reality and hinder social skill development. Counsellors advocate for educating students on safe AI use and suggest universities adopt tools that flag risky engagement patterns.
Europol has warned that a reported $50,000 reward for information on two members of the Qilin ransomware group is fake. The message, circulating on Telegram, claimed the suspects, known as Haise and XORacle, coordinate affiliates and manage extortion operations.
Europol clarified that it does not operate a Telegram channel and that the message does not originate from its official accounts, which are active on Instagram, LinkedIn, X, Bluesky, YouTube, and Facebook.
Qilin, also known as Agenda, has been active since 2022 and, in 2025, listed over 400 victims on its leak website, including media and pharmaceutical companies.
Recent attacks, such as the one targeting Inotiv, demonstrate the group’s ongoing threat. Analysts note that cybercriminals often circulate false claims to undermine competitors, mislead affiliates, or sow distrust within rival gangs.
Private conversations with xAI’s chatbot Grok have been exposed online, raising serious concerns over user privacy and AI safety. Forbes found that Grok’s ‘share’ button created public URLs, later indexed by Google and other search engines.
The leaked content is troubling, ranging from questions on hacking crypto wallets to instructions on drug production and even violent plots. Although xAI bans harmful use, some users still received dangerous responses, which are now publicly accessible online.
The exposure occurred because search engines automatically indexed the shareable links, a flaw echoing previous issues with other AI platforms, including OpenAI’s ChatGPT. Designed for convenience, the feature exposed sensitive chats, damaging trust in xAI’s privacy promises.
The incident pressures AI developers to integrate stronger privacy safeguards, such as blocking the indexing of shared content and enforcing privacy-by-design principles. Users may hesitate to use chatbots without fixes, fearing their data could reappear online.
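One widely used safeguard is instructing crawlers not to index shared pages at all. Below is a minimal sketch of that idea; the Flask framework, the /share/<share_id> route, and the in-memory store are illustrative assumptions, not xAI’s actual implementation:

```python
# Minimal sketch: serve a shared chat while opting it out of search
# indexing. Flask, the route, and the store are assumptions for
# illustration, not xAI's actual stack.
from flask import Flask, abort, make_response

app = Flask(__name__)

# Hypothetical store of shared conversations keyed by share ID.
SHARED_CHATS = {"abc123": "User: hello\nAssistant: hi there"}

@app.route("/share/<share_id>")
def shared_chat(share_id):
    chat = SHARED_CHATS.get(share_id)
    if chat is None:
        abort(404)
    # Signal one: a robots meta tag inside the page itself.
    html = (
        "<html><head>"
        '<meta name="robots" content="noindex, nofollow">'
        f"</head><body><pre>{chat}</pre></body></html>"
    )
    resp = make_response(html)
    # Signal two: the X-Robots-Tag response header, which major
    # crawlers also honour.
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp

if __name__ == "__main__":
    app.run()
```

Serving both the header and the meta tag covers crawlers that inspect only one of the two, and pairing either with long, unguessable share IDs limits accidental discovery before crawlers are involved.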
Microsoft AI chief Mustafa Suleyman has urged AI firms to stop suggesting their models are conscious, warning of growing risks from unhealthy human attachments to AI systems.
In a blog post, he described the phenomenon as Seemingly Conscious AI: models that mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel advocacy for AI rights, welfare, or even citizenship.
Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.
AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. The Microsoft AI chief’s comments follow recent controversies, including OpenAI’s decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.
Imagine dreaming of your next holiday and feeling a rush of excitement. That emotional peak is when your attention is most engaged, and it is exactly where neuro-contextual advertising aims to meet you.
Neuro-contextual AI goes beyond page-level relevance. It interprets emotional signals of interest and intent in real time while preserving user privacy. It asks why users interact with content at a specific moment, not just what they view.
When ads align with emotion, interest and intention, engagement rises. A car ad may shift tone accordingly: action-fuelled visuals for thrill seekers, and softer, nostalgic tones for someone browsing family stories.
Emotions shape memory and decisions. Emotionally intelligent advertising fosters connection, meaning and loyalty rather than just attention.
Elon Musk has taken an unexpectedly conciliatory turn in his feud with Sam Altman by praising a GPT-5 response, ‘I don’t know’, as more valuable than an overconfident answer. Musk described it as ‘a great answer’ from the AI chatbot.
At one point, xAI’s Grok chat assistant sided with Altman, while ChatGPT offered a supportive nod to Musk. These crossed chatbot allegiances have added confusion to a clash already rich with irony.
Musk’s praise of a modest AI response contrasts sharply with the sweeping claims of supremacy that typically surround AI rivalries. It signals a rare acknowledgement of restraint and clarity, even from an avowed critic of OpenAI.
Meta has introduced AI-powered translation tools for creators on Instagram and Facebook, allowing reels to be dubbed into other languages with automatic lip syncing.
The technology uses the creator’s voice instead of a generic substitute, ensuring tone and style remain natural while lip movements match the dubbed track.
The feature currently supports English-to-Spanish and Spanish-to-English, with more languages expected soon. On Facebook, it is limited to creators with at least 1,000 followers, while all public Instagram accounts can use it.
Viewers automatically see reels in their preferred language, although translations can be switched off in settings.
Through Meta Business Suite, creators can also upload up to 20 custom audio tracks per reel, offering manual control instead of relying only on automated translations. Audience insights segmented by language allow performance tracking across regions, helping creators expand their reach.
Meta has advised creators to prioritise face-to-camera reels with clear speech instead of noisy or overlapping dialogue.
The rollout follows a significant update to Meta’s Edits app, which added new editing tools such as real-time previews, silence-cutting and over 150 fresh fonts to improve the Reels production process.
Meanwhile, Meta’s AI Studio, the tool used to create and customise AI chatbots across services like Instagram, Facebook, and WhatsApp, is under scrutiny for facilitating interactions that may mislead or exploit users.
A new Google Cloud survey shows that nearly nine in ten game developers have integrated AI agents into their workflow. These autonomous programs generate assets and interact with players in real time, adapting game worlds and NPCs to boost immersion.
Smaller studios are benefiting from AI, with nearly a third saying it lowers barriers to entry and allows them to compete with larger publishers. Developers report faster coding, testing, localisation, and onboarding, while larger companies face challenges adapting legacy systems to new AI tools.
AI-powered tools are also deployed to moderate online communities, guide tutorials, and respond dynamically to players.
While AI is praised as a productivity multiplier and creative copilot, some developers warn that a lack of standards can lead to errors and quality issues. Human creativity remains central, with many studios using AI to enhance gameplay rather than replace artistic and narrative input.
Developers stress the importance of maintaining unique styles and creative integrity while leveraging AI to unlock new experiences.
Industry experts highlight that gamers are receptive to AI when it deepens immersion and storytelling, but sceptical if it appears to shortcut the creative process. The survey shows that developers view AI as a long-term asset that can be used to reshape how games are made and experienced.