Microsoft executive Mustafa Suleyman highlights risks of seemingly conscious AI

Chief of Microsoft AI, Mustafa Suleyman, has urged AI firms to stop suggesting their models are conscious, warning of growing risks from unhealthy human attachments to AI systems.

In a blog post, he described the phenomenon as ‘Seemingly Conscious AI’, where models mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel AI rights, welfare, or citizenship advocacy.

Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.

AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. The Microsoft AI chief’s comments follow recent controversies, including OpenAI’s decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI helps match ad emotion to content mood for better engagement

Imagine dreaming of your next holiday and feeling a rush of excitement. That moment of emotion is when your attention is most engaged. Neuro-contextual advertising aims to meet you at such emotional peaks.

Neuro-contextual AI goes beyond page-level relevance. It interprets emotional signals of interest and intent in real time while preserving user privacy. It asks why users interact with content at a specific moment, not just what they view.

When ads align with emotion, interest and intention, engagement rises. A car ad may shift tone accordingly: action-fuelled visuals for thrill seekers, and softer, nostalgic tones for someone browsing family stories.

Emotions shape memory and decisions. Emotionally intelligent advertising fosters connection, meaning and loyalty rather than just attention.

Musk acknowledges value in ChatGPT-5’s modesty after public spat

Elon Musk has taken an unexpected conciliatory turn in his feud with Sam Altman by praising a ChatGPT-5 response, ‘I don’t know’, as more valuable than overconfident answers. Musk described it as ‘a great answer’ from the AI chatbot.

Initially sparked by Musk accusing Apple of favouring ChatGPT in App Store rankings and Altman firing back with claims of manipulation on X, the feud has taken on new dimensions as AI itself seems to weigh in.

At one point, xAI’s Grok chat assistant sided with Altman, while ChatGPT offered a supportive nod to Musk. These chatbot alignments have introduced further confusion into a clash already rich with irony.

Musk’s praise of a modest AI response contrasts sharply with the industry’s often intense claims of supremacy. It signals a rare acknowledgement of restraint and clarity, even from an avowed critic of OpenAI.

Meta brings AI translations with lip syncing to Instagram and Facebook

Meta has introduced AI-powered translation tools for creators on Instagram and Facebook, allowing reels to be dubbed into other languages with automatic lip syncing.

The technology uses the creator’s voice instead of a generic substitute, ensuring tone and style remain natural while lip movements match the dubbed track.

The feature currently supports English-to-Spanish and Spanish-to-English, with more languages expected soon. On Facebook, it is limited to creators with at least 1,000 followers, while all public Instagram accounts can use it.

Viewers automatically see reels in their preferred language, although translations can be switched off in settings.

Through Meta Business Suite, creators can also upload up to 20 custom audio tracks per reel, offering manual control instead of relying only on automated translations. Audience insights segmented by language allow performance tracking across regions, helping creators expand their reach.

Meta has advised creators to prioritise face-to-camera reels with clear speech instead of noisy or overlapping dialogue.

The rollout follows a significant update to Meta’s Edits app, which added new editing tools such as real-time previews, silence-cutting and over 150 fresh fonts to improve the Reels production process.

Meta urged to ban child-like chatbots amid Brazil’s safety concerns

Brazil’s Attorney General (AGU) has formally requested Meta to remove AI-powered chatbots that simulate childlike profiles and engage in sexually explicit dialogue, citing concerns that they ‘promote the eroticisation of children.’

The demand was made via an ‘extrajudicial notice’, which reminds platforms that they must remove illicit content without waiting for a court order, especially when minors may be harmed.

Meta’s AI Studio, used to create and customise these bots across services like Instagram, Facebook, and WhatsApp, is under scrutiny for facilitating interactions that may mislead or exploit users.

While no direct sanctions were announced, the AGU emphasised that tech platforms must proactively manage harmful or inappropriate AI-generated content.

The move follows Brazil’s Supreme Court decision in June, which increased companies’ obligations to remove user-generated illicit content.

AI agents are transforming game development

A new Google Cloud survey shows that nearly nine in ten game developers have integrated AI agents into their workflow. These autonomous programs generate assets and interact with players in real time, adapting game worlds and NPCs to boost immersion.

Smaller studios are benefiting from AI, with nearly a third saying it lowers barriers to entry and allows them to compete with larger publishers. Developers report faster coding, testing, localisation, and onboarding, while larger companies face challenges adapting legacy systems to new AI tools.

AI-powered tools are also deployed to moderate online communities, guide tutorials, and respond dynamically to players.

While AI is praised as a productivity multiplier and creative copilot, some developers warn that a lack of standards can lead to errors and quality issues. Human creativity remains central, with many studios using AI to enhance gameplay rather than replace artistic and narrative input.

Developers stress the importance of maintaining unique styles and creative integrity while leveraging AI to unlock new experiences.

Industry experts highlight that gamers are receptive to AI when it deepens immersion and storytelling, but sceptical if it appears to shortcut the creative process. The survey shows that developers view AI as a long-term asset that can be used to reshape how games are made and experienced.

Study finds AI-generated responses flooding research platforms

Online questionnaires are increasingly being swamped by AI-generated responses, raising concerns that a vital data source for researchers is becoming polluted. Platforms like Prolific, which pay participants to answer questions, are widely used in behavioural studies.

Researchers at the Max Planck Institute noticed suspicious patterns in their work and began investigating. They found that nearly half of the respondents copied and pasted answers, strongly suggesting that many were outsourcing tasks to AI chatbots.

Analysis showed clear giveaways, including overly verbose and distinctly non-human language. The researchers concluded that a substantial proportion of behavioural studies may already be compromised by chatbot-generated content.

In follow-up tests, they set traps to detect AI use, including invisible text instructions and restrictions on copy-paste. The measures caught a further share of participants, highlighting the scale of the challenge facing online research platforms.

Experts say the responsibility lies with both researchers and platforms. Stronger verification methods and tighter controls are needed for online behavioural research to remain credible.

Nexon investigates AI-generated TikTok ads for The First Descendant

Nexon launched an investigation after players spotted several suspicious adverts for The First Descendant on TikTok that appeared to have been generated by AI.

One advertisement allegedly used a content creator’s likeness without permission, sparking concerns about the misuse of digital identities.

The company issued a statement acknowledging ‘irregularities’ in its TikTok Creative Challenge, a campaign that lets creators voluntarily submit content for advertising.

While Nexon confirmed that all videos had been verified through TikTok’s system, it admitted that some submissions may have been produced in inappropriate circumstances.

Nexon apologised for the delay in informing players, saying the review took longer than expected. It confirmed that a joint investigation with TikTok is underway to determine what happened, and promised that updates would be provided once the process is complete.

The developer has not yet addressed the allegation from creator DanieltheDemon, who claims his likeness was used without consent.

The controversy has added to ongoing debates about AI’s role in advertising and protecting creators’ rights.

Gamescom showcases EU support for cultural and digital innovation

The European Commission will convene video game professionals in Cologne for the third consecutive year on August 20 and 21. The visit aims to follow developments in the industry, present the future EU budget, and outline opportunities under the upcoming AgoraEU programme.

EU officials will also discuss AI adoption, new investment opportunities, and ways to protect minors in gaming. Renate Nikolay, Deputy Director-General of DG CONNECT, will deliver a keynote speech and join a panel titled ‘Investment in games – is it finally happening?’.

The European Commission highlights the role of gaming in Europe’s cultural diversity and innovation. Creative Europe MEDIA has already supported nearly 180 projects since 2021. At Gamescom, its booth will feature 79 companies from 24 countries, offering fresh networking opportunities to video game professionals.

The engagement comes just before the release of the second edition of the ‘European Media Industry Outlook’ report. The updated study will provide deeper insights into consumer behaviour and market trends, with a dedicated focus on the video games sector.

Gamescom remains the world’s largest gaming event, with 1,500 exhibitors from 72 nations in 2025. The event celebrates creative and technological achievements, highlighting the industry’s growing importance for Europe’s competitiveness and digital economy.

Anthropic introduces a safety feature allowing Claude AI to terminate harmful conversations

Anthropic has announced that its Claude Opus 4 and 4.1 models can now end conversations in extreme cases of harmful or abusive user interactions.

The company said the change was introduced after the AI models showed signs of ‘apparent distress’ during pre-deployment testing when repeatedly pushed to continue rejected requests.

According to Anthropic, the feature will be used only in rare situations, such as attempts to solicit information that could enable large-scale violence or requests for sexual content involving minors.

Once activated, the feature closes the conversation, preventing the user from sending new messages in that thread, though they can still access past conversations and begin new ones.

The company emphasised that the models will not use the ability when users are at imminent risk of self-harm or harming others, ensuring support channels remain open in sensitive situations.

Anthropic added that the feature is experimental and may be adjusted based on user feedback.

The move highlights the firm’s growing focus on safeguarding both AI models and human users, balancing safety with accessibility as generative AI continues to expand.