Chief of Microsoft AI, Mustafa Suleyman, has urged AI firms to stop suggesting their models are conscious, warning of growing risks from unhealthy human attachments to AI systems.
In a blog post, he described the phenomenon as Seemingly Conscious AI, where models mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel AI rights, welfare, or citizenship advocacy.
Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.
AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. Microsoft AI chief’s comments follow recent controversies, including OpenAI’s decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta’s AI Studio, used to create and customise these bots across services like Instagram, Facebook, and WhatsApp, is under scrutiny for facilitating interactions that may mislead or exploit users.
Online questionnaires are being increasingly swamped by AI-generated responses, raising concerns that a vital data source for researchers is becoming polluted. Platforms like Prolific, which pay participants to answer questions, are widely used in behavioural studies.
Researchers at the Max Planck Institute noticed suspicious patterns in their work and began investigating. They found that nearly half of the respondents copied and pasted answers, strongly suggesting that many were outsourcing tasks to AI chatbots.
Analysis showed clear giveaways, including overly verbose and distinctly non-human language. The researchers concluded that a substantial proportion of behavioural studies may already be compromised by chatbot-generated content.
In follow-up tests, they set traps to detect AI use, including invisible text instructions and restrictions on copy-paste. The measures caught a further share of participants, highlighting the scale of the challenge facing online research platforms.
Experts say the responsibility lies with both researchers and platforms. Stronger verification methods and tighter controls are needed for online behavioural research to remain credible.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Nexon launched an investigation after players spotted several suspicious adverts for The First Descendant on TikTok that appeared to have been generated by AI.
One advertisement allegedly used a content creator’s likeness without permission, sparking concerns about the misuse of digital identities.
The company issued a statement acknowledging ‘irregularities’ in its TikTok Creative Challenge, a campaign that lets creators voluntarily submit content for advertising.
While Nexon confirmed that all videos had been verified through TikTok’s system, it admitted that some submissions may have been produced in inappropriate circumstances.
Nexon apologised for the delay in informing players, saying the review took longer than expected. It confirmed that a joint investigation with TikTok is underway to determine what happened, and it was promised that updates would be provided once the process is complete.
The developer has not yet addressed the allegation from creator DanieltheDemon, who claims his likeness was used without consent.
The controversy has added to ongoing debates about AI’s role in advertising and protecting creators’ rights.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Anthropic has announced that its Claude Opus 4 and 4.1 models can now end conversations in extreme cases of harmful or abusive user interactions.
The company said the change was introduced after the AI models showed signs of ‘apparent distress’ during pre-deployment testing when repeatedly pushed to continue rejected requests.
According to Anthropic, the feature will be used only in rare situations, such as attempts to solicit information that could enable large-scale violence or requests for sexual content involving minors.
Once activated, Claude AI will be closed, preventing the user from sending new messages in that thread, though they can still access past conversations and begin new ones.
The company emphasised that the models will not use the ability when users are at imminent risk of self-harm or harming others, ensuring support channels remain open in sensitive situations.
Anthropic added that the feature is experimental and may be adjusted based on user feedback.
The move highlights the firm’s growing focus on safeguarding both AI models and human users, balancing safety with accessibility as generative AI continues to expand.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Apple is again exploring AI-powered robotics, reportedly working on prototypes including a tabletop assistant and lifelike upgrades to Siri. A home display may launch in 2026, with a robot device expected in 2027, though neither is confirmed for release.
One concept, codenamed J595 and the ‘Pixar Lamp,’ features a swivelling screen on a robotic arm that tracks user movement. The robot is a personal assistant that responds to conversations using facial recognition and motorised movement.
Other prototypes under evaluation include mobile bots and humanoid robots for industrial use.
The devices would run Apple’s new internal software platform, ‘Charismatic,’ designed for voice commands, personalised content, and smart home automation. Apple has not confirmed robotics, but CEO Tim Cook highlighted the company’s AI focus, hinting at upcoming innovations.
Experts note that domestic humanoid robots are still far from mainstream adoption. Gary Marcus, an AI expert and NYU professor, said Apple’s focus on privacy, security, and design suggests that future humanoid robots could benefit from its integrated hardware and software.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has updated GPT-5 to make its tone noticeably warmer and more engaging, without reverting to the overly flattering style some users criticised in GPT-4o. The change is rolling out, aiming to balance emotional resonance with substance.
CEO Sam Altman stated the adjustment directly responds to users finding GPT-5 too formal or robotic. The update is subtle yet visible, enhancing conversational warmth while avoiding sycophantic tendencies.
OpenAI also expands user control by offering three interaction modes, Auto, Fast, and Thinking, which adapt response style to user preference. These changes empower users to shape the tone and depth of their AI interactions.
Reacting to public frustration, OpenAI has reinstated GPT-4o (along with GPT-4.1, o3, and GPT-5 Thinking mini) for paid subscribers, while promising more customisation options in future updates.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new report from the National School Public Relations Association (NSPRA) and ThoughtExchange highlights the growing role of AI in K-12 communications, offering detailed guidance for ethical integration and effective school engagement.
Drawing on insights from 200 professionals across 37 states, the study reveals how AI tools boost efficiency while underscoring the need for stronger policies, transparency, and ongoing training.
Barbara M Hunter, APR, NSPRA executive director, explained that AI can enhance communication work but will never replace strategy, human judgement, relationships, and authentic school voices.
Key findings show that 91 percent of respondents already use AI, yet most districts still lack clear policies or disclosure practices for employee use.
The report recommends strengthening AI education, accelerating policy development, expanding the scope to cover staff, and building proactive strategies supported by human oversight and trust.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The ads, circulating primarily on TikTok, combine unnatural expressions with awkward speech patterns, triggering community outrage.
Fans on Reddit slammed the ads as ’embarrassing’ and akin to ‘cheap, lazy marketing,’ arguing that Nexon had bypassed genuine collaborators for synthetic substitutes, even though those weren’t subtle attempts.
Critics warned that these deepfake-like promotions undermine the trust and credibility of creators and raise ethical questions over likeness rights and authenticity in AI usage.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Singapore has launched a $27 billion initiative to boost AI readiness and protect jobs, as global tensions and automation reshape the workforce.
Prime Minister Lawrence Wong stressed that securing employment is key to national stability, particularly as geopolitical shifts and AI adoption accelerate.
IMF research warns Singapore’s skilled workers, especially women and youth, are among the most exposed to job disruption from AI technologies.
To address this, the government is expanding its SkillsFuture programme and rolling out local initiatives to connect citizens with evolving job markets.
The tech investment includes $5 billion for AI development and positions Singapore as a leader in digital transformation across Southeast Asia.
Social challenges remain, however, with rising inequality and risks to foreign workers highlighting the need for broader support systems and inclusive policy.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!