Study finds AI-generated responses flooding research platforms

Online questionnaires are being increasingly swamped by AI-generated responses, raising concerns that a vital data source for researchers is becoming polluted. Platforms like Prolific, which pay participants to answer questions, are widely used in behavioural studies.

Researchers at the Max Planck Institute noticed suspicious patterns in their work and began investigating. They found that nearly half of the respondents copied and pasted answers, strongly suggesting that many were outsourcing tasks to AI chatbots.

Analysis showed clear giveaways, including overly verbose and distinctly non-human language. The researchers concluded that a substantial proportion of behavioural studies may already be compromised by chatbot-generated content.

In follow-up tests, they set traps to detect AI use, including invisible text instructions and restrictions on copy-paste. The measures caught a further share of participants, highlighting the scale of the challenge facing online research platforms.

Experts say the responsibility lies with both researchers and platforms. Stronger verification methods and tighter controls are needed for online behavioural research to remain credible.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Nexon investigates AI-generated TikTok ads for The First Descendant

Nexon launched an investigation after players spotted several suspicious adverts for The First Descendant on TikTok that appeared to have been generated by AI.

One advertisement allegedly used a content creator’s likeness without permission, sparking concerns about the misuse of digital identities.

The company issued a statement acknowledging ‘irregularities’ in its TikTok Creative Challenge, a campaign that lets creators voluntarily submit content for advertising.

While Nexon confirmed that all videos had been verified through TikTok’s system, it admitted that some submissions may have been produced in inappropriate circumstances.

Nexon apologised for the delay in informing players, saying the review took longer than expected. It confirmed that a joint investigation with TikTok is underway to determine what happened, and it was promised that updates would be provided once the process is complete.

The developer has not yet addressed the allegation from creator DanieltheDemon, who claims his likeness was used without consent.

The controversy has added to ongoing debates about AI’s role in advertising and protecting creators’ rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gamescom showcases EU support for cultural and digital innovation

The European Commission will convene video game professionals in Cologne for the third consecutive year on August 20 and 21. The visit aims to follow developments in the industry, present the future EU budget, and outline opportunities under the upcoming AgoraEU programme.

EU Officials will also discuss AI adoption, new investment opportunities, and ways to protect minors in gaming. Renate Nikolay, Deputy Director-General of DG CONNECT, will deliver a keynote speech and join a panel titled ‘Investment in games – is it finally happening?’.

The European Commission highlights the role of gaming in Europe’s cultural diversity and innovation. Creative Europe MEDIA has already supported nearly 180 projects since 2021. At Gamescom, its booth will feature 79 companies from 24 countries, offering fresh networking opportunities to video game professionals.

The engagement comes just before the release of the second edition of the ‘European Media Industry Outlook’ report. The updated study will provide deeper insights into consumer behaviour and market trends, with a dedicated focus on the video games sector.

Gamescom remains the world’s largest gaming event, with 1,500 exhibitors from 72 nations in 2025. The event celebrates creative and technological achievements, highlighting the industry’s growing importance for Europe’s competitiveness and digital economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic introduces a safety feature allowing Claude AI to terminate harmful conversations

Anthropic has announced that its Claude Opus 4 and 4.1 models can now end conversations in extreme cases of harmful or abusive user interactions.

The company said the change was introduced after the AI models showed signs of ‘apparent distress’ during pre-deployment testing when repeatedly pushed to continue rejected requests.

According to Anthropic, the feature will be used only in rare situations, such as attempts to solicit information that could enable large-scale violence or requests for sexual content involving minors.

Once activated, Claude AI will be closed, preventing the user from sending new messages in that thread, though they can still access past conversations and begin new ones.

The company emphasised that the models will not use the ability when users are at imminent risk of self-harm or harming others, ensuring support channels remain open in sensitive situations.

Anthropic added that the feature is experimental and may be adjusted based on user feedback.

The move highlights the firm’s growing focus on safeguarding both AI models and human users, balancing safety with accessibility as generative AI continues to expand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp trials AI-powered Writing Help for personalised messaging

WhatsApp is testing a new AI feature for iOS users that provides real-time writing assistance.

Known as ‘Writing Help’, the tool suggests alternative phrasings, adjusts tone, and enhances clarity, with all processing handled on-device to safeguard privacy.

The feature allows users to select professional, friendly, or concise tones before the AI generates suitable rewordings while keeping the original meaning. According to reports, the tool is available only to a small group of beta testers through TestFlight, with no confirmed release date.

WhatsApp says it uses Meta’s Private Processing technology to ensure sensitive data never leaves the device, mirroring privacy-first approaches like Apple’s Writing Tools.

Industry watchers suggest the new tool could give WhatsApp an edge over rivals such as Telegram and Signal, which have not yet introduced generative AI writing aids.

Analysts also see potential for integration with other Meta platforms, although challenges remain in ensuring accurate, unbiased results across different languages.

Writing Help could streamline business communication by improving grammar, structure, and tone accuracy if successful. While some users have praised its seamless integration, others warn that heavy reliance on AI could undermine authenticity in digital conversations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The First Descendant faces backlash over AI-generated streamer ads

Nexon’s new promotional ads for their looter-shooter The First Descendant have ignited controversy after featuring AI-generated avatars that closely mimic real content creators, one resembling streamer DanieltheDemon.

The ads, circulating primarily on TikTok, combine unnatural expressions with awkward speech patterns, triggering community outrage.

Fans on Reddit slammed the ads as ’embarrassing’ and akin to ‘cheap, lazy marketing,’ arguing that Nexon had bypassed genuine collaborators for synthetic substitutes, even though those weren’t subtle attempts.

Critics warned that these deepfake-like promotions undermine the trust and credibility of creators and raise ethical questions over likeness rights and authenticity in AI usage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI upskilling at heart of Singapore’s new job strategy

Singapore has launched a $27 billion initiative to boost AI readiness and protect jobs, as global tensions and automation reshape the workforce.

Prime Minister Lawrence Wong stressed that securing employment is key to national stability, particularly as geopolitical shifts and AI adoption accelerate.

IMF research warns Singapore’s skilled workers, especially women and youth, are among the most exposed to job disruption from AI technologies.

To address this, the government is expanding its SkillsFuture programme and rolling out local initiatives to connect citizens with evolving job markets.

The tech investment includes $5 billion for AI development and positions Singapore as a leader in digital transformation across Southeast Asia.

Social challenges remain, however, with rising inequality and risks to foreign workers highlighting the need for broader support systems and inclusive policy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI toys change the way children learn and play

AI-powered stuffed animals are transforming children’s play by combining cuddly companionship with interactive learning.

Toys such as Curio’s Grem and Mattel’s AI collaborations offer screen-free experiences instead of tablets or smartphones, using chatbots and voice recognition to engage children in conversation and educational activities.

Products like CYJBE’s AI Smart Stuffed Animal integrate tools such as ChatGPT to answer questions, tell stories, and adapt to a child’s mood, all under parental controls for monitoring interactions.

Developers say these toys foster personalised learning and emotional bonds instead of replacing human engagement entirely.

The market has grown rapidly, driven by partnerships between tech and toy companies and early experiments like Grimes’ AI plush Grok.

At the same time, experts warn about privacy risks, the collection of children’s data, and potential reductions in face-to-face interaction.

Regulators are calling for safeguards, and parents are urged to weigh the benefits of interactive AI companions against possible social and ethical concerns.

The sector could reshape childhood play and learning, blending imaginative experiences with algorithmic support instead of solely relying on traditional toys.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake Telegram Premium site spreads dangerous malware

A fake Telegram Premium website infects users with Lumma Stealer malware through a drive-by download, requiring no user interaction.

The domain, telegrampremium[.]app, hosts a malicious executable named start.exe, which begins stealing sensitive data as soon as it runs.

The malware targets browser-stored credentials, crypto wallets, clipboard data and system files, using advanced evasion techniques to bypass antivirus tools.

Obfuscated with cryptors and hidden behind real services like Telegram, the malware also communicates with temporary domains to avoid takedown.

Analysts warn that it manipulates Windows systems, evades detection, and leaves little trace by disguising its payloads as real image files.

To defend against such threats, organisations are urged to implement better cybersecurity controls, such as behaviour-based detection and enforce stronger download controls.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The dark side of AI: Seven fears that won’t go away

AI has been hailed as the most transformative technology of our age, but with that power comes unease. From replacing jobs to spreading lies online, the risks attached to AI are no longer abstract; they are already reshaping lives. While governments and tech leaders promise safeguards, uncertainty fuels public anxiety.

Perhaps the most immediate concern is employment. Machines are proving cheaper and faster than humans in the software development and graphic design industries. Talk of a future “post-scarcity” economy, where robot labour frees people from work, remains speculative. Workers see only lost opportunities now, while policymakers struggle to offer coordinated solutions.

Environmental costs are another hidden consequence. Training large AI models demands enormous data centres that consume vast amounts of electricity and water. Critics argue that supposed future efficiencies cannot justify today’s pollution, which sometimes rivals small nations’ carbon footprint.

Privacy fears are also escalating. AI-driven surveillance—from facial recognition in public spaces to workplace monitoring—raises questions about whether personal freedom will survive in an era of constant observation. Many fear that “smart” devices and cameras may soon leave nowhere to hide.

Then there is the spectre of weaponisation. AI is already integrated into warfare, with autonomous drones and robotic systems assisting soldiers. While fully self-governing lethal machines are not yet in use, military experts warn that it is only a matter of time before battlefields become dominated by algorithmic decision-makers.

Artists and writers, meanwhile, worry about intellectual property theft. AI systems trained on creative works without permission or payment have sparked lawsuits and protests, leaving cultural workers feeling exploited by tech giants eager for training data.

Misinformation represents another urgent risk. Deepfakes and AI-generated propaganda are flooding social media, eroding trust in institutions and amplifying extremist views. The danger lies not only in falsehoods themselves but in the echo chambers algorithms create, where users are pushed toward ever more radical beliefs.

And hovering above it all is the fear of runaway AI. Although science fiction often exaggerates this threat, researchers take seriously the possibility of systems evolving in ways we cannot predict or control. Calls for global safeguards and transparency have grown louder, yet solutions remain elusive.

In the end, fear alone cannot guide us. Addressing these risks requires not just caution but decisive governance and ethical frameworks. Only then can humanity hope to steer AI toward progress rather than peril.

Source: Forbes

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!